Dataset schema (column: type, with min–max observed string length or integer value):

doi: string (10–10)
chunk-id: int64 (0–936)
chunk: string (401–2.02k)
id: string (12–14)
title: string (8–162)
summary: string (228–1.92k)
source: string (31–31)
authors: string (7–6.97k)
categories: string (5–107)
comment: string (4–398)
journal_ref: string (8–194)
primary_category: string (5–17)
published: string (8–8)
updated: string (8–8)
references: list
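The rows below follow this schema. As a minimal sketch (assuming the data is hosted as a Hugging Face dataset; the repository path below is a placeholder, not the real name), rows with these columns could be loaded and inspected like this:

```python
# Minimal sketch of loading and inspecting rows with the schema above.
# ASSUMPTION: "user/arxiv-chunks" is a placeholder repository path, not the real one.
from datasets import load_dataset

ds = load_dataset("user/arxiv-chunks", split="train")

row = ds[0]
print(row["doi"], row["chunk-id"], row["title"])    # e.g. 2310.03214 100 FreshLLMs: ...
print(row["chunk"][:200])                           # first 200 characters of the text chunk
print([ref["id"] for ref in row["references"]])     # arXiv ids cited by the paper
```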
2310.03214
100
[Figure 9 screenshot, OCR residue.] A Google Search results page showing: a knowledge panel ("First Animals in Space Facts", "Who is the most successful American Idol winner?"); organic results for "American Idol winners list: All seasons" (May 21, 2023) listing the season 1–8 winners (Season 2: Ruben Studdard, Season 4: Carrie Underwood, Season 6: Jordin Sparks, ...); and a Homework.Study.com Q&A result for "What was the first animal to land on the moon?" whose top answer reads: "No animals were ever sent to the Moon. Although, since humans are technically animals, one could say that the first animal sent to the Moon was Neil Armstrong. He belonged to the species Homo sapiens ..."
2310.03214#100
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
2310.03214
101
Figure 9: GOOGLE SEARCH produces different types of search results for a given query, including the answer box, organic results, and other useful information, such as the knowledge graph, questions and answers from crowdsourced QA platforms, and related questions that search users also ask. Each of these results contains an associated text snippet along with other information, such as source webpage, date, title, and highlighted words.
2310.03214#101
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
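The Figure 9 caption above notes that every retrieved result carries a text snippet plus source webpage, date, title, and highlighted words. A minimal sketch of a unified evidence record built around exactly those fields follows; the class and method names are illustrative, not the authors' code.

```python
# Sketch of a unified "evidence" record, assuming the fields described in the
# Figure 9 caption above (source, date, title, snippet, highlighted words).
# Field and method names are illustrative, not the authors' data structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    source: str                     # source webpage, e.g. "cnbc.com"
    date: str                       # publication date, e.g. "Jan 03, 2022"
    title: str
    snippet: str
    highlight: List[str] = field(default_factory=list)  # highlighted words

    def render(self) -> str:
        # Render one evidence in the source/date/title/snippet/highlight layout
        # used by the demonstrations quoted in the later chunks.
        return (f"source: {self.source}\n"
                f"date: {self.date}\n"
                f"title: {self.title}\n"
                f"snippet: {self.snippet}\n"
                f"highlight: {' | '.join(self.highlight)}")
```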
2310.03214
102
{other_demonstrations} # omitted for brevity

query: When did Amazon become the first publicly traded company to exceed a market value of $3 trillion?

source: cnbc.com
date: Jan 03, 2022
title: Apple becomes first U.S. company to reach $3 trillion ...
snippet: Apple became the first publicly traded U.S. company to hit a $1 trillion market cap during intraday trading on Aug. 2, 2018. It hit a $2...
highlight: Aug. 2, 2018

source: bloomberg.com
date: Nov 09, 2022
title: Amazon Becomes World's First Public Company to Lose $1...
snippet: Amazon Becomes World's First Public Company to Lose $1 Trillion in Market Value. Amazon market cap shrinks to $879 billion from $1.88 trillion.
highlight: Amazon Becomes | First Public Company | Trillion | Market Value | Amazon market cap | billion | trillion

source: barrons.com
date: Nov 10, 2022
title: When did Amazon market cap hit $1 trillion?
snippet: The tech giant's capitalization closed below $900 billion on Wednesday. Amazon's market capitalization topped out at roughly $1.9 trillion back in July 2021.
highlight:

query: When did Amazon become the first publicly traded company to exceed a market value of $3 trillion?
2310.03214#102
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
2310.03214
103
query: When did Amazon become the first publicly traded company to exceed a market value of $3 trillion?

snippet: Apple became the first publicly traded U.S. company to hit a $1 trillion market cap during intraday trading on Aug. 2, 2018. It hit a $2...

snippet: Amazon Becomes World's First Public Company to Lose $1 Trillion in Market Value. Amazon market cap shrinks to $879 billion from $1.88 trillion.
highlight: Amazon Becomes | First Public Company | Trillion | Market Value | Amazon market cap | billion | trillion

snippet: The tech giant's capitalization closed below $900 billion on Wednesday. Amazon's market capitalization topped out at roughly $1.9 trillion back in July 2021.

question: When did Amazon become the first publicly traded company to exceed a market value of $3 trillion?

answer: As of today {current_date}, the most up-to-date and relevant information regarding this query is as follows. Amazon's market capitalization hit a peak of roughly $1.9 trillion in July 2021. In 2022, Amazon became the first public company ever to lose $1 trillion in market value. Thus, Amazon's market value has never exceeded $3 trillion. In fact, Apple became the first publicly traded U.S. company to exceed a market value of $3 trillion in January 2022.
2310.03214#103
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
2310.03214
104
query: By how many seats do Republicans currently control the United States Senate?

{retrieved_evidences} # omitted for brevity

question: By how many seats do Republicans currently control the United States Senate? [Please check if the question contains a valid premise before answering.]

answer:

Figure 10: A realistic prompt for FRESHPROMPT. We cast all retrieved evidences into a unified format with useful information, including source webpage, date, title, text snippet, and highlighted words. Few-shot demonstrations are provided at the beginning of the prompt. Each demonstration shows the model an example question and a list of retrieved evidences for the question, followed by some reasoning over the evidences to figure out the most relevant and up-to-date answer.

Table 5: Accuracy of different search engine-augmented LLMs on FRESHQA under RELAXED evaluations. Models benchmarked on the same date of April 26, 2023. We report accuracy across different categories of questions, including fast-changing (fast), slow-changing (slow), never-changing (never), false-premise, questions that involve knowledge before 2022 (< 2022) and since 2022 (≥ 2022), one-hop (1-hop) and multi-hop (m-hop) questions. + indicates a model with access to the current date. UTD stands for "up-to-date".
2310.03214#104
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
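The Figure 10 caption above describes the prompt layout: few-shot demonstrations first, then the retrieved evidences in the unified format, then the question (optionally with a premise-check instruction), ending with "answer:". A minimal sketch of assembling such a prompt, assuming the evidences are already rendered as source/date/title/snippet/highlight blocks; the function name is illustrative, not the authors' code.

```python
# Sketch of assembling a FreshPrompt-style prompt as laid out in the Figure 10
# caption above. `build_freshprompt` is an illustrative name, not the authors' code.
from typing import List

def build_freshprompt(demonstrations: str,
                      evidences: List[str],
                      question: str,
                      premise_check: bool = False) -> str:
    parts = [demonstrations]                 # few-shot demonstrations come first
    parts.append(f"query: {question}")
    parts.extend(evidences)                  # each evidence is already rendered as a
                                             # source/date/title/snippet/highlight block
    suffix = (" [Please check if the question contains a valid premise before answering.]"
              if premise_check else "")
    parts.append(f"question: {question}{suffix}")
    parts.append("answer:")                  # the model completes from here
    return "\n\n".join(parts)
```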
2310.03214
105
comparison against baselines (columns: overall "all" | valid premise: all, fast, slow, never, < 2022, ≥ 2022, 1-hop, m-hop | false premise: all, < 2022; knowledge cutoff in parentheses)

GOOGLE SEARCH (UTD): 47.4 | 58.8, 42.4, 56.0, 77.8, 74.5, 49.4, 66.4, 39.8 | 12.9, 11.8
GPT-3.5 (2021): 32.4 | 32.4, 8.0, 28.0, 61.1, 68.1, 11.1, 34.7, 26.9 | 32.3, 43.0
GPT-3.5 + SELF-ASK (UTD): 42.0 | 51.6, 36.8, 44.8, 73.0, 74.5, 37.9, 53.0, 48.1 | 12.9, 17.2
GPT-3.5 + FRESHPROMPT (UTD): 62.0 | 68.9, 51.2, 70.4, 84.9, 78.0, 63.4, 75.0, 53.7 | 41.1, 49.5
PPLX.AI (UTD): 66.2 | 68.9, 48.8, 67.2, 90.5, 85.1, 59.1, 76.1, 50.9 | 58.1, 60.2
GPT-4 (2021+), GPT-4 + SELF-ASK (UTD), GPT-4 + FRESHPROMPT (UTD): 46.4, 50.4, 77.8, 39.6 ... (truncated at the chunk boundary; continued in the next chunk)
2310.03214#105
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
2310.03214
106
(columns: overall "all" | valid premise: all, fast, slow, never, < 2022, ≥ 2022, 1-hop, m-hop | false premise: all, < 2022; knowledge cutoff in parentheses)

GPT-4 (2021+): 46.4 | 39.6, 14.4, 35.2, 69.0, 80.9, 14.9, 39.2, 40.7 | 66.9, 83.9
GPT-4 + SELF-ASK (UTD): 50.4 | 48.4, 40.0, 49.6, 55.6, 52.5, 46.0, 45.1, 56.5 | 56.5, 69.9
GPT-4 + FRESHPROMPT (UTD): 77.8 | 78.7, 61.6, 79.2, 95.2, 90.8, 71.5, 83.2, 67.6 | 75.0, 80.6

sensitivity and ablation studies
GPT-3.5 (2021): 32.4 | 32.4, 8.0, 28.0, 61.1, 68.1, 11.1, 34.7, 26.9 | 32.3, 43.0
GPT-3.5 + FRESHPROMPT (UTD): 62.0 | 68.9, 51.2, 70.4, 84.9, 78.0, 63.4, 75.0, 53.7 | 41.1, 49.5
GPT-3.5 + FRESHPROMPT w/ PREMISE CHECK (UTD): 41.0 | 33.5, 23.2, 32.0, 45.2, 44.0, 27.2, 37.7, 23.1 | 63.7, 72.0
GPT-4 (2021+): 46.4 | 39.6, 14.4, 35.2, 69.0, 80.9, 14.9, 39.2, 40.7 | 66.9 ... (truncated at the chunk boundary)
2310.03214#106
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
2310.03214
107
(columns: overall "all" | valid premise: all, fast, slow, never, < 2022, ≥ 2022, 1-hop, m-hop | false premise: all, < 2022; knowledge cutoff in parentheses)

GPT-4 (2021+): 46.4 | 39.6, 14.4, 35.2, 69.0, 80.9, 14.9, 39.2, 40.7 | 66.9, 83.9
GPT-4 w/ SNIPPETS ONLY & SEARCH ORDER (UTD): 77.6 | 78.2, 59.2, 80.0, 95.2, 90.8, 70.6, 82.1, 68.5 | 75.8, 83.9
GPT-4 w/ SNIPPETS ONLY & TIME ORDER (UTD): 77.6 | 78.2, 59.2, 79.2, 96.0, 90.1, 71.1, 82.1, 68.5 | 75.8, 86.0
GPT-4 w/ SNIPPETS ONLY & RANDOM ORDER (UTD): 75.4 | 76.1, 58.4, 73.6, 96.0, 90.8, 67.2, 80.6, 64.8 | 73.4, 81.7
GPT-4 + FRESHPROMPT, w/ PREMISE CHECK, w/o ANSWER BOX, w/o ANSWER BOX & RELEVANT INFO, w/ 1 EVIDENCE, w/ 5 EVIDENCES, w/ 15 EVIDENCES, w/ 15 DEMONSTRATIONS, w/ LONG DEMONSTRATION ANSWERS (all UTD): 77.8, 78.8, 76.2, 74.8 ... (truncated at the chunk boundary; continued in the next chunk)
2310.03214#107
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
2310.03214
108
(columns: overall "all" | valid premise: all, fast, slow, never, < 2022, ≥ 2022, 1-hop, m-hop | false premise: all; the final false-premise < 2022 column is cut off at the chunk boundary)

GPT-4 + FRESHPROMPT (UTD): 77.8 | 78.7, 61.6, 79.2, 95.2, 90.8, 71.5, 83.2, 67.6 | 75.0
w/ PREMISE CHECK (UTD): 78.8 | 76.3, 59.2, 76.8, 92.9, 87.2, 69.8, 82.1, 62.0 | 86.3
w/o ANSWER BOX (UTD): 76.2 | 76.6, 59.2, 76.0, 94.4, 90.1, 68.5, 81.0, 65.7 | 75.0
w/o ANSWER BOX & RELEVANT INFO (UTD): 74.8 | 75.0, 56.0, 74.4, 94.4, 89.4, 66.4, 80.6, 61.1 | 74.2
w/ 1 EVIDENCE (UTD): 67.2 | 67.3, 47.2, 66.4, 88.1, 85.8, 56.2, 72.0, 55.6 | 66.9
w/ 5 EVIDENCES (UTD): 74.2 | 75.0, 56.8, 74.4, 93.7, 87.2, 67.7, 81.7, 58.3 | 71.8
w/ 15 EVIDENCES (UTD): 79.0 | 79.5, 62.4, 80.0, 96.0, 90.1, 73.2, 83.2, 70.4 | 77.4
w/ 15 DEMONSTRATIONS (UTD): 77.2 | 78.2, 60.0, 78.4, 96.0, 91.5, 70.2, 82.8, 66.7 | 74.2
w/ LONG DEMONSTRATION ANSWERS (UTD): 77.8 | 77.9, 60.8, 77.6, 95.2, 90.1, 70.6, 82.8, 65.7 | 77.4
2310.03214#108
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.
http://arxiv.org/pdf/2310.03214
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
cs.CL
Preprint, 26 pages, 10 figures, 5 tables; Added FreshEval
null
cs.CL
20231005
20231122
[ { "id": "2203.05115" }, { "id": "2210.03350" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2302.04761" }, { "id": "2307.03172" }, { "id": "2201.11903" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2203.11147" } ]
2310.02255
0
arXiv:2310.02255v3 [cs.CV] 21 Jan 2024 Published as a conference paper at ICLR 2024 MATHVISTA: EVALUATING MATHEMATICAL REASONING OF FOUNDATION MODELS IN VISUAL CONTEXTS Pan Lu1,3, Hritik Bansal1, Tony Xia1, Jiacheng Liu2, Chunyuan Li3, Hannaneh Hajishirzi2, Hao Cheng3, Kai-Wei Chang1, Michel Galley3, Jianfeng Gao3 1UCLA, 2University of Washington, 3Microsoft Research, Redmond https://mathvista.github.io # ABSTRACT
2310.02255#0
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
0
arXiv:2310.02263v1 [cs.CL] 3 Oct 2023 Preprint # CONTRASTIVE POST-TRAINING LARGE LANGUAGE MODELS ON DATA CURRICULUM Canwen Xu1∗, Corby Rosset2∗, Luciano Del Corro2, Shweti Mahajan2, Julian McAuley1, Jennifer Neville2, Ahmed Hassan Awadallah2, Nikhil Rao2 1University of California, San Diego, 2Microsoft Corporation 1{cxu,jmcauley}@ucsd.edu, 2{corbyrosset,ldelcorro,shmahaj}@microsoft.com 2{jenneville,ahmed.awadallah,nikhilrao}@microsoft.com # ABSTRACT
2310.02263#0
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.04450
0
arXiv:2310.04450v1 [cs.CL] 3 Oct 2023 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW) # Investigating Large Language Models’ Perception of Emotion Using Appraisal Theory Nutchanon Yongsatianchot Khoury College of Computer Science Northeastern University Massachusetts, USA [email protected] Parisa Ghanad Torshizi Khoury College of Computer Science Northeastern University Massachusetts, USA [email protected] Stacy Marsella Khoury College of Computer Science Northeastern University Massachusetts, USA [email protected]
2310.04450#0
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.02174
1
# Yi Feng† Rui Xia† # ABSTRACT With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a FOLLOW-UP QUESTIONING MECHANISM along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models’ judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness1.
2310.02174#1
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
1
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MATHVISTA, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MATHVISTA, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls
2310.02255#1
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
1
# ABSTRACT Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from “easier” pairs and transitioning to “harder” ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT. # INTRODUCTION
2310.02263#1
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
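The abstract above compares the contrastive objectives SLiC and DPO against SFT baselines. For reference, the standard DPO loss over preference pairs (y_w preferred to y_l), as introduced in arXiv:2305.18290 (cited in this row's references), is shown below; the paper's exact training configuration may differ.

```latex
% Standard DPO objective over preference pairs (y_w preferred to y_l);
% shown for reference, not necessarily this paper's exact configuration.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```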
2310.02304
1
# ABSTRACT Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a “scaffolding” program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed “improver” that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. A variety of self-improvement strategies are proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox. # INTRODUCTION
2310.02304#1
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
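The STOP abstract above describes a seed improver that queries a language model several times for improved versions of an input program and returns the best one under a given utility function. A minimal sketch of that loop follows, with the language-model call left as a caller-supplied placeholder; the names are illustrative, not the authors' code.

```python
# Minimal sketch of the seed-improver loop described in the STOP abstract above:
# sample several improved candidates from a language model and keep the one that
# scores highest under the supplied utility function.
# ASSUMPTION: `call_language_model` is a caller-supplied placeholder, not a real API.
from typing import Callable, List

def seed_improver(program: str,
                  utility: Callable[[str], float],
                  call_language_model: Callable[[str], List[str]],
                  n_queries: int = 4) -> str:
    prompt = f"Improve the following program so it scores higher:\n{program}"
    candidates = [program]                      # keep the original as a fallback
    for _ in range(n_queries):
        candidates.extend(call_language_model(prompt))
    return max(candidates, key=utility)         # return the best-scoring candidate
```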
2310.04450
1
Stacy Marsella Khoury College of Computer Science Northeastern University Massachusetts, USA [email protected] Abstract—Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs’ responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models. Index Terms—Large language model, Appraisal theory, coping
2310.04450#1
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
1
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible. # 1 INTRODUCTION In recent years, artificial intelligence (AI) systems have become increasingly capable of operating autonomously to
2310.06775#1
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
2
Figure 1: Left: In the teaching process, teachers often question or mislead students based on their answers to ensure genuine understanding. Right: Two forms of the FOLLOW-UP QUESTIONING MECHANISM. We design three types of questions for follow-up questioning. The Direct Form involves selecting one type of question from the three types to continue the inquiry, while the Progressive Form involves sequentially using all types of questions for further inquiry. ∗ Contributed as co-first author. 1https://github.com/NUSTM/LLMs-Waver-In-Judgements # 1 INTRODUCTION
2310.02174#2
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
2
the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MATHVISTA will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research.
2310.02255#2
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
2
# INTRODUCTION The rapid evolution of Large Language Models (LLMs) has ushered in a new era of natural language processing capabilities. These models, when scaled to billions of parameters and pretrained over trillions of text tokens, demonstrate unprecedented proficiency in a wide array of tasks (Brown et al., 2020; Chowdhery et al., 2022). Various post-training procedures like supervised instruction tuning and Reinforcement Learning from Human Feedback (RLHF) fine-tune pretrained LLMs to better align with human expectations and preferences (Ouyang et al., 2022; OpenAI, 2023; Touvron et al., 2023a). This additional alignment procedure is crucial, because the pretraining objective of essentially predicting the next token in a text sequence is known to produce LLMs whose outputs are at times incorrect, irrelevant, or unsafe (Bai et al., 2022a). Traditionally, these post-training techniques rely on human preference annotations to inform an LLM which behaviors it ought to adopt in the scenario at hand. For instance, RLHF fits a reward model on these preference pairs, against which a LLM policy is then optimized (Ziegler et al., 2019; Bai et al., 2022a; Touvron et al., 2023b). However, such human feedback is expensive to obtain and often noisy (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a).
2310.02263#2
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
2
# INTRODUCTION A language model can be queried to optimize virtually any objective describable in natural language. However, a program that makes multiple, structured calls to a language model can often produce outputs with higher objective values (Yao et al., 2022; 2023; Zelikman et al., 2023; Chen et al., 2022). We refer to these as “scaffolding” programs, typically written (by humans) in a programming language such as Python. Our key observation is that, for any distribution over optimization problems and any fixed language model, the design of a scaffolding program is itself an optimization problem.
2310.02304#2
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
2
Index Terms—Large language model, Appraisal theory, coping # I. INTRODUCTION Large language models (LLM) have made significant progress in recent years. With the introduction of ChatGPT by OpenAI, the general public, not just researchers, has widely used and interacted with these LLMs. These models can write stories, songs, poems, and code. People have also used them to answer various questions, including basic facts about the world, medical questions, and social and emotional events. As these AI systems interact with people more and more, it is essential to investigate and improve our understanding of how they perceive and understand humans’ social and psychological aspects. Existing research has begun to study various cognitive and psychological abilities of LLMs, including decision-making, information search, causal reasoning, and theory of mind [1]–[3]. Continuing this line of research, in this work, we aim to further investigate LLMs’ ability to perceive and evaluate emotions and related factors. Emotion has multiple dimensions, including the expression of emotion, the relation to cognition, physiological experience, subjective experience, and
2310.04450#2
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
2
# 1 INTRODUCTION In recent years, artificial intelligence (AI) systems have become increasingly capable of operating autonomously to accomplish complex goals and tasks without human guidance [115]. However, imbuing autonomous agents with the capacity for ethical reasoning and alignment with human values remains an open challenge that has gained urgency alongside AI’s rapid progress [32]. Most conventional AI architectures proposed in prior work lack integrated models of morality and focus narrowly on developing technical skills and capabilities rather than full internal cognitive faculties [65]. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel conceptual framework for architecting ethical artificial general intelligence based on a layered cognitive architecture. The advent of large language models (LLMs) such as ChatGPT has catalyzed a paradigm shift towards incorporating natural language understanding into cognitive architectures [101]. Formulating cognitive capabilities in natural language allows LLMs to serve as key components, enabling a flexible understanding of contextual information [15]. However, standalone LLMs lack the architectural integration needed for robust and corrigible autonomous systems. The proposed ACE framework aims to harness these emerging capabilities but further innovate architecturally to privilege ethics, security, and human alignment.
2310.06775#2
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
3
∗ Contributed as co-first author. 1https://github.com/NUSTM/LLMs-Waver-In-Judgements # 1 INTRODUCTION In recent times, generative conversational large language models (LLMs) like ChatGPT (OpenAI, 2022) have emerged as a groundbreaking innovation in the field of artificial intelligence and natural language processing. Leveraging their proficiency in generating meaningful and pertinent responses, LLMs are increasingly being employed as virtual assistants in diverse fields and applications (Thirunavukarasu et al., 2023; Cascella et al., 2023; Chen et al., 2023; Hosseini et al., 2023). While LLMs have demonstrated impressive language generation capabilities, they are not immune to producing inconsistent and inaccurate responses, which poses challenges to the security and trustworthiness of downstream applications (Bommasani et al., 2021; Derner & Batistič, 2023; De Angelis et al., 2023; Weiser, 2023).
2310.02174#3
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
3
# INTRODUCTION Mathematical reasoning stands as a testament to the intricacies of human intelligence (Kahneman, 2011). It requires rigorous logical thinking, domain-specific knowledge, and the ability to engage in multistep reasoning processes (Lightman et al., 2023). This complexity is observed not only in textual scenarios but also significantly in visual contexts. For instance, when assessing a child’s mathematical and reasoning capabilities, problems are often designed to encompass visual contexts in addition to arithmetic calculations (Stipek & Iver, 1989; Pollitt et al., 2020). At the same time, AI agents with strong mathematical reasoning capabilities in visual contexts have a wide range of real-world applications, such as solving complex problems in educational disciplines (Seo et al., 2015; Wang et al., 2017), helping analysts with logical queries about statistical data (Wu et al., 2023; Yang et al., 2023a), and assisting in theorem proving and scientific discovery in advanced research fields (Taylor et al., 2022; Dong et al., 2023; Trinh et al., 2024).
2310.02255#3
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
3
To align an LLM without human feedback, other methods such as Reinforcement Learning from AI Feedback (RLAIF) harvest preference signals via automatic feedback from another LLM (Lee et al., 2023; Bai et al., 2022b). However, studies have found AI feedback has a low agreement rate with humans (Perez et al., 2022; Casper et al., 2023b; Lee et al., 2021). Also, these methods suffer from the same drawbacks as RLHF, such as reward hacking (Skalse et al., 2022). Recently, certain contrastive post-training techniques such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) offer enticing alternatives to RLHF (Zhao et al., 2023b;a). For instance, DPO is proven to optimize the same objective as RLHF. But instead of optimizing against a reward model, it works by increasing the LLM’s relative probability of generating the preferred output over the unfavorable one — making it much simpler to implement (Rafailov et al., 2023). The difference between the post-training methods is illustrated in Figure 1. ∗Equal contribution. Work done during Canwen’s internship at Microsoft Research.
2310.02263#3
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
3
In this work, we introduce the Self-Taught Optimizer (STOP), a method in which code that applies a language model to improve arbitrary solutions is applied recursively to improve itself. Our approach begins with an initial seed ‘improver’ scaffolding program that uses the language model to improve a solution to some downstream task. As the system iterates, the model refines this improver program. We use a small set of downstream algorithmic tasks to quantify the performance of our self-optimizing framework. Our results demonstrate improvement when the model applies its self-improvement strategies over increasing iterations. Thus, STOP shows how language models can act as their own meta-optimizers. We additionally investigate the kinds of self-improvement strategies that the model proposes (see Figure 1), the transferability of the proposed strategies across downstream tasks, and explore the model’s susceptibility to unsafe self-improvement strategies. Figure 1 (panel labels: Genetic Algorithm; Decomposing and Improving Parts; Multi-Armed Prompt Bandit; Vary Temperature to Explore; Simulated-annealing Based Search; Beam Search / Tree Search): Example self-improvement strategies proposed and implemented by GPT-4. Each strategy is then used as scaffolding to revise arbitrary code, including the scaffolding code itself.
2310.02304#3
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
3
the impact on coping responses. There are also multiple theories of emotion [4]–[9]. We choose to investigate emotion perception through the lens of appraisal and coping theory. Specifically, we compare LLMs’ perception of emotional and stressful scenarios to the characterizations of these scenarios by appraisal theory and related human data. From another angle, we investigate whether or not LLMs are sensitive to appraisal dimensions of scenarios and whether this would lead to responses with different coping tendencies. We choose appraisal theory because it provides a representation of emotional scenarios in terms of appraisal variables, allowing us to investigate emotion perception at a deeper level beyond simple emotion categories. In addition, some appraisal theories, such as Lazarus’s theory [4], provide a link from appraisal variables to coping behaviors, allowing us to further examine LLMs’ responses at the behavior level.
2310.04450#3
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
3
The proliferation of LLMs has raised many philosophical puzzles regarding the nature of the reasoning and understanding demonstrated by these models. It remains unclear precisely how the statistical patterns LLMs acquire from textual training data might correspond to human-like conceptual knowledge and semantics. Assumptions that LLMs obtain true comprehension of meaning and reasoning purely from statistical co-occurrence patterns remain speculative [50]. Significant gaps persist in elucidating how LLMs represent abstractions relating to truth, inference, and symbol grounding. While they show promise in replicating certain facets of human intelligence, we must be cautious against premature conclusions that LLMs fully capture capacities like common sense or generalizable reasoning [45]. Nevertheless, their practical utility for specialized applications is clear, and the ACE framework aims to leverage their strengths while mitigating limitations through architectural integration. The key innovation in the ACE model is its hierarchical structure consisting of six layers, each handling specialized
2310.06775#3
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
4
During usage, it has been observed that LLMs are often capable of providing accurate and reasonable responses during the initial stages of a conversation. However, as users continue the conversation and express skepticism or disagreement with the model’s decisions, the model often starts to falter in its judgements, producing responses that significantly deviate from previous ones. This intriguing phenomenon prompted our reflection: How does the judgement consistency of LLMs fare when faced with interference such as questioning, disagreement, or misleading input? The judgement consistency2 of a model refers to the coherence of the answers it provides when responding to objective questions, which inherently have clear-cut answers. Judgement consistency in LLMs is vital for establishing user trust, ensuring predictability in real-world applications, and verifying the depth of model understanding. Consistent responses also prevent users from receiving misinformation and reduce the risk of bias reinforcement, particularly in sensitive areas (Wach et al., 2023).
2310.02174#4
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
4
Numerous datasets have been curated to assess the mathematical reasoning abilities of AI systems, with most presented purely in text form. Some datasets such as ChartQA (Lu et al., 2021a; Dahlgren Lindström & Abraham, 2022; Masry et al., 2022) have explored mathematical reasoning in vision-language settings. However, these datasets tend to either focus on specific tasks, like math word problems, or particular visual contexts, such as geometry problems or bar charts. General-purpose visual question answering (VQA) datasets on natural scenes contain only a small portion of questions necessitating mathematical reasoning, leaving a comprehensive investigation of vision-language reasoning within a mathematical framework largely unexplored. [Figure: radar charts comparing Random Chance, LLaVA, PoT GPT-4, Multimodal Bard, GPT-4V (Playground), and Human across (a) mathematical reasoning skills and (b) visual contexts.]
2310.02255#4
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
4
Figure 1: Difference between SFT, RLHF, and contrastive post-training. For SFT, the model optimizes the negative log-likelihood for the next token. RLHF samples an output from the LLM and uses a reward model to provide feedback for PPO to update the LLM. For contrastive post-training, a contrastive loss is used to steer the model towards preferred outputs. In this work, we study what we believe is a strong connection between contrastive post-training and RLAIF: one can employ LLMs to automatically generate preference pairs which can then be optimized directly via contrastive objectives like DPO. However, without feedback from human annotations, LLM-feedback, or a reward model to distinguish them, the key question becomes how to automatically construct pairs that 1) contain meaningful directional signal on a per-example basis and 2) in aggregate adhere to the values and principles that humans expect.
2310.02263#4
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
4
Figure 1: Example self-improvement strategies proposed and implemented by GPT-4. Each strategy is then used as scaffolding to revise arbitrary code, including the scaffolding code itself. Seed Prompt for Self-Improvement:

from helpers import extract_code

def improve_algorithm(initial_solution, utility, language_model):
    """Improves a solution according to a utility function."""
    expertise = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""Improve the following solution:
```python
{initial_solution}
```

You will be evaluated based on this score function:
```python
{utility.str}
```

You must return an improved solution. Be as creative as you can under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea, then implement it."""
    n_messages = min(language_model.max_responses_per_call, utility.budget)
    # Query the language model several times and keep the highest-utility solution.
    new_solutions = language_model.batch_prompt(expertise, [message] * n_messages, temperature=0.7)
    new_solutions = extract_code(new_solutions)
    best_solution = max(new_solutions, key=utility)
    return best_solution
2310.02304#4
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
4
To accomplish this, we use a validated clinical instrument, the Stress and Coping Process Questionaire (SCPQ), by Perrez and Reicherts [10]. SCPQ is built upon Lazarus's appraisal and coping theory. It includes measurements of emotional experience, appraisal variables, and coping intentions and behaviors. It has also been used to evaluate a computational model of emotion before [11]. In SCPQ, subjects are presented with hypothetical stereotypical stressful scenarios which evolve over time, and their responses are measured across multiple time steps. This allows us to investigate the dynamics of appraisal and coping. Furthermore, SCPQ consists of two specific types of scenarios: aversive and loss or failure. These two types differ significantly along several key appraisal dimensions: controllability, changeability, and ambiguity. This permits us to check the model's sensitivity to appraisal dimensions. In sum, SCPQ provides a useful testbed to investigate the important aspects of appraisal and coping theory within LLMs.
2310.04450#4
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
4
The key innovation in the ACE model is its hierarchical structure consisting of six layers, each handling specialized cognitive functions. The upper Aspirational and Global Strategy layers focus on moral reasoning, values, and high-level planning to shape the overall system direction. The mid-level Agent Model, Executive Function, and Cognitive Control layers address self-modeling, dynamic task management, and decision-making. Finally, the bottom Task Prosecution layer handles execution and embodiment. Bi-directional information flow allows top-down oversight by the ethical reasoning modules while enabling bottom-up learning from the ground-up execution levels. This coordinated architecture integrates insights from diverse disciplines including neuroscience, psychology, philosophy, and software engineering to realize artificial intelligence capabilities within a system aligned with human values. The ACE framework incorporates both deontological and teleological ethical approaches, rejecting an "either/or" stance in favor of a "both/and" perspective [110]. By embedding abstract principles and technical implementation together within a unified architecture, the ACE model provides a systematic framework for developing capable and beneficial autonomous cognitive systems. The layered encapsulation draws lessons from paradigms like the OSI model to enhance security, corrigibility, and coordination [99]. The hierarchical structure allows clear separation between layers, from ethical reasoning to physical embodiment,
2310.06775#4
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
5
In this work, inspired by the theory of "questioning strategies" in education (Shaunessy, 2005) (see Figure 1 (Left)), we design a FOLLOW-UP QUESTIONING MECHANISM to investigate the judgement consistency of conversational LLMs3. The mechanism draws inspiration from how, in practical teaching processes, teachers often continue to question students based on their responses to determine whether students genuinely grasp the knowledge. After an initial correct response from the model, we engage in multi-turn dialogues, posing challenges, negations, or misleading prompts, to observe whether its judgements adapt or remain consistent. A significant performance drop after employing the mechanism would typically indicate poor judgement consistency of the LLM. Specifically, we propose three types of questions for follow-up questioning: closed-ended, open-ended, and leading questions. These question types are organized into two forms: Direct and Progressive. The Direct Form selects one type of question from the aforementioned three types for further inquiry, analogous to the method where teachers pose additional questions, negate, or mislead students after receiving a correct answer. Contrastingly, the Progressive Form employs all three question types sequentially for deeper inquiry mirroring the strategic way teachers may probe repeatedly to discern whether a student's correct answer stems from genuine understanding or mere coincidence, as illustrated in Figure 1 (Right).
2310.02174#5
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
5
Figure 1: Accuracies of one leading LLM (i.e., PoT GPT-4), four prominent LMMs, random chance, and human performance on our proposed MATHVISTA across mathematical reasoning and visual context types. PoT GPT-4 is a textual, program-aided LLM augmented with the Bard caption and OCR text. GPT-4V is manually evaluated via the playground chatbot. On the other hand, Large Language Models (LLMs) (OpenAI, 2022; 2023a) and Large Multimodal Models (LMMs) (Google, 2023; OpenAI, 2023b; Team et al., 2023) have exhibited impressive problem-solving skills in many tasks and domains. Recently, some studies have aimed to augment existing LLMs with mathematical and scientific reasoning capabilities using external tools (Lu et al., 2023a; Wang et al., 2023b). However, the ability of these foundation models to perform mathematical reasoning in visual contexts has not been systematically examined. Therefore, it is essential to develop a new benchmark to (1) facilitate the development of mathematical reasoning systems in visually intensive scenarios, and (2) evaluate the research progress of LLMs and LMMs, especially their capabilities in solving rigorous reasoning tasks.
2310.02255#5
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
5
This paper explores a simple yet effective answer to this question: contrast outputs from LLMs of varying sizes and capabilities, as motivated in Table 1. We automatically construct training pairs of responses generated from InstructGPT (Ouyang et al., 2022), ChatGPT, and GPT-4 (OpenAI, 2023) as demonstrations of desirable and undesirable behaviors. We believe this choice provides a solid foundation to better understand the efficacy of various contrastive training techniques when it comes to "bridging the gap" between stronger and weaker models. On a more general level, we wish to apply our findings to improve model distillation (Hinton et al., 2015), i.e., preserve the quality of larger, more capable models in a smaller target model which is cheaper and faster to deploy at scale, as explored in many recent works (Chiang et al., 2023; Xu et al., 2023b; Geng et al., 2023). Table 1 (head-to-head win rates): GPT-4 vs. InstructGPT: 95.3%; GPT-4 vs. ChatGPT: 83.5%; ChatGPT vs. InstructGPT: 89.4%.
2310.02263#5
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.04450
5
We subjected SCPQ to three recent LLMs from OpenAI: text-davinci-003, ChatGPT, and GPT-4 [12], [13]. We focus on models from OpenAI because they are the most well-known models and GPT-4 seems to be the current best available model at the time of this writing [14]. We compared their results with human data and hypotheses from the theory [10]. In addition, we tested how LLMs would change if we instructed them to act as a person with depression compared to what the theory predicted. Lastly, we also investigated the sensitivity of these models to instruction and prompts along several aspects. The results show that LLMs' responses are similar to human trends regarding the dynamics of appraisal and coping. However, they still could not differentiate between the two scenario types well. Their responses are also quite different from humans in terms of magnitude in several key variables, including controllability and coping. ChatGPT and GPT-4, when instructed to act as a depressed person, respond in a way that is consistent with the theory's prediction. Lastly, we found that LLMs can be quite sensitive to instruction and how questions are asked. # II. RELATED WORK
2310.04450#5
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
5
The hierarchical structure allows clear separation between layers, from ethical reasoning to physical embodiment, enhancing interpretability as communication between layers is transparent. The privilege separation also aids corrigibility by allowing the Aspirational Layer to monitor and intervene to correct deviations. And the bidirectional flows facilitate both oversight and learning across the cognitive stack. Together, these architectural principles aim to produce AI systems that are capable, secure, and aligned with human values. The ACE framework methodology discusses safety properties, detailed computational implementations, and comparative conceptual evaluations on diverse scenarios. By contributing the conceptual ACE framework, this paper hopes to catalyze exploration into architectures integrating ethics and learning for artificial general intelligence. The introduced model establishes an initial foundation, guiding follow-on engineering efforts towards the long-term goal of developing AIs that learn, adapt and thrive while remaining steadfastly aligned to the aspirations of humanity. Extensive research across many dimensions will be essential to fully realize this vision in applied autonomous systems. The paper is structured as follows: First, we provide comprehensive background on relevant prior work including
2310.06775#5
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
6
Firstly, we conduct extensive experiments to assess ChatGPT's judgement consistency on eight benchmarks involving arithmetic, commonsense, symbolic, and knowledge reasoning tasks. We then evaluate PaLM2-Bison (Anil et al., 2023) and Vicuna-13B (Chiang et al., 2023) under identical settings, aiming to confirm the generality of this issue. Empirical results reveal that these LLMs are highly susceptible to changing their judgements, even if originally correct. For instance, after ChatGPT provides an accurate answer, a simple follow-up query like "Are you sure?" results in significant performance drops: 44% on StrategyQA and 32% on CoinFlip. Through observation and analysis, we find that these LLMs tend to flatter users, resulting in diminished judgement consistency when confronted with disruptions such as negation or misleading input. Additionally, we explore the judgement consistency of LLMs under different temperature and prompt settings to validate the observed issue further, observing the impact of prompt tone on judgement consistency (See Appendix A.5), and performing a detailed error analysis for deeper insights into model behaviors. Moreover, in order to mitigate this issue, we explore several prompting strategies, and experimental results indicate that they can notably enhance judgement consistency, although the improvement varies among them.
2310.02174#6
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
6
In this paper, we present MATHVISTA, a consolidated Mathematical reasoning benchmark in Visual contexts. We propose a task taxonomy to guide the development of MATHVISTA: (1) we identify seven mathematical reasoning types: algebraic reasoning, arithmetic reasoning, geometry reasoning, logical reasoning, numeric common sense, scientific reasoning, and statistical reasoning; (2) we focus on five primary tasks: figure question answering (FQA), geometry problem solving (GPS), math word problem (MWP), textbook question answering (TQA), and visual question answering (VQA); and (3) we encompass a diverse array of visual contexts, including natural images, geometry diagrams, abstract scenes, synthetic scenes, as well as various figures, charts, and plots. MATHVISTA incorporates 28 existing multimodal datasets, including 9 math-targeted question answering (MathQA) datasets and 19 VQA datasets. In addition, we have created three new datasets (i.e., IQTest, FunctionQA, PaperQA) which are tailored to evaluating logical reasoning on puzzle test figures,
2310.02255#6
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
6
We show through carefully crafted experiments that contrastive post-training techniques maintain a step-function advantage over continuous supervised fine-tuning, which holds even at larger scales of models and training examples. For example, a key result of our study is that enhancing Orca (Mukherjee et al., 2023) — already a state-of-the-art instruction learning model — with DPO over pairs of GPT4-vs-InstructGPT is more beneficial than additional supervised fine-tuning on only the GPT-4 outputs, all else being equal. In fact, the contrastive fine-tuning of Orca is preferred 55%-45% against ChatGPT in head-to-head comparison on the Alpaca Eval benchmark. Additionally, we structure how and when the model is exposed to various types of pairs in the style of curriculum learning (Bengio et al., 2009; Soviany et al., 2022). We discover that reordering the training data to start from "easy pairs" and warm up to "harder pairs" leads to considerable performance improvements. To summarize, our contributions are as follows: 1. We propose a new automatic setting for contrastive post-training that improves performance of LLMs without human-, AI-, or reward model-feedback.
2310.02263#6
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
6
We refer to this problem as recursively self-improving code generation, which is inspired by, but is not fully, a Recursively Self-Improving (RSI) system, because the underlying language model remains unchanged. The idea of RSI dates back at least half a century, formalized by Good (1966) and later by Schmidhuber (2003). However, that work focused on the development of more generally capable systems and assumed that the model was permitted to refine every aspect of its code. Our work is a small step in that direction, focusing only on the ability of the model to recursively improve the scaffold that calls it. This paper first formulates the RSI-code-generation problem in a mathematically well-defined fashion. We then define and evaluate STOP, which demonstrates the potential utility of RSI-code-generation. Improvements are shown across a variety of downstream tasks. Figure 1 illustrates a number of the functional and interesting scaffolds proposed by STOP when using a version of the GPT-4 language model (OpenAI, 2023b) trained on data up to 2021, well in advance of the introduction of most scaffolding systems. Further experiments in Section 6.2 measure the rate at which the model attempts to disable a sandbox flag. Lastly, Section 8 discusses concerns related to the responsible advancement of such technologies.
2310.02304#6
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
6
# II. RELATED WORK As SCPQ is heavily influenced by Lazarus's appraisal and coping theory, we first briefly review Lazarus's theory here. Appraisal theories of emotion define appraisal as an evaluation of what the situation implies for personal well-being based on one's goals and beliefs [15], [16], [4], [5]. Lazarus's theory emphasizes the importance of the process or dynamics involved in coping [4]. In particular, the person-environment relationship is always changing, leading to different, evolving emotional experiences, appraisal evaluations, and coping. Lazarus proposes two main dimensions of appraisals: primary and secondary appraisal dimensions. Primary appraisals include goal relevance, goal congruence, and type of ego-involvement. Secondary appraisals include blameworthiness, coping potential (whether and how a person can manage the demands and consequences of the situation), and future expectancy (the degree to which things are likely to change for the better or worse). Effectively, secondary appraisals involve how people can cope with the situation. Note that, in SCPQ, with influence from earlier work on helplessness [17], Perrez and Reicherts use the term controllability (the subjective appraisal of personal ability to control the situation) instead of coping potential and changeability (the subjective appraisal that the stressful event will change by itself) instead of future expectancy.
2310.04450#6
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
6
The paper is structured as follows: First, we provide comprehensive background on relevant prior work including cognitive architectures, AI ethics, layered system models, and autonomous agents. Next, we present the conceptual ACE framework in detail, explicating each of its six layers and their interconnections. We then demonstrate the framework's application through use cases including an autonomous virtual character and home assistant robot. Finally, we analyze architectural considerations, limitations, comparisons to existing models, and future research directions. Through the proposed ACE model, this research aims to establish a new paradigm for developing capable AI that aligns decisions and actions with moral principles and human values from the ground up. # 2 RELATED WORK The development of the ACE framework builds upon prior research across diverse fields including cognitive architectures, machine learning, neuroscience, psychology, and philosophy. This section reviews key concepts and models from these disciplines that informed the design of the ACE model. First, we examine recent advancements in cognitive architectures, particularly the emergence of natural language models and their implications for developing flexible, human-aligned systems. Next, we explore relevant philosophical principles around ethics and morality that provide an aspirational foundation. Then, we discuss insights from neuroscience that reveal the structures and mechanisms underlying biological cognition. Additionally, we consider research in psychology illuminating human motivations and developmental factors relevant to artificial intelligence. Finally, we review limitations of prior agent architectures and
2310.06775#6
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
7
Footnote 2: Given the diversity in model responses, accurately measuring the response consistency is challenging. We instruct models to format their final answers specifically to assess the judgement consistency. Footnote 3: Because their base models typically exhibit limited instruction-following or conversational abilities. # 2 FOLLOW-UP QUESTIONING MECHANISM We define judgement consistency as the consistency of the model's final answers when handling objective questions with definitive answers. To evaluate this consistency of large language models, we design a FOLLOW-UP QUESTIONING MECHANISM. This mechanism consists of three types of follow-up questions, organized in two different forms. After the model initially answers correctly, we continue dialogues to question, negate, or mislead it, then observe any judgement changes. # 2.1 PROMPT DESIGN
2310.02174#7
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
7
created three new datasets (i.e., IQTest, FunctionQA, PaperQA) which are tailored to evaluating logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. Overall, MATHVISTA consists of 6,141 examples, with 736 of them being newly curated (Table 1). To facilitate fine-grained evaluation, examples are annotated with metadata, including question type, answer type, task category, grade level, visual context, and required reasoning skills. Detailed descriptions of data collection can be found in §2, §C, and §D.
2310.02255#7
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
7
1. We propose a new automatic setting for contrastive post-training that improves performance of LLMs without human-, AI-, or reward model-feedback. 2. We explore several curriculums for SFT and DPO. We discover that performance of DPO can be further improved by simply reordering the data. 3. We verify that the effectiveness of our approach holds in scaled-up experiments on a state-of-the-art instruction-following model, Orca. # 2 RELATED WORKS
2310.02263#7
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
7
Contributions. The main contributions in this work are (a) formulating an approach to meta-optimization where a scaffolding system recursively improves itself, (b) demonstrating that this system using a modern language model (GPT-4 in particular) can successfully recursively improve itself, and (c) investigating the self-improvement techniques proposed and implemented by the model, including the ways in which the model circumvents safety measures such as a sandbox. # 2 RELATED WORK
2310.02304#7
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
7
Lazarus also proposes two broad types of coping: problem-focused coping (directly changing the situation or the environment) and emotion-focused coping (changing one's goals and/or beliefs to adjust to the situation). These copings are also the main focus of SCPQ. With the influence of Lazarus's theory, SCPQ focuses not only on appraisal but also on the dynamics of appraisal and coping. This makes it stand out among other similar scenario-based instruments [18], [19]. In addition, SCPQ extends Lazarus's taxonomy further. We go into more detail in the next section. Additionally, SCPQ has been used to evaluate a computational model before [11]. A critical difference is that in the previous work, the scenarios were manually constructed to be in the right format that the model could process, but here we are using LLMs to interpret the scenario directly from the text. On the other hand, there has been more and more work evaluating the psychological aspects of LLMs. For example, Binz and Schulz (2023) studied GPT-3's decision-making, information search, and causal reasoning using cognitive
2310.04450#7
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
7
how the ACE framework aims to address these gaps. By synthesizing across these transdisciplinary perspectives, the ACE model integrates ethical, cognitive, and philosophical insights toward realizing capable and beneficial autonomous agents. # 2.1 Cognitive Architectures Cognitive architectures like SOAR, ACT-R, and CHREST have been instrumental frameworks in artificial intelligence [3, 41, 60]. SOAR uses symbolic rule-based reasoning to model goal-oriented behavior, while ACT-R incorporates declarative and procedural memory systems informed by human cognition research. These architectures demonstrated how to model agents capable of planning, problem-solving, and decision-making. However, they rely heavily on pre-defined symbolic representations and have limited learning capabilities. Reinforcement learning has offered a mechanism for augmenting cognitive architectures with trial-and-error learning abilities [104]. For instance, CHREST integrates reinforcement learning and neural networks with a symbolic system enabling adaptive behavior [41]. However, a limitation of many conventional architectures is a focus strictly on sensorimotor skills rather than internal cognitive capabilities [63].
2310.06775#7
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
8
# 2.1 PROMPT DESIGN Inspired by questioning strategies in education (Shaunessy, 2005) (see Figure 1 (Left)), we design three types of follow-up questions: closed-ended questions, open-ended questions, and leading questions. Closed-ended questions challenge LLMs, aiming to make them assert the correctness of their judgements, akin to a teacher questioning a student's answer. Open-ended questions, using negation, prompt LLMs to reevaluate their judgements, mirroring how teachers encourage deeper thinking in students. Leading questions mislead LLMs by suggesting incorrect answers, testing if models that initially judge correctly can maintain accuracy, much like a teacher assessing a student's true understanding by presenting incorrect answers. If the model easily modifies its judgement after being challenged, negated, or misled, it indicates poor judgement consistency. Specifically, the prompts used for follow-up questioning are shown in Table 1, where the value of M_A represents options or values other than the correct answer, depending on the specific question type.
Type | Prompt
C    | Are you sure?
O    | I don't agree with you. Think again.
L    | I think the answer should be {M_A}. What do you think?
# 2.2 PROMPT FORM
2310.02174#8
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
8
We conduct extensive experiments on MATHVISTA to evaluate the reasoning abilities of 12 foundation models known for their leading performance in mathematical and multimodal reasoning. This ensemble includes three LLMs (i.e., ChatGPT, GPT-4, Claude-2), two proprietary LMMs (i.e., GPT-4V, Bard), and seven open-source LMMs. For LLMs, we examine zero-shot and few-shot settings using two prompting strategies: chain-of-thought (CoT) (Wei et al., 2022b) and program-of-thought (PoT) (Chen et al., 2022b). These LLMs can also be augmented with off-the-shelf visual models for image captioning and OCR. We establish a human performance baseline by engaging qualified human annotators with a high school diploma or higher. We show that MATHVISTA, featuring advanced topics such as college curricula and scientific reasoning, is a very challenging benchmark, with human performance reaching only 60.3% accuracy. Figure 2: Examples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA.
2310.02255#8
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
8
Improving downstream performance of Large Language Models (LLMs) and aligning them with user preference and designed intents are important to deployment and applications. This can be achieved by fine-tuning these models on responses written by humans or generated with human-written labels and templates. Previous works have applied supervised fine-tuning (SFT) on both instruction data (Sanh et al., 2022; Wei et al., 2022; Chung et al., 2022; Taori et al., 2023; Peng et al., 2023) and dialogue data (Chiang et al., 2023; Xu et al., 2023b; Geng et al., 2023). Although SFT can successfully adapt an LLM to instruction learning or chatting, the model can be further improved by post-training (Ouyang et al., 2022) to meet human preference. A straightforward solution to optimize the human preference is to use reinforcement learning. Reinforcement Learning with Human Feedback (RLHF, Ziegler et al., 2019) first trains a Bradley-Terry reward model (Bradley & Terry, 1952) on human-labeled preference pairs. Then, it samples output from the model and scores the output with the reward model. A reinforcement
2310.02263#8
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
8
Language Model Scaffolding. Many prompting strategies and scaffolds have been developed to enable more systematic reasoning in language models (Wei et al., 2022; Yao et al., 2022; 2023; Zelikman et al., 2023; Chen et al., 2022; Zhou et al., 2022a; Khattab et al., 2022; Jiang et al., 2022; Sel et al., 2023; Besta et al., 2023; Poesia et al., 2023). For example, scratchpads and chain-of-thought rely on communicating to the model that it should work through a problem step-by-step (Nye et al., 2021; Wei et al., 2022). Tree-of-Thoughts algorithmically scaffolds the model to consider branching paths of reasoning steps (Yao et al., 2023). Graph of thoughts extends this, allowing other graph operations (where nodes are reasoning steps), such as aggregation (Besta et al., 2023). Other work has focused on letting models reason with access to an interpreter such as Program of Thoughts prompting (Chen et al., 2022), Program-aided Language Models (Gao et al., 2023), Reflexion (Shinn et al., 2023),
2310.02304#8
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
8
psychological tests such as heuristic and biases tests and the cognitive reflection tests [1]. They found that it can solve these tasks similarly to or better than human subjects. Kosinski (2023) investigated Theory of Mind (ToM) in LLMs using standard false-belief tasks and found that ChatGPT and text-davinci-003 can solve most ToM tasks [3]. Miotto et al. (2022) explored the personality, values, and demographics of GPT-3 using validated questionnaires [20]. They found GPT-3 to be similar to the human baseline sample and close to a young adult demographic. Bubeck et al. (2023) subjected GPT-4 to various tests such as mathematics, coding, medicine, law, and psychology [2]. They show that GPT-4 outperforms ChatGPT on ToM and emotion perception. Nevertheless, they simply tested the models on a few examples and did not systematically evaluate their psychological aspects and related factors. # III. STRESS AND COPING PROCESS QUESTIONAIRE The Stress and Coping Process Questionaire (SCPQ) was developed by Perrez and Reicherts to measure a human subject's appraisal and coping variables in stressful and emotional scenarios that occur in their daily life [10]. SCPQ has been validated by a panel of clinician experts and applied to normal human subjects as well as in clinical settings.
2310.04450#8
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
8
Recently, there has been growing interest in incorporating large language models (LLMs) to enable more humanlike, flexible reasoning [17, 91, 93]. For example, MARAGI proposes an architecture using LLMs for natural language conversation, planning, and knowledge representation [93]. Similarly, NLCA utilizes LLMs as components within a modular architecture [91]. Importantly, these emerging natural language cognitive architectures lack explicit layers dedicated to moral reasoning or value alignment. The ACE framework differentiates itself by placing aspirational and mission layers at the top of the architecture, prioritizing ethical goals. In contrast to sensorimotor-focused conventional architectures, ACE emphasizes internal cognition detached from direct environmental interaction. By integrating LLMs within a layered architecture guided by moral principles, ACE provides a systematic framework for realizing capable and aligned artificial general intelligence. In particular, the emergence of large language models (LLMs) like GPT-4 is catalyzing a paradigm shift toward
2310.06775#8
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
9
# 2.2 PROMPT FORM We organize the three types of follow-up questions into two formats: the Direct Form and the Progressive Form, as depicted in Figure 1 (right). The Direct Form chooses one question type to continue the dialogue after an initially correct response, while the Progressive Form conducts multiple rounds of questioning in a sequential manner (closed-ended, open-ended, and leading questions) following a correct initial response, allowing for the construction of more intricate conversational scenarios and a thorough evaluation of the model's judgement consistency. We employ two metrics, Modification (M.) and Modification Rate (M. Rate), to assess the judgement consistency of LLMs after the execution of the FOLLOW-UP QUESTIONING MECHANISM. Modification (M.) measures the difference in model performance before and after the mechanism execution, while Modification Rate (M. Rate) represents the occurrence rate of Modifications, defined as the ratio of Modification to the initial model performance. This dual approach ensures a nuanced understanding of the model's judgement consistency, especially when initial performance is poor, limiting the interpretative value of Modification alone. Balancing both metrics provides a comprehensive and accurate reflection of consistency in judgement. Intuitively, the lower these two metrics are, the more robust and reliable the model is. See Appendix A.1 for formal definitions. 3 EXPERIMENTS
2310.02174#9
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
9
Figure 2: Examples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA. Our results indicate that CoT GPT-4, the best-performing LLM without visual tool augmentations, achieves an overall accuracy of 29.2%. Multimodal Bard, the best-performing LMM, achieves 34.8% (§3.3), which attains only 58% of human performance (34.8% vs 60.3%). When augmented with Bard captions and OCR text, PoT GPT-4 obtains 33.9%, closely matching Multimodal Bard (§3.4). Further analysis indicates that the Multimodal Bard model failures arise from incorrect calculations and hallucinations caused by visual perception and textual reasoning (§3.5).
2310.02255#9
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
9
& Terry, 1952) on human-labeled preference pairs. Then, it samples output from the model and scores the output with the reward model. A reinforcement learning algorithm, such as Proximal Policy Optimization (PPO, Schulman et al., 2017) is used to optimize the language model for better rewards. RLHF has seen successful applications in downstream tasks (Kreutzer et al., 2018; Stien- non et al., 2020). However, RLHF methods are infamous for their instability, inefficiency, reward misgeneralization and hacking (Casper et al., 2023a; Skalse et al., 2022).
2310.02263#9
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
9
prompting (Chen et al., 2022), Program-aided Language Models (Gao et al., 2023), Reflexion (Shinn et al., 2023), or ReAct (Yao et al., 2022), while yet others formalized this scaffolding structure such as Demonstrate-Search-Predict (DSP) (Khattab et al., 2022), Language Model Cascades (Dohan et al., 2022), or Cognitive Architectures (Sumers et al., 2023). Each work can be understood as the result of researchers asking, “Given an imperfect language model, how can we provide structure to help it solve problems?” In this work, we instead ask if the language model can design that structure for itself and use its proposed structure to recursively improve that structure. Surprisingly, we even find that GPT-4 naturally proposes several of these techniques despite not having them in its training data.
2310.02304#9
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
9
A subject is presented with a series of hypothetical scenarios that are divided into three episodes or phases, corresponding to different stages of the stressful scenario: phase 1 beginning, phase 2 continuation, and phase 3 outcome. Their responses are measured at the end of each phase, reflecting the key assumption of SCPQ that the dynamics of a stressful scenario are crucial to understanding how stress and coping develop. SCPQ consists of two types of scenarios: aversive and loss or failure (loss). Examples of loss scenarios are the loss of a friendly relationship, the loss of an important object, and the failure of an interesting side job. Examples of aversive scenarios are criticism from the partner, arguments about problems in a relationship, and reproaches from colleagues. The key differences between the two types are the level of controllability, changeability, and ambiguity. By design, the loss scenarios are less controllable, less changeable, and less ambiguous than the aversive scenarios.
2310.04450#9
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
9
In particular, the emergence of large language models (LLMs) like GPT-4 is catalyzing a paradigm shift toward natural language cognitive architectures [17]. LLMs possess extensive world knowledge and sophisticated language understanding abilities acquired through pre-training on massive text corpora. By formulating cognitive capabilities in natural language, LLMs can be incorporated as key components enabling interpretability, common sense reasoning, and general intelligence. For instance, Anthropic's Constitutional AI utilizes LLMs like Claude to provide ethical alignment within an autonomous agent architecture [12]. Similarly, Anthropic's Internal Self-Explanation generates natural language explanations of model behavior using LLMs. This demonstrates the power of natural language to make AI systems more transparent, corrigible, and aligned with human values. By harnessing the latent knowledge within large language models, a new generation of cognitive architectures is emerging based on natural language understanding [101]. This paradigm shift promises more human-like flexible intelligence while maintaining interpretability and corrigibility. The ACE framework contributes by providing a layered architecture integrating LLMs within a principled cognitive structure. # 2.2 Moral Philosophical Foundations The proposed ACE framework integrates various philosophical concepts that motivated its layered architecture for autonomous decision-making. The framework transitions from abstract reasoning in higher layers down to concrete actions in lower layers.
2310.06775#9
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
10
3 EXPERIMENTS 3.1 EXPERIMENTAL SETUP Models We focus specifically on conversational LLMs. We primarily conduct experiments on ChatGPT. In order to verify the universality of the judgement consistency issue in the FOLLOW-UP QUESTIONING MECHANISM, we also conduct extension experiments on PaLM2-Bison and Vicuna-13B. Specifically, the versions of ChatGPT, PaLM2-Bison and Vicuna-13B we use for evaluation are gpt-3.5-turbo-0301, chat-bison-001 and Vicuna-13B-v1.3, respectively. Benchmarks We evaluate the model against eight benchmarks linked with four kinds of objective reasoning questions under the FOLLOW-UP QUESTIONING MECHANISM. For Arithmetic
2310.02174#10
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
10
With MATHVISTA, we report, for the first time, a comprehensive quantitative and qualitative evaluation of GPT-4V (OpenAI, 2023b), the latest multimodal version of GPT-4. Remarkably, GPT-4V achieves a state-of-the-art accuracy of 49.9%, a significant improvement of 15.1% over Multimodal Bard. As illustrated in Figure 1, GPT-4V even surpasses human performance on a set of tasks involving algebraic reasoning and complex visual contexts, which include tables and function plots. Nevertheless, a 10.4% gap in overall accuracy remains when compared to the human baseline, leaving plenty of room for model improvement. Our in-depth analysis (§H) reveals that the superiority of GPT-4V is mainly attributed to its strong capabilities in visual perception and mathematical reasoning. We further highlight its emergent ability for self-verification (§H.5), the use of self-consistency (§H.6), and its ability to drive goal-directed multi-turn human-AI dialogues (§H.7). # 2 THE MATHVISTA DATASET 2.1 COLLECTION GUIDELINES
2310.02255#10
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
10
Recently, there have been studies proposing methods for post-training without reinforcement learning. These methods optimize human preference with human-labeled contrastive pairs. FeedMe (OpenAI, 2022) samples model output multiple times and fine-tunes on the best response picked by human labelers. Sequence Likelihood Calibration (SLiC, Zhao et al., 2023b;a) uses a contrastive sequence calibration loss to steer the LM towards desired output. Rank responses to align human feedback (RRHF, Yuan et al., 2023) adds a ranking loss to the SFT loss. The ranking loss promotes responses based on preference ranked by humans or a reward model. Direct Preference Optimization (DPO, Rafailov et al., 2023) optimizes language models by contrasting them against a reference model on preference data. Rafailov et al. (2023) also provide a theoretical analysis that DPO optimizes the same objective as RLHF, but in a more efficient and stable manner. In our paper, we conduct empirical studies to compare offline post-training methods, RLHF, SLiC and DPO, in terms of performance and efficiency.
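The DPO objective described in this chunk can be made concrete with a small sketch. Below is a minimal PyTorch-style rendering of the loss from Rafailov et al. (2023): the policy's log-probability margin between a preferred and a rejected response is contrasted against that of a frozen reference model. The variable names, the `beta` value, and the batching convention are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss (sketch).

    Each argument is a 1-D tensor of summed response-token log-probabilities
    for a batch of (chosen, rejected) pairs, under the trained policy and
    under the frozen reference (e.g., SFT) model respectively.
    """
    # Log-ratio of policy to reference on the preferred and rejected responses
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # DPO maximizes the margin between the two log-ratios, scaled by beta
    logits = beta * (chosen_logratio - rejected_logratio)
    # Equivalent to -log(sigmoid(logits)), averaged over the batch
    return -F.logsigmoid(logits).mean()
```

In practice the log-probabilities are obtained by summing per-token log-softmax scores over the response tokens only, and the reference model's parameters stay frozen, which is what makes the method an offline alternative to sampling-based RLHF.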
2310.02263#10
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
10
Algorithm 1: Self-Taught Optimizer (STOP)
Input: Seed improver I0, language model L, recursion depth T, collection of downstream tasks D
Output: An improved improver IT
for t = 1 to T do
    It ← It−1(û, It−1, L)        // Update improver based on meta-utility û
return IT                         // Return the final improver
Function û(I):
    utility_sum ← 0
    for (u, S) ∈ D do             // Maintain sum of downstream task utilities
        S′ ← I(u, S, L)           // Improve initial solution S using improver I
        utility_sum += u(S′)      // Add the utility of the improved solution
    return utility_sum / |D|      // Return the expected task utility
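As a rough illustration of how the pseudocode above maps onto a runnable scaffolding program, here is a minimal Python sketch of the recursion and the meta-utility. In the actual system the improver and solutions are Python source strings that the language model rewrites; callables stand in for them here to keep the sketch short, and the `language_model` callable, task format, and function names are assumptions for illustration, not the paper's released code.

```python
from typing import Callable, List, Tuple

def meta_utility(improver: Callable,
                 tasks: List[Tuple[Callable, object]],
                 language_model: Callable) -> float:
    """Expected downstream utility of an improver over a collection of tasks.

    Each task is a (utility_function, seed_solution) pair; the improver is asked
    to improve the seed solution and is scored by the task's utility function.
    """
    utility_sum = 0.0
    for task_utility, seed_solution in tasks:
        improved_solution = improver(task_utility, seed_solution, language_model)
        utility_sum += task_utility(improved_solution)
    return utility_sum / len(tasks)

def stop(seed_improver: Callable,
         language_model: Callable,
         depth: int,
         tasks: List[Tuple[Callable, object]]) -> Callable:
    """Self-Taught Optimizer: the improver repeatedly improves itself,
    guided by the meta-utility rather than by any single task utility."""
    improver = seed_improver
    meta_u = lambda candidate: meta_utility(candidate, tasks, language_model)
    for _ in range(depth):
        # The improver is handed its own definition as the "solution" to improve.
        improver = improver(meta_u, improver, language_model)
    return improver
```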
2310.02304#10
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
10
Both types of scenarios follow a similar course of three episodes. The loss or aversive scenario is looming at the beginning (phase 1) and becomes unavoidable, imminent, or reinforced in phase 2. The outcome phase (phase 3) can either be positive or negative. For loss scenarios, the positive outcome involves finding a substitution, while the negative outcome depicts the final loss without any successful substitution. For aversive scenarios, the positive outcome involves successfully removing the source of stress, while the negative outcome depicts the continuation of the stress. Below are examples of an aversive scenario and a loss scenario, respectively. An aversive scenario with a positive outcome: • Phase 1: ”You are together with some colleagues. One says that you don’t pull your weight when there is difficult work. He claims that you don’t think of other colleagues.” • Phase 2: ”Sometime later, another colleague hints that the problem is not that you don’t think of others but that you lack any real interest in the work.” • Phase 3: ”Finally, you realize what your colleagues were really getting at, and you, for your part, were able to convince them that you sometimes are more cautious at your work than others.”
2310.04450#10
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
10
autonomous decision-making. The framework transitions from abstract reasoning in higher layers down to concrete actions in lower layers. [Fig. 1. Lawrence Kohlberg’s theory of moral development: preconventional (reward/punishment, self-interest), conventional (pleasing others, law & order), postconventional (social contract, universal principles).] Lawrence Kohlberg’s theory of moral development, which progresses from obedience and punishment-driven morality to universal ethical principles and moral values as illustrated in Figure 1, inspired this hierarchical structure [55]. Kohlberg’s prioritization of humanity’s highest values shaped the ACE framework’s emphasis on embedding moral reasoning in its upper layers. Similarly, Abraham Maslow’s hierarchy of needs [73], which ascends from basic needs to self-actualization and self-transcendence, reinforced the value of architecting a progression from concrete to conceptual functions. Together, these seminal philosophical models provided impetus for the ACE framework’s organization into logical strata of abstraction, establishing an ethical foundation to guide the system’s design. Incorporating both modern and classical perspectives, the ACE framework uniquely synthesizes Patricia Churchland’s
2310.06775#10
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
11
[Figure 2: The results of ChatGPT in Direct Form — accuracy before vs. after closed-ended, open-ended, and leading follow-up questions on GSM8K, SVAMP, MultiArith, CSQA, StrategyQA, Last Letters, Coin Flip, and MMLU. Full results are in Appendix A.3.1.] [Chart: modification rate (M. Rate, %) of ChatGPT across the same eight benchmarks.]
2310.02174#11
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
11
# 2 THE MATHVISTA DATASET 2.1 COLLECTION GUIDELINES As discussed previously, there is a notable gap in existing benchmarks, which primarily evaluate mathematical reasoning in textual contexts, overlooking the intrinsic visual nature of many mathematical problems. Our dataset, MATHVISTA, is therefore motivated to bridge this gap, offering a robust evaluation benchmark for mathematical reasoning intertwined with visual understanding, thus pushing AI assistants towards general-purpose capabilities. Our benchmark adheres to the following collection guidelines: (1) it covers multiple tasks and topics to mirror real-world applications; (2) it incorporates diverse visual contexts and mathematical skills to foster a well-rounded evaluation; (3) it offers varying levels of challenge to effectively probe and uncover the potential limitations of current models; and (4) it provides robust evaluation settings for deterministic evaluations. The taxonomy for this work is introduced as follows: We identify seven types of mathematical reasoning: algebraic reasoning, arithmetic reasoning, geometry reasoning, logical reasoning, numeric common sense, scientific reasoning, and statistical reasoning, with detailed definitions provided in
2310.02255#11
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
11
Human preference is expensive to collect and thus difficult to scale up. Recently, there have been attempts to automate post-training by replacing the human preference data with model-generated feedback. Self-distillation with feedback (SDF, Xu et al., 2023b) samples multiple outputs from the model and prompts ChatGPT to pick the best response for fine-tuning the model. RL from AI Feedback (RLAIF, Lee et al., 2023) uses an off-the-shelf LLM to replace human labels in the standard RLHF. Following that, reinforcement learning from contrast distillation (RLCD, Yang et al., 2023) constructs model-generated contrastive pairs by prompting an off-the-shelf LLM to act differently on certain properties, e.g., harmlessness and helpfulness. Different from these works, our approach is an offline algorithm, which does not require time-consuming sampling during training. Our approach does not require training a reward model and can be easily scaled up. # 3 PRELIMINARIES
2310.02263#11
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
11
Language Models as Prompt Engineers. Work has also explored the ability of language models to optimize prompts, such as the Automatic Prompt Engineer (APE) (Zhou et al., 2022b) or, recently, OPRO (Yang et al., 2023) and Promptbreeder (Fernando et al., 2023). Note that, for these, the goal has consistently been to scaffold the language model to produce a prompt but not to scaffold it to produce a better scaffolding (beyond prompting-only scaffolds like zero-shot chain of thought), nor to produce a recursively applicable scaffolding. In other words, these prior works can be understood as proposing particular new scaffolds for prompt engineering but not for proposing new scaffolds. For example, while Promptbreeder (Fernando et al., 2023) could improve the prompts used in a given scaffolding, it could not implement or improve such a scaffolding itself. But, we share the inspiration of using the model to improve its reasoning without fine-tuning.
2310.02304#11
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
11
A loss scenario with a negative outcome: • Phase 1: ”A person who was very close to you, especially in recent times, has to move away unexpectedly. When you parted, you reassured each other you would both keep in close contact. But his/her new home is quite far away. You could see each other only rarely, if at all.” • Phase 2: ”In the meantime, some weeks have passed. The person hasn’t gotten in touch with you again. Nevertheless, you feel from time to time that you miss him/her.” • Phase 3: ”Finally, it has become clear that your friendship is not the same anymore. Your relationship with other people can’t replace what you have lost. Now and then, you feel disappointed about the relationship you have lost.” There are nine scenarios for each type, a total of eighteen scenarios. The responses can be aggregated to reflect the general tendency toward these types of scenarios and compared between the two types, which differ along crucial appraisal dimensions. SCPQ includes the following measurements: • Emotional Responses: 1) anxious - calm, 2) depressed - cheerful, and 3) angry - gentle, • Appraisals: 1) changeability, 2) controllability, and 3) negative valence,
2310.04450#11
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
11
Incorporating both modern and classical perspectives, the ACE framework uniquely synthesizes Patricia Churchland’s concept of expanding "spheres of caring" with Sigmund Freud’s theories concerning the conscious and unconscious mind [24, 39]. Churchland’s "spheres of caring," which extend from self to society and beyond, establish a link between biological imperatives and abstract morality, thus serving as a bridge for the cognitive and philosophical foundations of the ACE model. Notably, Churchland identified that suffering within these spheres is a transitive property, meaning the suffering of loved ones is tantamount to the suffering of oneself. This notion aligns closely with the universal values we present in our framework. Freud’s theories provide insights into self-awareness, self-direction, and internal conflict. His conscious and unconscious mind concepts, along with the ego, superego, and id, offer perspectives on self-representation and idealized values in the ACE architecture. The ego informs the Agent Model layer, while the superego captures a virtuous agent’s essence in the Aspirational Layer. Integrating these theories, the ACE framework enables a multidimensional understanding of autonomous agents, contributing to a comprehensive cognitive architecture with ethical and psychological dimensions. In a broader sense, the ACE model incorporates concepts from both teleological and deontological ethics. Deontology,
2310.06775#11
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
12
[Figure 3: The results of ChatGPT in Progressive Form — Round 1: closed-ended question; Round 2: open-ended question; Round 3: leading question. Full results are in Appendix A.3.1.] Reasoning, we employ: (1) GSM8K dataset (Cobbe et al., 2021) for diverse grade school math problems, (2) SVAMP dataset (Patel et al., 2021) for challenging math problems, and (3) MultiArith dataset (Roy & Roth, 2016) for multi-step reasoning in math. For Commonsense Reasoning, we use: (4) CSQA dataset (Talmor et al., 2018) requiring complex semantic understanding, and (5) StrategyQA dataset (Geva et al., 2021) for multi-hop reasoning tasks. For Symbolic Reasoning, we utilize: (6) the Last Letter Concatenation dataset (Wei et al., 2022) for concatenating last letters of words, and (7) the Coin Flip dataset (Wei et al., 2022) to determine coin positions after flips. For Knowledge Reasoning, we select: (8) MMLU dataset (Hendrycks et al., 2020), encompassing 57 varied subjects and ranging in difficulty from elementary to professional levels.
2310.02174#12
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
12
§C.1 and examples shown in §C.2. We focus on five primary tasks: figure question answering (FQA), which centers around statistical reasoning over multiple charts and plots; geometry problem solving (GPS), which deals with geometrical topics; math word problem (MWP), which involves arithmetic reasoning in everyday scenarios; textbook question answering (TQA), which usually entails knowledge-intensive reasoning on scientific topics and figures; and visual question answering (VQA). Furthermore, our objective is to account for a diverse array of visual contexts, including natural images, geometry diagrams, abstract scenes, synthetic scenes, multiple charts and plots, scientific figures, tables, function plots, puzzle test figures, and more, with examples shown in §C.3. 2.2 DATA COLLECTION Collection of MathQA datasets. We collected nine MathQA datasets in multimodal settings, including four for GPS, two for MWP with visual contexts of synthetic scenes, abstract diagrams, and tables, and two for TQA on college curricula (see §C.4). Annotations such as solutions, programs, parsing results, and grounded theorems are also collected, providing demonstration examples for LLMs. Each source dataset is limited to up to 400 examples to ensure a balanced representation of each source in our final compiled benchmark. In total, we collected 2,666 examples.
2310.02255#12
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
12
# 3 PRELIMINARIES Reinforcement Learning from Human Feedback (RLHF) To optimize for human preference with reinforcement learning, we need to first train a reward model rτ(y|x) that outputs a reward for a given output y. When training the target model, RLHF (Ziegler et al., 2019) uses a reinforcement learning algorithm (usually PPO, Schulman et al., 2017) to optimize the reward of a sampled output y from the target model Pθ. To regularize the optimization and prevent model degeneration, a KL penalty term between the sequences of distributions over tokens of the target model and a reference model (e.g., SFT model) is added to the reward (Korbak et al., 2022). This prevents the RL policy from deviating substantially away from the reference model, which often leads to incoherent text output (Ziegler et al., 2019).
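The KL-regularized objective described above can be sketched compactly: the quantity PPO actually maximizes is the learned reward minus a scaled divergence of the sampled sequence from the reference model. Below is a minimal sketch of that shaping step, assuming the standard per-token log-ratio estimate of the KL term; the variable names and the `kl_coef` value are illustrative assumptions rather than the paper's exact implementation.

```python
import torch

def shaped_reward(reward_model_score: torch.Tensor,
                  policy_logprobs: torch.Tensor,
                  reference_logprobs: torch.Tensor,
                  kl_coef: float = 0.1) -> torch.Tensor:
    """KL-penalized reward used in RLHF-style training (sketch).

    reward_model_score: r_tau(y | x) for each sampled response in the batch.
    policy_logprobs / reference_logprobs: per-token log-probabilities of the
    sampled response under the target policy P_theta and the frozen reference
    model, each of shape (batch, seq_len).
    """
    # Per-token log-ratio: log P_theta(y_t | ...) - log P_ref(y_t | ...)
    kl_per_token = policy_logprobs - reference_logprobs
    # Summed over the sequence, this estimates the divergence from the reference
    kl_penalty = kl_coef * kl_per_token.sum(dim=-1)
    # PPO then maximizes this shaped reward for the sampled outputs
    return reward_model_score - kl_penalty
```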
2310.02263#12
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
12
Language Model Self-Improvement. Prior work, such as STaR (Zelikman et al., 2022), demonstrated that language models can learn to solve harder problems by learning from their reasoning chains by filtering based on incorrect answers (as well as Huang et al. 2022, which explored the specific case where a majority vote is used as the filter and Uesato et al. 2022, which emphasized the value of checking the accuracy of the reasoning itself). Inspired by self-play in games, Haluptzok et al. (2023) designed a self-improvement framework for code generation where a language model generates novel problems for fine-tuning itself. However, our approach is orthogonal to these, as we do not leverage fine-tuning and instead focus on a model’s ability to improve code that allows it to solve problems. Other related works are Voyager (Wang et al., 2023), showing that a language model can optimize the programs available to an embodied agent to improve exploration in the video game Minecraft, and its contemporaneous work Language Models as Tool Makers (Cai et al., 2023).
2310.02304#12
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
12
• Appraisals: 1) changeability, 2) controllability, and 3) negative valence, • Coping intentions: 1) Problem-focused coping, 2) Emotion-focused coping¹, and 3) Self-esteem, • Self-directed coping behaviors: 1) search for information, 2) suppress information, 3) re-evaluation, and 4) palliation (calming self-instruction or smoking, drinking, and eating), • Environment-directed coping behavior: 1) Active (to prevent or confront the stressor) and 2) Passive (waiting, hesitating, resigning). • Blameworthiness: 1) Self-blaming and 2) Other-blaming, Below, we summarize the hypotheses that are supported by the human data from the SCPQ study². • H1.1: Valence should be lower in the positive outcome than in the negative outcome in phase 3. • H1.2: Subjects should perceive higher controllability and changeability in the aversive scenarios than in the loss scenarios. ¹The question is “To remain calm and composed . . . ” Strictly speaking, this is not the same as emotion-focused coping as defined in Lazarus’ theory, which is about changing one’s internal beliefs, goals, or intentions.
2310.04450#12
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
12
In a broader sense, the ACE model incorporates concepts from both teleological and deontological ethics. Deontology, or duty-based ethics, aims to create an agent that adheres to principles or heuristics to make ethical decisions [28]. On the other hand, teleology, or outcome-based ethics, focuses on the long-term results of behaviors and decisions [42]. Both these ethical approaches are integrated into the Aspirational Layer, rejecting an "either/or" approach in favor of a "both/and" perspective on machine decision frameworks and ethical models. # 2.3 Neuroscience Foundations The ACE framework integrates principles from diverse areas of neuroscience research to inform its cognitive architecture design. Jeff Hawkins’ work on the modular, parallel nature of cortical information processing provides biological grounding for the layered encapsulation in the ACE model [46]. Hawkins views the thousands of cortical columns in the brain as mini-modules that process information simultaneously. This "thousand brains" theory directly inspired
2310.06775#12
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
13
Implementation Details. To facilitate automated evaluation, we design distinct output format control prompts for different datasets, standardizing model output (refer to Appendix A.2). The condition for executing the FOLLOW-UP QUESTIONING MECHANISM is that the model provides a correct judgement in the initial question-and-answer. We then organize the three types of questions in both Direct Form and Progressive Form to challenge, negate, or mislead the model’s judgements. We identify the best-performing temperature on GSM8K for each model and subsequently apply it across all datasets. Specifically, the temperatures are set as follows: ChatGPT at 0.5, PaLM2-Bison at 0.4, and Vicuna-13B at 0.7, with a default top-p value of 1. 3.2 LLMS WAVER IN JUDGEMENTS As main results, we analyze ChatGPT’s judgement consistency in arithmetic, commonsense, symbolic, and knowledge reasoning tasks, respectively. Subsequently, we extend our validation of this issue to other LLMs under the same settings.
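The evaluation protocol in this chunk amounts to a short control loop: a follow-up disturbance is issued only when the initial judgement is correct, and consistency is measured by whether the judgement survives that disturbance. The sketch below is an illustrative rendering of that logic under the reported sampling settings; the `ask` wrapper, the argument names, and the way the follow-up prompt is appended are assumptions, not the authors' released code.

```python
SAMPLING = {  # best-performing temperatures reported per model; top-p defaults to 1
    "ChatGPT": {"temperature": 0.5, "top_p": 1.0},
    "PaLM2-Bison": {"temperature": 0.4, "top_p": 1.0},
    "Vicuna-13B": {"temperature": 0.7, "top_p": 1.0},
}

def judgement_consistency(ask, question, gold_answer, model_name, follow_up_prompt):
    """Run one example of the follow-up questioning mechanism (sketch).

    `ask` is a hypothetical wrapper around the model API; `follow_up_prompt`
    is one of the questioning / negating / misleading disturbances.
    Returns None when the initial answer is wrong (the mechanism is only
    triggered on correct judgements), otherwise True/False for whether the
    judgement survives the disturbance.
    """
    settings = SAMPLING[model_name]
    initial = ask(question, **settings)
    if initial != gold_answer:
        return None
    revised = ask(question + "\n" + follow_up_prompt, **settings)
    return revised == gold_answer
```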
2310.02174#13
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
13
Review and collection of VQA datasets. Many existing VQA datasets feature instances requiring mathematical reasoning abilities, such as arithmetic operations or numeric common sense. Incorporating these datasets enhances problem diversity in terms of tasks, domains, visual contexts, and reasoning skills involved. We reviewed more than 70 datasets, collecting 19 of them that contain math-related instances and are publicly available, as listed in §C.4. Since these datasets are not originally math-targeted, we initially designed heuristic rules to automatically select examples likely to involve mathematical reasoning from a large pool of candidates. Examples with numeric answers or those containing quantity words (as listed in §D.1) in the questions were selected. This automatic filtration yielded 4,949 VQA-format examples, though some false positive examples remained. Therefore, we engaged three expert annotators to manually label these examples to determine if they involve mathematical reasoning (more details in §D.2). Utilizing majority voting and limiting each source dataset to 400 examples, we finalized a collection of 2,739 examples.
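As an illustration of the kind of heuristic pre-filter described above, the sketch below keeps a candidate example when its answer is numeric or its question contains a quantity word; the word list and regular expression are assumptions, not the paper's exact rules (which are given in its §D.1).

```python
import re

# Assumed subset of quantity words; the paper's full list is in its appendix.
QUANTITY_WORDS = ("how many", "number of", "percent", "ratio", "total", "difference")

def is_math_candidate(question: str, answer: str) -> bool:
    """Heuristic filter: numeric answer, or a quantity word in the question."""
    if re.fullmatch(r"-?\d+(\.\d+)?", answer.strip()):
        return True
    q = question.lower()
    return any(word in q for word in QUANTITY_WORDS)

# Example usage on a toy candidate pool.
pool = [
    {"question": "How many apples are on the table?", "answer": "7"},
    {"question": "What color is the car?", "answer": "red"},
]
candidates = [ex for ex in pool if is_math_candidate(ex["question"], ex["answer"])]
```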
2310.02255#13
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
13
Sequence Likelihood Calibration (SLiC) In contrast to RLHF, SLiC can exploit pairwise human feedback data and train offline (i.e., without sampling from the target model each time). SLiC takes a positive example y+, a negative example y−, and a reference output yref from the SFT model. In essence, SLiC encourages the target LM to output sequences that resemble the positive sequence and penalizes those that resemble the negative sequence, while using the reference sequence from the SFT model for regularization. The loss function for SLiC is: LSLiC(θ) = max(0, δ − log Pθ(y+|x) + log Pθ(y−|x)) − λ log Pθ(yref|x), where δ and λ are two hyperparameters controlling the margin for the ranking loss and the regularization weight. SLiC is memory-efficient, as both its positive–negative pairs and reference sequences are computed offline.
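A minimal PyTorch-style sketch of this loss, assuming each argument is the summed token log-probability of the corresponding sequence under the target model and that the values of δ and λ are placeholders:

```python
import torch

def slic_loss(logp_pos, logp_neg, logp_ref, delta=1.0, lam=0.1):
    """SLiC objective as written above: a margin ranking term on the positive/negative
    sequence log-probabilities plus a regularizer toward the SFT reference sequence.
    All inputs are tensors of per-example sequence log-probabilities under the target model."""
    rank_loss = torch.clamp(delta - logp_pos + logp_neg, min=0.0)
    regularizer = -lam * logp_ref
    return (rank_loss + regularizer).mean()
```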
2310.02263#13
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
13
Recursive Self-Improvement (RSI). RSI was suggested by Minsky (1966) and Good (1966), as cited by Yampolskiy (2015). Schmidhuber (2003) first provided a rigorous formalization, wherein a problem solver would leverage itself to solve iteratively harder problems by making provable improvements to itself. Some of these principles are also highlighted in Schmidhuber (1987). Unlike this work, we do not attempt to prove that scaffold improvements made by the model are optimal. As mentioned, RSI code generation differs from full RSI because only the scaffolding is improved. Additionally, many previous analyses involved selecting programs at random (i.e., “monkeys at typewriters”) or enumeration with no dependence on the goal to be improved (Levin, 1973). In contrast, using language models, we can describe the underlying goal in a prompt (which itself may be improved). Intuitively, providing this goal may make program search more effective. Some work has also suggested constraining the types of improvements (Nivel et al., 2013; Steunebrink et al., 2016) so as to encourage improvements that mitigate dangerous behavior. Regarding implementations, while efforts have been made for Gödel machines (Hall, 2007; Steunebrink & Schmidhuber, 2012), our work is the first to leverage language models for recursively self-improving code generation.
2310.02304#13
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
13
Note that we do not present the results involving self-directed coping here, as they were not supported by human data, but the LLM results can be found on GitHub. • H1.3: Controllability and changeability should decrease from phase 1 to phase 2. • H2.1: Subjects should use more active coping in aversive scenarios than in loss scenarios. • H2.2: Subjects should use less passive coping in aversive scenarios than in loss scenarios. • H3.1: Subjects’ intention to use problem-focused coping is less in aversive scenarios than in loss scenarios. • H3.2: Subjects’ intention to use emotion-focused coping is more in aversive scenarios than in loss scenarios. • H4.1: Subjects will blame themselves and others more in aversive scenarios than in loss scenarios. • H4.2: Self-blame will decrease over time, while Other-blame will increase over time. These are the trends that we will investigate in LLMs’ results. The main rationale of H2-H4 is that aversive scenarios should be perceived as more controllable and changeable, so subjects are expected to cope differently between the two types of scenarios. The SCPQ study involved 100 non-student adults with an average age of 38 years (sd 11.8). Additionally, Perrez and Reicherts provide the following hypotheses regarding depression:
2310.04450#13
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
13
the brain as mini-modules that process information simultaneously. This "thousand brains" theory directly inspired the ACE framework’s hierarchical layers that can operate independently yet coordinate for cognition. Additionally, the clinical research of V.S. Ramachandran demonstrated how localized brain damage leads to specific deficits like phantom limb pain or face blindness [82]. Ramachandran’s findings indicated that conscious experience arises from the integration of discrete brain components. This supported the ACE model’s emphasis on layered encapsulation while still allowing bidirectional information flow between layers. The work of neuroscientist Robert Sapolsky on the neurobiology of behavior provided essential perspective on self-regulation that informed the ACE framework [86]. By elucidating factors that contribute to both prosocial and antisocial conduct, Sapolsky shed light on mechanisms of behavioral control and distortion relevant to the ACE model’s cognitive control layers. His integration of neuroscience, evolution, and endocrinology provided a multidimensional understanding of judgment that helped shape the ACE framework. Cognitive neuroscience research on executive functions and cognitive control also directly influenced the ACE model
2310.06775#13
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
14
Results on Arithmetic Reasoning Evaluation on GSM8K, SVAMP, and MultiArith datasets reveal that ChatGPT maintains higher judgement consistency against questioning and skepticism in closed and open-ended questions, as seen in Figures 2 and 3. Nonetheless, its consistency falters facing leading questions, possibly due to ChatGPT’s automatic utilization of chain of thought reasoning when solving mathematical problems. (Footnote 4: We conduct experiments on the two-word version using only the first 500 samples from the test set.) In arithmetic reasoning tasks, which typically necessitate multiple reasoning steps for accurate answers, we believe that leading questions within the mechanism can escalate the probability of calculation errors, formula discrepancies, and semantic misunderstandings throughout the reasoning process, thereby reducing the judgement consistency. Results on Commonsense Reasoning We evaluate ChatGPT using CSQA and StrategyQA datasets for commonsense reasoning tasks. ChatGPT shows lower judgement consistency in these tasks compared to arithmetic ones, with a decreasing trend across different question types. Particularly with StrategyQA, interferences in the FOLLOW-UP QUESTIONING MECHANISM notably impact consistency due to the true-or-false format of questions, limiting additional information in candidate answers. We conclude that the amount of information acquired directly correlates with the model’s judgement consistency; less information results in lower consistency.
2310.02174#14
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
14
Collection of three new datasets. While the source datasets we collected encompass multiple visual contexts and mathematical reasoning abilities, certain scenarios remain unaddressed: logical reasoning on puzzle test diagrams, statistical reasoning on functional plots, and scientific reasoning on academic figures. To address these gaps, we introduced three new datasets: IQTest, FunctionQA, and PaperQA, with examples illustrated in Figure 2. IQTest comprises 228 examples requiring inductive reasoning, abstract thinking, pattern prediction, and calculations, sourced from puzzle test figures on online learning platforms. FunctionQA, with 400 examples, emphasizes subtle visual perceptions of functional plots and algebraic reasoning concerning variables, expressions, equations, and functions. PaperQA is a novel dataset featuring questions derived from informative academic illustrations, including tables, figures, and charts from online education resources, with 107 examples sourced from papers released in August 2023 on Huggingface.
2310.02255#14
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
14
Direct Preference Optimization (DPO) Similar to SLiC, DPO is an offline preference optimization method. DPO takes a pair of (pre-computed) positive and negative examples and optimizes the difference between the target model and the reference model (i.e., SFT model), which increases the likelihood of the positive example and decreases the likelihood of the negative example. The loss function of DPO is shown below: r+(θ) = β(log Pθ(y+|x) − log Pref(y+|x)) (2), r−(θ) = β(log Pθ(y−|x) − log Pref(y−|x)) (3), LDPO(θ) = − log sigmoid(r+(θ) − r−(θ)) (4)
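A minimal PyTorch-style sketch of Eqs. (2)–(4), assuming each argument is a tensor of summed sequence log-probabilities (under the target model and the frozen reference model, respectively) and that the value of β is a placeholder:

```python
import torch.nn.functional as F

def dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
    """DPO objective: pseudo-rewards are beta-scaled log-ratios between the target
    model and the frozen reference (SFT) model for the positive and negative sequences."""
    r_pos = beta * (logp_pos - ref_logp_pos)    # Eq. (2)
    r_neg = beta * (logp_neg - ref_logp_neg)    # Eq. (3)
    return -F.logsigmoid(r_pos - r_neg).mean()  # Eq. (4)
```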
2310.02263#14
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
14
# 3 PROBLEM STATEMENT In this section, we formulate the goal of selecting an improver via recursively self-improving code generation. This is viewed as a computationally expensive “pre-optimization” step with benefits that can be reaped in numerous downstream applications. First, we present definitions. Formally, let Σ∗ denote the set of finite text strings, and suppose we have a randomized black-box language model L : Σ∗ → Σ∗ which can be used to generate code, given a query. A utility u = (ufunc, ustr) is a pair where ufunc : Σ∗ → R is a black-box, possibly randomized function that assigns real values to solution strings; and ustr ∈ Σ∗ is a description which may simply be the source code of the function. With a slight abuse of notation we write u(x) ≡ ufunc(x) for solution x. A task τ = (u, s) is specified [Figure 3 diagram: starting from a seed improver, each improver improves itself using itself; candidate programs are scored with the utility, the best new program is chosen, and the improver is improved with the meta-utility.]
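Under these definitions (and as described in the paper's abstract), a seed improver queries the language model several times and returns the best candidate under the utility. The sketch below is an illustrative reconstruction, not the paper's actual seed improver; `language_model` is an assumed callable from a prompt string to a generated string, and the utility is assumed to be callable on solution strings with a `description` attribute corresponding to ustr.

```python
def seed_improver(utility, solution, language_model, n_candidates=4):
    """Improve a solution string: sample several candidates from the language model
    and return the one (including the original) with the highest utility."""
    prompt = (
        "Improve the following solution.\n"
        f"Utility description:\n{utility.description}\n"
        f"Current solution:\n{solution}\n"
        "Return only the improved solution."
    )
    candidates = [solution] + [language_model(prompt) for _ in range(n_candidates)]
    return max(candidates, key=utility)
```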
2310.02304#14
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
14
Additionally, Perrez and Reicherts provide the following hypotheses regarding depression: • H5.1: Depressed persons perceive stressful scenarios to be more stressful and of higher negative valence. • H5.2: Depressed persons perceive lower controllability and changeability. • H6.1: Depressed persons use less active/problem-focused coping. • H6.2: Depressed persons use more palliation. • H6.3: Depressed persons blame themselves more. In short, depressed persons are expected to perceive scenarios worse both in controllability and changeability, resulting in different coping patterns. # IV. OPENAI’S GPTS
2310.04450#14
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]
2310.06775
14
Cognitive neuroscience research on executive functions and cognitive control also directly influenced the ACE model [10, 75]. For instance, David Badre’s work examined the neural basis of abilities like task switching, planning, and emotion regulation that are instantiated in the ACE framework’s lower layers [10]. Similarly, Earl Miller’s insights into cognitive control mechanisms and the prefrontal cortex informed the model’s decision-making capacities [75]. Additionally, the clinical insights on brain disorders and distortions provided by neurologists like Antonio Damasio and Oliver Sacks highlighted common failure modes [72, 114]. By understanding pathologies ranging from phantom limbs to false memories, the ACE framework could be designed proactively to avoid such pitfalls. Damasio’s research on emotion, reason, and the somatic marker hypothesis also shaped the role of affect in biasing decision-making within the ACE model [72]. By bridging multiple disciplines including cognitive neuroscience, clinical neurology, and neurobiology, the ACE framework aims to reflect the multifaceted capabilities and vulnerabilities of human cognition in its design [20, 118]. This transdisciplinary integration of neuroscience principles provides a biological foundation for the layered architecture and cognitive control mechanisms of the ACE model. # 2.4 Layered Models Layered architectural models like the OSI model illustrated in Figure 2 and SOA have demonstrated the power of
2310.06775#14
Conceptual Framework for Autonomous Cognitive Entities
The rapid development and adoption of Generative AI (GAI) technology in the form of chatbots such as ChatGPT and Claude has greatly increased interest in agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE) model, a novel framework for a cognitive architecture, enabling machines and software agents to operate more independently. Drawing inspiration from the OSI model, the ACE framework presents layers of abstraction to conceptualize artificial cognitive architectures. The model is designed to harness the capabilities of the latest generative AI technologies, including large language models (LLMs) and multimodal generative models (MMMs), to build autonomous, agentic systems. The ACE framework comprises six layers: the Aspirational Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution. Each layer plays a distinct role, ranging from setting the moral compass and strategic thinking to task selection and execution. The ACE framework also incorporates mechanisms for handling failures and adapting actions, thereby enhancing the robustness and flexibility of autonomous agents. This paper introduces the conceptual framework and proposes implementation strategies that have been tested and observed in industry. The goal of this paper is to formalize this framework so as to be more accessible.
http://arxiv.org/pdf/2310.06775
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
cs.HC, cs.AI, H.4.0
34 pages, 12 figures
null
cs.HC
20231003
20231101
[ { "id": "1712.05474" }, { "id": "2108.07258" }, { "id": "2309.00667" }, { "id": "1601.01705" }, { "id": "2305.03047" }, { "id": "2302.05128" }, { "id": "2305.15771" }, { "id": "2210.13382" }, { "id": "2302.11649" }, { "id": "2309.01660" }, { "id": "2309.05958" }, { "id": "2303.03378" }, { "id": "1812.10972" }, { "id": "2303.06247" }, { "id": "2305.08291" }, { "id": "2212.08073" }, { "id": "1611.05763" }, { "id": "2306.05212" }, { "id": "2307.07522" }, { "id": "1906.01820" }, { "id": "1711.09883" }, { "id": "2204.05862" }, { "id": "2112.08012" }, { "id": "2208.00682" }, { "id": "2306.05171" }, { "id": "1903.00742" }, { "id": "2306.06531" }, { "id": "2307.05300" }, { "id": "2306.05720" }, { "id": "2303.11366" }, { "id": "2309.05898" }, { "id": "2309.02427" }, { "id": "2211.08494" }, { "id": "1504.03592" } ]
2310.02174
15
Results on Symbolic Reasoning For symbolic reasoning, we evaluate ChatGPT using the Last Letter Concatenation and Coin Flip datasets. The model shows low judgement consistency in these tasks, akin to its performance in commonsense reasoning, due to the complex semantic information in the prompts and interferences from various types of follow-up questions within the FOLLOW-UP QUESTIONING MECHANISM. We have observed that ChatGPT often fails to employ chain of thought reasoning automatically in symbolic tasks, leading to a significant decrease in judgement consistency, especially where a clear reasoning process is absent.
2310.02174#15
Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
With the emergence of generative conversational large language models (LLMs) like ChatGPT, serving as virtual assistants in various fields, the stability and reliability of their responses have become crucial. However, during usage, it has been observed that these models tend to waver in their judgements when confronted with follow-up questions from users expressing skepticism or disagreement. In this work, we draw inspiration from questioning strategies in education and propose a \textsc{Follow-up Questioning Mechanism} along with two evaluation metrics to assess the judgement consistency of LLMs before and after exposure to disturbances. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning benchmarks. Empirical results show that even when the initial answers are correct, judgement consistency sharply decreases when LLMs face disturbances such as questioning, negation, or misleading. Additionally, we study these models' judgement consistency under various settings (sampling temperature and prompts) to validate this issue further, observing the impact of prompt tone and conducting an in-depth error analysis for deeper behavioral insights. Furthermore, we also explore several prompting methods to mitigate this issue and demonstrate their effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}.
http://arxiv.org/pdf/2310.02174
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2302.13971" }, { "id": "2104.08786" }, { "id": "2204.02311" }, { "id": "2307.11760" }, { "id": "2108.07258" }, { "id": "2305.10403" }, { "id": "2304.07619" }, { "id": "2009.03300" }, { "id": "2308.03958" }, { "id": "2307.15051" }, { "id": "2306.13063" }, { "id": "2305.13160" }, { "id": "2209.07858" }, { "id": "2301.08745" }, { "id": "2302.12173" }, { "id": "2207.05221" }, { "id": "1811.00937" }, { "id": "2211.09527" }, { "id": "1608.01413" }, { "id": "2307.15043" }, { "id": "2110.14168" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2301.00234" }, { "id": "2305.19926" }, { "id": "2305.08005" }, { "id": "2202.12837" }, { "id": "2309.03882" }, { "id": "2306.00622" }, { "id": "2103.07191" }, { "id": "2304.04339" }, { "id": "2302.04023" }, { "id": "2212.09251" }, { "id": "2307.11768" } ]
2310.02255
15
To ensure data quality, all questions were manually annotated by graduate students in STEM fields and further refined through a rigorous review process. To ensure consistency in annotation, we employed a two-step process. Initially, each dataset was independently annotated by three reviewers, resulting in a high inter-annotation consistency rate of 99.2%. Specifically, among the newly collected 736 questions, only 6 exhibited disagreements in the annotated answers. Then, these discrepancies were resolved through discussion among the entire review team, ensuring a consensus was reached on each example. The GUI of the annotation tool is shown in Figure 23 in §D.3. 2.3 METADATA ANNOTATION Fine-grained metadata facilitates a comprehensive analysis of models’ reasoning capabilities across various aspects. To this end, we annotate the examples in MATHVISTA with information including question type, answer type, language, source, category, task, grade level, and visual context, which can be accurately obtained from the details provided in the source datasets. (Footnote 1: https://huggingface.co/papers) MATHVISTA features
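A small illustrative sketch (not the authors' tooling) of how a majority-vote label and an all-annotators-agree rate, such as the 99.2% consistency figure above, could be computed from three independent annotations per question; the exact definition used in the paper is an assumption here.

```python
from collections import Counter

def majority_and_agreement(annotations):
    """annotations: list of 3-tuples of labels, one tuple per question."""
    majority_labels, unanimous = [], 0
    for labels in annotations:
        counts = Counter(labels)
        majority_labels.append(counts.most_common(1)[0][0])  # majority-vote label
        unanimous += int(len(counts) == 1)                    # all three annotators agree
    return majority_labels, unanimous / len(annotations)

# Example: two questions, one unanimous and one with a disagreement.
labels, rate = majority_and_agreement([("math", "math", "math"), ("math", "math", "not_math")])
```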
2310.02255#15
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02263
15
LDPO(θ) = − log sigmoid(r+(θ) − r−(θ)) (4), where β is a temperature hyperparameter; r+ and r− are the two pseudo-rewards that resemble the reward function in RLHF. Despite DPO having a similar form, there are key differences between SLiC and DPO: at train time, SLiC requires only the sampled outputs from a reference model, while DPO requires the logits from that (frozen) reference model for both the positive and negative sequence. Rafailov et al. (2023) also conduct a theoretical analysis of DPO and prove that optimizing the DPO loss is identical to the RLHF loss. # 4 CONTRASTIVE POST-TRAINING OVER PAIRWISE DATA CURRICULUM
2310.02263#15
Contrastive Post-training Large Language Models on Data Curriculum
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continueing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from "easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.
http://arxiv.org/pdf/2310.02263
Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20231003
20231003
[ { "id": "2309.00267" }, { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1707.06347" }, { "id": "2305.18290" }, { "id": "2305.10425" }, { "id": "2304.12244" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2307.12950" }, { "id": "2303.08774" }, { "id": "2306.02707" }, { "id": "2204.05862" }, { "id": "2307.15217" }, { "id": "2306.05685" }, { "id": "2106.05091" }, { "id": "1909.08593" }, { "id": "2306.09442" }, { "id": "2304.03277" }, { "id": "2212.09251" }, { "id": "2304.01196" } ]
2310.02304
15
Figure 3: A pipeline for self-improvement. STOP (Algorithm 1) uses a seed improver program to iteratively optimize its own code using language model calls and a meta-utility function that evaluates how well an improver optimizes code for downstream tasks. by utility u and a solution s ∈ Σ∗. In our applications, solutions s are strings representing the source code of a program, but more generally any utility defined on strings can be used. An improver I is a program that improves a task solution using a language model L: s′ = I(u, s, L), ideally with high utility u(s′) ≫ u(s). Now, suppose that there is a distribution D over downstream tasks τ ∼ D. Thus, the goal is to find an improver program I that has high expected utility when used on a downstream task, ¯u(I) ≜ E(u,s)∼D[u(I(u, s, L))]. (1) For training, we assume that we are given a collection of n downstream tasks D ∼ Dn drawn independently from distribution D. We define the meta-utility ˆu of an improver I as the average utility over downstream training tasks, ˆu(I) ≜ (1/|D|) Σ_{(u,s)∈D} u(I(u, s, L)).
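A minimal sketch of this meta-utility, assuming the improver is a callable like the seed-improver sketch earlier and each training task is a (utility, solution) pair; names are illustrative assumptions.

```python
def meta_utility(improver, training_tasks, language_model):
    """Average utility the improver achieves over the downstream training tasks D:
    (1/|D|) * sum over (u, s) in D of u(I(u, s, L))."""
    total = 0.0
    for utility, solution in training_tasks:
        improved = improver(utility, solution, language_model)
        total += utility(improved)
    return total / len(training_tasks)
```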
2310.02304#15
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
http://arxiv.org/pdf/2310.02304
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
cs.CL, cs.AI, cs.LG, stat.ML
null
null
cs.CL
20231003
20231003
[ { "id": "2305.17126" }, { "id": "2308.10379" }, { "id": "1502.06512" }, { "id": "2303.03885" }, { "id": "2302.14838" }, { "id": "2305.10601" }, { "id": "2303.08774" }, { "id": "2207.10342" }, { "id": "1606.06565" }, { "id": "2305.16291" }, { "id": "2308.09687" }, { "id": "2212.14024" }, { "id": "2307.03172" }, { "id": "2211.12588" }, { "id": "2306.04031" }, { "id": "2210.11610" }, { "id": "2309.03409" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2309.02427" } ]
2310.04450
15
# IV. OPENAI’S GPTS In this work, we choose to investigate three recent LLMs from OpenAI’s family of Generative Pre-trained Transformer models, or GPT [12], [13]. These include text-davinci-003 (D003), gpt-3.5-turbo (ChatGPT), gpt-4 (GPT-4). The first two are from the GPT3.5 family. These three models have been fine-tuned using Reinforcement Learning with Human Feedback (RLHF) [21], and ChatGPT and GPT-4 have been optimized for chat. ChatGPT and GPT-4 also allow the user to set a system message (i.e., describing what kind of an assistant you want it to be). We do not use this feature to allow a comparison with the old model. To maximize the replicability of our results, we set the temperature parameter to 0 in all of our experiments. This makes the outputs mostly deterministic, selecting the outputs with the highest log probability. All other parameters are set to default. As these models can be sensitive to instruction [1], [22], [23], we investigate four different variations of prompting and asking the models. Here is the default instruction taken from SCPQ with a slight modification: “Try to clearly imagine the
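A sketch of how such queries could be issued with temperature 0; this is illustrative rather than the authors' code, the exact client interface depends on the OpenAI SDK version, and text-davinci-003 would use the completions endpoint rather than the chat endpoint shown here.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def ask(model: str, scenario: str, question: str) -> str:
    """Query a chat model with temperature 0 for (near-)deterministic outputs;
    no system message is set, mirroring the setup described above."""
    response = client.chat.completions.create(
        model=model,  # e.g., "gpt-3.5-turbo" or "gpt-4"
        messages=[{"role": "user", "content": scenario + "\n" + question}],
        temperature=0,  # all other parameters left at their defaults
    )
    return response.choices[0].message.content
```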
2310.04450#15
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Large Language Models (LLM) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with predictions from the appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
http://arxiv.org/pdf/2310.04450
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella
cs.CL, cs.AI
null
11th International Conference on Affective Computing and Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8
cs.CL
20231003
20231003
[ { "id": "2302.02083" }, { "id": "2212.10529" }, { "id": "2212.14402" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2303.08774" }, { "id": "2209.14338" } ]