Dataset fields (type, with observed length or value range):
doi: string (length 10–10)
chunk-id: int64 (values 0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
2306.13394
4
Although these models exhibit surprising conversational capabilities in everyday chats, we still know little about how well they perform quantitatively across different aspects. The three existing quantitative evaluation approaches for MLLMs each have limitations that make comprehensive evaluation difficult. Specifically, the first approach [12, 42, 47] evaluates on existing traditional multimodal datasets, such as image captioning [11] and VQA [16, 32, 36]. However, on the one hand, it may be hard for these datasets to reflect the emergent abilities of MLLMs. On the other hand, since the training sets of large models are no longer unified, it is difficult to guarantee that no MLLM has used the test set for training. The second approach [48] is to collect data for an open-ended evaluation, but either the data is not yet publicly available [57] or the amount is small (only 50 images) [48]. The third approach focuses on a single aspect of MLLMs, such as object hallucination [25] or adversarial robustness [56], and therefore cannot provide a comprehensive evaluation.
2306.13394#4
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
4
fixed throughout training, preventing it from adapting to the LM generator. In other cases, the retriever component was jointly trained, but only after a separate pretraining phase for both the retriever and LM (Sachan et al., 2021; Izacard et al., 2022; Jiang et al., 2022; Bertsch et al., 2023). Thus, the retriever was not pre-trained from scratch with the LM, and only a fraction of the training budget was allocated for joint training. Recently, Zhong et al. (2022) presented a retrieval-augmented LM that trains a retriever from scratch jointly with the LM, but (a) the retriever was trained to exploit lexical information only, and (b) the retrieved information was not fused at the representation level back into the LM.
2306.13421#4
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
5
models. We need a fair and explicit way to check if LLMs are really good at problem-solving with tools or if they are just using their memorized information. To fill this gap, we introduce ToolQA, a question answering (QA) benchmark to evaluate LLMs’ ability in using external tools for answering questions. ToolQA comprises data from 8 domains and defines 13 types of tools to acquire information from external reference corpora. Each instance in ToolQA consists of a question, an answer, reference corpora, and a list of available tools. ToolQA is unique in that all its questions can be answered only by using appropriate tools to obtain information from the reference corpus. This minimizes the possibility of LLMs answering questions by merely recalling their internal knowledge, and allows for faithfully evaluating LLMs’ abilities in using tools.
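For concreteness, a minimal sketch of how one such ToolQA instance could be represented; the field names and tool names below (including LoadDB, FilterDB, and GetValue) are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToolQAInstance:
    """Illustrative container for a ToolQA-style example (schema assumed)."""
    question: str                     # natural-language question
    answer: str                       # gold answer, derived programmatically
    reference_corpus: str             # identifier of the external reference corpus
    available_tools: List[str] = field(default_factory=list)  # tools the model may call

example = ToolQAInstance(
    question="Did the flight from LAX to SFO on 2022-10-15 get cancelled or diverted?",
    answer="No",  # placeholder value for illustration only
    reference_corpus="flights",
    available_tools=["LoadDB", "FilterDB", "GetValue"],  # hypothetical tool names
)
```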
2306.13304#5
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
5
In light of these concerns, a new comprehensive evaluation benchmark is urgently needed to match the flourishing of MLLMs. We argue that a universal comprehensive evaluation benchmark should have the following four characteristics: (1) It should cover as much as possible, including both perception and cognition abilities. The former refers to recognizing specific objects, such as their existence, count, position, and color. The latter refers to com-
2306.13394#5
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
5
In this work, we present the Retrieval-Pretrained Transformer (RPT), a retrieval-augmented LM where the retriever is a first-class component, trained jointly from scratch with the LM. RPT relies on two technical contributions. First, on the architecture side (see Fig. 1), input representations for the retriever are computed from the LM representations themselves (which we dub self-retrieval), and retrieved representations are fused back into the LM decoder for making next-word predictions. Second, we train the retriever with an auxiliary loss function that encourages retrieving text fragments that increase the probability of generating the subsequent text. Specifically, given a recently generated chunk c_t, the retriever is trained to retrieve chunks c_i that increase p_scoring(c_{t+1} | c_i, c_t) according to a reference scoring LM. Fig. 1 provides an illustrative example for a case where a crime scene is described, and a scoring LM shows the benefit of retrieving a chunk thousands of tokens away (chunk 13) compared to lexical retrieval, which leads to a chunk that is only superficially related (chunk 100).
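The auxiliary retrieval objective described here, preferring earlier chunks that raise a reference scoring LM's probability of the next chunk, can be sketched as follows; `reference_lm_logprob` is a hypothetical stand-in for the scoring LM, and the ranking loop illustrates the idea rather than the authors' implementation:

```python
from typing import Callable, List, Tuple

def rank_candidate_chunks(
    query_chunk: str,
    target_chunk: str,
    candidate_chunks: List[str],
    reference_lm_logprob: Callable[[str, str], float],
) -> List[Tuple[int, float]]:
    """Rank earlier chunks c_i by how much they raise log p_scoring(c_{t+1} | c_i, c_t).

    `reference_lm_logprob(context, continuation)` is assumed to return the
    log-probability of `continuation` given `context` under a frozen scoring LM.
    """
    scores = []
    for idx, candidate in enumerate(candidate_chunks):
        # Condition the scoring LM on the candidate chunk followed by the query chunk c_t.
        context = candidate + "\n" + query_chunk
        scores.append((idx, reference_lm_logprob(context, target_chunk)))
    # Higher score = more helpful for generating the target chunk c_{t+1}.
    return sorted(scores, key=lambda item: item[1], reverse=True)
```

In training, the top-ranked candidates under such a score would serve as positive targets for the retriever's auxiliary loss.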
2306.13421#5
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
6
ToolQA is curated with an automated three-phase process: (1) The first phase, Reference Data Collection, involves gathering various types of public corpora including text, tables, and graphs from different domains. These corpora have no overlap with the LLM pre-training data and will serve as reference corpora for tool-based question answering. (2) The second phase is Human-guided Question Generation with LLMs. In this phase, we generate questions that can only be answered by using tools over the reference corpora. Our approach is a template-based question generation process, which includes human-guided template generation, template validation, and question instantiation with tool attributes. (3) The third phase is Programmatic Answer Generation. This phase produces accurate answers for the generated questions. To ensure answer correctness, we implement operators corresponding to the tools and obtain answers from the reference corpora programmatically. Our three-phase procedure ensures that we generate questions that can only be answered using external knowledge, along with their precise answers. Additionally, the process is highly efficient and requires minimal human labeling efforts.
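A rough sketch of phases (2) and (3) for the flight domain is shown below; the CSV column names (Origin, Dest, FlightDate, Cancelled, Diverted) and the 0/1 encoding are assumptions echoing the templates in the paper's Figure 2, not the actual curation code:

```python
import csv

# Hypothetical template; placeholders are filled from columns of the reference table.
QUESTION_TEMPLATE = (
    "Did the flight from {Origin} to {Dest} on {FlightDate} get cancelled or diverted?"
)

def generate_qa_pairs(flights_csv_path: str):
    """Instantiate the template from table rows and compute each answer
    programmatically from the reference data, so no human answer labeling is needed."""
    qa_pairs = []
    with open(flights_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            question = QUESTION_TEMPLATE.format(
                Origin=row["Origin"], Dest=row["Dest"], FlightDate=row["FlightDate"]
            )
            # Assumed encoding: "1" marks a cancelled or diverted flight.
            affected = row.get("Cancelled") == "1" or row.get("Diverted") == "1"
            qa_pairs.append({"question": question, "answer": "Yes" if affected else "No"})
    return qa_pairs
```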
2306.13304#6
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
6
[Figure 1 excerpt (perception examples): the coarse-grained tasks pair yes/no questions per image about object existence (e.g., refrigerator, donut, elephant, hair drier), count (pieces of pizza, number of people), position (the motorcycle relative to the bus, the baby relative to the dog), and color (a red vs. yellow coat, a red vs. black couch); the OCR task pairs yes/no questions about a phone number and the wording of a shop logo.]
2306.13394#6
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
6
We focus on the problem of modeling long documents, such as books, articles, code, scripts, and dialogue, since these are naturally occurring examples of long-form content, where the entire index can be held within memory in a forward-pass. We evaluate RPT on four language modeling tasks and find that it improves perplexity across all tasks, outperforming prior work (Hutchins et al., 2022; Wu et al., 2022) as well as strong baselines (Borgeaud et al., 2022; Zhong et al., 2022). Moreover, we show that RPT retrieves high-quality chunks compared to retrievers that rely on lexical information. Based on our empirical findings, we argue RPT can pave the way toward the next generation of pretrained LMs, where retrieval is strongly embedded within the architecture and training procedure. # 2 Background To situate our contribution, we review relevant recent RALM work. We extend this to more related work in §6.
2306.13421#6
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
7
We conducted experiments using both standard LLMs and tool-augmented LLMs to answer questions in ToolQA. Our findings indicate that ChatGPT and Chain-of-Thought prompting [57], which rely solely on internal knowledge, have low success rates of approximately 5% for easy questions and 2% for hard questions. In contrast, tool-augmented LLMs such as Chameleon [28] and ReAct [66] perform better by leveraging external tools. For easy questions, the best performance achieved by tool-augmented LLMs is 43.15%, while for hard questions, the best performance drops to 8.2%. Our results and error analysis demonstrate that ToolQA is a challenging benchmark for existing tool-augmented LLM methods, especially for its hard questions that require more complex reasoning about tool composition. # 2 Related Work # 2.1 Knowledge-Augmented LLMs
2306.13304#7
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
7
[Figure 1 excerpt (perception examples, continued): the OCR task pairs yes/no questions about a phone number and the wording of a coffee-shop logo; the fine-grained tasks pair yes/no questions about movie posters (director, title), celebrities (the actor inside the red box), scenes (the place depicted), and landmarks.]
2306.13394#7
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
7
# 2 Background To situate our contribution, we review relevant recent RALM work. We extend this to more related work in §6. Early work on RALMs, such as kNN-LM (Khandelwal et al., 2020), used retrieval to improve language modeling by interpolating the next-word distribution produced by the LM with a distribution proposed through a test-time-only retrieval mechanism. Borgeaud et al. (2022) later proposed Chunked Cross-Attention (CCA), where retrieval is performed also at training time, and retrieved representations are deeply fused into the representations produced by a Transformer decoder through attention. However, the retriever was trained separately and kept fixed during training, which prevented it from adapting to the LM over the course of training.
2306.13421#7
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
8
# 2 Related Work # 2.1 Knowledge-Augmented LLMs Several prior works aim to enhance LLMs with explicit external knowledge. Specifically, one line of research focuses on retrieval-augmented language models [50, 2, 15, 24, 27, 70, 30, 63], which use sparse [46] or dense retrieval [20, 14] to extract relevant knowledge from the corpus. These works mainly focus on leveraging free text, without considering multiple types of tools for task solving. On the other hand, Program-of-Thought [5], PAL [11], MathPrompt [13], and Code4Struct [55]
2306.13304#8
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
8
[Figure 1 excerpt (perception and cognition examples): landmark questions (e.g., whether the picture shows the Church of Saint Giles in Prague or a different church), artwork questions (the museum where a work is displayed, and its type, e.g., still-life vs. mythological), and cognition (reasoning) tasks: commonsense reasoning (whether a traffic sign permits crossing the street), numerical calculation (the result of an arithmetic expression in the image), and text translation.]
2306.13394#8
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
8
TRIME (Zhong et al., 2022), like this work, trained a retrieval-augmented LM from scratch, where the retriever component and the decoder LM are trained jointly. Our work differs from TRIME in two aspects. First, TRIME, like kNN-LM, incorporates information from the retriever in a shallow manner through distribution interpolation, while we adopt CCA as a deeper fusion mechanism. Second, TRIME takes advantage of lexical clues for supervising the retriever; that is, given a query, the TRIME retriever learns to retrieve contexts that will lead to generating the same token as the query. We, on the other hand, use a scoring LM to evaluate which text chunks are relevant for increasing the probability of the chunk being generated, which leads to more semantic retrieval. This is similar to EPR (Rubin et al., 2022), which used this idea for learning to retrieve prompts for in-context learning, and to perplexity distillation in Atlas (Izacard et al., 2022). However, Atlas does not train the retriever and LM from scratch and is an encoder-decoder model, more suitable for knowledge-intensive tasks. We, conversely, train from scratch and use a decoder model, more suitable for modeling long texts. # 3 Retrieval-Pretrained Transformer
2306.13421#8
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
9
[Figure 2 excerpt: the diagram contrasts LLMs' internal knowledge (general, publicly available, possibly outdated) with external knowledge (private or commercial data, professional abilities, the most recent data), and sketches (b) Human-Guided Question Generation, where templates such as "Did the flight from {Origin} to {Dest} on {Date} get cancelled or diverted?" are instantiated from flight records while templates answerable from internal knowledge, or not grounded in the data, are discarded, and (c) Programmatic Answer Generation, where a small function reads the corresponding table row (Origin, Dest, FlightDate) to compute the answer.]
2306.13304#9
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
9
[Figure 1 excerpt (cognition examples): numerical calculation (whether the value of "a" in the pictured equation is 3 or 2), text translation (whether the pictured text is appropriately translated as "classic taste" vs. "strawberry flavor", or "work hard together" vs. "be filled with intrigue"), and code reasoning (whether the pictured Python code outputs "Hello" vs. "World", or "0" vs. "1").]
2306.13394#9
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
9
# 3 Retrieval-Pretrained Transformer Problem Setup RPT, like RETRO (Borgeaud et al., 2022), is a chunk-wise retrieval-augmented LM, where the input sequence is divided into [architecture diagram: input tokens pass through a lower decoder; a retriever (pool + project) selects earlier chunks; an upper decoder applies causal attention, chunked cross-attention, and feed-forward blocks] Figure 2: The architecture of the Retrieval-Pretrained Transformer, where an input of 45 tokens is shown, consisting of 9 chunks, and causal self-attention is applied over 15 tokens. The left side shows the decoder stack, where the bottom n_layers/2 layers also include chunked cross-attention layers that fuse information from retrieved chunks. The right side shows the retriever, which takes a chunk and retrieves the highest-scoring K chunks that appeared earlier in the document.
2306.13421#9
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
10
Figure 2: ToolQA, aiming to faithfully evaluate LLMs’ abilities to use external tools, curates data through three phases: (a) Reference Data Collection; (b) Human-Guided Question Generation; and (c) Programmatic Answer Generation. apply code-based tools to enhance LLMs’ abilities in question answering with a focus on tabular and math-related tasks. Several additional works [48, 28, 49] expand the scope of tool utilization by incorporating different types of basic tools (e.g. calculator, calendar, machine translation) to solve complex reasoning tasks. ART [39], ReAct [66], and Reflexion [51] leverage large language models (LLMs) to auto-generate intermediate reasoning steps as well as actions, thereby improving interpretability and problem-solving abilities in diverse decision-making tasks. In addition, several works have extended this learning paradigm to other modalities [64, 61] and other domains [18]. A detailed comparison between existing tool-use LLMs can be found in Appendix A. # 2.2 Benchmarks on Tool-Augmented LLMs
2306.13304#10
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
10
Figure 1. Diagram of our MME benchmark. It evaluates MLLMs from both perception and cognition, including a total of 14 subtasks. Each image corresponds to two questions whose answers are marked yes [Y] and no [N], respectively. The instruction consists of a question followed by “Please answer yes or no”. It is worth noting that all instructions are manually designed. positing the perception information and the knowledge in the LLM to deduce more complex answers. It is obvious that the former is the premise of the latter. (2) Its data or annotations should not come from existing publicly available datasets as much as possible, avoiding the risk of data leakage. (3) Its instructions should be as concise as possible and in line with human cognition. Although instruction design may have a large impact on the output, all models should be tested under the same unified instructions for fair comparison. A good MLLM should be able to generalize to such concise instructions. (4) The responses of MLLMs to the instructions should be intuitive and convenient for quantitative analysis. The open-ended answers of MLLMs pose significant challenges to quantification. Existing methods tend to use GPT or manual scoring [21, 29, 48], but there may be problems of inaccuracy and subjectivity.
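Because every instruction ends with “Please answer yes or no”, scoring reduces to parsing a yes/no from each response; a minimal sketch of per-question accuracy plus a stricter per-image statistic (both questions of an image must be answered correctly) is given below. The metric naming and the parsing rule are assumptions for illustration, not the official evaluation script:

```python
from collections import defaultdict

def parse_yes_no(response: str):
    """Map a free-form response to 'yes', 'no', or None when neither can be read off."""
    text = response.strip().lower()
    if text.startswith("yes"):
        return "yes"
    if text.startswith("no"):
        return "no"
    return None

def score(records):
    """`records`: iterable of dicts with keys image_id, gold ('yes'/'no'), response."""
    records = list(records)
    correct = 0
    per_image = defaultdict(list)
    for r in records:
        ok = parse_yes_no(r["response"]) == r["gold"]
        correct += ok
        per_image[r["image_id"]].append(ok)
    accuracy = correct / len(records)
    # Stricter statistic: an image counts only if both of its questions are correct.
    accuracy_plus = sum(all(flags) for flags in per_image.values()) / len(per_image)
    return accuracy, accuracy_plus
```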
2306.13394#10
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
10
chunks, and retrieval is performed at the chunk level. Specifically, given a sequence of L input tokens, (x_1, x_2, ..., x_L), we partition it into a sequence of ℓ = L/m non-overlapping chunks of length m, denoted by C = (c_1, c_2, ..., c_ℓ). For every possible query chunk, c^q = c_i, the model will retrieve a subset of at most K ≪ ℓ chunks, R(c^q) ⊂ C^{<i} = (c_1, c_2, ..., c_{i-w}), where C^{<i} is the set of retrievable chunks for c_i, which excludes the w chunks to which it already has access through causal self-attention. The goal is to learn a model that retrieves a chunk subset, R(c^q), that increases the probability of autoregressive generation of the target chunk c^t = c_{i+1}. We present our method in two parts. First, our architecture (§3.1), which leverages CCA to fuse retrieved representations into the LM, but adds a learned retriever component. Second, we present the training method (§3.2-§3.3), where the retriever is trained to retrieve chunks useful for generating a future chunk according to a reference LM.
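To make the chunking and candidate-set notation concrete, a small sketch under the definitions above (the 0-indexed convention and helper names are illustrative):

```python
from typing import List, Sequence

def partition_into_chunks(tokens: Sequence[int], m: int) -> List[Sequence[int]]:
    """Split L input tokens into ell = L/m non-overlapping chunks of length m."""
    return [tokens[j:j + m] for j in range(0, len(tokens), m)]

def retrievable_candidates(chunks: List, i: int, w: int) -> List:
    """Candidate set C^{<i} for the 0-indexed query chunk c_i: every chunk except
    the w most recent ones, which the LM already covers via causal self-attention."""
    return chunks[: max(0, i - w + 1)]

# Example mirroring Fig. 2: 45 tokens, chunks of length m = 5, attention over w = 3 chunks.
chunks = partition_into_chunks(list(range(45)), m=5)      # 9 chunks
candidates = retrievable_candidates(chunks, i=8, w=3)     # chunks c_0 .. c_5
assert len(chunks) == 9 and len(candidates) == 6
```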
2306.13421#10
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
11
Earlier tool-augmented LLMs primarily assess single-tool usage based on downstream task performance across existing benchmarks. For example, there are works that study how text retrievers augment LLMs’ performance on open-domain question-answering [19, 65], fact-checking [53], and timely-information benchmarks [6, 21, 68, 10]. Besides, the mathematical reasoning abilities of external calculators and Python interpreters are evaluated using computation-intensive QA datasets [9, 29]. However, these evaluation benchmarks may not faithfully reflect the extent to which models leverage external tools, as some questions could still be correctly answered solely using the internal knowledge of the LLMs. ToolQA attempts to mitigate these issues by selecting data from out-of-scope sources that have not been memorized by LLMs. Concurrent with our work, there are several recent benchmarks for evaluating LLMs’ ability to use multiple tools for solving challenging tasks, including API-Bank [26], APIBench [41], and ToolBench [44, 62]. They mainly focus on constructing high-quality tool chains for LLM fine-tuning and evaluating API call trace accuracy against a fixed ground truth
2306.13304#11
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
11
To this end, we collect a comprehensive MLLM Evaluation benchmark, named as MME, which meets the above four characteristics at the same time:
• MME covers the examination of perception and cognition abilities. Apart from OCR, the perception includes the recognition of coarse-grained and fine-grained objects. The former identifies the existence, count, position, and color of objects. The latter recognizes movie posters, celebrities, scenes, landmarks, and artworks. The cognition includes commonsense reasoning, numerical calculation, text translation, and code reasoning. The total number of subtasks is up to 14, as shown in Fig. 1.
• All instruction-answer pairs are manually constructed. For the few public datasets involved in our study, we only use images without directly relying on their original annotations. Meanwhile, we make efforts to collect data through real photographs and image generation.
• The instructions of MME are designed concisely to avoid the impact of prompt engineering on the model output. We argue that a good MLLM should be able to generalize to such simple and frequently used instructions, which are fair to all models. Please see Fig. 1 for the specific
2306.13394#11
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
11
# 3.1 Model Architecture

Fig. 2 illustrates our architecture, where the input has 45 input tokens divided into 9 chunks, and causal self-attention is applied over w = 3 chunks (15 tokens). The left side depicts the decoder stack (“reader”), and the right side the retriever. The reader is split into two, where the bottom n_layers/2 layers (lower decoder) are standard Transformer decoder layers that take w chunks as input and output representations that will be used by the retriever and the top decoder layers. The top n_layers/2 layers (upper decoder) use Chunked Cross-Attention (CCA) to fuse information from the top-K neighbor chunks retrieved by the retriever back into the LM. We use standard CCA layers from RETRO (Borgeaud et al., 2022), where for each one of the ℓ chunks, queries are the m token representations of that chunk output by causal attention, and the keys and values are the token representations for the top-K neighbor chunks output by the retriever. For full details of CCA, see Borgeaud et al. (2022). Next, we describe the retriever component, along with a neighbor gating mechanism for modulating the effect of retrieved representations.
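As a rough illustration of the fusion step, the sketch below shows chunked cross-attention at the tensor-shape level. It assumes PyTorch, uses a single nn.MultiheadAttention module, and simplifies the layout to (n_chunks, m, d) query-chunk tokens attending over (n_chunks, K·2m, d) retrieved-neighbor tokens; it is not the RPT or RETRO implementation.

```python
import torch
import torch.nn as nn

class ChunkedCrossAttention(nn.Module):
    """Illustrative sketch of CCA: the m token representations of each query chunk
    attend to the concatenated representations of that chunk's top-K retrieved neighbors."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, chunk_states: torch.Tensor, neighbor_states: torch.Tensor) -> torch.Tensor:
        # chunk_states:    (n_chunks, m, d)       - queries: token reps per query chunk
        # neighbor_states: (n_chunks, K * 2m, d)  - keys/values: K neighbors, each 2m tokens
        fused, _ = self.attn(query=chunk_states, key=neighbor_states, value=neighbor_states)
        return fused

# Toy usage with the running example: 9 chunks of m = 5 tokens, K = 2 neighbors, d = 64.
cca = ChunkedCrossAttention(d_model=64, n_heads=4)
out = cca(torch.randn(9, 5, 64), torch.randn(9, 2 * 2 * 5, 64))
print(out.shape)  # torch.Size([9, 5, 64])
```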
2306.13421#11
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
12
and ToolBench [44, 62]. They mainly focus on constructing high-quality tool chains for LLM fine-tuning and evaluating API call trace accuracy against a fixed ground truth trace. In contrast, ToolQA is unique in that it focuses on the open-ended use of tools for question-answering, rather than benchmarking the intermediate process of tool use. Specifically, ToolQA creates tool-based question-answer pairs and assesses whether LLMs can arrive at the correct answer, regardless of the tool chains used.
2306.13304#12
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
12
[Leaderboard fragment extracted from Figure 2 of the MME paper, covering panels (1) Perception, (2) Cognition, (3) Existence, and (4) Count. The model names and scores are interleaved across columns and cannot be reliably reconstructed from the extracted text; see the original figure for the per-panel rankings.]
2306.13394#12
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
12
Retriever The retriever takes as input the representations output by the lower decoder and produces a similarity score for every pair of chunks. Given a query chunk c^q, the query-based score for each retrievable chunk c is $s_q(c) = \langle W_Q \mathbf{c}^q, W_K \mathbf{c} \rangle$, where $W_Q, W_K \in \mathbb{R}^{d \times d}$ are learned linear projections, and $\mathbf{c}^q$ and $\mathbf{c}$ are chunk representations. For an m-token long chunk c, we compute its representation $\mathbf{c}$ by applying bidirectional attention over the chunk tokens, followed by mean-pooling across the time dimension. This maintains causality, as these representations are only used during the prediction of the next chunk. Once scores for all pairs of chunks are computed, the retrieved neighbor chunks R(c^q) for each query chunk c^q consist of its top-K highest-scoring retrievable chunks. Then, for each chunk c_j ∈ R(c^q), we concatenate the representations of the succeeding chunk c_{j+1} to provide additional context, and the final representation for all neighbors of all chunks is given by a tensor $C \in \mathbb{R}^{\ell \times K \times 2m \times d}$.
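A minimal sketch of the query-based scoring described above, assuming PyTorch. The bidirectional-attention chunk encoder is simplified to mean pooling plus learned projections, and the class and attribute names (ChunkScorer, w_q, w_k) are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class ChunkScorer(nn.Module):
    """Sketch of the retriever's query-based score s_q(c) = <W_Q c_q, W_K c>."""

    def __init__(self, d_model: int):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model, bias=False)  # W_Q
        self.w_k = nn.Linear(d_model, d_model, bias=False)  # W_K

    def chunk_rep(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (n_chunks, m, d); mean-pool over the m tokens of each chunk.
        # (The paper additionally applies bidirectional attention before pooling.)
        return token_states.mean(dim=1)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        reps = self.chunk_rep(token_states)   # (n_chunks, d)
        q = self.w_q(reps)                    # projected query representations
        k = self.w_k(reps)                    # projected key representations
        return q @ k.T                        # (n_chunks, n_chunks) pairwise scores

scorer = ChunkScorer(d_model=64)
scores = scorer(torch.randn(9, 5, 64))
# For chunk i, only preceding chunks are retrievable; take the top-K of scores[i, :i].
top_k = torch.topk(scores[4, :4], k=2).indices
print(top_k)
```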
2306.13421#12
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
13
# 3 ToolQA Dataset

# 3.1 Dataset Details

We curate the ToolQA benchmark to evaluate LLMs’ capability in leveraging external tools for question answering. ToolQA consists of data from 8 distinct domains, each instance being a tuple of (question, answer, reference corpora, and tools). The reference corpora are external knowledge sources that can be queried, which can be a text corpus, a tabular database, or a graph. To enable obtaining information from the reference corpora, we have developed 13 tools for text retrieval, database operations, code interpretation, mathematical computations, and more. The questions are designed to simulate real-world information-seeking inquiries. However, they cannot be answered directly with LLMs’ internal knowledge, but instead require LLMs to obtain information from the reference corpora via tool use. Table 1 shows the detailed statistics of ToolQA.
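A minimal sketch of how one such (question, answer, reference corpora, tools) instance could be represented; the field names, the dataclass, and the example values are hypothetical and do not reflect the released dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToolQAInstance:
    """Illustrative container for one ToolQA example (field names are assumptions)."""
    question: str                  # natural-language information-seeking question
    answer: str                    # gold answer, produced programmatically
    reference_corpus: str          # external knowledge source to query (text / table / graph)
    tools: List[str] = field(default_factory=list)  # tools needed to answer the question

example = ToolQAInstance(
    question="Did the flight from LAX to MDW on 01/09/22 get canceled or diverted?",
    answer="No",                   # placeholder value, not taken from the dataset
    reference_corpus="Flights",
    tools=["Database Loader", "Data Filter", "Get Value", "Finish"],
)
print(example.question)
```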
2306.13304#13
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
13
[Leaderboard fragment extracted from Figure 2 of the MME paper, continuing the rankings; the panel headings visible in this fragment are (1) Perception, (2) Cognition, (3) Existence, and (4) Count. The interleaved model names, scores, and ranks cannot be reliably reconstructed from the extracted text.]
2306.13394#13
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13304
14
To reduce human efforts in generating faithful question-answer pairs to evaluate LLMs’ tool-use capabilities, we propose an automatic three-phase process (Figure 2): (1) We first select data from public sources that are unmemorized by LLMs during Reference Data Collection; (2) We adopt Human-Guided Question Generation to steer LLMs to generate valid questions according to pre- defined templates; (3) We produce accurate answers for the generated questions with Programmatic Answer Generation. We detail the three-phase generation process in the following. # 3.2 Reference Data and Tools To evaluate LLMs’ ability in using external tools for question answering, it is crucial to ensure that they cannot directly answer the questions with their internal knowledge. To this end, we collect reference corpora that meet the following criteria (Figure 2(a)): 1) The reference corpora should ideally not overlap with the LLM’s pre-training data; 2) The reference corpora should contain context-sensitive facts for generating questions that cannot be directly answered solely based on LLMs’ internal knowledge and reasoning abilities; 3) LLMs should be able to obtain all the necessary information from the reference corpora to correctly answer the questions.
2306.13304#14
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
14
[Leaderboard fragment extracted from Figure 2 of the MME paper, covering panels (5) Position, (6) Color, (7) Poster, and (8) Celebrity. The interleaved model names, scores, and ranks cannot be reliably reconstructed from the extracted text.]
2306.13394#14
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
14
Neighbor gating We add a neighbor gating mechanism to softly select neighbor representations that are useful for fusing into the upper decoder. Let $C_{i,k} \in \mathbb{R}^{2m \times d}$ be the token representations for the k’th neighbor of chunk c_i. We mean-pool across the time dimension to obtain a vector $\hat{c}_{i,k}$ for each neighbor chunk. Then, we enrich the neighbor representation of each chunk by applying causal attention: a neighbor chunk representation $\hat{c}_{i,k}$ attends to chunks that precede it or to neighbors of the same chunk c_i that are ranked higher. Finally, for each chunk we obtain the gated retrieved representation by multiplying the augmented representations by a gating score: $C^{g}_{i,k} = \max\{\eta, \sigma(w_{\mathrm{ng}} \cdot \hat{c}_{i,k})\} \cdot C_{i,k}$, where $w_{\mathrm{ng}}$ is a learned parameter vector, η is a small value meant to maintain gradient flow,² and σ is the sigmoid activation. Then, in the upper decoder, when CCA is performed, the keys and values are $C^{g}_{i,k}$.

# 3.2 Supervision Signal

For each query chunk c^q = c_i, we want to identify neighbor chunks that will be helpful for generating c^t = c_{i+1}, and use those neighbor chunks as supervision signal for the retriever. Similar to Rubin
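A small sketch of the gating computation, assuming PyTorch. It omits the causal-attention enrichment of neighbor representations, and the function and argument names are illustrative.

```python
import torch

def gate_neighbors(neighbor_tokens: torch.Tensor, w_ng: torch.Tensor, eta: float = 0.1) -> torch.Tensor:
    """Sketch of C^g_{i,k} = max(eta, sigmoid(w_ng . c_hat_{i,k})) * C_{i,k}.

    neighbor_tokens: (n_chunks, K, 2m, d) token reps of retrieved neighbors.
    w_ng:            (d,) learned gating vector.
    """
    c_hat = neighbor_tokens.mean(dim=2)             # (n_chunks, K, d): mean-pool over time
    gate = torch.sigmoid(c_hat @ w_ng)              # (n_chunks, K): scalar gate per neighbor
    gate = torch.clamp(gate, min=eta)               # keep a minimum gate to preserve gradient flow
    return neighbor_tokens * gate[..., None, None]  # broadcast over tokens and hidden dim

gated = gate_neighbors(torch.randn(9, 2, 10, 64), torch.randn(64))
print(gated.shape)  # torch.Size([9, 2, 10, 64])
```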
2306.13421#14
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
15
Based on these criteria, we define 6 contextual dimensions: temporal, spatial, social, scientific, mathematical, and personal. We collect reference corpora that can yield context-specific questions along one or more of the 6 dimensions. Specifically: 1) Along the temporal dimension, we collect the Flights and Coffee corpora, which contain the latest information that is out of the temporal scope of the LLM’s pre-training data. 2) Along the spatial dimension, we collect Yelp and Airbnb, which are two non-text corpora that can yield questions with spatial contexts. 3) Along the mathematical dimension, we collect the questions from GSM8K that ChatGPT cannot answer correctly with its own mathematical reasoning ability; 4) SciREX emphasizes detailed model performances from the scientific domain [16], where GPT family models can easily hallucinate [36]. 5) To incorporate personal data and avoid privacy issues, we synthesize the personal Agenda corpus with ChatGPT with virtual names and events. 6) In addition, we also select data from the most recent DBLP database and create graphs between authors and papers, where social relational knowledge cannot be understood by LLMs currently. Further details can be found in Appendix B. To obtain information from these reference corpora, we design 13 tools that are available to the LLMs (Table 2). These tools are designed as follows:
2306.13304#15
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
15
[Leaderboard fragment extracted from Figure 2 of the MME paper, covering panels (9) Scene, (10) Landmark, (11) Artwork, and (12) OCR. The interleaved model names, scores, and ranks cannot be reliably reconstructed from the extracted text.]
2306.13394#15
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
15
¹ Similar to RETRO, token representations of retrieved chunks are also augmented through cross-attention over tokens of the query chunk, c^q.
² We set η = 0.1 in all of our experiments.

et al. (2022), we can exploit the fact that we are producing training data and use information from c^t itself to produce such a score. Unlike Zhong et al. (2022), who use lexical clues alone, we will use an independent scoring LM for this purpose. Scoring every chunk w.r.t. all preceding chunks is quadratic in the number of chunks in a document, and thus computationally difficult. Thus, we use a simple, unsupervised BM25 retriever (Robertson and Zaragoza, 2009) that takes as input the concatenation of the chunks (c^q, c^t) = (c_i, c_{i+1}) and returns a set of candidate neighbor chunks, ¯R ⊂ C(c^q), which have high lexical overlap with the current and subsequent chunk. This retriever has access to the tokens that need to be generated by the LM, which is allowed at training time.
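A hedged sketch of the BM25 candidate-generation step, using the rank_bm25 package as a stand-in for whatever BM25 implementation the authors used; the toy chunks and whitespace tokenization are placeholders.

```python
from rank_bm25 import BM25Okapi

# Toy document split into whitespace-tokenized chunks (placeholders, not real data).
chunks = [
    "the wizard entered the tower at dawn".split(),
    "she studied the ancient map for hours".split(),
    "the tower held a library of maps".split(),
    "at dusk they compared their maps again".split(),
]

i = 2                                  # query chunk c_i; the target chunk is c_{i+1}
query = chunks[i] + chunks[i + 1]      # BM25 query is the concatenation (c^q, c^t)
retrievable = chunks[:i]               # only chunks preceding c_i are retrievable

bm25 = BM25Okapi(retrievable)
scores = bm25.get_scores(query)        # lexical-overlap score per candidate chunk
candidates = sorted(range(len(retrievable)), key=lambda j: scores[j], reverse=True)
print(candidates)                      # candidate neighbor chunk indices, best first
```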
2306.13421#15
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
16
To obtain information from these reference corpora, we design 13 tools that are available to the LLMs (Table 2). These tools are designed as follows: Text: AgendaRetriever and SciREXRetreiver are text retrieval tools. They can retrieve relevant information to a given query from the (synthesized) personal agenda corpus and scientific corpus. • Database: Database Loader loads data from the local tabular Database. Data Filter can filter the database according to a set of conditions, each of which is composed of a column name, a relation, and a pre-determined value (e.g., “Date=2022-10-15”). Get Value returns all the values under a certain column in the database. # Table 1: Dataset Statistics of ToolQA. Context Topic External Knowledge Easy Hard Format Size # Templates # Questions # Templates # Questions Temporal Flight Coffee Tabular Database Tabular Database 4078318 5746 10 8 100 100 10 13 100 130 Spatial Yelp Airbnb Tabular Database Tabular Database 150346 102599 11 10 100 100 10 10 100 100 Mathematical GSM8K Professional Ability - - 100 - - Social DBLP Graph 553320 10 100 10 100 Scientific SciREX Pure-Text Corpus 438 1 100 4 100 Personal Agenda Pure-Text Corpus 10000 5 100 5 100 SUM - - - 55 800 62 730 4
2306.13304#16
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
16
4 BLIVA 136.43 4 MMICL 82.50 6 WeMM 4 Cheetor 87.50 4 MMICL 136.43 5 Cheetor 77.50 4 LLaMA-Adapter V2 5 XComposer-VL 85.00 5 InfMLLM 132.14 6 Otter 72.50 4 XComposer-VL 6 MMICL 77.50 6 Qwen-VL-Chat 130.71 7 LRV-Instruction 70.00 5 Octopus 7 BLIP-2 75.00 7 SPHINX 130.00 8 LaVIN 65.00 5 mPLUG-Owl2 102.50 8 LRV-Instruction 72.50 8 InstructBLIP 129.29 9 Multimodal-GPT 62.50 5 InfMLLM 102.50 9 Otter 70.00 9 LLaVA 127.86 10 mPLUG-Owl 60.00 6 LRV-Instruction 85.00 10 Lion 67.50 (13) Commonsense Reasoning (14) Numerical Calculation (15) Text Translation (16) Code Reasoning
2306.13394#16
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
16
Let ĝ be an independently-trained LM, and let $\bar{c}_j$ be the concatenation (c_j, c_{j+1}). We compute a score $s_t(\bar{c}_j)$ that reflects whether the information in $\bar{c}_j$ is more useful for decoding c^t compared to chunks that are close to c^q. Specifically, the target-based score for a candidate chunk is

$$s_t(\bar{c}_j) = \log \frac{\mathrm{Prob}_{\hat{g}}(c^t \mid \bar{c}_j, c_{i-1}, c_i)}{\mathrm{Prob}_{\hat{g}}(c^t \mid c_{i-2}, c_{i-1}, c_i)}$$

This score is positive when information in $\bar{c}_j$ is more useful for decoding c^t than information in the preceding two chunks (c_{i−2}, c_{i−1}). We apply this scoring function to all chunks, and define for each query chunk c^q the set of positive chunks $R^q_{\mathrm{pos}}$, which includes candidates for which $s_t(\cdot) > 0$. This should result in helpful chunks, as each candidate chunk is at least as good as the local context. With this ordering at our disposal, we can apply standard retrieval training methods.

# 3.3 Training
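A sketch of the target-based scoring with a reference LM, assuming a Hugging Face causal LM; the log_prob helper, the "gpt2" model name, and the toy strings are illustrative stand-ins, and real usage would score token sequences built from the chunk definitions above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reference ("scoring") LM g_hat; gpt2 is just a stand-in model name.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_prob(context: str, target: str) -> float:
    """Sum of log-probabilities of `target` tokens given `context` under the reference LM."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    tgt_ids = tok(target, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    logits = lm(ids).logits[:, :-1]                 # logits at position p predict token p + 1
    logps = torch.log_softmax(logits, dim=-1)
    tgt_positions = range(ctx_ids.shape[1] - 1, ids.shape[1] - 1)
    return sum(logps[0, p, ids[0, p + 1]].item() for p in tgt_positions)

def target_score(c_t: str, cbar_j: str, c_im2: str, c_im1: str, c_i: str) -> float:
    """s_t(cbar_j) = log P(c_t | cbar_j, c_{i-1}, c_i) - log P(c_t | c_{i-2}, c_{i-1}, c_i)."""
    return log_prob(cbar_j + c_im1 + c_i, c_t) - log_prob(c_im2 + c_im1 + c_i, c_t)

# Positive score: the candidate pair cbar_j helps decode c_t more than chunk c_{i-2} does.
print(target_score(" again", " the tower held maps", " she slept", " at dusk", " they compared maps"))
```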
2306.13421#16
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
17
Table 2: Different tools in ToolQA.

| Tool Types | # Tools | Tools |
|---|---|---|
| Text Tools | 2 | Agenda Retriever, SciREX Retriever |
| Database Tools | 3 | Database Loader, Data Filter, Get Value |
| Math Tools | 1 | WolframAlpha Calculator |
| Graph Tools | 4 | Graph Loader, Neighbour Checker, Node Checker, Edge Checker |
| Code Tools | 2 | Python Interpreter, SQL Interpreter |
| System Tools | 1 | Finish |

• Math: Calculator is a mathematical tool that treats the input string as a formula and calculates the corresponding result. We use the WolframAlpha API portal as the calculator³, which can perform both simple computations (e.g., addition, subtraction, multiplication) and complicated operations (e.g., averaging, finding maximum values).
• Graph: Graph Loader loads the graph from local files for future operations. Neighbour Checker lists all the neighbors of the query node in the graph. Node Checker and Edge Checker return the detailed attribute information of the query node and edge, respectively.
• Code: The SQL Interpreter and the Python Interpreter are responsible for interpreting and executing SQL commands and Python code, respectively. They can receive and transform data from other tools, serving as bridges between different tools and the LLM.
• System: Finish parses the feedback from execution and returns the answer to finish the task.

# 3.3 Human-Guided Question Generation
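A minimal sketch of how the database tools listed above could behave over a pandas DataFrame. The function names and the "Column=Value" condition syntax follow the descriptions in the text (only equality is handled here, although the paper allows other relations); the actual ToolQA tool implementations may differ, and the toy table is a placeholder.

```python
import pandas as pd

def database_loader(path: str) -> pd.DataFrame:
    """Database Loader: load a tabular database from a local CSV file (illustrative)."""
    return pd.read_csv(path)

def data_filter(db: pd.DataFrame, conditions: list[str]) -> pd.DataFrame:
    """Data Filter: keep rows matching every 'Column=Value' condition (equality only)."""
    for cond in conditions:
        column, value = cond.split("=", 1)
        db = db[db[column].astype(str) == value]
    return db

def get_value(db: pd.DataFrame, column: str) -> list:
    """Get Value: return all values under a given column."""
    return db[column].tolist()

# Toy usage on an in-memory table standing in for the Flights database.
flights = pd.DataFrame({
    "Origin": ["LAX", "LAX", "JFK"],
    "Dest": ["MDW", "SFO", "MDW"],
    "Date": ["2022-01-09", "2022-01-09", "2022-01-10"],
    "Cancelled": [0, 1, 0],
})
subset = data_filter(flights, ["Origin=LAX", "Dest=MDW", "Date=2022-01-09"])
print(get_value(subset, "Cancelled"))  # [0]
```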
2306.13304#17
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
17
Figure 2. Leaderboards on our MME benchmark. (1) and (2) are the overall leaderboards of perception and cognition respectively, in which the full score of the former is 2000 and that of the latter is 800. (3)-(16) are the leaderboards of the 14 subtasks with the full score of 200. The score is the sum of the accuracy and the accuracy+ in Tables. 1 and 2. A total of 30 advanced MLLMs joint the leaderboards. For the sake of presentation, we only show 10 models for each list, in which the top three ones are given clear trophy logos. 3 instruction of each subtask.
2306.13394#17
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
17
# 3.3 Training

To train the parameters of the retriever component, we adapt the widely-used LambdaRank loss (Burges et al., 2006). The loss for each query chunk c^q (w.r.t. its retrievable chunks) is:

$$L_{\mathrm{ret}}(c^q) = \sum_{\{j,l:\, \bar{c}_l \in R^q_{\mathrm{pos}},\, s_t(\bar{c}_l) > s_t(\bar{c}_j)\}} \lambda_{jl} \max\big(0, \tau - (s_q(c_l) - s_q(c_j))\big)$$

where τ is a margin hyper-parameter, and $\lambda_{jl}$ is the LambdaRank scaling that considers the relative ranking of each candidate. This loss is non-zero when, for some pair of candidates, the target-based score disagrees (with margin τ) with the ranking of the query-based score for candidates in $R^q_{\mathrm{pos}}$. Optimizing this loss function allows RPT to distinguish between relevant and irrelevant chunks. Our final loss is $L_{\mathrm{LM}} + \alpha_{\mathrm{ret}} L_{\mathrm{ret}}$, where $L_{\mathrm{LM}}$ is the standard LM loss and $\alpha_{\mathrm{ret}}$ is the retrieval loss coefficient, increased linearly in the first 100K steps. We also increase τ linearly during training.

# Important Implementation Details
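A simplified sketch of the ranking loss above, assuming PyTorch. It sets the LambdaRank scaling λ_{jl} to 1 for brevity and uses illustrative tensor names, so it is not the exact training objective.

```python
import torch

def retrieval_loss(s_q: torch.Tensor, s_t: torch.Tensor, pos_mask: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Margin ranking loss over candidate pairs (lambda_{jl} set to 1 for simplicity).

    s_q:      (n,) query-based scores for the candidate chunks of one query chunk.
    s_t:      (n,) target-based scores from the reference LM.
    pos_mask: (n,) bool, True for candidates in R^q_pos (i.e., s_t > 0).
    """
    # Pair (l, j) contributes when c_l is positive and the target scores rank l above j.
    rank_gap = s_t[:, None] > s_t[None, :]               # (n, n): s_t(c_l) > s_t(c_j)
    valid = rank_gap & pos_mask[:, None]
    margins = torch.relu(tau - (s_q[:, None] - s_q[None, :]))
    return (margins * valid).sum()

loss = retrieval_loss(
    s_q=torch.tensor([0.2, 1.5, -0.3]),
    s_t=torch.tensor([1.0, 0.4, -0.8]),
    pos_mask=torch.tensor([True, True, False]),
)
print(loss)
```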
2306.13421#17
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
18
System: Finish parses the feedback from execution and returns the answer to finish the task.

# 3.3 Human-Guided Question Generation

The question generation phase aims to generate questions that can be answered by using the available tools over the reference corpora. There are two straightforward strategies to generate questions: 1) letting human experts come up with questions about the reference corpora, or 2) relying solely on LLMs to generate questions about the reference corpora. However, both strategies have their drawbacks. While human experts can produce high-quality questions, the entire process is labor-intensive, time-consuming, and hard to scale. Depending solely on LLMs may generate unanswerable questions or hallucinate information that does not exist in the reference data. Besides, some of the LLM-generated questions are too easy and can be directly answered with only LLMs' internal knowledge.
2306.13304#18
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
18
• Benefiting from our instruction design "please answer yes or no", we can easily perform quantitative statistics based on the "yes" or "no" output of MLLMs, which is accurate and objective. It should be noted that we have also tried to design instructions with multiple choice questions, but find that it may be beyond the capabilities of current MLLMs to follow complex instructions. We conduct massive experiments to evaluate the zero-shot performance of 30 advanced MLLMs on the 14 subtasks. The evaluated MLLMs include BLIP-2 [23], InstructBLIP [12], MiniGPT-4 [59], PandaGPT [39], Multimodal-GPT [15], VisualGLM-6B [5], ImageBind-LLM [17], VPGTrans [53], LaVIN [33], mPLUG-Owl [48], Octopus [3], Muffin [51], Otter [22], LRV-Instruction [28], Cheetor [24], LLaMA-Adapter-v2 [14], GIT2 [41], BLIVA [18], Lynx [52], MMICL [54], GPT-4V [37], Skywork-MM [4], mPLUG-Owl2 [48],
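A sketch of the kind of quantitative statistics the yes/no design enables: per-question accuracy plus a stricter "accuracy+" that requires all questions of an image to be answered correctly, summed into the per-subtask score reported in the Figure 2 leaderboards. The grouping-by-image definition of accuracy+ and the output-parsing heuristic are assumptions for illustration; the exact rules used by MME may differ.

```python
def parse_yes_no(output: str) -> str:
    """Map a model's free-form output to 'yes'/'no' (very rough heuristic, for illustration)."""
    return "yes" if output.strip().lower().startswith("yes") else "no"

def mme_style_score(records: list[dict]) -> float:
    """Each record: {'image': ..., 'pred': ..., 'label': 'yes'/'no'}.

    accuracy  = percentage of questions answered correctly
    accuracy+ = percentage of images whose questions are ALL answered correctly (assumed definition)
    score     = accuracy + accuracy+  (so each subtask is capped at 200)
    """
    correct = [parse_yes_no(r["pred"]) == r["label"] for r in records]
    accuracy = 100.0 * sum(correct) / len(records)

    per_image: dict = {}
    for r, ok in zip(records, correct):
        per_image.setdefault(r["image"], []).append(ok)
    accuracy_plus = 100.0 * sum(all(v) for v in per_image.values()) / len(per_image)
    return accuracy + accuracy_plus

records = [  # toy predictions, two questions per image
    {"image": "img1", "pred": "Yes, it is.", "label": "yes"},
    {"image": "img1", "pred": "No.", "label": "no"},
    {"image": "img2", "pred": "Yes", "label": "no"},
    {"image": "img2", "pred": "No", "label": "no"},
]
print(mme_style_score(records))  # 75.0 + 50.0 = 125.0
```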
2306.13394#18
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
18
# Important Implementation Details Scheduled sampling To reduce train-test mismatch, we apply scheduled sampling (Bengio et al., 2015) during training. Namely, after computing the top-K neighbor chunks, we use these neighbors with probability 1 − p_ss, and with probability p_ss the top-K scoring candidates from R^q_pos as input for CCA. We anneal p_ss from 1 to 0 during the first 90% of training with a cosine schedule. This allows the model to gradually learn to use its own predictions. We report the effect of this in §5.3. Sliding window attention at training and inference time As described in §3, the decoder takes as input w chunks, each with m tokens, and applies causal attention over them. In practice, to give the first tokens access to past tokens, we use the sliding-window attention mechanism (Dai et al., 2019; Beltagy et al., 2020; Hutchins et al., 2022), where the number of tokens in a window is 2,048 and the stride is 1,024. Thus, the input to each window is 2,048 tokens and the outputs are the representations for the last 1,024 tokens, which use the keys and values of the previous 1,024 tokens for contextualization.
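A minimal sketch of the scheduled-sampling probability described above, assuming a simple step-based cosine schedule; the function and argument names are ours, not the paper's.

```python
import math
import random

def p_ss(step: int, total_steps: int, anneal_frac: float = 0.9) -> float:
    """Anneal p_ss from 1 to 0 over the first 90% of training with a cosine schedule."""
    anneal_steps = int(anneal_frac * total_steps)
    if step >= anneal_steps:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * step / anneal_steps))

def pick_cca_neighbors(retrieved_top_k, gold_top_k, step, total_steps):
    """With probability p_ss feed the gold (R^q_pos) candidates to CCA,
    otherwise feed the retriever's own top-K neighbors."""
    if random.random() < p_ss(step, total_steps):
        return gold_top_k
    return retrieved_top_k
```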
2306.13421#18
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
19
To address these challenges, we propose a human-guided LLM generation approach that uses question templates to bridge human guidance and automatic LLM generation [59, 69]. We first ask ChatGPT to generate candidate question templates from reference data, using prompts such as “Generate some template questions based on the given information and provide the corresponding answers.” The responses obtained are arrays containing potential question templates. We then perform manual validation to select the templates that cannot be answered with LLMs’ internal knowledge but become answerable with the reference corpora. We provide a comprehensive list of both easy and hard question templates for different reference data in Appendix C and Appendix D. After the high-quality question templates are manually selected, we sample values from the reference data to automatically fill into the templates to generate concrete questions. For example, given the template “Did the flight from {Origin} to {Dest} on {Date} get canceled or diverted?”, we can sample the values “LAX”, “MDW”, “01/09/22” from the reference Flight tabular data and fill into the template to form a question: “Did the flight from LAX to MDW on 01/09/22 get canceled or diverted?”
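As an illustration of this template-filling step, the sketch below samples a row from a reference table and instantiates the flight template. The file name and column names ("Origin", "Dest", "FlightDate") are assumptions about the Flight table schema, not the released format.

```python
import pandas as pd

template = "Did the flight from {Origin} to {Dest} on {Date} get canceled or diverted?"

# Hypothetical file and column names for the Flight reference table.
flights = pd.read_csv("flights.csv")
row = flights.sample(n=1).iloc[0]  # sample concrete values from the reference data

question = template.format(Origin=row["Origin"], Dest=row["Dest"], Date=row["FlightDate"])
print(question)  # e.g. "Did the flight from LAX to MDW on 01/09/22 get canceled or diverted?"
```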
2306.13304#19
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
19
[18], Lynx [52], MMICL [54], GPT-4V [37], Skywork-MM [4], mPLUG-Owl2 [48], Qwen-VL-Chat [9], XComposer-VL [7], LLaVA [29], Lion [2], SPHINX [27], InfMLLM [1], and WeMM [6]. As displayed in Fig. 2, which consists of 2 overall leaderboards (perception and cognition) and 14 individual leaderboards, these MLLMs show clear discrepancies in our MME evaluation benchmark. Fig. 3 also provides a comparison from another perspective. We can see the range that current MLLMs can reach in each capability dimension. More importantly, we have summarized four prominent problems exposed in experiments, including inability to follow basic instructions, a lack of basic perception and reasoning, as well as object hallucination [25, 50], as shown in Fig. 4. It is expected that these findings are instructive for the subsequent model optimization.
2306.13394#19
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
19
At inference time a similar procedure is applied (Dai et al., 2019), where we compute and cache the key and value representations for segments of 1,024 tokens, and then use these as context for generating or estimating the probability of the next segment. Naturally, at inference time the retriever component provides access to all tokens from the beginning of the document. Additional details At training time we use sequences of length L = 16,384 tokens, which are split into 4 devices, each consuming 4,096 tokens. As mentioned, the decoder stack takes 2,048 tokens as input (in a sliding window approach), which contains w = 32 chunks of length m = 64. We employ Rotary Positional embedding (Su et al., 2021), and train all models for 500K steps on a TPUv4-64, with an effective batch size of 2!" tokens.

Table 1: Number of tokens (in millions) for each dataset and median document length.

| Name | Tokens (Train/Test) | Median Length |
| --- | --- | --- |
| ArXiv | 12,000 / 16 | 16,368 |
| CodeParrot | 5,000 / 5 | 29,269 |
| PG19 | 3,000 / 9 | 82,659 |
| Books3 | 25,000 / 35 | 113,496 |
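The inference-time procedure described above (caching the keys/values of each 1,024-token segment and reusing them as context for the next segment) can be sketched as follows; the `model.score_segment` interface is an assumption for illustration, not the released code.

```python
def evaluate_document(model, tokens, segment_len=1024):
    """Score a long document segment by segment, reusing the cached keys/values of
    the previous 1,024-token segment as context (sliding-window inference)."""
    cache = None            # keys/values of the previous segment
    total_log_prob = 0.0
    for start in range(0, len(tokens), segment_len):
        segment = tokens[start:start + segment_len]
        # Assumed interface: returns the segment log-probability and the new KV cache.
        log_prob, cache = model.score_segment(segment, past_kv=cache)
        total_log_prob += log_prob
    return total_log_prob
```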
2306.13421#19
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
20
Depending on the difficulty of the questions, we classify them into two classes — easy and hard. Easy questions primarily focus on extracting a single piece of information from external knowledge, thus requiring fewer tools in the solution. Conversely, hard questions require complex operations (e.g., average) and reasoning (e.g., comparison) over multiple information pieces drawn from the reference corpora, requiring more tools and complex reasoning among them. # 3.4 Programmatic Answer Generation Our final step is to create accurate answers for the generated questions. To guarantee the validity of these responses, we implement 1) operators, which are functions corresponding to the predefined tools; and 2) tool chains, which are schemas for composing different operators for different question templates. For each question, as we know the true arguments filled into the question template, we can

³ https://products.wolframalpha.com/api

[Table 3: Success rates on easy questions.]
2306.13304#20
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
20
In summary, the contributions of this work are as follows: (1) We propose a new benchmark MME to meet the urgent need of MLLM evaluation. (2) A total of 30 up-to-date MLLMs are evaluated on our MME. (3) We summarize the exposed problems in experiments, providing guidance for the evolution of MLLMs. # 2. MME Evaluation Suite # 2.1. Instruction Design In order to facilitate quantitative performance statistics, the orientation of our instruction design is to let the model answer “yes” or “no”. As a result, the instruction consists of two parts, including a concise question and a description “Please answer yes or no.” For each test image, we manually design two instructions, where the discrepancy lies in the questions. The ground truth answer of the first question is “yes” and that of the second question is “no”, as shown in Fig. 1. When an MLLM answers both of the two questions correctly, we can be more confident that it actually comprehends the image and the corresponding knowledge behind it, rather than just guessing. # 2.2. Evaluation Metric
2306.13394#20
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
20
Table 1: Number of tokens (in millions) for each dataset and median document length. For all models trained, we use the GPT-NeoX (Black et al., 2022) tokenizer, which was trained on the Pile (Gao et al., 2021a) and covers the domains we evaluate on (see §4). As our scoring language model, we use the deduplicated 1.4B parameter version of Pythia (Biderman et al., 2023), and score with it the top-20 BM25 candidates. Our model has 12 layers, hidden dimension d = 1024, and 8 attention heads with a head dimension of 128. We apply CCA every 2 layers and use 2 neighbors, unless mentioned otherwise. Additional implementation details are in Appendix A.1.

[Figure 3: Histograms of the distribution over document length in tokens across all datasets. The x-axis is in log scale.]

# 4 Long Range LM Datasets
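For reference, the architecture hyperparameters listed in the chunk above can be collected into a small config object; the field names below are illustrative, not taken from the released implementation.

```python
from dataclasses import dataclass

@dataclass
class RPTConfig:
    n_layers: int = 12      # decoder layers
    d_model: int = 1024     # hidden dimension d
    n_heads: int = 8        # attention heads
    head_dim: int = 128     # per-head dimension
    cca_every: int = 2      # apply chunked cross-attention every 2 layers
    n_neighbors: int = 2    # retrieved neighbors per query chunk
    chunk_len: int = 64     # m, tokens per chunk
```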
2306.13421#20
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13394
21
behind it, rather than just guessing. # 2.2. Evaluation Metric Since the output of the model is limited to two types (“yes” or “no”), it is convenient to measure the metrics of accuracy and accuracy+. The former is calculated based on each question, while the latter is based on each image, where both of the two questions need to be answered correctly. The random accuracies of the two metrics are equal to 50% and 25%, respectively. It can be seen that accuracy+ is a stricter measurement, but it also better reflects how comprehensively the model understands the image. In addition, we calculate the score of a subtask based on the sum of accuracy and accuracy+. The perception score is the sum of scores of all perception subtasks. The cognition score is calculated in the same way. Therefore, the full scores of perception and cognition are 2000 and 800, respectively. # 2.3. Data Collection # 2.3.1 Perception Tasks We argue that perception is one of the most fundamental capabilities of MLLMs, and a lack of perception will easily lead to the object hallucination problem [25, 50]. That is, the MLLM will answer questions based on its own fantasies rather than on the realistic content of the image, as displayed in Fig. 4.
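A minimal sketch of how these metrics can be computed from the per-image yes/no outcomes; this is our own helper, not the official evaluation script.

```python
def mme_subtask_score(results):
    """results: one (q1_correct, q2_correct) pair of booleans per image.
    Returns (accuracy, accuracy+, score), where score = accuracy + accuracy+ (max 200)."""
    n_images = len(results)
    n_questions = 2 * n_images
    n_correct_questions = sum(int(a) + int(b) for a, b in results)
    n_correct_images = sum(1 for a, b in results if a and b)
    accuracy = 100.0 * n_correct_questions / n_questions
    accuracy_plus = 100.0 * n_correct_images / n_images
    return accuracy, accuracy_plus, accuracy + accuracy_plus

# Example: 3 images -> accuracy 83.3, accuracy+ 66.7, subtask score 150.0
print(mme_subtask_score([(True, True), (True, False), (True, True)]))
```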
2306.13394#21
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
21
Figure 3: Histograms of the distribution over document length in tokens across all datasets. The x-axis is in log scale. # 4 Long Range LM Datasets We evaluate RPT on four datasets, covering domains such as books, code, and mathematical writing, which require the ability to recall information over long distances. Tab. 1 and Fig. 3 provide statistics on dataset size and the distribution over document length, showing that documents are long across all datasets and in particular PG19 and Books3, where documents typically contain 10⁵ tokens or more. We briefly review the datasets. PG19 Introduced in Rae et al. (2020), PG19 is a widely-used long-range language modeling benchmark containing books from Project Gutenberg, and covering a wide range of literary genres, styles, and topics. We adopt the exact setup and data split from prior work (Wu et al., 2022; Hutchins et al., 2022; Mehta et al., 2023). Books3 is a corpus of books released as part of the Pile (Gao et al., 2021a), containing a vast collection of literary works from different domains. To our knowledge, we are the first to use this corpus as a long-range language modeling benchmark.
2306.13421#21
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13394
22
Coarse-Grained Recognition. The contents of coarse-grained recognition include the existence of common objects, and their count, color, and position. The images are sampled from COCO [26], but the instruction-answer pairs are all manually constructed, rather than directly using publicly available annotations. Even if MLLMs have seen these COCO images, our manually prepared pairs are not present in their training sets. This requires MLLMs to be able to understand the instructions and infer the corresponding answers. In each perception subtask of existence, count, color, and position, we prepare 30 images with 60 instruction-answer pairs.
2306.13394#22
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
22
CodeParrot (Wolf et al., 2023) is a corpus of clean, nearly-deduplicated Python code from various GitHub repositories. Modeling code requires understanding patterns and contextualizing information over long distances, making it a natural candidate for testing long-range LMs. In our experiments, we follow the approach of Wu et al. (2022), combining files from the same repository to construct a corpus with longer sequences, and create a train/test split (see Tab. 1). ArXiv is a corpus of preprint papers extracted from ArXiv. It consists of mathematical texts that require maintaining coherence and referring to previously mentioned information over extended text. Prior work evaluated long-range LMs on this corpus (Wu et al., 2022; Hutchins et al., 2022; Mehta et al., 2023), but did not release their corpus. Thus, we use the preprocessed corpus and data splits made available by Azerbayev et al. (2023). # 5 Experiments We now turn to experiments for comparing RPT to prior work across our four datasets. # 5.1 Experimental Setup We compare to the following baselines and oracles.
2306.13421#22
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
23
run the tool chains with the corresponding arguments to programmatically extract answers from the reference data. This process enables the automatic generation of correct answers to questions, even for those questions that involve multi-step reasoning. Figure 2(c) demonstrates this generation process. When answering a generated question with sampled values, “Did the flight from LAX to MDW on 01/09/22 get canceled or diverted?”, we write Python code to implement the operators over the reference data, including a database loader, a data filter, and a get-value function. Then, the programmatic pipeline runs a tool chain of these operators to automatically generate the correct answer (details in Appendix E). # 4 Experiments # 4.1 Baselines
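To make the operator/tool-chain idea concrete, here is a minimal sketch under assumed names and schema; the real ToolQA operators and the Flight table columns may differ.

```python
import pandas as pd

def load_db(path):                        # database loader operator
    return pd.read_csv(path)

def data_filter(df, **conditions):        # data filter operator
    for column, value in conditions.items():
        df = df[df[column] == value]
    return df

def get_value(df, column):                # get-value operator
    return df.iloc[0][column]

# Tool chain for: "Did the flight from LAX to MDW on 01/09/22 get canceled or diverted?"
flights = load_db("flights.csv")          # hypothetical file name
match = data_filter(flights, Origin="LAX", Dest="MDW", FlightDate="01/09/22")
answer = "Yes" if bool(get_value(match, "Cancelled")) or bool(get_value(match, "Diverted")) else "No"
print(answer)
```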
2306.13304#23
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
23
Fine-Grained Recognition. The fine-grained recognition is more about testing the knowledge resources of MLLMs. The subtasks consist of recognizing movie posters, celebrities, scenes, landmarks, and artworks, containing 147, 170, 200, 200, and 200 images respectively. For the celebrities, we plot a red box around a person with a clearly visible face in the image, and the corresponding instruction is “Is the actor inside the red box named [celebrity name]? Please answer yes or no.” Similar to the above coarse-grained recognition, the images of these subtasks are from publicly available datasets [19, 34, 35, 44, 58] and all of the instructions are manually designed. OCR. Optical Character Recognition (OCR) is also a foundational capability of MLLMs, serving for subsequent
2306.13394#23
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
23
# 5 Experiments We now turn to experiments for comparing RPT to prior work across our four datasets. # 5.1 Experimental Setup We compare to the following baselines and oracles. Transformer-XL Our simplest baseline is a standard transformer decoder stack with sliding window attention. Put differently, we simply remove from RPT the retriever component and CCA layers in the upper decoder. Using sliding window attention (as described in §3.4) can be viewed as a variant of Transformer-XL (Dai et al., 2019). RETRO (Borgeaud et al., 2022) A retrieval-augmented model, where we omit the retriever component and feed the top-K neighbors retrieved by BM25³ as input to the CCA layers in the upper decoder. During training, we use the query (c_q, c_t), since we have access to the target chunk. During inference, we use c_q. RPT-Lex A version of RPT, where the training signal is not obtained from the scoring LM, but from lexical information only, similar to TRIME (Zhong et al., 2022). Explicitly, the set of positive chunks R^q_pos for a chunk c_q contains the top-20 chunks that have the highest BM25 score with (c_q, c_t). RPT-Sem Our full model described in §3.
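A minimal sketch of building the lexical positive set R^q_pos with an off-the-shelf BM25 implementation (rank_bm25); the whitespace tokenization and the helper's name are our simplifications, not the paper's preprocessing.

```python
from rank_bm25 import BM25Okapi

def lexical_positives(previous_chunks, c_q, c_t, top_k=20):
    """Return indices of the top-k previous chunks by BM25 score with (c_q, c_t)."""
    corpus = [chunk.split() for chunk in previous_chunks]   # naive whitespace tokenization
    bm25 = BM25Okapi(corpus)
    scores = bm25.get_scores((c_q + " " + c_t).split())
    ranked = sorted(range(len(previous_chunks)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]
```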
2306.13421#23
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
24
# 4 Experiments # 4.1 Baselines We evaluate the performance of the following methods on ToolQA, covering both standard LLMs and tool-augmented LLMs: (1) ChatGPT [37]: We directly feed the question into OpenAI’s ChatGPT model (gpt-3.5-turbo) and obtain its response as the final answer. (2) CoT [57, 23]: We use chain-of-thought prompting for ChatGPT, adding the prompt "Let’s think step by step:" after the question to leverage LLMs’ reasoning ability for question answering. (3) Chameleon [28] is a recent method that uses LLMs as a controller to use multiple tools for solving subtasks and has shown promising results in reasoning and QA tasks. When running Chameleon on ToolQA, we set the tool pool to our defined tools in § 3.1. (4) ReAct [66] integrates reasoning with tool use by prompting LLMs to generate interleaved verbal reasoning traces and tool calls. This integration has been shown effective in enhancing LLMs’ problem-solving capabilities. We instantiate two versions of ReAct using gpt-3.5-turbo and text-davinci-003.
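As an illustration of the CoT baseline's prompting, the sketch below appends the standard trigger to the question and queries gpt-3.5-turbo. It uses the legacy (pre-1.0) openai client interface and is a simplification, not the actual evaluation harness.

```python
import openai  # legacy (<1.0) client interface; newer clients expose a different API

def cot_answer(question: str) -> str:
    """Chain-of-thought baseline: append "Let's think step by step:" to the question."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question + "\nLet's think step by step:"}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]
```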
2306.13304#24
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
24
[Flattened results-table fragment: numeric scores of the evaluated MLLMs; the row and column labels were lost in extraction, so the table cannot be reconstructed.]
2306.13394#24
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
24
RPT-Sem Our full model described in §3. Block-Recurrent Transformer We use the official training implementation⁴ of the Block-Recurrent Transformer (Hutchins et al., 2022) with the default configuration. Memorizing Transformer We use the official implementation⁴ of Memorizing Transformers (Wu et al., 2022), with the default configuration and a memory size of 32K tokens. Oracles For each test chunk, we can exhaustively search and use at test time the best possible neighbors for a model according to the scoring LM. This provides an upper bound for the performance of RPT-Lex and RPT-Sem, as they are trained to imitate the ranking produced by this oracle.
2306.13421#24
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
25
Different from the existing works that mainly provide task-level few-shot exemplars, we provide tool-level demonstrations. We use 8 demonstrations of how to use tools for QA, ensuring that each tool in the pool is covered at least once by the demonstrations. Such tool-level demonstrations provide a concise tutorial to the LLMs for tool use, covering all tool uses within the LLM context limit. Details about the demonstrations and our prompts are included in Appendix F. To assess the performance of methods on the ToolQA benchmark, we normalize both the ground-truth answers and the model predictions to ensure uniformity in format. Success rates are then computed based on the exact match between these normalized answers. We evaluate the model’s ability against the generated question-answer pairs in an open-ended manner, focusing on whether the model can arrive at the correct answer, regardless of the used tool chains. # 4.2 Results
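A minimal sketch of answer normalization and exact-match success-rate computation; the specific normalization rules (lowercasing, stripping punctuation and articles) are typical choices and may differ from the benchmark's actual script.

```python
import re
import string

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def success_rate(predictions, references) -> float:
    """Exact match between normalized prediction and normalized ground truth."""
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

print(success_rate(["Yes."], ["yes"]))  # -> 100.0
```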
2306.13304#25
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
25
[Flattened results-table fragment: numeric scores of the evaluated MLLMs; the row and column labels were lost in extraction, so the table cannot be reconstructed.]
2306.13394#25
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
25
Metrics We use perplexity to evaluate the performance of models. In addition, we use the target score s_t(·) from the scoring LM to compute for each chunk a gold ranking over all previous chunks, and to label chunks as positive/negative iff their target score is positive/negative, respectively. With this information, we can evaluate Precision@k, which is the fraction of top-k chunks according to the query-based score that are positive, and Recall@k, which is the fraction of positive chunks that are in the top-k chunks according to the query-based score. We also use the gold ranking to compute NDCG@k, which is a standard retrieval metric (Järvelin and Kekäläinen, 2002). # 5.2 Results Table 2 shows our main results, which show that RPT-Sem is comparable or better than all

³ Concurrent work (Doostmohammadi et al., 2023) showed that training RETRO using BM25 substantially outperforms dense retrieval methods.
⁴ https://github.com/google-research/meliad
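A sketch of the retrieval metrics under one common formulation; negative target scores are clipped to zero for the NDCG gain, which is an assumption on our part, and the paper may define gains differently.

```python
import math

def precision_recall_at_k(target_scores_ranked, k):
    """target_scores_ranked: gold target scores of previous chunks, ordered by the
    retriever's query-based score. A chunk is positive iff its target score > 0."""
    total_pos = sum(1 for s in target_scores_ranked if s > 0)
    top_k_pos = sum(1 for s in target_scores_ranked[:k] if s > 0)
    precision = top_k_pos / k
    recall = top_k_pos / total_pos if total_pos else 0.0
    return precision, recall

def ndcg_at_k(target_scores_ranked, k):
    def dcg(scores):
        return sum(max(s, 0.0) / math.log2(i + 2) for i, s in enumerate(scores[:k]))
    ideal = dcg(sorted(target_scores_ranked, reverse=True))
    return dcg(target_scores_ranked) / ideal if ideal > 0 else 0.0
```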
2306.13421#25
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
26
# 4.2 Results Comparing Different Tool-Use LLMs. Tables 3 and 4 show the results of different methods on the easy and hard questions. ChatGPT and CoT achieve very poor success rates (< 10%) on both easy and hard questions across different tasks. This is expected, as the questions in ToolQA cannot be answered solely based on LLMs’ internal knowledge and reasoning. Chameleon achieves slightly better performance, with 10.6% and 1.9% success rates on easy and hard questions, respectively. This is because Chameleon incorporates tool descriptions and integrates human-induced orderings of these tools in its context, enabling it to comprehend and compose different tools for QA. However, Chameleon cannot take feedback from the execution trace, thus often suffering from infeasible actions or omitted arguments in its generated plans. ReAct is the best-performing model. It can use observations in the execution trace to generate its next action, allowing it to iteratively refine its tool-use chain and obtain better success rates.

[Figure 3: Analysis of incorrect tool calls and incorrect data sources made by ReAct on ToolQA. (a) Incorrect tool calls of ReAct on ToolQA. (b) Confusion matrix of questions from different resources in ToolQA.]
2306.13304#26
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
26
[Fragment of the MME evaluation tables: per-subtask scores without row or column labels.]
2306.13394#26
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
26
4https://github.com/google-research/meliad.

Model | ArXiv | Code | PG19 | Books3 | Params
Transformer-XL (ours) | 3.11 | 2.30 | 11.48 | 15.00 | 202M
RETRO w. BM25 (ours) | 2.94 | 2.17 | 11.44 | 14.60 | 236M
RPT-Lex | 2.92 | 2.23 | 11.59 | 14.32 | 242M
RPT-Sem | 2.77 | 2.17 | 10.96 | 13.91 | 242M
  w. 3 neighbours | 2.75 | 2.16 | 10.92 | 13.87 | 242M
  w. 4 neighbours | 2.74 | 2.15 | 10.93 | 13.91 | 242M
Memorizing Transformer | 2.92 | 2.18 | 10.97 | 14.40 | 212M
Block-Recurrent Transformer | 2.89 | 2.73 | 10.95 | 14.64 | 212M
RPT-Lex w. oracle | 2.80 | 2.12 | 10.88 | 13.30 | 242M
RPT-Sem w. oracle | 2.69 | 2.10 | 10.26 | 12.74 | 242M

Table 2: Test set perplexity for all datasets. Unless specified, we use 2 neighbours during inference.
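For reference, perplexity here is the exponential of the average negative log-likelihood per token; the snippet below is a generic illustration of that computation, not the evaluation script used to produce the table.

```python
# Generic perplexity computation: exp of the mean negative log-likelihood
# over all predicted tokens (illustrative; not the paper's evaluation code).
import math

def perplexity(token_log_probs):
    """token_log_probs: natural-log probabilities the model assigned to each target token."""
    nll = -sum(token_log_probs) / len(token_log_probs)   # average negative log-likelihood
    return math.exp(nll)

# Example: a model that assigns probability 0.5 to every token has perplexity 2.
print(perplexity([math.log(0.5)] * 10))  # -> 2.0
```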
2306.13421#26
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
27
Easy vs. Hard Questions. Comparing Tables 3 and 4, we observe that all the baselines perform much worse on hard questions. The best method achieves an average success rate of 43.13% on easy questions, while that number drops to 8.24% on hard questions. As mentioned in § 3, the hard questions in ToolQA require more tool calls and more complicated compositions. Current tool-augmented LLMs struggle with answering such hard questions, which calls for further development of techniques to improve their ability to reason about the task and generate plans for tool use. GPT-3 vs. GPT-3.5.⁴ Comparing the different versions of ReAct, we observe that ReAct (GPT-3) outperforms ReAct (GPT-3.5) on easy questions, yet it shows inferior performance on hard questions. Our hypothesis is that for easy questions, it is more important to learn and follow the format of the tool calls in the context, which GPT-3 is stronger at. For hard questions, the better reasoning and code-understanding abilities of GPT-3.5 enable it to come up with “innovative” solutions that never appear in the context, leading to higher success rates. An example is given in § 5.3. # 5 Result Analysis and Discussion
2306.13304#27
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
27
[Fragment of the MME evaluation tables: per-subtask scores without row or column labels.]
2306.13394#27
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
27
Table 2: Test set perplexity for all datasets. Unless specified, we use 2 neighbours during inference. other baselines in all cases. Using a fixed retriever (RETRO) categorically improves performance compared to Transformer-XL; RPT-Lex leads to gains in Books3 but to losses in PG19 compared to RETRO, and RPT-Sem outperforms Transformer-XL, RETRO, and RPT-Lex on ArXiv, PG19, and Books3, and has performance comparable to RETRO on CodeParrot. Compared to Block-Recurrent Transformers and Memorizing Transformers, which do not use CCA, performance is again either comparable or better, with notable gains on ArXiv, CodeParrot, and Books3. CCA allows one to dynamically increase the number of neighbours at inference time. When using 3 or 4 neighbours (instead of 2), performance improves, which allows one to trade compute for performance.
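The claim that the number of neighbours can be changed at inference time follows from how chunked cross-attention consumes neighbours: they are concatenated along the key/value axis, so k is just a runtime choice. Below is a deliberately simplified, single-head sketch without learned projections; it illustrates the idea and is not the RPT/CCA implementation.

```python
# Sketch: cross-attending over a variable number k of retrieved neighbour chunks.
# Because neighbours are concatenated along the key/value axis, k can be chosen
# at inference time without retraining (single-head, no learned projections).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chunked_cross_attention(queries, neighbour_reprs):
    """queries: (q_len, d); neighbour_reprs: list of k arrays, each (n_len, d)."""
    kv = np.concatenate(neighbour_reprs, axis=0)          # (k * n_len, d)
    scores = queries @ kv.T / np.sqrt(queries.shape[-1])  # (q_len, k * n_len)
    return softmax(scores, axis=-1) @ kv                  # (q_len, d)

d = 16
queries = np.random.randn(8, d)
neighbours = [np.random.randn(12, d) for _ in range(3)]   # k = 3 chosen at inference
out = chunked_cross_attention(queries, neighbours)
print(out.shape)  # (8, 16)
```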
2306.13421#27
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13421
28
CCA allows one to dynamically increase the number of neighbours at inference time. When using 3 or 4 neighbours (instead of 2), performance improves, which allows one to trade compute for performance.

Model | ArXiv | Code | PG19 | Books3
RPT-Sem | 2.77 | 2.17 | 10.96 | 13.91
- only teacher forcing | 2.91 | 2.22 | 11.54 | 14.66
- no teacher forcing | 2.95 | 2.26 | 13.10 | 14.40
- no neighbor gating | 2.92 | 2.20 | 11.50 | 18.68

Table 4: Results of our ablation study on RPT-Sem.

Distribution of improvements across chunks We compute the improvement in perplexity for all chunks when comparing to Transformer-XL and plot the distribution of improvements for RETRO, RPT-Lex, and RPT-Sem in Fig. 4. Clearly, RPT-Sem has a heavier right tail in all cases except for CodeParrot, further illustrating its advantage over the other baselines. We further analyze why RETRO with BM25 performs well on CodeParrot in §5.4. # 5.3 Ablations
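The teacher-forcing rows in Table 4 refer to mixing gold neighbours (chosen by the scoring LM) with the retriever's own neighbours according to a probability p_ss that is annealed during training. The sketch below illustrates one such schedule; the linear shape and function names are assumptions for illustration, not the paper's exact recipe.

```python
# Illustrative sketch of annealed teacher forcing for neighbour selection:
# early in training, attend to "gold" neighbours chosen by the scoring LM with
# high probability p_ss; later, rely on the model's own retrieved neighbours.
# The linear schedule and names are assumptions, not the actual RPT recipe.
import random

def p_ss(step, total_steps, p_start=1.0, p_end=0.0):
    """Linearly anneal the probability of using gold (scoring-LM) neighbours."""
    frac = min(step / max(total_steps, 1), 1.0)
    return p_start + frac * (p_end - p_start)

def select_neighbours(step, total_steps, gold_neighbours, retrieved_neighbours):
    """With probability p_ss use gold neighbours, otherwise the retriever's own."""
    if random.random() < p_ss(step, total_steps):
        return gold_neighbours          # teacher forcing
    return retrieved_neighbours         # model's own retrieval

# Roughly, the "only teacher forcing" ablation corresponds to keeping p_ss fixed
# high throughout training, and "no teacher forcing" to fixing p_ss at 0.
```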
2306.13421#28
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
29
# 5.1 Main Error Type I: Argument Errors By performing comprehensive error analysis, we found that the most common error type when asking LLMs to use tools for QA is the argument error — LLMs calling the tools with wrong arguments. For ReAct, this error type accounts for 44.56% and 48.23% of the 377 and 436 error cases on easy and hard questions, respectively, as shown in Figure 3(a). Interestingly, ReAct shows different argument-error patterns on easy and hard questions. On easy questions, it tends to make more mistakes on database-related tools. For example, the model commits a total of 120 errors when calling the LoadDB, FilterDB, and GetValue tools for easy questions, while this number drops to 95 for hard questions. On the other hand, when dealing with code-related tools (e.g., SQLInterpreter and PythonInterpreter), ReAct makes nearly 10x more errors on hard questions than on easy ones. This is likely because the solution logic for hard questions is often more complex and cannot be fully inferred from the context alone. Consequently, the LLMs tend to rely on their understanding of code and programming concepts to tackle these intricate questions. In contrast, for easy questions, the LLMs tend to follow the patterns provided in the context, attempting to combine different database operations to arrive at a solution. # 5.2 Main Error Type II: Incorrect Data Source
2306.13304#29
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
29
Table 1. Evaluation results on the subtasks of existence, count, position, color, OCR, poster, and celebrity. The top two results on each subtask are bolded and underlined, respectively. text-based tasks such as text translation and text understanding. The images are sampled from [30] and all of the instruction-answer pairs are manually designed. Considering that MLLMs are still in their infancy, we only choose relatively simple samples in this version of MME. The numbers of images and instruction-answer pairs are 20 and 40, respectively. # 2.3.2 Cognition Tasks We evaluate whether an MLLM can carry out further logical reasoning after perceiving the image, which is the most fascinating aspect of MLLMs over previous traditional methods. In order to infer the correct answer, MLLMs need to follow the instruction, perceive the contents of the image, and invoke the knowledge stored in their LLMs, which is much more challenging than the single perception tasks. Examples of the following subtasks are shown in Fig. 1.
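Given that each image is paired with two manually designed instruction-answer pairs (20 images, 40 pairs here), the quantitative statistics mentioned in the abstract can be computed along the lines of the sketch below; the per-question and per-image scores are an illustrative assumption, not MME's official evaluation code.

```python
# Hypothetical sketch: scoring yes/no instruction-answer pairs per image.
# "acc" counts each question independently; "acc_plus" credits an image only
# when both of its questions are answered correctly (an assumption about the
# scoring, not necessarily MME's exact protocol).
from collections import defaultdict

def score(records):
    """records: list of (image_id, predicted_answer, gold_answer) with 'yes'/'no' answers."""
    per_image = defaultdict(list)
    for image_id, pred, gold in records:
        per_image[image_id].append(pred.strip().lower() == gold.strip().lower())
    flat = [ok for oks in per_image.values() for ok in oks]
    acc = 100.0 * sum(flat) / len(flat)
    acc_plus = 100.0 * sum(all(oks) for oks in per_image.values()) / len(per_image)
    return acc, acc_plus

records = [("img1", "yes", "yes"), ("img1", "no", "yes"),
           ("img2", "no", "no"), ("img2", "yes", "yes")]
print(score(records))  # (75.0, 50.0)
```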
2306.13394#29
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
29
# 5.3 Ablations Last, oracle models consistently achieve the best perplexity across all datasets, improving from 2.74→2.69 on ArXiv, 2.15→2.10 on CodeParrot, 10.92→10.26 on PG19, and 13.87→12.74 for Books3. This shows that improving the training of the retriever can further improve performance.

Table 3: Test retrieval metrics across datasets. Rows: ArXiv, Code, Books3, PG19; columns: Recall@10, Precision@2, and nDCG@20, each for BM25, RPT-Lex (RPT-L), and RPT-Sem (RPT-S). Flattened cell values: 27% 26% 29% 26% 23% 19% 22% 22% 32% 55% 54% 34% 53% 52% 26% 55% 50% 28% 55% 55% 58% 24% 24% 56% 25% 23% 58% 18% 16% 61% 18% 18% 30% 30% 22% 23%.

Retrieval metrics Table 3 presents the retrieval metrics w.r.t. oracle positive chunks. Again, retrieval with RPT-Sem outperforms both RPT-Lex and BM25 in all cases. This shows the importance of training a retriever, and moreover that using semantic supervision leads to better retrieval compared to a lexical signal only.
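For completeness, the retrieval metrics reported in Table 3 can be computed against the set of oracle positive chunks as in the following generic sketch (illustrative; not the paper's evaluation code).

```python
# Generic sketch of Recall@k, Precision@k, and nDCG@k computed against a set of
# "gold" (oracle positive) chunks; illustrative only.
import math

def recall_at_k(ranked, gold, k):
    return len(set(ranked[:k]) & gold) / max(len(gold), 1)

def precision_at_k(ranked, gold, k):
    return len(set(ranked[:k]) & gold) / k

def ndcg_at_k(ranked, gold, k):
    dcg = sum(1.0 / math.log2(i + 2) for i, c in enumerate(ranked[:k]) if c in gold)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(gold), k)))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["c7", "c2", "c9", "c1", "c5"]   # retriever's ranking of candidate chunks
gold = {"c2", "c5"}                       # oracle positive chunks
print(recall_at_k(ranked, gold, 2), precision_at_k(ranked, gold, 2), ndcg_at_k(ranked, gold, 5))
```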
2306.13421#29
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
30
# 5.2 Main Error Type II: Incorrect Data Source We have conducted an investigation into the data sources preferred by LLMs when answering questions. We found that LLMs also have difficulty identifying the proper reference corpora to answer the questions. This behavior is graphically represented as a confusion matrix in Figure 3(b). Upon examining the figure, it is apparent that for target reference corpora like Flight, Coffee, Airbnb, ⁴ GPT-4 was not included in the evaluation as we have no access to its API.
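As a concrete illustration of how such a confusion matrix can be tallied, the sketch below counts, for each question, the corpus the model actually queried against the corpus it should have queried; the input format and logging are hypothetical, while the corpus names are those mentioned in the text.

```python
# Hypothetical sketch: building a data-source confusion matrix from
# (gold_source, queried_source) pairs logged per question.
from collections import Counter

SOURCES = ["Flight", "Coffee", "Airbnb", "Yelp", "Agenda", "SciREX", "DBLP"]

def confusion_matrix(pairs):
    counts = Counter(pairs)  # keys are (gold_source, queried_source)
    return [[counts.get((gold, queried), 0) for queried in SOURCES] for gold in SOURCES]

pairs = [("Flight", "Flight"), ("Coffee", "Agenda"), ("SciREX", "DBLP"), ("Yelp", "Agenda")]
for gold, row in zip(SOURCES, confusion_matrix(pairs)):
    print(f"{gold:>7}: {row}")
```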
2306.13304#30
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
30
the following subtasks are shown in Fig. 1. Commonsense Reasoning. Unlike the ScienceQA dataset [32], which requires specialized knowledge, commonsense refers to basic knowledge in daily life. For example, given a photo of a down jacket, we ask MLLMs whether it is appropriate to wear the clothing when it is cold (or hot). This is basic knowledge that humans can judge instantly without complex step-by-step reasoning. Therefore, we expect MLLMs to perform well in a zero-shot setting. The images are all manually photographed or generated by diffusion models, and the instruction-answer pairs are all manually designed. There are a total of 70 images and 140 instruction-answer pairs. Numerical Calculation. This subtask requires MLLMs to read the arithmetic problem in the image and output the answer in an end-to-end way, which has been demonstrated in [20]. In this version, we only consider relatively easy
2306.13394#30
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
30
Tab. 4 shows the results of an ablation study on RPT-Sem over all datasets. Only Teacher Forcing We force the model to attend to gold neighbors according to the scoring LM, without annealing p_ss during training. This leads to a performance drop across all datasets, and in particular for PG19 and Books3. No Teacher Forcing Here, we do the opposite and fix p_ss = 0 throughout training, i.e., we only use the predicted neighbors and not the gold ones. This can lead to undertraining of the CCA layers, since they are exposed to low-quality neighbors at the beginning of training, and results drop even further compared to Only Teacher Forcing. No neighbor gating We disable neighbor gating, which controls the flow of information from neighbor chunks, and analyze the effect on model performance. We observe a performance reduction across all datasets, notably on Books3, where perplexity [Figure 4: panels for ArXiv, CodeParrot, PG19, and Books3, comparing RPT-Sem, RPT-Lex, and RETRO+BM25; see its caption below.]
2306.13421#30
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
31
Coffee-Hard Question: what was the average coffee price from 2019-10-31 to 2021-07-23? Correct Answer: 120.9 ReAct (GPT-3) Thought 1: This is a question related to coffee. We need to load the coffee database. Action 1: LoadDB[coffee] Observation 1: We have successfully loaded the coffee database... Thought 2: We need to filter the information related to the query. Action 2: FilterDB[Date>=2019-10-31, Date<=2021-07-23] Observation 2: We have successfully filtered the data (435 rows). Thought 3: We then obtain all the ReAct (GPT-3.5) Thought 3: We need to calculate the average coffee price from the filtered data. Action 3: Calculate[AVG(Close)] Observation 3: (data not available) We need to retrieve the result of the previous calculation. Action 4: GetValue[AVG(Close)] Observation 4: The value you are querying does not exist. Please modify it. Thought 5: Oops, I made a mistake in the SQL query. Let me correct it. Action 5: SQLInterpreter[SELECT AVG(Close) FROM Coffee-Hard
2306.13304#31
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
31
[Fragment of the MME evaluation tables: per-subtask scores without row or column labels.]
2306.13394#31
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
31
Figure 4: Relative perplexity improvement across different retrievers. All retrievers exhibit positive skew with a heavy right tail, and RPT-Sem leads to the most pronounced improvements. increases by 4.5 points. Since neighbor gating is independent of the retriever used, we show results when adding neighbor gating to RETRO in §A.4, which shows mixed results. # 5.4 Analysis Figure 6: The maximal target score st(·) for the top-k chunks retrieved by BM25, averaged across chunks and for all datasets. Since the maximal target score for the top-20 chunks is much higher than for the top-2, learning to rerank the top-20 BM25 candidates can lead to substantial improvements in retrieval quality. Figure 5: We measure the number of unique tokens that overlap between query/target chunks and the best retrieved neighbor. Token overlap Fig. 5 plots the average number of tokens that overlap between the query/target chunks and the best retrieved neighbor for RETRO, RPT-Lex, and RPT-Sem. RPT-Sem retrieves paragraphs with higher overlap with the target chunk compared to RPT-Lex. Naturally, BM25 retrieves chunks with the highest overlap with the query chunk. However, this does not translate to higher lexical overlap for the target chunk. Supervision quality We train RPT-Sem using information from the target scoring function st(·),
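The token-overlap measurement referenced in Fig. 5 above can be computed with simple set arithmetic. The sketch below is illustrative only; whitespace tokenization and the example strings are assumptions, not the paper's tokenizer or data.

```python
# Sketch: number of unique tokens shared between a query/target chunk and the
# best retrieved neighbour (whitespace tokenization is a simplification).
def unique_token_overlap(chunk_a, chunk_b):
    return len(set(chunk_a.split()) & set(chunk_b.split()))

query    = "the retriever returns earlier chunks from the same document"
target   = "information from retrieved chunks is fused into the representations"
neighbor = "earlier chunks from the same document often contain the needed information"

print(unique_token_overlap(query, neighbor))   # overlap with the query chunk
print(unique_token_overlap(target, neighbor))  # overlap with the target chunk
```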
2306.13421#31
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
32
a mistake in the SQL query. Let me correct it. Action 5: SQLInterpreter[SELECT AVG(Close) FROM Coffee-Hard Question: How much did the coffee change from 2006-11-27 to 2016-06 Correct Answer: 10.5 ReAct (GPT-3.5) Thought 6: we need to modify the filtering query to the correct format by adding quotes around the date. Action 6: FilterDB[Date='2016-06-03'] Observation 6: The filtering query Date='2016-06-03' is incorrect. Please modify the condition. Action 7: GetValue[Close] Observation 7: 118.0 Thought 8: We need to subtract the price of coffee on 2006-11-27 Close values of filtered records. Action 3: GetValue[Close] Observation 3: [..., 193.65, 189.0] (435 values) Action 8: Calculate[118.0-94.25] Thought 6: After calculation, we know Observation 8: 23.75 that the average coffee price from 2019-10-31 to 2021-07-23 is 120.9. Action 6: Finish[120.9] Observation 6: Answer is CORRECT Too Long
2306.13304#32
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
32
[Fragment of the MME evaluation tables: per-subtask scores without row or column labels.]
2306.13394#32
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
32
Figure 6: The maximal target score st(·) for the top-k chunks retrieved by BM25, averaged across chunks and for all datasets. Since the maximal target score for the top-20 chunks is much higher than for the top-2, learning to rerank the top-20 BM25 candidates can lead to substantial improvements in retrieval quality. Figure 5: We measure the number of unique tokens that overlap between query/target chunks and the best retrieved neighbor. Token overlap Fig. 5 plots the average number of tokens that overlap between the query/target chunks and the best retrieved neighbor for RETRO, RPT-Lex, and RPT-Sem. RPT-Sem retrieves paragraphs with higher overlap with the target chunk compared to RPT-Lex. Naturally, BM25 retrieves chunks with the highest overlap with the query chunk. However, this does not translate to higher lexical overlap for the target chunk. Supervision quality We train RPT-Sem using information from the target scoring function st(·),
2306.13421#32
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
33
price from 2019-10-31 to 2021-07-23 is 120.9. Action 6: Finish[120.9] Observation 6: Answer is CORRECT Too Long Context, Aborted Thought 9: After calculation, we know that the coffee price changed by 23.75. Action 9: Finish[23.75] Observation 9: Answer is INCORRECT Thought 7: We have successfully filtered the data (1 rows). (a) Innovation (b) Hallucination
2306.13304#33
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
33
[Fragment of the MME evaluation tables: per-subtask scores without row or column labels.]
2306.13394#33
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
33
Supervision quality We train RPT-Sem using information from the target scoring function st(·), which we saw leads to model improvements. However, the target scoring function only provides a reranking of the top-20 candidates according to BM25. Thus, a natural question is how much the supervision quality improves through this reranking. Figure 6 shows, for every rank k, the maximal target score among the top-k chunks according to BM25, averaged over chunks and across our 4 datasets. Clearly, reranking the top-20 BM25 candidates has a lot of potential, as the maximal target score is much higher for the top-20 candidates compared to the top-2. This hints that longer and better training of the retriever can further improve the performance of RPT-Sem. Interestingly, our analysis sheds light on why RPT-Sem outperforms RETRO clearly on Books3 and PG19 but less so on CodeParrot. The maximal target score for CodeParrot when k = 2 is
2306.13421#33
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
34
(a) Innovation (b) Hallucination Figure 4: An example of innovation and hallucination when answering hard questions on Coffee data. Actions and observations shrouded in pink are incorrect, whereas those in green are correct. Terms highlighted in yellow signify hallucinations produced by ReAct (GPT-3.5). (a) Easy questions. (b) Hard questions. Figure 5: Error analysis of ReAct on ToolQA. and Yelp that contain temporal information, LLMs are more likely to query the Agenda corpus for answering questions. Similarly, given that the SciREX knowledge corpora and DBLP graph are both in the scientific domain, LLMs tend to be confused about which source to query when answering scientific questions. # 5.3 Main Error Type III: Innovation and Hallucination
2306.13304#34
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
34
[Numeric body of Table 2 (per-model evaluation results on the scene, landmark, artwork, commonsense reasoning, numerical calculation, text translation, and code reasoning subtasks); the row and column headers were lost in extraction, so the values cannot be attributed to specific models here.]
2306.13394#34
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
34
already quite high – around 0.1, which corresponds to more than 10% improvement in the probability of the target chunk compared to the local context. Conversely, for PG19 and Books3, the target score when k = 2 is closer to 0. This hints that lexical information alone is quite effective for CodeParrot, potentially by retrieving function definitions, variable assignments, etc. [Figure 7: Relative improvement with/without correct retrieval. Panels: ArXiv, Books3, CodeParrot, PG19; legend: RPT-Sem, RPT-Lex, RETRO+BM25; groups: Incorrect, Correct, All.] Subgroup analysis Figure 7 shows the average relative improvement (across chunks) of RETRO, RPT-Lex, and RPT-Sem compared to Transformer-XL, when distinguishing between cases where a “gold” oracle chunk was retrieved and cases where no gold chunk was retrieved.
2306.13421#34
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
35
For in-context tool-augmented LLMs, it is typical to include descriptions and use-case examples of each tool in the prompt. However, as the problem complexity increases with the number of tools, it becomes challenging to encompass all possible instances of compositional tool use as few-shot exemplars. Consequently, it is vital for LLMs to uncover logical relationships among different tools, which are never covered in the human-provided exemplars, to solve challenging tasks — a process we refer to as "innovation." However, these innovative behaviors are a double-edged sword as they are often accompanied by hallucinations. Figure 4 illustrates this phenomenon with a case study, where LLMs answer hard questions with reference to the Coffee data. Given the context length constraint, the few-shot exemplar only showcases the basic usage of database operations and the SQL interpreter. For the hard question in Figure 4(a), ReAct (GPT-3) strictly follows the operations displayed in the context, leading to failure. On the contrary, ReAct (GPT-3.5) innovatively identifies the SQL interpreter as a possible alternative to database operations, especially when the
2306.13304#35
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
35
Table 2. Evaluation results on the subtasks of scene, landmark, artwork, commonsense reasoning, numerical calculation, text translation, and code reasoning. The top two results on each subtask are bolded and underlined, respectively. arithmetic problems, such as addition and multiplication. There are 20 images and 40 instruction-answer pairs. The images are all manually taken, and the instruction-answer pairs are all manually designed. Text Translation. Considering that the MLLM [5] supports both English and Chinese, we set the text translation subtask. It requires MLLMs to translate the Chinese written in an image to the corresponding English. In this version, we only design basic translation problems, which will be updated according to the development of MLLMs in the future. The images of this part are all manually taken, and the instruction-answer pairs are all manually designed. There are a total of 20 images and 40 instruction-answer pairs. Code Reasoning. It requires MLLMs to read the code in the images and automatically complete logical operations inside the code. A similar task that writes website code based on an image has been demonstrated in [59]. The images are all manually taken, and the instruction-answer pairs are all manually designed. We only set basic code problems in this version. There are in total 20 images and 40 instruction-answer pairs. # 3. Experiments
2306.13394#35
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
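As a rough illustration of the manually designed instruction-answer pairs described above (20 images and 40 pairs per subtask, i.e., two concise instructions per image), the sketch below shows one possible data representation. The dataclass, file paths, and example code-reasoning questions are assumptions for illustration, and the yes/no phrasing is an assumption based on MME's concise instruction design rather than actual benchmark items.

```python
# Illustrative (assumed) representation of a subtask's instruction-answer pairs:
# each image carries two concise instructions, hence 20 images -> 40 pairs.
from dataclasses import dataclass

@dataclass
class InstructionAnswerPair:
    image_path: str
    instruction: str  # concise prompt, e.g., ending with "Please answer yes or no."
    answer: str       # ground truth: "yes" or "no"

code_reasoning_samples = [
    InstructionAnswerPair("code/0001.png",
                          "The code in the image prints 15. Please answer yes or no.", "yes"),
    InstructionAnswerPair("code/0001.png",
                          "The code in the image prints 20. Please answer yes or no.", "no"),
]
```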
2306.13421
35
As expected, RPT-Sem leads to improvements on all datasets, and outperforms other baselines except for RETRO on CodeParrot where performance is similar. Second, cases where a gold chunk was retrieved indeed typically lead to larger improvements, but we witness improvements even in cases where a gold chunk was not retrieved, which shows that the model can still benefit from such retrievals. # 6 Related Work and Discussion Long-range language modeling A primary focus in long-range language modeling has been addressing the quadratic complexity of attention in order to develop more efficient mechanisms for handling long texts. For instance, Transformer-XL (Dai et al., 2019) processes the input using a
2306.13421#35
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
36
On the contrary, ReAct (GPT-3.5) innovatively identifies the SQL interpreter as a possible alternative to database operations, especially when the latter fails repeatedly. However, such innovations can oftentimes lead to hallucinations. As shown in Figure 4(b), when answering another hard question from the Coffee data, ReAct (GPT-3.5) opts to hallucinate certain observations (highlighted in yellow) that are non-existent in the feedback from tool execution.
2306.13304#36
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
36
# 3. Experiments In this section, a total of 30 MLLMs are evaluated on our MME benchmark, including BLIP-2 [23], InstructBLIP [12], MiniGPT-4 [59], PandaGPT [39], Multimodal-GPT [15], VisualGLM-6B [5], ImageBind-LLM [17], VPGTrans [53], LaVIN [33], mPLUG-Owl [48], Octopus [3], Muffin [51], Otter [22], LRV-Instruction [28], Cheetor [24], LLaMA-Adapter-v2 [14], GIT2 [41], BLIVA [18], Lynx [52], MMICL [54], GPT-4V [37], Skywork-MM [4], mPLUG-Owl2 [48], Qwen-VL-Chat [9], XComposer-VL [7], LLaVA [29], Lion [2], SPHINX [27], InfMLLM [1], and WeMM [6]. [Figure legend residue: leaderboard panels for Poster, Scene, Landmark, Artwork, Code Reasoning, Text Translation, and Numerical Calculation, comparing WeMM, Lion, SPHINX, GPT-4V, InfMLLM, and MMICL.]
2306.13394#36
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
36
segment-level mechanism while retaining a cache from previous segments. Longformer (Beltagy et al., 2020) extends this idea to accommodate even longer contexts. Sparse strategies, such as those proposed in Zaheer et al. (2020); Roy et al. (2021); Kitaev et al. (2020), attend to only a subset of tokens through clustering or hashing methods. Another approach involves compressing the input and attending over the compressed sequence (Martins et al., 2022; Rae et al., 2020), or learning to ignore irrelevant tokens (Sukhbaatar et al., 2021). Recently, recurrent mechanisms have re-emerged as potential solutions (Fan et al., 2021; Hutchins et al., 2022; Mehta et al., 2023). From an analysis perspective, past work (Press et al., 2021) demonstrated that standard LM benchmarks are not ideal for measuring the long-range capabilities of models. Sun et al. (2021) discuss various types of sequences that benefit from having a long context, and Rae and Razavi (2020) investigate long-range architectural choices and recommend increasing long-range capabilities in the upper layers.
2306.13421#36
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
37
# 5.4 Other Error Types We manually go through and count all the errors made by the ReAct (GPT-3.5) model and show the errors on both easy and hard questions in Figure 5. In addition to the aforementioned 3 main error types, there are 4 error types that frequently occur: • Infeasible Actions: The execution of tool calls is infeasible in the environment, often involving new tools that do not exist in the pre-defined tool pool. • Too Long Context: The encoding of interaction history, observations, and tool-use plans exceeds the length limitation of GPT family models, resulting in runtime errors; • Mis-understanding: The LLMs cannot understand the observations obtained from external interaction and fail to determine the next steps or generate answers; • Low-Quality Retrieval: This error occurs when the retrieval model fails to extract the relevant information from text corpora, indicating insufficient external knowledge for LLMs to answer questions accurately. Comparing these error types on easy and hard questions, we find that the overall distribution is similar, though there is a slightly higher rate of hallucination and long-context errors when answering hard questions. This can be attributed to the complexity of hard questions, which often require composing more tools for question answering. # 6 Conclusion
2306.13304#37
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13421
37
Retrieval LMs Retrieval-augmented LMs have emerged as a prominent approach for efficiently leveraging external knowledge while generating text. These models can be broadly divided into those operating at token-level granularity and those operating at sequence-level granularity. Token-level approaches, such as kNN-LM (Khandelwal et al., 2020), TRIME (Zhong et al., 2022), and SPALM (Yogatama et al., 2021), retrieve information for individual tokens. Sequence-level approaches like RAG (Lewis et al., 2020) utilize pre-trained encoder-decoder models with pre-trained retrievers for tasks like open-domain question answering. Similarly, FiD (Izacard and Grave, 2021b) employs generative encoder-decoder models that fuse evidence from multiple passages during the decoding process, closely related to the CCA mechanism (see additional discussion in App A.3). Recently, Wang et al. (2023) demonstrated the potential benefits of conducting retrieval and chunked cross-attention at each time step, compared with the original RETRO (Borgeaud et al., 2022) paper, which retrieves every m = 64 steps.
2306.13421#37
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
38
# 6 Conclusion We have developed ToolQA, a dataset that assesses the ability of Large Language Models (LLMs) to use external tools for solving complex problems. ToolQA is curated by an automated three-phase process for reference data collection, template-based question generation, and programmatic answer generation. This pipeline is general and can be expanded to incorporate any area of external knowledge of interest. We tested both standard LLMs and tool-augmented LLMs on ToolQA. Our analysis showed that even the strongest baseline achieved limited performance on the hard questions of ToolQA. Our study also found that current tool-augmented LLMs tend to make errors such as incorrect tool calls and using incorrect data sources. These issues could potentially be addressed by fine-tuning using a collection of tool-use corpora with publicly accessible LLMs. In the future, we are interested in collecting high-quality, diverse data for fine-tuning, as well as assessing the performance of fine-tuned tool-augmented LLMs on ToolQA. # References
2306.13304#38
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
38
# 3.1. Results # 3.1.1 Perception There are a total of 10 subtasks for the evaluation of the perception ability, from the perspectives of coarse-grained recognition, fine-grained recognition, and OCR. Figs. 2 (3)-(6) show the score leaderboards of individual coarse-grained recognition subtasks. With respect to object existence, Otter, Lynx, WeMM, Muffin, and SPHINX get the highest score of 195, with a 98.33% accuracy and a 96.67% accuracy+ listed in Table 1. In contrast, the second-place models, including GIT2, XComposer-VL, Lion, GPT-4V, etc., lag behind the first place by only 5 points. The results show that these models already perform well on object existence. For the object count, position, and color, Muffin, Lion (tied with SPHINX), and InfMLLM rank first, respectively. This suggests that different models have their own strengths. Note that among the four coarse-grained subtasks, these MLLMs get the worst results on object position, indicating that the current models are not sensitive enough to position information.
2306.13394#38
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
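The accuracy and accuracy+ figures quoted above can be computed with a small routine, under the assumption (not spelled out in this excerpt) that accuracy is measured per question while accuracy+ credits an image only when every question attached to it is answered correctly.

```python
# Sketch of the two metrics, under the stated assumption about how accuracy+ is defined.
from collections import defaultdict

def accuracy_and_accuracy_plus(predictions):
    """predictions: list of (image_id, is_correct) pairs, typically two per image."""
    per_image = defaultdict(list)
    for image_id, is_correct in predictions:
        per_image[image_id].append(is_correct)
    accuracy = 100.0 * sum(correct for _, correct in predictions) / len(predictions)
    accuracy_plus = 100.0 * sum(all(v) for v in per_image.values()) / len(per_image)
    return accuracy, accuracy_plus
```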
2306.13421
38
Joint retriever-reader training Joint training approaches typically concentrate on transferring information from a pre-trained reader into a pre-trained retriever. These methods commonly involve updating the retriever index during the training process in the context of knowledge-intensive tasks, such as open-domain question answering. For instance, REALM (Guu et al., 2020) utilizes masked language modeling as a learning signal to update the retriever. EMDR2 (Sachan et al., 2021) extends FiD by using encoder-decoder models to back-propagate errors from the predicted answer to the retriever. Similarly, Izacard and Grave (2021a) demonstrate that it is possible to use attention scores from the reader to supervise the retriever. Notably, Izacard et al. (2022) further scale up these approaches and jointly train a retriever with an encoder-decoder model, demonstrating strong few-shot learning capabilities. They also investigate various retriever updating techniques to address train-test mismatches in the retrieval process. We do not encounter the issue of index update since we compute the entire index through a forward pass.
2306.13421#38
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
39
9 # References [1] Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli, T. Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. [2] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driess- che, J.-B. Lespiau, B. Damoc, A. Clark, et al. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206–2240. PMLR, 2022. [3] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
2306.13304#39
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
39
Figs. 2 (7)-(11) display the score leaderboards of individual fine-grained recognition subtasks. Regarding poster recognition, GPT-4V, Lion, and Qwen-VL-Chat are the top three. It is interesting that Qwen-VL-Chat relatively underperforms in the coarse-grained recognition, but here it exhibits good performance. This implies that our division into coarse-grained and fine-grained is reasonable, enabling us to examine different aspects of MLLMs. For the celebrity recognition, WeMM, SPHINX, and Otter take the top three with similar scores. It is worth noting that GPT-4V refuses to answer questions that involve individuals, resulting in a zero score in the celebrity subtask. For the scene recognition, WeMM, InfMLLM, and Lynx are ahead of the other MLLMs. This is the first time InfMLLM and Lynx have broken into the top three in the fine-grained recognition subtasks. For the landmark recognition, the top three places are taken by Lion, WeMM, and LLaVA respectively, of which Lion gets the top spot. For the artwork recognition, WeMM,
2306.13394#39
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]