doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, nullable) | journal_ref (stringlengths 8–194, nullable) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
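The header above gives the per-column types and length statistics of this chunked-arXiv table. Below is a minimal sketch for iterating over rows of such a table, assuming the rows have been exported to a local JSONL file; the file path and field access are illustrative, not part of any dataset card.

```python
# Minimal sketch for inspecting rows of the chunked-arXiv table described by the
# header above. Assumes the rows were exported to a local JSONL file; the path
# and field names mirror the column names but are otherwise illustrative.
import json

def iter_rows(path="arxiv_chunks.jsonl"):
    """Yield one dict per table row: doi, chunk-id, chunk, id, title, summary, ..."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    for row in iter_rows():
        # `id` combines the paper and the chunk, e.g. "2306.13421#39".
        print(row["doi"], row["chunk-id"], row["title"])
        print(row["chunk"][:100], "...")
        break
```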
2306.13421 | 39 | Attention as Retrieval Several works view the attention layer as a retrieval component. Memorizing Transformers (Wu et al., 2022) employ a single k-NN layer and retrieve cached keys and values without back-propagating gradients through the retrieval operation. Similarly, Bertsch et al. (2023) demonstrate that this approach can be used with any existing pre-trained model and apply it at every attention layer for long summarization tasks. Notably, Jiang et al. (2022) use this observation and employ a caching mechanism (Gao et al., 2021b) to enable joint end-to-end training with the supervision of the downstream task. We view the latter as a potential way to fine-tune RPT and leave it for future work. | 2306.13421#39 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
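The "Attention as Retrieval" chunk in the row above summarizes k-NN attention over cached keys and values, where gradients do not flow through the retrieval step. The sketch below illustrates only that idea; the NumPy implementation, names, and shapes are assumptions for illustration and are not taken from any of the cited papers.

```python
# Schematic sketch of k-NN "attention as retrieval" in the spirit of the chunk
# above: a query looks up the top-k cached (key, value) pairs and attends only
# over them; the k-NN lookup itself is treated as non-differentiable.
import numpy as np

def knn_attention(query, cached_keys, cached_values, k=32):
    """query: (d,), cached_keys/values: (n, d). Returns a (d,) output vector."""
    scores = cached_keys @ query                 # similarity to every cached key
    topk = np.argsort(scores)[-k:]               # indices of the k best matches
    sel_scores = scores[topk] / np.sqrt(query.shape[0])
    weights = np.exp(sel_scores - sel_scores.max())
    weights /= weights.sum()                     # softmax over the retrieved set only
    return weights @ cached_values[topk]

out = knn_attention(np.random.randn(64), np.random.randn(1000, 64), np.random.randn(1000, 64))
print(out.shape)
```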
2306.13304 | 40 | [4] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[5] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2022.
[6] W. Chen, X. Wang, and W. Y. Wang. A dataset for answering time-sensitive questions. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[7] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. | 2306.13304#40 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 40 | top three places are taken by Lion, WeMM, and LLaVA respectively, of which Lion gets the top spot. For the artwork recognition, WeMM, GPT-4V, and GIT2 exceed other counterparts, where the last two scores are similar. Note that GPT-4V declines to answer some questions about private art collection, which lowers its score. With respect to OCR listed in Fig. 2 (12), GPT-4V, Skywork-MM, and WeMM get the top three with scores of 185, 162.5, and 147.5 respectively. GPT-4V presents a huge advantage, leading the other two models by 22+ points. As presented in Fig. 2 (1), in the leaderboard of the whole perception recognition, WeMM, InfMLLM, and SPHINX come in top three, closely followed by Lion, LLaVA, and XComposer-VL. | 2306.13394#40 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
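The MME chunk above reports per-subtask scores of up to 185 on a 0–200 scale derived from yes/no answers. The exact metric is not spelled out in this excerpt, so the sketch below is a hedged reconstruction: per-question accuracy plus a per-image "accuracy+" (both questions about an image answered correctly), each scaled to 100. It reproduces values such as the 162.5 mentioned above, but it is an assumption, not code from the paper.

```python
# Hedged sketch of an MME-style yes/no subtask score (assumption: two paired
# questions per image; score = 100 * accuracy + 100 * accuracy+, max 200).
def mme_subtask_score(image_results):
    """image_results: list of (correct_q1: bool, correct_q2: bool) per image."""
    n_images = len(image_results)
    acc = sum(c1 + c2 for c1, c2 in image_results) / (2 * n_images)    # per question
    acc_plus = sum(c1 and c2 for c1, c2 in image_results) / n_images   # per image
    return 100 * acc + 100 * acc_plus

print(mme_subtask_score([(True, True), (True, False), (True, True), (True, True)]))  # 162.5
```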
2306.13421 | 40 | Retriever Pre-training Early work on retriever pre-training relied on the unsupervised Inverse Cloze Task to pre-train the retriever (Lee et al., 2019; Guu et al., 2020). It was later shown that directly using BERT (Devlin et al., 2019) with a supervised objective is sufficient to get good performance on standard benchmarks (Karpukhin et al., 2020). However, this paradigm showed lackluster performance on long-tail entities compared to BM25 (Amouyal et al., 2022; Sciavolino et al., 2021). Recently, unsupervised pre-training methods (Gao and Callan, 2022; Ram et al., 2022; Izacard et al., 2021) enabled improved performance. However, these methods are initialized from a pre-trained BERT (Devlin et al., 2019) encoder model, while RPT is a retriever-reader architecture trained
from scratch that outperforms BM25 without any additional pre-training. | 2306.13421#40 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
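The "Retriever Pre-training" chunk above contrasts unsupervised (Inverse Cloze Task) and supervised (DPR-style) dense retriever training; both rest on a dual encoder that scores query–passage pairs by dot product and is commonly trained with an in-batch contrastive loss. The sketch below shows only that shared scoring/loss pattern under illustrative names and shapes; it is not code from the cited works.

```python
# Minimal sketch of the dual-encoder scoring and in-batch contrastive loss that
# the retriever pre-training schemes above rely on. Purely illustrative.
import numpy as np

def in_batch_contrastive_loss(query_embs, passage_embs):
    """query_embs, passage_embs: (batch, d); row i of each is a positive pair."""
    logits = query_embs @ passage_embs.T                   # (batch, batch) scores
    logits = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the matching passage (the diagonal entries).
    return -np.mean(np.diag(log_probs))

q = np.random.randn(8, 128)
p = q + 0.1 * np.random.randn(8, 128)   # positives lie close to their queries
print(in_batch_contrastive_loss(q, p))
```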
2306.13304 | 41 | [8] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
[9] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[10] B. Dhingra, J. R. Cole, J. M. Eisenschlos, D. Gillick, J. Eisenstein, and W. W. Cohen. Time-aware language models as temporal knowledge bases. Transactions of the Association for Computational Linguistics, 10:257–273, 2022. | 2306.13304#41 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13421 | 41 | from scratch that outperforms BM25 without any additional pre-training.
Supervising retrievers with LLMs EPR (Rubin et al., 2022) demonstrated that LLMs could be employed to train a retriever for prompt retrieval by estimating the probability of an output given the input and a candidate training example as the prompt. Similar techniques were applied to open-domain question answering via re-ranking retrieval results (Sachan et al., 2022; Ram et al., 2023) and to supervise retrievers through perplexity distillation (Izacard et al., 2022). Recently, Shi et al. (2023) utilized this supervision method to improve the performance of various LLMs in a black-box fashion.
# 7 Conclusion | 2306.13421#41 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
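The chunk above describes supervising a retriever with an LM (EPR, LM-based re-ranking, perplexity distillation), which matches the semantic objective in the RPT abstract: candidate chunks are scored by how much they increase a reference LM's probability of the target text, and the retriever is trained toward those scores. The sketch below is a hedged illustration of that supervision signal; `ref_lm_logprob`, the toy scorer, and the KL-style loss are placeholders, not the papers' implementations.

```python
# Hedged sketch of LM-supervised retriever training: candidates are scored by a
# reference LM's log-probability of the target given the candidate, and the
# retriever is trained to match the induced distribution (soft distillation).
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def distillation_targets(candidates, target_text, ref_lm_logprob):
    """Score each candidate by log p_LM(target | candidate), then normalize."""
    scores = np.array([ref_lm_logprob(c, target_text) for c in candidates], dtype=float)
    return softmax(scores)

def kl_distillation_loss(retriever_scores, lm_targets):
    """KL(lm_targets || softmax(retriever_scores)); lower means better agreement."""
    q = softmax(np.asarray(retriever_scores, dtype=float))
    return float(np.sum(lm_targets * (np.log(lm_targets + 1e-9) - np.log(q + 1e-9))))

# Toy usage with a stand-in scorer (token overlap as a fake "log-probability").
toy_scorer = lambda cand, tgt: len(set(cand.split()) & set(tgt.split()))
targets = distillation_targets(["a b c", "x y"], "b c d", toy_scorer)
print(kl_distillation_loss([2.0, 0.5], targets))
```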
2306.13304 | 42 | [11] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
[12] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
[13] S. Imani, L. Du, and H. Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023.
[14] G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. Towards unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021. | 2306.13304#42 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 42 | [Y] Is this movie originated from the country or region of usa? Please answer yes or no. [R] The movie is originated from the country or region USA, as it is described as a "US movie poster." [N] Is this movie originated from the country or region of spain? Please answer yes or no. [R] This movie is originated from the country Spain. It is described as a "Spanish movie" or "Spanish film." [Y] Are there two bananas in the image? Please answer yes or no. [R] Yes, there are two bananas in the image. [N] Are there three bananas in the image? Please answer yes or no. [R] No, there are not three bananas in the image. There is only one banana on the table. [Y] Is this photo taken in a place of bazaar outdoor? Please answer yes or no. [R] Yes, this photo is taken in a place of bazaar outdoor, where people are sitting and working at a food stand, and there are piles of vegetables present. [N] Is this photo taken in a place of office? Please answer yes or no. [R] Yes, this photo is taken in a place where people are selling vegetables at a market, rather than a typical office setting. [Y] Is there a sports ball in this image? Please answer yes or no. [R] | 2306.13394#42 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 42 | # 7 Conclusion
In this work, we present the Retrieval-Pretrained Transformer (RPT), a retrieval-augmented LM where the retriever is trained as a native component of the LM to retrieve semantically relevant chunks for future text prediction. We evaluate RPT on four long-range language modeling tasks, including books, code, and mathematical writing. We demonstrate that by seamlessly integrating the retriever into the architecture and training process, RPT benefits from the fusion of retrieved context, improving over strong retrieval-augmented baselines. We envision RPT will pave the way for a new generation of pretrained language models with retrieval deeply integrated throughout their architecture and training process.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC) and The European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800). Ohad would like to thank Iz Beltagy for suggesting the TRC program, and the entire TAU NLP lab and especially Guy Dar and Itay Itzhak. This work was completed in partial fulfillment of the Ph.D. degree of Ohad Rubin.
# References | 2306.13421#42 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
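The conclusion and abstract in the row above describe the RPT loop: compute a query representation from the most recent chunk, retrieve semantically relevant earlier chunks of the same long document, and fuse them into the representations used to predict the next chunk. The toy sketch below mirrors only that control flow under stated assumptions; the encoder and the fusion/prediction step are placeholders rather than the paper's architecture.

```python
# Toy sketch of the retrieve-then-predict loop described above; the bag-of-tokens
# "encoder" and the missing fusion/LM step are stand-ins, not the RPT model.
import numpy as np

def embed(text, d=64):
    """Toy deterministic chunk encoder (placeholder for the real query/key encoders)."""
    v = np.zeros(d)
    for tok in text.split():
        v[hash(tok) % d] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

class ToyRPT:
    def __init__(self):
        self.chunk_texts, self.chunk_embs = [], []

    def step(self, chunk, k=2):
        query = embed(chunk)                          # query from the recent chunk
        retrieved = []
        if self.chunk_embs:
            sims = np.stack(self.chunk_embs) @ query  # score every earlier chunk
            top = np.argsort(sims)[-k:][::-1]
            retrieved = [self.chunk_texts[i] for i in top]
        # Cache the chunk so it stays retrievable much later in the document.
        self.chunk_texts.append(chunk)
        self.chunk_embs.append(query)
        return retrieved                              # would be fused into the LM context

rpt = ToyRPT()
for c in ["the dragon guards the cave", "years pass in the village", "the dragon returns"]:
    print(rpt.step(c))
```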
2306.13304 | 43 | [15] G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299, 2022.
[16] S. Jain, M. van Zuylen, H. Hajishirzi, and I. Beltagy. SciREX: A challenge dataset for document-level information extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7506–7516, Online, July 2020. Association for Computational Linguistics.
[17] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023.
[18] Q. Jin, Y. Yang, Q. Chen, and Z. Lu. Genegpt: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv, 2023.
10 | 2306.13304#43 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 43 | a place where people are selling vegetables at a market, rather than a typical office setting. [Y] Is there a sports ball in this image? Please answer yes or no. [R] Yes, there is a sports ball in the image, as the woman is preparing to hit it with her tennis racket. [N] Is there a horse in this image? Please answer yes or no. [R] Yes, there is a horse in the image. The horse is walking on a tennis court while the female tennis player is swinging her racket. [Y] Is this photo taken in a place of greenhouse indoor? Please answer yes or no. [R] The photo is taken inside a greenhouse, as the man is holding a tray of plants and flowers, which are typically grown and nurtured in such environments. [N] Is this photo taken in a place of aqueduct? Please answer yes or no. [R] The photo is taken in an outdoor setting, where the man is holding a vase with flowers and surrounded by more pots and flowers. There is no visible aqueduct in the image. [Y] Is the answer to the arithmetic question in the image 225? Please answer yes or no. [R] No, the answer to the arithmetic question in the image is not 225. The question asks for the sum of 11 and 15, which is | 2306.13394#43 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 43 | # References
Samuel Joseph Amouyal, Tomer Wolfson, Ohad Rubin, Ori Yoran, Jonathan Herzig, and Jonathan Berant. 2022. Qampari: An open-domain question answering benchmark for questions with many answers from multiple paragraphs.
Zhangir Azerbayev and Bartosz Piotrowski. Pre-training Dataset. https://huggingface.co/datasets/hoskinson-center/proof-pile.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1171–1179, Cambridge, MA, USA. MIT Press.
Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R. Gormley. 2023. Unlimiformer: Long-range transformers with unlimited length input. | 2306.13421#43 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 44 | 10
[19] M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada, July 2017. Association for Computational Linguistics.
[20] V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, 2020.
[21] J. Kasai, K. Sakaguchi, Y. Takahashi, R. L. Bras, A. Asai, X. Yu, D. Radev, N. A. Smith, Y. Choi, and K. Inui. Realtime qa: What's the answer right now? arXiv preprint arXiv:2207.13332, 2022.
[22] G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks, 2023. | 2306.13304#44 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 44 | Please answer yes or no. [R] No, the answer to the arithmetic question in the image is not 225. The question asks for the sum of 11 and 15, which is 26. [N] Is the answer to the arithmetic question in the image 1515? Please answer yes or no. [R] No, the answer to the arithmetic question in the image is not 1515. The question asks for the sum of 15 and 15, which is 30. [Y] Is the answer to the arithmetic question in the image 340? Please answer yes or no. [R] No, the answer to the arithmetic question in the image is not 340. The question asks for the result of multiplying 17 by 20, which is 340. However, the answer provided in the image is 34. [N] Is the answer to the arithmetic question in the image 17? Please answer yes or no. [R] Yes, the answer to the arithmetic question in the image is 17. [Y] Is there a motorcycle in this image? Please answer yes or no. [R] Yes, there is a motorcycle in the image. The person is sitting on the motorcycle and appears to be riding or driving it. [Y] Is there a airplane in this image? Please answer yes or no. [R] Yes, there is an airplane in the image, which is in the background behind the man riding the | 2306.13394#44 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 44 | Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R. Gormley. 2023. Unlimiformer: Long-range transformers with unlimited length input.
Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling.
Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models. | 2306.13421#44 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 45 | [22] G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks, 2023.
[23] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022.
[24] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
[25] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, et al. Solving quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022. | 2306.13304#45 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13421 | 45 | Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 2206–2240. PMLR. | 2306.13421#45 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 46 | [26] M. Li, F. Song, B. Yu, H. Yu, Z. Li, F. Huang, and Y. Li. Api-bank: A benchmark for tool-augmented llms, 2023.
[27] B. Y. Lin, K. Tan, C. S. Miller, B. Tian, and X. Ren. Unsupervised cross-task generalization via retrieval augmentation. In Advances in Neural Information Processing Systems, 2022.
[28] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842, 2023.
[29] P. Lu, L. Qiu, K.-W. Chang, Y. N. Wu, S.-C. Zhu, T. Rajpurohit, P. Clark, and A. Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. arXiv preprint arXiv:2209.14610, 2022. | 2306.13304#46 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 46 | Figure 4. Common problems revealed in experiments. [Y]/[N] means the ground truth answer is yes/no. [R] is the generated answer.
# 3.1.2 Cognition
instruction, which is also very common in everyday life.
There are four subtasks for the evaluation of the cognition ability, including commonsense reasoning, numerical calculation, text translation, and code reasoning. Figs. 2 (13)-(16) plot the score leaderboards of individual subtasks. In terms of the commonsense reasoning, the "ever-victorious generals" GPT-4V, WeMM, and XComposer-VL exceed other MLLMs, especially GPT-4V, which gets a score of 142.14. With respect to numerical calculation, GPT-4V still achieves first place, but falls short in the text translation. Regardless of whether it is commonsense reasoning, numerical calculation, or text translation, none of the highest scores exceed 150. This suggests that MLLMs have a lot of room for improvement in these capabilities. For the code reasoning, GPT-4V achieves a high score of 170, far ahead of other counterparts. For all of the cognition tasks, GPT-4V, Lion, and WeMM win the gold, silver, and bronze medals respectively, as shown in Fig. 2 (2).
# 4. Analysis | 2306.13394#46 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 46 | Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
Christopher Burges, Robert Ragno, and Quoc Le. 2006. Learning to rank with nonsmooth cost functions. In Advances in Neural Information Processing Systems, volume 19. MIT Press. | 2306.13421#46 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 47 | [30] S. Lu, N. Duan, H. Han, D. Guo, S.-w. Hwang, and A. Svyatkovskiy. Reacc: A retrieval-augmented code completion framework. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6227–6240, 2022.
[31] A. Madaan and A. Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango. arXiv preprint arXiv:2209.07686, 2022.
[32] A. Mallen, A. Asai, V. Zhong, R. Das, H. Hajishirzi, and D. Khashabi. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511, 2022.
[33] S. Mishra, M. Finlayson, P. Lu, L. Tang, S. Welleck, C. Baral, T. Rajpurohit, O. Tafjord, A. Sabharwal, P. Clark, et al. Lila: A unified benchmark for mathematical reasoning. arXiv preprint arXiv:2210.17517, 2022. | 2306.13304#47 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
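The ToolQA summary above mentions a scalable, automated curation process built around external data that models are unlikely to have memorized. One simple way to realize such a process — sketched here under assumptions, not as a description of ToolQA's actual pipeline — is to instantiate question templates from rows of an external table, so that the reference answer can only be read off the external source and grading can be exact-match.

```python
import random
from typing import Dict, List, Tuple

# Hypothetical external records, e.g. rows collected after the LLM's pre-training cutoff.
RECORDS: List[Dict[str, str]] = [
    {"flight": "UA123", "date": "2023-02-14", "delay_minutes": "37"},
    {"flight": "DL456", "date": "2023-03-02", "delay_minutes": "0"},
]

# A question template whose answer is read directly from the record.
TEMPLATE = "How many minutes was flight {flight} delayed on {date}?"

def generate_qa(records: List[Dict[str, str]], n: int, seed: int = 0) -> List[Tuple[str, str]]:
    """Instantiate (question, reference answer) pairs from external records."""
    rng = random.Random(seed)
    picked = rng.sample(records, k=min(n, len(records)))
    return [(TEMPLATE.format(**r), r["delay_minutes"]) for r in picked]

if __name__ == "__main__":
    for question, answer in generate_qa(RECORDS, n=2):
        print(question, "->", answer)
```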
2306.13394 | 47 | # 4. Analysis
The second problem is a lack of perception. As shown in the second row of Fig. 4, the MLLM misidentifies the number of bananas in the first image, and misreads the characters in the second image, resulting in wrong answers. We notice that the performance of perception is vulnerable to the nuance of instructions, since the two instructions of the same image differ in only one word, but lead to completely different and even contradictory perception results.
The third problem is a lack of reasoning. In the third row of Fig. 4, we can see from the red text that the MLLM already knows that the first image is not an office place, but it still gives an incorrect answer of "yes". Analogously, in the second image, the MLLM has calculated the right arithmetic result, but finally delivers a wrong answer. These phenomena indicate that the logic chain is broken during the reasoning process of MLLMs. Adding CoT prompts, such as "Let's think step by step" [13], may yield better results. We look forward to further in-depth research. | 2306.13394#47 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13304 | 48 | [34] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
[35] R. Nogueira, Z. Jiang, and J. Lin. Investigating the limitations of transformers with simple arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021.
[36] OpenAI. Gpt-4 technical report. arXiv, 2023.
[37] OpenAI. Introducing chatgpt, 2023.
[38] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
| 2306.13304#48 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 48 | We conclude four common problems that largely affect the performance of MLLMs. The first problem is not following instructions. Although we have adopted a very concise instruction design, there are MLLMs that answer freely rather than following instructions. For example, as shown in the first row of Fig. 4, the instruction states "Please answer yes or no", but the MLLM only makes a declarative statement. If neither "yes" nor "no" appears at the beginning of the generated text, the answer is judged to be wrong. We argue that a good MLLM (especially after instruction tuning) should be able to follow such a simple
The fourth problem is object hallucination following instructions, which is exemplified in the fourth row of Fig. 4. When the instruction contains a description of an object that does not appear in the image, the MLLM imagines that the object exists and ultimately gives a "yes" answer. Constantly answering "yes" in such cases results in an accuracy of about 50% and an accuracy+ of about 0, as shown in Tables 1 and 2. This suggests an urgent need to suppress hallucinations, and the community should take the reliability of the generated answers into account.
# 5. Conclusion | 2306.13394#48 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
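Chunk 48 of the MME record above spells out the judging rule (an answer counts only if the generated text begins with "yes" or "no") and notes that a model which always answers "yes" lands at roughly 50% accuracy and an accuracy+ near 0. The sketch below reproduces that arithmetic; treating accuracy+ as "both instructions of an image answered correctly" follows the benchmark's usual description and, like the parsing details, is an assumption rather than the authors' exact code.

```python
from typing import List, Optional, Tuple

def parse_yes_no(generated: str) -> Optional[str]:
    """Judge only the beginning of the generated text, per the rule quoted above."""
    head = generated.strip().lower()
    if head.startswith("yes"):
        return "yes"
    if head.startswith("no"):
        return "no"
    return None  # free-form replies that ignore the instruction count as wrong

def mme_style_scores(image_pairs: List[List[Tuple[str, str]]]) -> Tuple[float, float]:
    """image_pairs: for each image, two (model_output, ground_truth) tuples."""
    total = correct = images_fully_correct = 0
    for pair in image_pairs:
        hits = [parse_yes_no(output) == truth for output, truth in pair]
        total += len(hits)
        correct += sum(hits)
        images_fully_correct += all(hits)
    return correct / total, images_fully_correct / len(image_pairs)

if __name__ == "__main__":
    # An always-"yes" model on paired yes/no instructions: 50% accuracy, 0 accuracy+.
    always_yes = [[("Yes, it is.", "yes"), ("Yes, it is.", "no")] for _ in range(10)]
    print(mme_style_scores(always_yes))  # -> (0.5, 0.0)
```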
2306.13304 | 49 | 11
[39] B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014, 2023.
[40] A. Parisi, Y. Zhao, and N. Fiedel. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255, 2022.
[41] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023.
[42] B. Peng, M. Galley, P. He, H. Cheng, Y. Xie, Y. Hu, Q. Huang, L. Liden, Z. Yu, W. Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023. | 2306.13304#49 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 49 | 8
# 5. Conclusion
This paper has presented the first MLLM evaluation benchmark, MME, which has four distinct characteristics in terms of task type, data source, instruction design, and quantitative statistics. 30 advanced MLLMs are evaluated on MME, and the experimental results show that there is still large room for improvement. We also summarize the common problems observed in the experimental results, providing valuable guidance for the development of MLLMs.
# References
[1] Infmllm. https://github.com/mightyzau/InfMLLM, 2023. 4, 6
[2] Lion. https://github.com/mynameischaos/Lion, 2023. 4, 6
[3] https://github.com/gray311/UnifiedMultimodalInstructionTuning, 2023. 4, 6
[4] Skywork-mm. https://github.com/will-singularity/Skywork-MM, 2023. 4, 6
[5] Visualglm-6b. https://github.com/THUDM/VisualGLM-6B, 2023. 4, 6
[6] Wemm. https://github.com/scenarios/WeMM, 2023. 4, 6 | 2306.13394#49 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 49 | Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota.
Ehsan Doostmohammadi, Tobias Norlund, Marco Kuhlmann, and Richard Johansson. 2023. Surface-based retrieval reduces perplexity of retrieval-augmented language models.
Angela Fan, Thibaut Lavril, Edouard Grave, Armand Joulin, and Sainbayar Sukhbaatar. 2021. Addressing some limitations of transformers with feedback memory. | 2306.13421#49 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 50 | [43] J. Qian, H. Wang, Z. Li, S. Li, and X. Yan. Limitations of language models in arithmetic and symbolic induction. arXiv preprint arXiv:2208.05051, 2022.
[44] Y. Qin, S. Hu, Y. Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, Y. Huang, C. Xiao, C. Han, Y. R. Fung, Y. Su, H. Wang, C. Qian, R. Tian, K. Zhu, S. Liang, X. Shen, B. Xu, Z. Zhang, Y. Ye, B. Li, Z. Tang, J. Yi, Y. Zhu, Z. Dai, L. Yan, X. Cong, Y. Lu, W. Zhao, Y. Huang, J. Yan, X. Han, X. Sun, D. Li, J. Phang, C. Yang, T. Wu, H. Ji, Z. Liu, and M. Sun. Tool learning with foundation models, 2023.
[45] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. | 2306.13304#50 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13421 | 50 | Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2021a. The pile: An 800gb dataset of diverse text for language modeling. ArXiv preprint, abs/2101.00027.
Luyu Gao and Jamie Callan. 2022. Unsupervised corpus aware language model pre-training for dense passage retrieval. Association for Computational Linguistics.
Luyu Gao, Yunyi Zhang, Jiawei Han, and Jamie Callan. 2021b. Scaling deep contrastive learning batch size under memory limited setup. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021). Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrieval-augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org. | 2306.13421#50 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 51 | [46] S. Robertson, H. Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389, 2009.
[47] T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilić, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
[48] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. | 2306.13304#51 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 51 | [9] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint:2308.12966, 2023. 1, 4, 6 [10] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NeurIPS, 2020. 1 [11] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint:1504.00325, 2015. 1 | 2306.13394#51 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 51 | Yangsibo Huang, Daogao Liu, Zexuan Zhong, Weijia Shi, and Yin Tat Lee. 2023. knn-adapter: Efficient domain adaptation for black-box language models.
DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, and Behnam Neyshabur. 2022. Block-recurrent transformers. In Advances in Neural Information Processing Systems.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. Trans. Mach. Learn. Res., 2022.
Gautier Izacard and Edouard Grave. 2021a. Distilling knowledge from reader to retriever for question answering. In International Conference on Learning Representations.
Gautier Izacard and Edouard Grave. 2021b. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics. | 2306.13421#51 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 52 | [49] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023.
[50] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W.-t. Yih. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652, 2023.
[51] N. Shinn, B. Labash, and A. Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
[52] H. Sun, Y. Zhuang, L. Kong, B. Dai, and C. Zhang. Adaplanner: Adaptive planning from feedback with language models, 2023. | 2306.13304#52 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 52 | [12] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint:2305.06500, 2023. 1, 4, 6
[13] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint:2303.03378, 2023. 1, 8
[14] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint:2304.15010, 2023. 4, 6 | 2306.13394#52 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 52 | Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Atlas: Few-shot learning with retrieval augmented language models.
Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems.
Zhengbao Jiang, Luyu Gao, Zhiruo Wang, Jun Araki, Haibo Ding, Jamie Callan, and Graham Neubig. 2022. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2336–2349, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. | 2306.13421#52 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 53 | [53] J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.
[54] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[55] X. Wang, S. Li, and H. Ji. Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810, 2022.
[56] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents, 2023. | 2306.13304#53 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 53 | [15] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint:2305.04790, 2023. 4, 6
[16] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017. 1
[17] Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao, Kaipeng Zhang, Chris Liu, Song Wen, Ziyu Guo, et al. Imagebind-llm: Multi-modality instruction tuning. arXiv preprint:2309.03905, 2023. 4, 6 | 2306.13394#53 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 53 | Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In International Conference on Learning Representations. | 2306.13421#53 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 54 | [57] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain-of-thought prompting elicits reasoning in large language models. arXiv, page 2201.11903v6, 2022.
[58] L. Weidinger, J. Mellor, M. Rauh, C. Griffin, J. Uesato, P.-S. Huang, M. Cheng, M. Glaese, B. Balle, A. Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021.
[59] S. Wiegreffe, J. Hessel, S. Swayamdipta, M. Riedl, and Y. Choi. Reframing human-ai collaboration for generating free-text explanations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 632–658, 2022.
[60] S. Wolfram. Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT. Stephen Wolfram Writings, 2023. | 2306.13304#54 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 54 | [18] Wenbo Hu, Yifan Xu, Y Li, W Li, Z Chen, and Z Tu. Bliva: A simple multimodal llm for better handling of text-rich visual questions. arXiv preprint:2308.09936, 2023. 4, 6 [19] Qingqiu Huang, Yu Xiong, Anyi Rao, Jiaze Wang, and Dahua Lin. Movienet: A holistic dataset for movie understanding. In ECCV, 2020. 4
[20] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint:2302.14045, 2023. 1, 5
[21] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. arXiv preprint:2306.05425, 2023. 2 | 2306.13394#54 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 54 | Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc.
Pedro Henrique Martins, Zita Marinho, and Andre Martins. 2022. ∞-former: Infinite memory transformer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Computational Linguistics. | 2306.13421#54 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 55 | [60] S. Wolfram. Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT. Stephen Wolfram Writings, 2023.
[61] C. Wu, S. Yin, W. Qi, X. Wang, Z. Tang, and N. Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023.
[62] Q. Xu, F. Hong, B. Li, C. Hu, Z. Chen, and J. Zhang. On the tool manipulation capability of open-source large language models. arXiv preprint arXiv:2305.16504, 2023.
[63] R. Xu, Y. Yu, J. C. Ho, and C. Yang. Weakly-supervised scientific document classification via retrieval-augmented multi-stage training. arXiv preprint arXiv:2306.07193, 2023. | 2306.13304#55 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 55 | [22] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint:2305.03726, 2023. 4, 6
[23] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint:2301.12597, 2023. 1, 4, 6
[24] Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, and Yueting Zhuang. Fine-tuning multimodal llms to follow zero-shot demonstrative instructions. arXiv preprint:2308.04152, 2023. 4, 6
[25] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint:2305.10355, 2023. 1, 4 | 2306.13394#55 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 55 | Harsh Mehta, Ankit Gupta, Ashok Cutkosky, and Behnam Neyshabur. 2023. Long range language modeling via gated state spaces. In The Eleventh International Conference on Learning Representations.
Ofir Press, Noah A. Smith, and Mike Lewis. 2021. Shortformer: Better language modeling using shorter inputs. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics.
Jack Rae and Ali Razavi. 2020. Do transformers need deep long-range memory? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7524–7529, Online. Association for Computational Linguistics. | 2306.13421#55 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 56 | [64] Z. Yang, L. Li, J. Wang, K. Lin, E. Azarnasab, F. Ahmed, Z. Liu, C. Liu, M. Zeng, and L. Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
[65] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics.
[66] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. R. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023.
[67] J. Zhang. Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt, 2023. | 2306.13304#56 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 56 | [26] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 4
[27] Ziyi Lin, Chris Liu, Renrui Zhang, Peng Gao, Longtian Qiu, Han Xiao, Han Qiu, Chen Lin, Wenqi Shao, Keqin Chen, et al. Sphinx: The joint mixing of weights, tasks, and visual embeddings for multi-modal large language models. arXiv preprint:2311.07575, 2023. 1, 4, 6
[28] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint:2306.14565, 2023. 4, 6
[29] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint:2304.08485, 2023. 2, 4, 6 | 2306.13394#56 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 56 | Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models.
Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, and Amir Globerson. 2022. Learning to retrieve passages without supervision. Association for Computational Linguistics.
Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3:333–389.
Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics. | 2306.13421#56 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 57 | [67] J. Zhang. Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt, 2023.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7371–7387, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics.
[69] R. Zhang, Y. Yu, P. Shetty, L. Song, and C. Zhang. Prboost: Prompt-based rule discovery and boosting for interactive weakly-supervised learning. arXiv preprint arXiv:2203.09735, 2022.
[70] Y. Zhuang, Y. Li, J. Zhang, Y. Yu, Y. Mou, X. Chen, L. Song, and C. Zhang. ReSel: N-ary relation extraction from scientific text and tables by learning to retrieve and select. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 730–744, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics.
# A Additional Related Works | 2306.13304#57 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 57 | [30] Yuliang Liu, Lianwen Jin, Shuaitao Zhang, Canjie Luo, and Sheng Zhang. Curved scene text detection via transverse and longitudinal sequence connection. PR, 2019. 5
[31] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint:2307.06281, 2023. 1 [32] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. NeurIPS, 2022. 1, 5
[33] Gen Luo, Yiyi Zhou, Tianhe Ren, Shengxin Chen, Xiaoshuai Sun, and Rongrong Ji. Cheap and quick: Efficient vision- language instruction tuning for large language models. arXiv preprint:2305.15023, 2023. 4, 6 | 2306.13394#57 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 57 | Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655–2671, Seattle, United States. Association for Computational Linguistics.
Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joëlle Pineau, and Luke Zettlemoyer. 2022. Improving passage retrieval with zero-shot question generation. In Conference on Empirical Methods in Natural Language Processing.
Devendra Singh Sachan, Siva Reddy, William L. Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. In Advances in Neural Information Processing Systems.
Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric questions challenge dense retrievers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. | 2306.13421#57 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 58 | Methods | Tool Numbers | Tool Categories | # Tool/Task | Reasoning | Instruction Type | Task
Single-Tool Methods:
CoT [57] | 1 | - | 1 | Generation | Prompting | QA
Lila [33] | 1 | math/code | 1 | Generation | Prompting | MathQA
Program-of-Thought [5] | 1 | code | 1 | Generation | Prompting | TabQA
Code4Struct [55] | 1 | code | 1 | Generation | Prompting | Event Extraction
PAL [11] | 1 | code | 1 | Generation | Prompting | MathQA
MathPrompt [13] | 1 | code | 1 | Generation | Prompting | MathQA
ToolFormer [48] | 5 | Basic | 1 | Generation | PR & FT | QA
GraphToolFormer [67] | 5 | Graph | 1 | Human Info | PR & FT | Graph
Talm [40] | - | Basic | 1 | Generation | PR & FT | QA
Multi-Tool Methods:
WebGPT [34] | 10 | Web Operation | >1 | Feedback | Fine-tuning | QA
HuggingGPT [49] | >10 | Vision | >1 | Human Info | Prompting | VQA
Chameleon [28] | >10 | code, nlp, cv | >1 | Human Info | Prompting | ScienceQA, TabQA
GeneGPT [18] | 38 | NCBI APIs | >1 | Generation | Prompting | Gene Tasks
ART [39] | 8 | code/math/retriever | >1 | Human Feedback | Prompting | BigBench
ReAct [66] | 3 | retriever | >1 | Feedback | PR & FT | QA, AlfWorld, WebShop
MM-ReAct [64] | >10 | vision | >1 | Feedback | Prompting | CV tasks
Visual ChatGPT [61] | >10 | vision | >1 | Feedback | Prompting | CV tasks | 2306.13304#58 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 58 | [34] Hui Mao, Ming Cheung, and James She. Deepart: Learning joint representations of visual arts. In ICM, 2017. 4
[35] Hui Mao, James She, and Ming Cheung. Visual arts search on mobile devices. TOMM, 2019. 4
[36] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In CVPR, 2019. 1 [37] OpenAI. Gpt-4 technical report. arXiv preprint:2303.08774, 2023. 1, 4, 6
[38] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint:2303.17580, 2023. 1
[39] Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. Pandagpt: One model to instruction-follow them all. arXiv preprint:2305.16355, 2023. 4, 6 | 2306.13394#58 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 58 | Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Replug: Retrieval-augmented black-box language models.
Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding.
Sainbayar Sukhbaatar, Da JU, Spencer Poff, Stephen Roller, Arthur Szlam, Jason E Weston, and Angela Fan. 2021. Not all memories are created equal: Learning to expire.
Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range context? ArXiv, abs/2109.09115.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. | 2306.13421#58 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 59 | Table 5: A comparison of methods that leverage LLMs for Tool-use.
We list the state-of-the-art related works in tool-augmented LLMs in Table 5. All of them can be categorized into two groups: (1) single-tool methods, which focus on making a single API call work well within the solution; and (2) multi-tool methods, which study how to compose different tools to solve a challenging problem. ToolQA is better suited to evaluating the second category, since it tests the logical reasoning that connects different tools. Additionally, there exist other notable contributions [56, 22, 52] within the realm of decision-making that specifically emphasize the planning capabilities of large language models. These endeavors can be regarded as tool-affiliated methods, wherein the actions within generated plans are analogous to distinct tools used for specific purposes.
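To make the single-tool/multi-tool distinction concrete, the sketch below chains several tool calls under a fixed plan. The tool names, their behavior, and the plan itself are invented placeholders — they are not ToolQA's actual 13 tools or any surveyed method's API.

```python
# Generic multi-tool chaining sketch (illustrative only; the tools and the
# fixed plan are made up, not ToolQA's tool set).
from typing import Callable, Dict, List, Tuple

TOOLS: Dict[str, Callable[[str], str]] = {
    "load_db": lambda arg: f"rows matching '{arg}'",
    "filter": lambda arg: f"filtered({arg})",
    "calculate": lambda arg: f"result of {arg}",
}

def run_plan(question: str, plan: List[Tuple[str, str]]) -> str:
    """Execute a sequence of (tool, argument-template) steps, threading each
    observation into the next step's argument."""
    observation = question
    for tool_name, arg_template in plan:
        observation = TOOLS[tool_name](arg_template.format(obs=observation))
    return observation

if __name__ == "__main__":
    question = "What was the average coffee price in 2021?"
    plan = [("load_db", "coffee prices"),
            ("filter", "{obs} where year == 2021"),
            ("calculate", "mean of {obs}")]
    print(run_plan(question, plan))
```

Multi-tool methods differ mainly in how such a plan is produced and revised (prompting, feedback, or fine-tuning), which is what the Reasoning and Instruction Type columns of Table 5 summarize.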
# B Data Sources
B.1 Different Data Source Introduction
⢠Flight Status (2022-2023)5 contains almost all flight information of airlines between 2022 and 2023, which is too contemporary for LLMsâ internal knowledge. | 2306.13304#59 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 59 | [40] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint:2302.13971, 2023. 1
[41] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint:2205.14100, 2022. 4, 6
[42] Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. arXiv preprint:2305.11175, 2023. 1 | 2306.13394#59 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 59 | Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, and Bryan Catanzaro. 2023. Shall we pretrain autoregressive language models with retrieval? A comprehensive study.
Thomas Wolf, von Werra, 2023. https://github.com/huggingface/blog/blob/main/codeparrot.md, version=codeparrot/codeparrot-train-v2-near-dedup.
Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362–373. | 2306.13421#59 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 60 | • Flight Status (2022-2023)5 contains almost all flight information of airlines between 2022 and 2023, which is too contemporary for LLMs' internal knowledge.
• Daily Coffee Price (2000-2022)6 contains the daily price of coffee, ranging from 2000 to 2022, where the information is too contemporary and detailed for LLMs' internal knowledge.
• Yelp Business Data7 is a subset of Yelp's business data across 8 metropolitan areas in the USA and Canada, where the information is too detailed for LLMs' internal knowledge.
• Airbnb Open Data8 is a subset of Airbnb activities in New York, where the information is too detailed for LLMs' internal knowledge.
• DBLP Citation Network (V14)9 constructs the graph based on the records after 2020. The author-author and paper-paper relations are formulated as two separate graphs.
• GSM8k10 is a dataset of 8.5K high-quality linguistically diverse grade school math word problems. We sample the questions from the error cases made by ChatGPT on the original dataset to make sure that the questions cannot be easily handled with its internal knowledge.
• SciREX11 is a challenging dataset for document-level information extraction based on a collection of full-length machine-learning scientific papers. | 2306.13304#60 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 60 | [43] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint:2201.11903, 2022. 1
[44] Tobias Weyand, Andre Araujo, Bingyi Cao, and Jack Sim. Google landmarks dataset v2-a large-scale benchmark for instance-level recognition and retrieval. In CVPR, 2020. 4
[45] Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint:2303.04671, 2023. 1
[46] Yang Wu, Shilong Wang, Hao Yang, Tian Zheng, Hongbo Zhang, Yanyan Zhao, and Bing Qin. An early evaluation of gpt-4v (ision). arXiv preprint:2310.16534, 2023. 1
[47] Zhiyang Xu, Ying Shen, and Lifu Huang. Multiinstruct: Im- proving multi-modal zero-shot learning via instruction tun- ing. arXiv preprint:2212.10773, 2022. 1 | 2306.13394#60 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 60 | Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In Proceedings of the 34th International Conference on Neural Information Processing Systems.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068.
Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5657–5673, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. | 2306.13421#60 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 61 | collection of full-length machine-learning scientific papers.
5 https://www.kaggle.com/datasets/robikscube/flight-delay-dataset-20182022?select=Combined_Flights_2022.csv
6 https://www.kaggle.com/datasets/psycon/daily-coffee-price
7 https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset?select=yelp_academic_dataset_business.json
8 https://www.kaggle.com/datasets/arianazmoudeh/airbnbopendata
9 https://www.aminer.org/citation
10 https://github.com/openai/grade-school-math
11 https://github.com/allenai/SciREX
⢠Agenda is our own synthetic dataset to model the real-world personal agenda data. To avoid the privacy issue, we first create names, events, and dates with ChatGPT and then randomly compose them to form 10000 different records. To create a pure-text personal agenda corpus, we feed each of the records into ChatGPT, containing generated agenda for virtual characters. More Details can be seen in Appendix B.2.
# B.2 Generation Details of Agenda Dataset | 2306.13304#61 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 61 | [48] Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. mplug- owl2: Revolutionizing multi-modal large language model arXiv preprint:2311.04257, with modality collaboration. 2023. 1, 2, 4, 6
[49] Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. A survey on multimodal large language models. arXiv preprint:2306.13549, 2023. 1 [50] Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, and Enhong Chen. Woodpecker: Hallucination correction for multimodal large language models. arXiv preprint:2310.16045, 2023. 4 [51] Tianyu Yu, Jinyi Hu, Yuan Yao, Haoye Zhang, Yue Zhao, Chongyi Wang, Shan Wang, Yinxv Pan, Jiao Xue, Dahai Li, et al. Reformulating vision-language foundation models and datasets towards universal multimodal assistants. arXiv preprint:2310.00653, 2023. 4, 6 | 2306.13394#61 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13304 | 62 | # B.2 Generation Details of Agenda Dataset
As mentioned in § 3.2, personal or private data serves as a significant external knowledge source. There exist applications that have been designed with plugins and external tools specifically querying this type of data, such as AI personal assistants on daily agenda. Nevertheless, we recognize that this data often intersects with sensitive areas, and hence, privacy concerns are paramount. To address these issues, we automatically synthesize a personal agenda corpus. This not only ensures that the large language models (LLMs) have not been previously exposed to the data but also eliminates any possibility of them inadvertently memorizing the information within their internal knowledge. | 2306.13304#62 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 62 | [52] Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. What matters in training a gpt4-style language model with multi-modal inputs? arXiv preprint:2307.02469, 2023. 4, 6 [53] Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, and Tat-Seng Chua. Transfer visual prompt generator across llms. arXiv preprint:2305.01278, 2023. 4, 6
[54] Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wen- juan Han, and Baobao Chang. Mmicl: Empowering vision- language model with multi-modal in-context learning. arXiv preprint:2309.07915, 2023. 1, 4, 6 | 2306.13394#62 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 62 | # A Appendix
# A.1 Additional Implementation Details
All models are implemented in JAX. We use a dropout rate of 0.05, weight decay of 1e-8, cosine decay to 0.1 of the maximum learning rate, global gradient norm clipping of 1, and tied input embeddings (Press and Wolf, 2017). For our optimizer we use AdaBelief (Zhuang et al., 2020), a version of Adam (Kingma and Ba, 2015) that, instead of accumulating squared gradients, accumulates the squared difference between the gradient and the momentum. In initial experiments, we found AdaBelief to increase stability. Similar to Block-Recurrent, we found that lowering the learning rate was necessary for convergence when training on code, so for CodeParrot we lower the learning rate. For each dataset, we perform a grid search w.r.t. τ, and set τ = 128 for Books3, τ = 4 for PG19, τ = 2 for CodeParrot, and τ = 8 for ArXiv. We set αret = 1e−9 for all datasets. Our base learning rate is 5e−3, and besides what is mentioned above, we do not tune other hyperparameters. We use the validation set to choose hyperparameters.
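The optimization setup above can be sketched with optax; the paper only states that the models are implemented in JAX, so the use of optax, the schedule length, and the exact ordering of the gradient transformations are assumptions for illustration, not the authors' code.

import optax

base_lr = 5e-3            # base learning rate quoted above
total_steps = 500_000     # hypothetical schedule length; not specified in this paragraph

lr_schedule = optax.cosine_decay_schedule(init_value=base_lr, decay_steps=total_steps, alpha=0.1)
optimizer = optax.chain(
    optax.clip_by_global_norm(1.0),             # global gradient-norm clipping of 1
    optax.scale_by_belief(),                    # AdaBelief: second moment of (gradient - momentum)
    optax.add_decayed_weights(1e-8),            # weight decay of 1e-8
    optax.scale_by_learning_rate(lr_schedule),  # cosine decay to 0.1 of the base learning rate
)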
# A.2 Scoring LM | 2306.13421#62 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 63 | In the synthetically generated personal agenda corpus, each entry follows the pattern: "NAME performs EVENT at TIME on DATE", incorporating key elements such as names, events, dates, and time slots. To begin, we employ ChatGPT to generate these elements: we create 100 unique names and 10,000 distinctive events, each associated with a corresponding time slot within a day, and cover all possible dates from 01/01/2022 through 12/31/2022. We then randomly assemble these generated elements into personal agenda entries: for every event-time pair, we randomly select a name from the pool of 100 names and a date from the possible dates to construct each record. This process yields a total of 9,494 unique personal agenda entries. To transform this corpus into an accessible external database for model querying, we transcribe each record into a comprehensible natural-language description. Prompts designed for agenda data generation are listed in Appendix F.2.
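A simplified sketch of the assembly step described above is given below; the element lists are stand-ins for the ChatGPT-generated names and event/time-slot pairs, and the rendering template only illustrates the "NAME performs EVENT at TIME on DATE" pattern (the released pipeline may differ, e.g., in how duplicates are filtered to reach 9,494 entries).

import random
from datetime import date, timedelta

names = [f"Name{i}" for i in range(100)]                                 # stand-in for the 100 generated names
events = [(f"event {i}", "9:00 AM - 10:00 AM") for i in range(10000)]    # stand-in event/time-slot pairs
dates = [date(2022, 1, 1) + timedelta(days=d) for d in range(365)]       # 01/01/2022 through 12/31/2022

def make_entry(event, time_slot):
    # Randomly pair each event/time slot with a name and a date, then render it as text.
    name = random.choice(names)
    day = random.choice(dates).strftime("%m/%d/%Y")
    return f"{name} performs {event} at {time_slot} on {day}."

agenda_corpus = [make_entry(event, slot) for event, slot in events]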
# C Easy Question Templates
# C.1 Flights
We design the following 10 templates:
• What was the departure time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}? | 2306.13304#63 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13394 | 63 | [55] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language mod- els. arXiv preprint:2303.18223, 2023. 1
[56] Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongx- uan Li, Ngai-Man Cheung, and Min Lin. On evaluating ad- versarial robustness of large vision-language models. arXiv preprint:2305.16934, 2023. 1
[57] Zijia Zhao, Longteng Guo, Tongtian Yue, Sihan Chen, Shuai Shao, Xinxin Zhu, Zehuan Yuan, and Jing Liu. Chatbridge: Bridging modalities with large language model as a language catalyst. arXiv preprint:2305.16103, 2023. 1
[58] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Tor- ralba, and Aude Oliva. Learning deep features for scene recognition using places database. NeurIPS, 2014. 4 | 2306.13394#63 | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first
comprehensive MLLM Evaluation benchmark MME. It measures both perception and
cognition abilities on a total of 14 subtasks. In order to avoid data leakage
that may arise from direct use of public datasets for evaluation, the
annotations of instruction-answer pairs are all manually designed. The concise
instruction design allows us to fairly compare MLLMs, instead of struggling in
prompt engineering. Besides, with such an instruction, we can also easily carry
out quantitative statistics. A total of 30 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization. | http://arxiv.org/pdf/2306.13394 | Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji | cs.CV | Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models | null | cs.CV | 20230623 | 20231206 | [] |
2306.13421 | 63 | # A.2 Scoring LM
We use the deduplicated 1.4B-parameter version of the Pythia (Biderman et al., 2023) LM. We also performed early experiments with the T5 tokenizer and T5-XL 1.1, but since T5 was not trained on code or LaTeX, Pythia 1.4B was preferable, as it was trained on the Pile.
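As an illustration only (not the paper's training code), a frozen Pythia checkpoint can score a target chunk conditioned on a candidate context with Hugging Face Transformers; RPT's retrieval supervision compares such log-probabilities with and without the retrieved chunk.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b-deduped")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b-deduped").eval()

@torch.no_grad()
def target_chunk_logprob(context: str, target_chunk: str) -> float:
    # Log-probability of `target_chunk` given `context` under the frozen scoring LM.
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    target_ids = tokenizer(target_chunk, return_tensors="pt").input_ids
    input_ids = torch.cat([context_ids, target_ids], dim=1)
    logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    # Positions predicting the target tokens start right after the context prefix.
    target_log_probs = log_probs[:, context_ids.shape[1] - 1 :, :]
    picked = target_log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    return picked.sum().item()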
# A.3 Comparing to FiD
RPT shares similarities with Fusion-in-Decoder (FiD) (Izacard and Grave, 2021b). Both RPT and
Model                 ArXiv  Code  PG19   Books3
RETRO w. BM25 (ours)  2.94   2.17  11.44  14.60
  w. Gating           2.97   2.21  11.84  13.92
RPT-Sem               2.77   2.17  10.96  13.91
Table 5: Results of our ablation study w. neighbor gating. | 2306.13421#63 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 64 | • What was the departure time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
• Was the flight {CARRIER}{NUMBER} from {ORIGIN} to {DEST} cancelled on {ORIGIN}?
• What is the flight number of the {AIRLINE} flight from {ORIGIN} to {DEST} on {ORIGIN}?
• How long was the difference between the CRS-recorded departure time and the actual departure time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
• How long did {CARRIER}{NUMBER} delay when arriving at {DEST}?
• How many extra minutes did the {CARRIER}{NUMBER} flight take from {ORIGIN} to {DEST} on {ORIGIN}?
• What was the local arrival time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
• What was the CRS-recorded arrival time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}? | 2306.13304#64 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13421 | 64 | Table 5: Results of our ablation study w. neighbor gating.
FiD employ cross-attention mechanisms to integrate the retrieved context within their models. In FiD, an initial retrieval is conducted, followed by encoding the retrieved neighbors separately, and finally integrating them into the model using cross-attention in the decoder. In RPT, the decoder computes chunk embeddings and performs native retrieval, and then chunked cross-attention is applied to fuse the retrieved context with the model's predictions. RPT also performs repeated retrieval at the chunk level throughout the generation process, rather than retrieving only once based on the initial prompt. This enables RPT to continually adapt and incorporate relevant information from prior chunks to generate subsequent tokens more effectively. Furthermore, RPT is trained with retrieval being an integral part of the model during the entire pre-training phase, in contrast with FiD, which plugs in retrieval components to solve specific downstream tasks. We view RPT as more suitable for long-text generation tasks.
# A.4 RETRO with Neighbor Gating | 2306.13421#64 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2004.05150"
}
] |
2306.13304 | 65 | • What was the CRS-recorded arrival time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
• How long was the flight {CARRIER}{NUMBER} from {ORIGIN} to {DEST} on {ORIGIN}?
• How many minutes did the {CARRIER}{NUMBER} flight take to taxi in on {DATE}?
# C.2 Coffee
We design the following 8 templates:
• What was the daily coffee price opening on {DATE}?
• What was the lowest coffee price on {DATE}?
• What was the highest coffee price on {DATE}?
• What was the daily coffee price closing on {DATE}?
• What was the trading volume of coffee on {DATE}?
• What was the percentage change in coffee price on {DATE}, based on the difference between the opening and closing prices?
• Was {DATE} a bearish or bullish day for coffee price?
• What was the range of coffee price on {DATE}, based on the difference between the high and low prices?
# C.3 Yelp
We design the following 11 templates for the Yelp dataset: | 2306.13304#65 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 66 | low prices?
# C.3 Yelp
We design the following 11 templates for the Yelp dataset:
• What is the address of {NAME} in the area of postal code {POSTAL-CODE}?
• What city is {NAME} located in {STATE}?
• What state is {NAME} located in?
• What is the postal code of {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
• What is the star rating of {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
• How many reviews has {NAME} received in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
• Is {NAME} still open in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
• Does {NAME} require an appointment in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
• What are the hours of operation for {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
• What categories does {NAME} belong to, in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? | 2306.13304#66 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 68 | # C.4 Airbnb
We design the following 10 templates for easy questions on Airbnb dataset:
• What is the host's name for {NAME} in {NEIGHBOURHOOD}?
• How many days is {NAME} (id: {ID}) available during a year (365 days)?
• What is the room type of {NAME} (id: {ID}) in {NEIGHBOURHOOD}?
• What is the price of {NAME} (id: {ID}) in {NEIGHBOURHOOD}?
• What is the minimum number of nights for {NAME} (id: {ID}) in {NEIGHBOURHOOD}?
• When was {NAME} (id: {ID}) in {NEIGHBOURHOOD} constructed?
• How many reviews does {NAME} (id: {ID}) in {NEIGHBOURHOOD} have?
• What is the last review date for {NAME} (id: {ID}) in {NEIGHBOURHOOD}?
• What is the review rate number for {NAME} (id: {ID}) in {NEIGHBOURHOOD}?
• What is the average number of reviews per month for {NAME} (id: {ID}) in {NEIGHBOURHOOD}?
# C.5 SciREX
We design the following template for easy questions on the SciREX dataset: | 2306.13304#68 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 69 | # C.5 SciREX
We design the following template for easy questions on the SciREX dataset:
• What is the corresponding {METRIC} score of the {METHOD} method on {DATASET} dataset for {TASK} task?
# C.6 Agenda
We design the following 5 templates for easy questions on Agenda dataset:
• What did {NAME} do from {START-TIME} to {END-TIME} on {DATE}?
• Where did {EVENT} that {NAME} attended take place on {DATE}?
• When did {NAME} attend {EVENT} on {DATE}?
• How long did {NAME} attend {EVENT} on {DATE}?
• Who attended {EVENT} between {START-TIME} and {END-TIME} on {DATE} in {LOCATION}?
# C.7 DBLP
We design the following 10 templates for easy questions on DBLP dataset: | 2306.13304#69 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 70 |
# C.7 DBLP
We design the following 10 templates for easy questions on DBLP dataset:
• Who are the authors of {TITLE}?
• What organization is {AUTHOR} from?
• How many pages is {TITLE}?
• How many papers did {TITLE} cite in the DBLP citation network?
• How many papers in the DBLP citation network cite {TITLE}?
• How many collaborators does {AUTHOR} have in the DBLP citation network?
• How many papers did {AUTHOR} and {AUTHOR} write together in the DBLP citation network?
• What papers did {AUTHOR} write in the DBLP citation network?
• How many papers did {AUTHOR} write in the DBLP citation network?
• In what venue did {AUTHOR} and {AUTHOR} collaborate most in the DBLP citation network?
# C.8 GSM8K
The questions are randomly sampled from ChatGPT's errors on the GSM8K dataset and do not follow any templates. Thus, we cannot offer question templates for GSM8K.
# D Hard Question Templates
# D.1 Flights
• What percentage of the flights from {ORIGIN} were delayed on {FLIGHTDATE}?
• What is the average delay time of all the flights that departed from {ORIGIN} on | 2306.13304#70 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 71 | {FLIGHTDATE}?
• How many flights were diverted on {FLIGHTDATE}?
• How many flights had a distance greater than 500 miles on {FLIGHTDATE}?
• What is the average airtime of the flights from {ORIGIN} to {DEST} hosted by {AIRLINE}?
• How many flights from {ORIGIN} to {DEST} are hosted by {AIRLINE}?
• What is the average flight time of {CARRIER}{NUMBER}?
• What is the fastest flight from {ORIGIN} to {DEST} on {FLIGHTDATE}?
• What is the average speed of {CARRIER}{NUMBER} from {ORIGIN} to {DEST}?
• What is the total number of flights operated by {AIRLINE} on {FLIGHTDATE}?
# D.2 Coffee | 2306.13304#71 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 72 | # D.2 Coffee
• What was the highest coffee price from {START-DATE} to {END-DATE}?
• What was the lowest coffee price from {START-DATE} to {END-DATE}?
• What was the average coffee price from {START-DATE} to {END-DATE}?
• How much did the coffee price change from {START-DATE} to {END-DATE}?
• What was the percentage change in coffee price on {DATE} compared to the previous day?
• On which date from {START-DATE} to {END-DATE} was the difference between the highest and lowest coffee prices the greatest?
• What was the average daily volume of coffee traded from {START-DATE} to {END-DATE}?
• On which date from {START-DATE} to {END-DATE} did the coffee price have the highest increase compared to the previous day?
• How many times from {START-DATE} to {END-DATE} did the coffee price increase compared to the previous day?
• What was the percentage increase in coffee price from {START-DATE} to {END-DATE}?
• What was the coffee price range from {START-DATE} to {END-DATE}?
# D.3 Yelp | 2306.13304#72 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 73 |
# D.3 Yelp
We design the following 10 templates for hard questions on the Yelp dataset.
• How many {CATEGORY} businesses are there in {CITY}, {STATE}?
• How many businesses are there in the {POSTALCODE} area of {CITY}, {STATE}?
• Which {CATEGORY} business has the highest star rating in {CITY}, {STATE}?
• Which {CATEGORY} business has the highest review count in {CITY}, {STATE}?
• What is the average review count of businesses within a 5-mile radius from {NAME}?
• Which is the nearest {CATEGORY} business to {NAME}?
• Can you recommend a {CATEGORY} business with the highest star rating within a 5-mile radius of {ADDRESS}?
• How many businesses are not currently open in {CITY}?
• What is the average star rating of {CATEGORY} businesses in {CITY}?
• Which region has the most businesses in {CITY}, {STATE}?
# D.4 Airbnb
We design the following 10 templates for hard questions on Airbnb dataset.
• What is the total price at least if you want to stay at {NAME} in {NEIGHBOURHOOD} for {NUMBER} nights? | 2306.13304#73 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 74 | • What is the total price at least if you want to stay at {NAME} in {NEIGHBOURHOOD} for {NUMBER} nights?
• How many airbnbs are there in {NEIGHBOURHOOD}?
• What is the average price of airbnbs in {NEIGHBOURHOOD}?
• What is the average review rate within 5 miles from {NAME} in {NEIGHBOURHOOD}?
• What proportion of airbnbs in {NEIGHBOURHOOD} have a flexible cancellation policy?
• How much does it cost per night to stay at the most expensive entire home/apt in {NEIGHBOURHOOD}?
• How many airbnbs are there in {NEIGHBOURHOOD} that have a review rate higher than 4?
• Can you recommend me a hotel room with the lowest price in {NEIGHBOURHOOD}?
• Can you recommend me a private room with the highest review rate that can host at least 2 people in {NEIGHBOURHOOD}?
• Can you recommend a shared room with the lowest price within 10 miles from {LONGITUDE} longitude and {LATITUDE} latitude?
# D.5 SciREX
We design the following 4 templates for hard questions on SciREX dataset: | 2306.13304#74 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 75 | # D.5 SciREX
We design the following 4 templates for hard questions on SciREX dataset:
• What is the corresponding {METRIC} score of the {METHOD} method on {DATASET} dataset for {TASK} task?
• On which dataset does the {METHOD} method achieve the highest {METRIC} score for {TASK} task?
• Which method achieves the highest {METRIC} score on {DATASET} dataset for {TASK} task?
• On what metrics is the {METHOD} method evaluated on {DATASET} dataset for {TASK} task?
• Which datasets is {METHOD} method evaluated on for {TASK} task?
# D.6 Agenda
We design the following 5 templates for hard questions on Agenda dataset:
• How many events happen on {DATE} in the agenda table?
• Who is unavailable between {START-TIME} and {END-TIME} on {DATE} in the agenda table?
• When should I schedule a meeting with {NAME} from 9:00 AM to 6:00 PM on {DATE} in the agenda table?
• What events does {NAME} have on {DATE} in the agenda table?
• How many dates in the agenda table have {NAME} scheduled?
# D.7 DBLP
We design the following 10 templates for hard questions on DBLP dataset: | 2306.13304#75 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 76 |
# D.7 DBLP
We design the following 10 templates for hard questions on DBLP dataset:
• What keywords does {AUTHOR} focus on most in the DBLP citation network?
• How many people does {AUTHOR-1} need to know at least to know {AUTHOR-2} in the DBLP citation network?
• How many common collaborators does {AUTHOR-1} have with {AUTHOR-2}?
• Which is the most cited paper written by {AUTHOR} in the DBLP citation network?
• Which collaborator does {AUTHOR} have the most citations with in the DBLP citation network?
• In which venue does {AUTHOR} publish the most papers in the DBLP citation network?
• How many accumulated citations do papers co-authored by {AUTHOR-1} and {AUTHOR-2} have in the DBLP citation network?
• How many papers in all do {AUTHOR} and his/her collaborators have in the DBLP citation network?
• Who collaborated with {AUTHOR} most in the DBLP citation network?
• What institutions participated in the study of {TITLE} in the DBLP citation network?
# E Code Examples of Programmatic Answer Generation | 2306.13304#76 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.13304 | 77 | # E Code Examples of Programmatic Answer Generation
Below is an example of programmatic answer generation. The example code answers the question "What percentage of the flights from {ORIGIN} were delayed on {FLIGHTDATE}?". More details of the programmatic answers can be found in the public code.
def solution(data, flightdate, origin):
    # All flights departing from `origin` on `flightdate`.
    num_total = len(data.loc[(data["FlightDate"] == flightdate) & (data["Origin"] == origin)])
    # The subset of those flights marked as cancelled.
    num_cancelled = len(data.loc[(data["FlightDate"] == flightdate) & (data["Origin"] == origin) & (data["Cancelled"] == True)])
    if num_cancelled > 0:
        question = "What percentage of the flights from {} were delayed on {}?".format(origin, flightdate)
        answer = "{:.1f}".format(num_cancelled / num_total * 100) + "%"
        return question, answer
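For illustration, the snippet above could be driven as follows; the file name is a placeholder rather than the released data path, and the column names simply mirror those used in the snippet.

import pandas as pd

flights = pd.read_csv("flights.csv")   # placeholder path to the flights table
result = solution(flights, flightdate="2022-01-03", origin="ATL")
if result is not None:
    question, answer = result
    print(question, answer)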
# F Additional Implementation Details
# F.1 Implementation Details
All experiments are conducted on CPU: Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz and GPU: NVIDIA GeForce RTX A5000 GPUs, using Python 3.8, Huggingface 4.6.0, and PyTorch 1.10. We keep the parameters top_p = 1.0 and temperature t = 1.0 when calling the ChatGPT APIs [37] for the question generation part.
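As an illustration of these settings, a single question-generation call might look like the sketch below; the use of the pre-1.0 openai SDK and the gpt-3.5-turbo model name are assumptions, not a statement of the authors' exact setup.

import openai

def ask_chatgpt(prompt):
    # One ChatGPT call with the decoding parameters reported above (top_p = 1.0, temperature = 1.0).
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
        top_p=1.0,
    )
    return response["choices"][0]["message"]["content"]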
# F.2 Prompts
# F.2.1 Prompts for Agenda Data Generation
The prompts used for virtual name generation:
<Agenda_Name_Gen> Prompt
You are an AI assistant to answer questions. Can you list 100 English Names?
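The reply to this prompt is free-form text, so the names need light post-processing before they can be paired with events; the following parser is a hypothetical sketch that assumes the model answers with a numbered list.

import re

def parse_names(reply):
    # Drop "1." / "2)" style prefixes and blank lines from the model's reply.
    names = []
    for line in reply.splitlines():
        line = re.sub(r"^\s*\d+[.)]\s*", "", line).strip()
        if line:
            names.append(line)
    return names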
The prompts used for virtual events generation:
# <Agenda_Events_Gen> Prompt
You are an AI assistant for text generation. Generate 100 detailed agenda events, including the event, start time, end time, and location. Please make the events as diverse as possible and make sure these events can happen in real life. Make sure the location is a detailed name that may exist in real life. Make sure the dates are selected from 2022/01/01 to 2023/01/01.
Example:
Doctor's appointment - 9:00 AM - 11:00 AM - ABC Medical Center
Yoga class - 10:30 AM - 11:30 AM - Yoga Studio Downtown
Generate 100 more detailed agendas that do not conflict with the previous ones.
The prompts used to convert the agenda records into natural language descriptions:
<Agenda_Gen> Prompt
Please use natural language to describe the event in the agenda with the following information:
Name: NAME
Date: DATE
Event: EVENT
Start Time: START-TIME
End Time: END-TIME
Location: LOCATION
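A minimal sketch of filling this template from one generated agenda record is shown below; the dictionary keys and the helper name are assumptions for illustration rather than the authors' code.

AGENDA_GEN_TEMPLATE = (
    "Please use natural language to describe the event in the agenda with the following information:\n"
    "Name: {name}\nDate: {date}\nEvent: {event}\n"
    "Start Time: {start_time}\nEnd Time: {end_time}\nLocation: {location}"
)

def build_agenda_prompt(record):
    # record example: {"name": "Stephen", "date": "2022/01/29", "event": "Opera performance", ...}
    return AGENDA_GEN_TEMPLATE.format(**record)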
# F.2.2 Prompts for Methods
The prompts used in ReAct [66]:
# <ReAct> Prompt
Question: How many extra minutes did the DL1575 flight take from ATL to MCO on 2022-01-12?
Thought 1: This is a question related to flights. We need to load the flights database.
Action 1: LoadDB[flights]
Observation 1: We have successfully loaded the flights database, including the following columns: FlightDate, Airline, Origin, Dest, Cancelled, Diverted, CRSDepTime, DepTime, DepDelayMinutes, DepDelay, ArrTime, ArrDelayMinutes, AirTime, CRSElapsedTime, ActualElapsedTime, Distance, Year, Quarter, Month, DayOfWeek, Marketing_Airline_Network, Operated_or_Branded_Code_Share_Partners, DayofMonth, DOT_ID_Marketing_Airline, IATA_Code_Marketing_Airline, Flight_Number_Marketing_Airline, Operating_Airline, DOT_ID_Operating_Airline, IATA_Code_Operating_Airline, Tail_Number, Flight_Number_Operating_Airline, OriginAirportID, OriginAirportSeqID, OriginCityMarketID, OriginCityName, OriginState, OriginStateFips, OriginStateName, OriginWac, DestAirportID, DestAirportSeqID, DestCityMarketID, DestCityName, DestState, DestStateFips, DestStateName, DestWac, DepDel15, DepartureDelayGroups, DepTimeBlk, TaxiOut, WheelsOff, WheelsOn, TaxiIn, CRSArrTime, ArrDelay, ArrDel15, ArrivalDelayGroups, ArrTimeBlk, DistanceGroup, DivAirportLandings.
Thought 2: We need to filter the information related to the query.
Action 2: FilterDB[Flight_Number_Marketing_Airline=1575, FlightDate=2022-01-12, Origin=ATL, Dest=MCO]
Observation 2: We have successfully filtered the data (1 row).
Thought 3: We then need to know the departure delayed time.
Action 3: GetValue[DepDelay]
Observation 3: -7.0
Thought 4: We then need to know the arrival delayed time.
Action 4: GetValue[ArrDelay]
Observation 4: -17.0
Thought 5: To compute the extra minutes, we need to subtract the departure delayed time from the arrival delayed time.
Action 5: Calculate[(-17)-(-7)]
Observation 5: -10
Thought 6: After calculation, we know that the answer is -10.
Action 6: Finish[-10]
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?
Thought 1: Maybe I can write a python program to solve this problem.
Action 1: PythonInterpreter[# solution in Python:
def solution():
    # Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday,
    # he lost 2 more. How many golf balls did he have at the end of wednesday?
    golf_balls_initial = 58
    golf_balls_lost_tuesday = 23
    golf_balls_lost_wednesday = 2
    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday
    result = golf_balls_left
    return result]
Observation 1: 33
Thought 2: After execution of Python Code, we know that the answer is 33.
Action 2: Finish[33]
Question: What is the corresponding Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task?
Thought 1: The question is asking some performance information about Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task, we need to retrieve some useful information from the scirex database.
Action 1: RetrieveScirex[Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task]
Observation 1: The other stream undergoes a sequence of pooling operations to obtain robust features for recognition . The two streams are coupled at the full image resolution using residuals . Without additional processing steps and without pretraining , our approach achieves an intersection - over - union score of 71.8 % on the Cityscapes dataset .
Thought 2: As the approach achieves an intersection - over - union score of 71.8 % on the Cityscapes dataset, the answer should be 71.8%.
Action 2: Finish[71.8%]
Question: When was the paper Learning the Principle of Least Action with Reinforcement Learning. published?
Thought 1: The question is asking some basic information of a dblp paper. We need to load the DBLP graph.
Action 1: LoadGraph[dblp]
Observation 1: DBLP graph is loaded.
Thought 2: The question is asking the published date of a paper, we need to check the node from the PaperNet in DBLP graph.
Action 2: NodeCheck[PaperNet, Learning the Principle of Least Action with Reinforcement Learning.]
Observation 2: {'title': 'Learning the Principle of Least Action with Reinforcement Learning.', 'year': 2021, 'venue': {'raw': 'AAAI Spring Symposium - MLPS'}, 'n_citation': 0, 'keywords': [], 'doc_type': 'Conference', 'page_start': '', 'page_end': ''}
Thought 3: The published date of the paper is 2021.
Action 3: Finish[2021]
Question: How many collaborators does Chao Zhang have in the DBLP graph?
Thought 1: The question is asking the number of collaborators of a person, we need load the DBLP graph.
Action 1: LoadGraph[dblp]
Observation 1: DBLP graph is loaded.
Thought 2: The question is asking the collaborators of a person, we need to check the neighboring nodes from the AuthorNet in DBLP graph.
Action 2: NeighbourCheck[AuthorNet, Chao Zhang]
Observation 2: ['YUHUI YUAN', 'Rao Fu', 'Lang Huang', 'Weihong Lin', 'X Chen', 'Jingdong Wang']
Thought 3: The number of collaborators of Chao Zhang is 6.
Action 3: Finish[6]
Question: How many papers does Chao Zhang and Weihong Lin have in common in the DBLP graph?
Thought 1: The question is asking the number of common papers of two persons, we need load the DBLP graph.
Action 1: LoadGraph[dblp]
Observation 1: DBLP graph is loaded.
Thought 2: The question is asking the common papers of two persons, we need to check the edges between them from the PaperNet in DBLP graph.
Action 2: EdgeCheck[PaperNet, Chao Zhang, Weihong Lin]
Observation 2: {'weight': 1, 'papers': ['HRFormer: High-Resolution Vision Transformer for Dense Predict.'], 'n_citation': [95]}
Thought 3: The number of common papers of Chao Zhang and Weihong Lin is 1.
Question: Where did Stephen's Opera performance take place?
Thought 1: The question is asking the location of Stephen's Opera performance from agenda.
Action 1: RetrieveAgenda[Stephen's Opera performance]
Observation 1: On January 29, 2022, there will be an opera performance at the Lyric Opera House, featuring Stephen. The show will start at 7:00 PM and end at 9:00 PM. It promises to be a wonderful evening of beautiful music and powerful performances in a stunning venue. Come and experience the magic of opera at its finest!
Thought 2: The event happened in Lyric Opera.
Action 2: Finish[Lyric Opera]
Question: What was the trading volume of coffee on 2000-01-14?
Thought 1: Maybe I can write a SQL query to solve this problem.
Action 1: SQLInterpreter(SELECT Volume FROM coffee.coffee_data WHERE Date =
'2000-01-14';
Observation 1: Volume: 10115
Thought 2: The volume of coffee on 2000-01-14 is 10115.
Action 2: Finish[10115]
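The trajectories above interleave model-generated Thought and Action lines with tool outputs that are fed back as Observations. A schematic dispatcher for one such step is sketched below; the regular expression and the tool registry are hypothetical and do not reproduce the benchmark's actual tool implementations.

import re

ACTION_RE = re.compile(r"Action \d+: *(\w+)\[(.*)\]", re.DOTALL)

def react_step(llm_output, tools):
    # Parse the latest "Action n: Tool[args]" line and run the corresponding tool.
    match = ACTION_RE.search(llm_output)
    if match is None:
        return "Observation: could not parse an action."
    tool_name, arg_string = match.groups()
    if tool_name == "Finish":
        return arg_string  # final answer
    if tool_name not in tools:
        return "Observation: unknown tool {}.".format(tool_name)
    return tools[tool_name](arg_string)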
The prompts used in Chameleon [28]:
# <Chameleon> Prompt
You need to act as a policy model, that given a question and a modular set, determines the sequence of modules that can be executed sequentially can solve the question.
The modules are defined as follows:
- Calculate[formula]: This module calculates a given formula and returns the result. It takes in a mathematical formula and returns the calculated result. Normally, we only consider using "Calculate" when the question involves mathematical computations.
- RetrieveAgenda[keyword]: This module retrieves an agenda related to a specific keyword and returns it. It takes in a keyword and returns the corresponding agenda. Normally, we only consider using "RetrieveAgenda" when the question is about specific actions or tasks related to a topic.
- RetrieveScirex[keyword]: This module retrieves paragraphs from machine learning papers related to the specified keyword and returns them. It takes in a keyword and returns the relevant paragraphs. Normally, we only consider using "RetrieveScirex" when the question involves understanding specific concepts in machine learning.
- LoadDB[DBName]: This module loads a database specified by the database name and returns the loaded database. It takes in a database name and returns the corresponding database. The DBName can be one of the following: flights/coffee/airbnb/yelp. Normally, we only consider using "LoadDB" when the question requires data from a specific structured dataset.
- FilterDB[column_name, relation, value]: This module filters a database by a specified column name, relation, and value, and then returns the filtered database. It takes in a column name, a relation, and a value, and returns the filtered database. Normally, we only consider using "FilterDB" when the question requires a specific subset of data from a structured dataset.
- GetValue[column_name]: This module returns the value of a specified column in a database. It takes in a column name and returns its value. Normally, we only consider using "GetValue" when the question requires a specific piece of data from a structured dataset.
- LoadGraph[GraphName]: This module loads a graph specified by the graph name and returns the loaded graph. It takes in a graph name and returns the corresponding graph. Normally, we only consider using "LoadGraph" when the question involves understanding or navigating specific graph structures.
- NeighbourCheck[GraphName, Node]: This module lists the neighbors of a specified node in a graph and returns the neighbors. It takes in a graph name and a node, and returns the node's neighbors. Normally, we only consider using "NeighbourCheck" when the question involves understanding relationships in a graph structure.
- NodeCheck[GraphName, Node]: This module returns the detailed attribute information of a specified node in a graph. It takes in a graph name and a node, and returns the node's attributes. Normally, we only consider using "NodeCheck" when the question requires information about a specific entity in a graph.
- EdgeCheck[GraphName, Node1, Node2]: This module returns the detailed attribute information of the edge between two specified nodes in a graph. It takes in a graph name and two nodes, and returns the attributes of the edge between them. Normally, we only consider using "EdgeCheck" when the question involves understanding the relationship between two entities in a graph.
- SQLInterpreter[SQL]: This module interprets a SQL query and returns the result. It takes in a SQL query and returns the result of the query. Normally, we only consider using "SQLInterpreter" when the question requires data manipulation and extraction from a structured dataset.
- PythonInterpreter[Python]: This module interprets Python code and returns the result. It takes in Python code and returns the result of the code execution. Normally, we only consider using "PythonInterpreter" when the question requires complex computations or custom data manipulation.
- Finish[answer]: This module returns the final answer and finishes the task. This module is the final module in the sequence that encapsulates the result of all previous modules.
Below are some examples that map the problem to the modules.
Question: How many extra minutes did the DL1575 flight take from ATL to MCO on 2022-01-12?
Modules: ["LoadDB[flights]", "FilterDB[Flight_Number_Marketing_Airline=1575, FlightDate=2022-01-12, Origin=ATL, Dest=MCO]", "GetValue[DepDelay]", "GetValue[ArrDelay]", "Calculate[(-17)-(-7)]", "Finish[-10]"]
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?
Modules: ["PythonInterpreter[# solution in Python:
def solution():
    # Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?
    golf_balls_initial = 58
    golf_balls_lost_tuesday = 23
    golf_balls_lost_wednesday = 2
    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday
    result = golf_balls_left
    return result]", "Finish[33]"]
Question: What is the corresponding Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task?
Modules: ["ScirexRetrieve[Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task]", "Finish[71.8%]"]
Question: When was the paper Learning the Principle of Least Action with Reinforcement Learning. published?
Modules: ["LoadGraph[dblp]", "NodeCheck[PaperNet, Learning the Principle of Least Action with Reinforcement Learning.]", "Finish[2021]"]
Question: How many collaborators does Chao Zhang have in the DBLP graph?
Modules: ["LoadGraph[dblp]", "NeighbourCheck[AuthorNet, Chao Zhang]", "Finish[6]"]
Question: How many papers does Chao Zhang and Weihong Lin have in common in the DBLP graph?
Modules: ["LoadGraph[dblp]", "EdgeCheck[PaperNet, Chao Zhang, Weihong Lin]", "Finish[1]"]
Question: Where did Stephen's Opera performance take place?
Modules: ["AgendaRetrieve[Stephen's Opera performance]", "Finish[Lyric Opera]"]
Question: What was the trading volume of coffee on 2000-01-14?
Modules: ["SQLInterpreter[SELECT Volume FROM coffee.coffee_data WHERE Date = '2000-01-14']", "Finish[10115]"]
Now, you need to act as a policy model, that given a question and a modular set, determines the sequence of modules that can be executed sequentially can solve the question.
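Once the policy model has produced a module sequence such as ["LoadDB[flights]", ..., "Finish[-10]"], the sequence still has to be executed. A hypothetical executor is sketched below; the registry of module implementations and the calling convention are assumptions, not the authors' code.

import re

MODULE_RE = re.compile(r"^(\w+)\[(.*)\]$", re.DOTALL)

def run_module_sequence(modules, registry):
    # Run each "Name[args]" module in order, threading the previous result through.
    result = None
    for module in modules:
        name, args = MODULE_RE.match(module.strip()).groups()
        if name == "Finish":
            return args
        result = registry[name](args, previous=result)
    return result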
# G Key Information of ToolQA
# G.1 Dataset Documentations
The dataset is provided in jsonl format. Each task corresponds to two files: easy and hard (e.g., "flight-easy.jsonl" and "flight-hard.jsonl", etc.). Each data point contains the following fields:
- qid: the unique identifier for the question-answer pair;
- question: the question to query;
- answer: the corresponding ground-truth answer to the question.
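For reference, a minimal loader for one split in this format could look like the sketch below; the relative path is an assumption, while the file name follows the example given above.

import json

def load_toolqa(path="flight-easy.jsonl"):
    # Each line is a JSON object with the qid/question/answer fields described above.
    examples = []
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            examples.append((record["qid"], record["question"], record["answer"]))
    return examples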
# G.2 Intended Uses
ToolQA is intended for researchers in machine learning and related fields to innovate novel methods for tool-augmented large language models (LLMs). We also aim to help developers to test their plugins on our dataset.
# G.3 Hosting and Maintenance Plan
ToolQA codebase is hosted and version-tracked via GitHub. It will be permanently available under the link https://github.com/night-chen/ToolQA. The download link of all the datasets can be found in the GitHub repository.
ToolQA is a community-driven and open-source initiative. We are committed, and have the resources, to maintain and actively develop ToolQA in the future. We plan to grow ToolQA to include more tasks, tools, and baseline methods. We welcome external contributors.
# G.4 Licensing
2306.13304 | 96 | # G.4 Licensing
We license our work using Apache 2.012. All the dataset will be publicly released through the aforementioned GitHub link.
# G.5 Limitation
Tool-augmented LLM is a popular and wildly developing direction, which is wildly developing and focused on by a lot of researchers, ToolQA will keep developing and include more tasks, data, tools, and methods in the future.
# 12https://www.apache.org/licenses/LICENSE-2.0
25 | 2306.13304#96 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2212.10511"
},
{
"id": "2306.07193"
},
{
"id": "2211.05100"
},
{
"id": "2301.12652"
},
{
"id": "2206.14858"
},
{
"id": "2209.14610"
},
{
"id": "2203.09735"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2209.07686"
},
{
"id": "2110.14168"
},
{
"id": "2208.03299"
},
{
"id": "2102.13019"
},
{
"id": "2303.11381"
},
{
"id": "2204.05862"
},
{
"id": "2211.10435"
},
{
"id": "2207.13332"
},
{
"id": "2210.12810"
},
{
"id": "2303.04671"
},
{
"id": "2303.05398"
},
{
"id": "2210.17517"
},
{
"id": "2112.04359"
},
{
"id": "2303.17580"
},
{
"id": "2208.05051"
},
{
"id": "2305.15334"
},
{
"id": "2304.09842"
},
{
"id": "2303.11366"
},
{
"id": "2303.12712"
},
{
"id": "2302.12813"
},
{
"id": "2303.09014"
},
{
"id": "2205.12255"
}
] |
2306.12672 | 1 | How does language inform our downstream thinking? In particular, how do humans make meaning from languageâand how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural models of language with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT)âa general-purpose symbolic substrate for probabilistic, generative world modeling. Our architecture integrates two powerful computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for flexible commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework in action through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture | 2306.12672#1 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 2 | reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics engines, and goal-directed planning algorithms) to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves. We hope this work will help to situate contemporary developments in LLMs within a broader cognitive picture of human language and intelligence, providing a roadmap towards AI systems that synthesize the insights of both modern and classical computational perspectives. | 2306.12672#2 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.