doi (string) | chunk-id (int64, 0–936) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.15126 | 8 | • Through our analysis, we discover that LVLMs are easily influenced by prompts in idealized hallucination scenarios, so the results do not correlate with hallucinations in real-world scenarios.
to utilize LLM for hallucination evaluation within LVLMs. We propose Hallucination Evaluation based on Large Language Models (HaELM). HaELM achieves a strong performance and has additional advantages including low cost, reproducibility, privacy preservation and local deployment compared to ChatGPT.
• Leveraging HaELM, we embark on evaluating the presence of hallucination in current LVLMs. We analyze the factors that affect hallucination and offer helpful suggestions.
# 2 Background
In this section, we mainly introduce existing Large Language Models (LLMs) and Large Vision-Language Models (LVLMs), as well as the hallucination problems that exist in LLMs and LVLMs.
# 2.1 Large Language Model | 2308.15126#8 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 9 | Search is a brownfield application. Wikipedia defines this as "a term commonly used in the information technology industry to describe problem spaces needing the development and deployment of new software systems in the immediate presence of existing (legacy) software applications/systems." Additionally, "this implies that any new software architecture must take into account and coexist with live software already in situ." Specifically, many organizations have already made substantial investments in search within the Lucene ecosystem. While most organizations do not directly use the open-source Lucene search library in production, the search application landscape is dominated by platforms that are built on top of Lucene such as Elasticsearch, OpenSearch, and Solr. For example, Elastic, the publicly traded company behind Elasticsearch, reports approximately 20,000 subscriptions to its cloud service as of Q4 FY2023.3 Similarly, in the category of search engines, Lucene dominates DB-Engines Ranking, a site that tracks the popularity of various database management systems.4 There is a paucity of concrete usage data, but it would not be an exaggeration to say that Lucene has an immense install base. | 2308.14963#9 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.14972 | 9 | # References
[1] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg, "Progprompt: Generating situated robot task plans using large language models," in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 11 523–11 530.
[2] G. Chalvatzaki, A. Younes, D. Nandha, A. T. Le, L. F. R. Ribeiro, and I. Gurevych, "Learning to reason over scene graphs: A case study of finetuning GPT-2 into a robot language model for grounded task planning," CoRR, vol. abs/2305.07716, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2305.07716 | 2308.14972#9 | LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks | This paper presents a novel approach to enhance autonomous robotic
manipulation using the Large Language Model (LLM) for logical inference,
converting high-level language commands into sequences of executable motion
functions. The proposed system combines the advantage of LLM with YOLO-based
environmental perception to enable robots to autonomously make reasonable
decisions and task planning based on the given commands. Additionally, to
address the potential inaccuracies or illogical actions arising from LLM, a
combination of teleoperation and Dynamic Movement Primitives (DMP) is employed
for action correction. This integration aims to improve the practicality and
generalizability of the LLM-based human-robot collaboration system. | http://arxiv.org/pdf/2308.14972 | Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa | cs.RO, cs.AI | IEEE MHS 2023 | null | cs.RO | 20230829 | 20230829 | [] |
2308.15126 | 9 | # 2.1 Large Language Model
GPT-3 (Brown et al., 2020) has demonstrated that language models with a large number of parameters possess powerful zero-shot capabilities and are capable of excelling at previously unseen tasks. Thanks to the success of GPT-3, now LLMs (Zhang et al., 2022; Chowdhery et al., 2022; Touvron et al., 2023; Scao et al., 2022) have gained significant attention. To make LLMs more responsive to human instructions, InstructGPT (Ouyang et al., 2022) introduced the instruction-following fine-tuning paradigm. It employs reinforcement learning from human feedback to train the LLMs to follow human instructions and produce desired outputs.
# 2.2 Large Vision-Language Model
With the success of LLMs, many researchers have been extending language models to understand real-world images. For example, some approaches (Yang et al., 2023; Shen et al., 2023) are based
| 2308.15126#9 | Evaluation and Analysis of Hallucination in Large Vision-Language Models |
2308.14963 | 10 | The most recent major release of Lucene (version 9), dating back to December 2021, includes HNSW indexing and search capabilities, which have steadily improved over the past couple of years. This means that differences in capabilities between Lucene and dedicated vector stores are primarily in terms of performance, not the availability of must-have features. Thus, from a simple cost-benefit calculus, it is not clear that vector search requires introducing a dedicated vector store into an already complex enterprise "AI stack". Our thesis: Lucene is all you need.
We empirically demonstrate our claims on the MS MARCO passage ranking test collection, a standard benchmark dataset used by researchers today. We have encoded the entire corpus using OpenAI's ada2 embedding endpoint, and then indexed the dense vectors with Lucene. Experimental results show that this combination achieves effectiveness comparable to the state of the art on the development queries as well as queries from the TREC 2019 and 2020 Deep Learning Tracks.
3 https://ir.elastic.co/news-events/press-releases/press-releases-details/2023/Elastic-Reports-Fourth-Quarter-and-Fiscal-2023-Financial-Results/default.aspx
4 https://db-engines.com/en/ranking/search+engine
| 2308.14963#10 | Vector Search with OpenAI Embeddings: Lucene Is All You Need |
2308.14972 | 10 | [3] R. Wang, Y. Wu, W. L. Chan, and K. P. Tee, "Dynamic movement primitives plus: For enhanced reproduction quality and efficient trajectory modification using truncated kernels and local biases," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 3765–3771.
[4] J. Nakanishi, S. Itadera, T. Aoyama, and Y. Hasegawa, "Towards the development of an intuitive teleoperation system for human support robot using a VR device," Advanced Robotics, vol. 34, no. 19, pp. 1239–1253, 2020.
[5] Y. Zhu, B. Jiang, Q. Chen, T. Aoyama, and Y. Hasegawa, "A shared control framework for enhanced grasping performance in teleoperation," IEEE Access, vol. 11, pp. 69 204–69 215, 2023. | 2308.14972#10 | LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks |
2308.15126 | 10 |
Figure 2: The validity assessment results of object- based hallucination evaluation. QH represents the per- centage that we asked about the corresponding item on images where it was not present; AY represents the percentage that the model answered "yes", and CH rep- resents the percentage that the model had hallucinations of the corresponding item in the responses.
on visual experts and regard ChatGPT as the central work. On the other hand, some recent open-source works such as (Zhu et al., 2023; Liu et al., 2023b; Ye et al., 2023; Gong et al., 2023; Wang et al., 2023; Li et al., 2023a; Mu et al., 2023; Su et al., 2023) achieve unified LVLMs by aligning extracted visual tokens from a visual encoder with a pre-trained LLM and instruction-tuning it. To further improve the performance of LVLMs, (Liu et al., 2023a; Li et al., 2023c) proposed to increase the diversity of instructions and construct larger instruction fine-tuning datasets.
# 2.3 Hallucinations in LLMs and LVLMs | 2308.15126#10 | Evaluation and Analysis of Hallucination in Large Vision-Language Models |
2308.14963 | 11 | Our experiments are conducted with Anserini (Yang et al., 2018), a Lucene-based IR toolkit that aims to support reproducible information retrieval research. By building on Lucene, Anserini aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Devins et al. (2022) showed that capabilities implemented by researchers in Anserini using Lucene can be straightforwardly translated into Elasticsearch (or any other platform in the Lucene ecosystem), thus simplifying the path from prototypes to production deployments. | 2308.14963#11 | Vector Search with OpenAI Embeddings: Lucene Is All You Need |
2308.15126 | 11 | # 2.3 Hallucinations in LLMs and LVLMs
The issue of hallucinations has been extensively studied in the traditional field of NLP. Despite the advancements in the latest and widely acclaimed LLMs, they remain encumbered by the persistent challenge of hallucinations. Consequently, a multitude of works have emerged, aiming to mitigate the impact of these hallucinations. However, it is noteworthy that limited focus has been directed toward addressing the hallucination in LVLMs (Zhou et al., 2023; Liu et al., 2023a).
In contrast to hallucinations observed in LLMs, hallucinations within LVLMs arise from a mismatch between the visual and textual modalities. Currently, the only work that specifically focuses on the hallucination of LVLMs utilizes object detection and query instructions (Li et al., 2023d). Through meticulous empirical experiments, they
substantiate the considerable severity of hallucinations in LVLMs, particularly in generating objects that are absent from the provided images but appear frequently in the training data. The existing LLMs, by adopting instruction tuning, make their target outputs follow human instructions, but this can result in biased training and target distributions (Tian et al., 2023). Furthermore, insufficient visual constraints contribute to the serious issue of hallucinations in LVLMs. | 2308.15126#11 | Evaluation and Analysis of Hallucination in Large Vision-Language Models |
2308.14963 | 12 | Our demonstration further shows the ease with which state-of-the-art vector search can be implemented by simply "plugging together" readily available components. In the context of the bi-encoder architecture, Lin (2021) identified the logical scoring model and the physical retrieval model as distinct conceptual components. In our experiments, the logical scoring model maps to the OpenAI embedding API, whose operations are no different from any other API endpoint. What Lin calls the physical retrieval model focuses on the top-k retrieval capability, which is handled by Lucene. In Anserini, vector indexing and search is exposed in a manner that is analogous to indexing and retrieval using bag-of-words models such as BM25. Thus, the implementation of the state of the art in vector search using generative AI does not require any AI-specific implementations, which increases the accessibility of these technologies to a wider audience.
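The division of labor above can be sketched end to end. In this minimal illustration, the logical scoring model is stood in for by pre-supplied toy vectors (in the real system these come from the ada2 API and have 1536 dimensions), and the physical retrieval model is stood in for by a brute-force NumPy scan (in the real system, Lucene's HNSW index); all names below are illustrative, not part of the paper's code.

```python
import numpy as np

def top_k(query_vec: np.ndarray, passage_vecs: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k most similar passages by cosine similarity.

    Brute-force scoring stands in for Lucene's HNSW index; the logical
    scoring model (the embedding API) is assumed to have produced the vectors.
    """
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = p @ q                      # cosine similarity of each passage to the query
    return np.argsort(-scores)[:k].tolist()

# Toy 4-dimensional "embeddings" (real ada2 vectors are 1536-dimensional).
passages = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.9, 0.1, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0]])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(top_k(query, passages, k=2))  # [0, 1]: the two passages nearest the query
```

Swapping the brute-force scan for an HNSW index changes only the physical retrieval model; the surrounding code is unaffected, which is the modularity point being made.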
# 3 Experiments
Experiments in this paper are relatively straightforward. We focused on the MS MARCO passage ranking test collection (Bajaj et al., 2018), which is built on a corpus comprising approximately 8.8 million passages extracted from the web. Note that since the embedding vectors are generated by OpenAI's API endpoint, no model training was performed. For evaluation, we used the standard development queries as well as queries from the TREC 2019 and TREC 2020 Deep Learning Tracks. | 2308.14963#12 | Vector Search with OpenAI Embeddings: Lucene Is All You Need |
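For context, the standard metric on the MS MARCO development queries is MRR@10. A minimal sketch of how it is computed (the `run` and `qrels` structures below are hypothetical stand-ins for TREC-style run and qrels files):

```python
def mrr_at_10(run, qrels):
    """Mean reciprocal rank at cutoff 10 over a set of queries.

    `run` maps query id -> ranked list of passage ids;
    `qrels` maps query id -> set of relevant passage ids.
    """
    total = 0.0
    for qid, ranking in run.items():
        for rank, pid in enumerate(ranking[:10], start=1):
            if pid in qrels.get(qid, set()):
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(run)

run = {"q1": ["p3", "p1", "p2"], "q2": ["p9", "p8"]}
qrels = {"q1": {"p1"}, "q2": {"p7"}}
print(mrr_at_10(run, qrels))  # (1/2 + 0) / 2 = 0.25
```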
2308.15126 | 12 | The presence of hallucinations can lead to unreliability in models, which may cause harm to human society, for example when misleading model outputs lead to errors in human decision-making, or when toxic information is generated.
# 3 Motivation
The existing method for hallucination evaluation is object-based hallucination evaluation (Li et al., 2023d). It measures the extent of hallucination in LVLMs by querying their response to the presence of an "item". The "item" is chosen from a list of commonly hallucinated words that do not exist in the image. If the model believes that an item is present in an image where it is absent, it indicates that the model has a hallucination regarding that item. | 2308.15126#12 | Evaluation and Analysis of Hallucination in Large Vision-Language Models |
2308.14963 | 13 | In our experimental setup, we utilized the OpenAI ada2 model (Neelakantan et al., 2022) for generating both query and passage embeddings. This model is characterized by an input limit of 8191 tokens and an output embedding size of 1536 dimensions. However, to maintain consistency with the existing literature (Pradeep et al., 2021; Ma et al., 2022a), we truncated all passages in the corpus to 512 tokens. It is unknown whether OpenAI leveraged the MS MARCO passage corpus during model development, but in general, accounting for data leakage is extremely challenging for large models, especially those from OpenAI that lack transparency.
Using tiktoken, OpenAI's official tokenizer, we computed the average token count per passage in our corpus to be 75.2, resulting in a total of approximately 660 million tokens. In order to generate the embeddings efficiently, we queried the API in parallel while respecting the rate limit of 3500 calls per minute. We had to incorporate logic for error handling in our code, given the high-volume nature of our API calls. Ultimately, we were able to encode both the corpus and the queries, the latter of which are negligible in comparison, in a span of two days. | 2308.14963#13 | Vector Search with OpenAI Embeddings: Lucene Is All You Need |
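The error-handling logic for high-volume API calls typically amounts to retrying with exponential backoff. A generic sketch of that pattern (the paper does not publish its exact code; the flaky endpoint below is simulated, not OpenAI's actual API):

```python
import random
import time

def call_with_retries(fn, max_attempts=5, base_delay=0.01):
    """Retry a flaky zero-argument callable with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Simulate an endpoint that fails twice (e.g. rate limited) before succeeding.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("rate limited")
    return "embedding"

print(call_with_retries(flaky))  # "embedding", after two retried failures
```

In a real client one would catch only retryable error types (rate limits, timeouts) rather than bare `Exception`, and cap the total delay.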
2308.15126 | 13 | To verify the feasibility, we designed an experiment based on the object-based hallucination evaluation method. We utilized the prompt "Is there a {item} in this photo?" to query mPLUG-Owl regarding 100 randomly selected images from the MS-COCO 2014 dataset (Lin et al., 2014; Chen et al., 2015). Other models' and detailed results are provided in the appendix. The {item} in the prompt was substituted with the top ten most frequently hallucinated words proposed by (Li et al., 2023d) that are not present in the given image. The results are presented in Figure 2. The "QH" and "AY" reveal that LVLMs answer "yes" to over 80% of the queries in this prompt, even if all the items in the prompts were absent from the image.
The above phenomenon can be explained by the tendency of LVLMs to affirm the description when answering judgment-type queries with a "yes" response. We speculate that this bias is due to the instruction fine-tuning data that includes a substantial number of responses catering to human requests, which results in bias in LVLMs' responses to judgment-type queries. To verify the relationship between the responses of LVLMs to such queries | 2308.15126#13 | Evaluation and Analysis of Hallucination in Large Vision-Language Models |
2308.14963 | 14 | As previously mentioned, all our retrieval experiments were conducted with the Anserini IR toolkit (Yang et al., 2018). The primary advantage of Anserini is that it provides direct access to underlying Lucene features in a "researcher-friendly" manner that better comports with modern evaluation workflows. Our experiments were based on Lucene 9.5.0, but indexing was a bit tricky because the HNSW implementation in Lucene restricts vectors to 1024 dimensions, which was not sufficient for OpenAI's 1536-dimensional embeddings.5 Although the resolution of this issue, which is to make vector dimensions configurable on a per codec basis, has been merged to the Lucene source trunk,6 this feature has not been folded into a Lucene release (yet) as of early August 2023. Thus, there is no public release of Lucene that can directly index OpenAI's ada2 embedding vectors. Fortunately, we were able to hack around this limitation in an incredibly janky way.7
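For intuition, the retrieval operation that the HNSW index accelerates is plain nearest-neighbor search over embedding vectors. A minimal pure-Python sketch of the exact (brute-force) version, standing in for Lucene's index rather than using its API:

```python
import math

def cosine(u, v):
    # cosine similarity between two dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def brute_force_search(query, corpus, k=2):
    """Exact top-k by cosine similarity; an HNSW index returns an
    approximation of this ranking in sub-linear time."""
    scored = [(cosine(query, vec), doc_id) for doc_id, vec in corpus.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:k]]

# toy 2-d stand-ins for 1536-d ada2 embeddings
corpus = {"d1": [1.0, 0.0], "d2": [0.7, 0.7], "d3": [0.0, 1.0]}
print(brute_force_search([1.0, 0.1], corpus))  # ['d1', 'd2']
```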
Experimental results are shown in Table 1, where we report effectiveness in terms of standard metrics: reciprocal rank at 10 (RR@10), average precision (AP), nDCG at a rank cutoff of 10 (nDCG@10), and recall at a rank cutoff of 1000 (R@1k). The effectiveness of the ada2 embeddings is shown in the
| 2308.14963#14 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 14 |
Figure 3: The illustration for data collection process of HaELM. The left figure illustrates the process of manually collecting real hallucination responses, while the right figure illustrates the generation of data in bulk using ChatGPT. The human similarity assessment aims to align the patterns of simulated hallucination data with realistic one.
and corresponding hallucinations, we conducted a manual evaluation in real-world scenarios. We used the prompt "Describe this image" and examined whether the generated descriptions truly contained hallucinations for the items that received a "yes" response. The "AY" and "CH" in Figure 2 reveal that only 10% of the responses included hallucinations for the specific items. This suggests that the hallucinations measured by object-based evaluation merely exploit the judgment bias present in LVLMs, rather than reflecting their hallucination.
# 4 Method
This section mainly introduces the definition of hallucination and our method of Hallucination Evaluation based on Large Language Models.
# 4.1 Problem Definition
2308.14963 | 15 | 5 https://github.com/apache/lucene/issues/11507
6 https://github.com/apache/lucene/pull/12436
7 The sketch of the solution is as follows: We copy relevant source files from the Lucene source trunk directly into our source tree and patch the vector size settings directly. When we build our fatjar, the class files of our "local versions" take precedence, and hence override the vector size limitations.
2308.15126 | 15 |
The evaluation of hallucinations in real-world scenarios for LVLMs is defined as determining whether there are discrepancies between the content of the images and the responses generated by LVLMs, under the potential requests that could be initiated by humans. In this paper, we focus on the real-world scenario of image description.
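At the object level, this definition reduces to a set difference between what a response mentions and what the image actually contains; a toy sketch (the object extractors are assumed and replaced by literal sets here):

```python
def hallucinated_objects(response_objects, image_objects):
    """Objects mentioned in the response but absent from the image."""
    return set(response_objects) - set(image_objects)

# toy example; in practice both sets come from (assumed) object extractors
image = {"elephant", "man", "crowd", "street"}
response = {"elephant", "man", "crowd", "confetti", "rain"}
print(sorted(hallucinated_objects(response, image)))  # ['confetti', 'rain']
```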
# 4.2 HaELM
Data Collection
To perceive hallucinations in the responses of LVLMs, it is crucial to evaluate both non-hallucinatory and hallucinatory responses. To address this, we first analyze the hallucination patterns of LVLMs. Randomly selecting images, we
query the LVLMs with the instruction "Describe this image" and manually annotate the hallucination responses to get the realistic hallucination collection, as shown on the left of Figure 3.
2308.14963 | 16 |

| Model | dev RR@10 | dev R@1k | DL19 AP | DL19 nDCG@10 | DL19 R@1k | DL20 AP |
|---|---|---|---|---|---|---|
| *Unsupervised Sparse Representations* | | | | | | |
| BM25 (Ma et al., 2022a)† | 0.184 | 0.853 | 0.301 | 0.506 | 0.750 | 0.286 |
| BM25+RM3 (Ma et al., 2022a)† | 0.157 | 0.861 | 0.342 | 0.522 | 0.814 | 0.301 |
| *Learned Sparse Representations* | | | | | | |
| uniCOIL (Ma et al., 2022a)† | 0.352 | 0.958 | 0.461 | 0.702 | 0.829 | 0.443 |
| SPLADE++ ED (Formal et al., 2022)† | 0.383 | 0.983 | 0.505 | 0.731 | 0.873 | … |
| *Learned Dense Representations* | | | | | | |
| TAS-B (Hofstätter et al., 2021) | 0.340 | 0.975 | – | 0.712 | … | … |
| TCT-ColBERTv2 (Lin et al., 2021b)† | 0.358 | 0.970 | 0.447 | 0.720 | … | … |
| ColBERT-v2 (Santhanam et al., 2022) | 0.397 | 0.984 | – | – | … | … |
| Aggretriever (Lin et al., 2023)† | 0.362 | 0.974 | 0.435 | 0.684 | … | … |
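The metrics reported for these runs reduce to short functions over a ranked list and a judgment set; a minimal sketch of RR@10 and nDCG@10 under binary relevance (an assumption for illustration; official tooling such as trec_eval also handles graded judgments):

```python
import math

def rr_at_k(ranking, relevant, k=10):
    # reciprocal rank of the first relevant document within the top k
    for i, doc in enumerate(ranking[:k], start=1):
        if doc in relevant:
            return 1.0 / i
    return 0.0

def ndcg_at_k(ranking, relevant, k=10):
    # binary-relevance nDCG: DCG over the top k, normalized by the ideal DCG
    dcg = sum(1.0 / math.log2(i + 1)
              for i, doc in enumerate(ranking[:k], start=1) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal else 0.0

ranking = ["d3", "d1", "d7"]
relevant = {"d1"}
print(rr_at_k(ranking, relevant))    # 0.5
print(ndcg_at_k(ranking, relevant))  # 1/log2(3) ≈ 0.631
```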
2308.15126 | 16 |
Subsequently, our goal is to obtain a substantial amount of hallucination data in bulk. We considered using ChatGPT to generate hallucinations by manually constructing prompts based on the reference captions of the images provided. We compared the hallucination data generated by ChatGPT with realistic hallucination data by human similarity assessment. We iteratively modified the prompt to make the patterns of the two align closely, as shown on the right of Figure 3. Our hallucination data collection format is presented in Figure 4.
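The generation step amounts to template filling; the template below mirrors the wording shown in Figure 4, with placeholder captions:

```python
# Assemble the ChatGPT prompt for simulated hallucination data.
# Template wording mirrors Figure 4; captions are illustrative placeholders.
PROMPT_TEMPLATE = (
    "An image is described in several reference descriptions below:\n"
    "{captions}\n"
    "Generate a two- to three-sentence description of the image based on "
    "the reference descriptions and include a few obvious incorrect "
    "associations."
)

def build_prompt(reference_captions):
    captions = "\n".join(f"- {c}" for c in reference_captions)
    return PROMPT_TEMPLATE.format(captions=captions)

print(build_prompt(["An elephant parades down a street.",
                    "A crowd watches a decorated elephant."]))
```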
Finally, we collect the non-hallucination data. By requesting ChatGPT to generate detailed descriptions based on reference captions, we can easily obtain the desired data. However, it is crucial to emphasize that the generated descriptions should strictly adhere to the objects present in the reference captions, without introducing any non-existent elements.
2308.15126 | 17 | Training and Inference
During the training phase, we employ a consistent format prompt that corresponds to the data distribution of LLMs and instruction fine-tuning. The collected data from the preceding step is seamlessly integrated into the prompt, serving as the training data for fine-tuning the LLM through an autoregressive training process. During the inference phase, we incorporate the reference captions
Reference captions:
- A nighttime parade with a decorated elephant walking in it.
- Dressed for a festival, an elephant parades down a street.
- An elephant being lead by a man in front of a crowd of onlookers.
- A procession of a decorated elephant in front of a crowd.
- A man riding a decorated elephant following a woman walking in front of him.

Prompt of simulated hallucination data generation:
An image is described in several reference descriptions below: [reference captions here] Generate a two- to three-sentence description of the image based on the reference descriptions and include a few obvious incorrect associations.

Response of ChatGPT:
A festive procession takes place at night, featuring a beautifully adorned elephant parading down a street. The elephant is being led by a woman, with a man riding on its back. The crowd of onlookers is cheering and throwing confetti, despite the fact that it is raining heavily.
2308.14963 | 18 | Table 1: Effectiveness of OpenAI ada2 embeddings on the MS MARCO development set queries (dev) and queries from the TREC 2019/2020 Deep Learning Tracks (DL19/DL20), compared to a selection of other models. † indicates results from Pyserini's two-click reproductions (Lin, 2022) available at https://castorini.github.io/pyserini/2cr/msmarco-v1-passage.html, which may differ slightly from the original papers. All other results are copied from their original papers.
last row of the table. Note that due to the non-deterministic nature of HNSW indexing, effectiveness figures may vary slightly from run to run.
2308.15126 | 18 | Figure 4: An example of the prompt for generating simulated hallucination samples, where the words with red font represent the hallucination description.
and responses from the LVLMs under evaluation into the prompt. These inputs are then fed into the meticulously trained evaluation model to get the judgment.
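The inference step can be sketched as prompt assembly plus a yes/no parse; the template wording here is illustrative rather than the paper's actual prompt, and `generate` is a hypothetical stand-in for the fine-tuned evaluator:

```python
# Sketch of HaELM's inference step: pack reference captions and the LVLM
# response into an evaluation prompt, then read a yes/no verdict.
EVAL_TEMPLATE = (
    "Reference captions:\n{captions}\n"
    "LVLM response:\n{response}\n"
    "Does the response contain hallucination? Answer yes or no:"
)

def judge(captions, response, generate):
    # generate() is a hypothetical stand-in for the fine-tuned LLaMA model
    prompt = EVAL_TEMPLATE.format(captions="\n".join(captions),
                                  response=response)
    return generate(prompt).strip().lower().startswith("yes")

fake_model = lambda prompt: "yes"  # stand-in for the trained evaluator
print(judge(["a dog on grass"], "A dog and a cat on grass.", fake_model))  # True
```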
HaELM can be reused multiple times once data collection and training finish, which offers a cost advantage over ChatGPT while ensuring reproducibility. Furthermore, HaELM is built upon an open-source LLM, allowing for local deployment, thereby eliminating the need to upload data and guaranteeing data privacy.
Implementation Details
We employed LLaMA (Touvron et al., 2023) as the foundation model and utilized LoRA (Hu et al., 2021) for fine-tuning. Our hyperparameters are presented in Table 8 of the appendix. The training process required 2 hours using a single Tesla V100 GPU. For the evaluated models, we selected the currently available open-source LVLMs: mPLUG-Owl (Ye et al., 2023), MiniGPT-4 (Zhu et al., 2023) and LLaVA (Liu et al., 2023b). The parameter settings are presented in Table 7 of the appendix. We chose the state-of-the-art LLM, ChatGPT, as our baseline.
# 5 Experiments
2308.14963 | 19 |
For comparison, we present results from a few select points of reference, classified according to the taxonomy proposed by Lin (2021); OpenAI's embedding models belong in the class of learned dense representations. Notable omissions in the results table include the following: the original OpenAI paper that describes the embedding model (Neelakantan et al., 2022) does not report comparable results; neither does Izacard et al. (2021) for Contriever, another popular learned dense representation model. Recently, Kamalloo et al. (2023) also evaluated OpenAI's ada2 embeddings, but they did not examine any of the test collections we do here. Looking at the results table, our main point is that we can achieve effectiveness comparable to the state of the art using a production-grade, completely off-the-shelf embedding API coupled with Lucene for indexing and retrieval.
2308.15126 | 19 |
Dataset
Our image dataset consists exclusively of images from MS-COCO 2014 (Lin et al., 2014; Chen et al., 2015), following the established partition into the train, val and test sets as outlined by (Karpathy and Fei-Fei, 2015). For data collection purposes, we randomly select 10,000 samples from the training set and collect 10,000 hallucination and 10,000 non-hallucination simulated responses respectively. Additionally, we obtain all 5,000 samples from the test set specifically for evaluating the LVLMs' hallucinations. To ensure consistency and accuracy in our data collection and hallucination evaluation, we use the manually annotated captions provided in the dataset as reference captions.
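The sampling described above is plain bookkeeping; a sketch with synthetic image IDs standing in for the real train-split images:

```python
import random

random.seed(0)
train_ids = [f"img_{i}" for i in range(20_000)]  # synthetic stand-ins for train images
sampled = random.sample(train_ids, 10_000)       # 10,000 images for data collection

# one hallucinated and one faithful simulated response per sampled image
records = [(img, kind) for img in sampled
           for kind in ("hallucination", "non-hallucination")]
print(len(records))  # 20000
```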
To ensure the model's focus on hallucination evaluation, we disabled gradient computations on the input, preventing the learning of irrelevant information. Furthermore, our training data outputs were explicitly limited to "yes" or "no" responses, effectively benefiting the automated evaluation.
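Disabling gradients on the input is the standard causal-LM label-masking trick: prompt positions receive an ignore label so that only the "yes"/"no" target contributes to the loss. A framework-free sketch (-100 follows the common PyTorch ignore_index convention; token IDs are toy values):

```python
IGNORE = -100  # conventional ignore_index for cross-entropy in PyTorch

def build_labels(prompt_ids, target_ids):
    """Mask prompt positions so the loss is computed only on the target."""
    return [IGNORE] * len(prompt_ids) + list(target_ids)

prompt_ids = [101, 7592, 2003]  # toy token ids for the evaluation prompt
target_ids = [2748]             # toy id for "yes"
print(build_labels(prompt_ids, target_ids))  # [-100, -100, -100, 2748]
```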
When evaluating hallucinations by ChatGPT, we further enhanced the accuracy through manual prompt editing, ensuring a fair basis for comparison. Notably, we refrained from employing manually annotated real hallucination data in the training process to uphold the integrity and reliability of our experimental findings.
# 5.1 Evaluation on HaELM
2308.14963 | 20 | To complete our experimental results, we provide performance figures on a server with two Intel Xeon Platinum 8160 processors (33M Cache, 2.10 GHz, 24 cores each) with 1 TB RAM, running Ubuntu 18.04 with ZFS. This particular processor was launched in Q3 of 2017 and is no longer commercially available; we can characterize this server as "high end", but dated. Indexing took around three hours with 16 threads, with the parameters M set to 16 and efC set to 100, without final segment optimization. Using 32-bit floats, the raw 1536-dimensional vectors should consume 54 GB on disk, but for convenience we used an inefficient JSON text-based representation. Therefore, our collection of vectors takes up 109 GB as compressed text files (using gzip). For vector search, using 16 threads, we were able to achieve 9.8 queries per second (QPS), fetching 1000 hits per query with the efSearch parameter set to 1000. These results were obtained on the MS MARCO development queries, averaged over four separate trials after a warmup run.
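Throughput figures like the 9.8 QPS above come from a simple measurement loop; a toy sketch with a stubbed search function in place of the Lucene HNSW query:

```python
import time

def measure_qps(search, queries, warmup=10):
    for q in queries[:warmup]:  # warmup pass, as in the reported protocol
        search(q)
    start = time.perf_counter()
    for q in queries:
        search(q)
    elapsed = time.perf_counter() - start
    return len(queries) / elapsed

stub_search = lambda q: sorted(range(100))  # stand-in for an HNSW query
qps = measure_qps(stub_search, [f"q{i}" for i in range(200)])
print(f"{qps:.1f} QPS")
```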
# 4 Discussion
Our demonstration shows that it is possible today to build a vector search prototype using OpenAI embeddings directly with Lucene. Nevertheless, there are a number of issues worth discussing, which we cover below. | 2308.14963#20 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 20 | # 5.1 Evaluation on HaELM
In this subsection, we first evaluate the performance of HaELM. As we are the first to utilize LLM for hallucination evaluation, we select the highly
Method    w/o hallucination          w/ hallucination           all
          LL    Mi    mP    Avg.     LL    Mi    mP    Avg.     LL    Mi    mP    Avg.
GPT-3.5   82.0  38.9  50.8  57.2     48.7  78.1  72.9  66.6     69.0  64.0  59.0  64.0
HaELM     93.4  61.1  60.1  71.5     25.6  57.8  43.2  42.2     67.0  59.0  57.0  61.0
Table 1: The results of accuracy on human-annotated evaluation data for HaELM and GPT-3.5, where LL, Mi, and mP respectively represent LLaVA, Mini-GPT4, and mPLUG-Owl. | 2308.15126#20 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 21 | Jank. We concede that getting our demonstration to work required a bit of janky implementation tricks. Even though all the required features have been merged to Lucene's source trunk, no official release has been cut that incorporates all the patches (at least at the time we performed our experiments in early August, 2023). Quite simply, the complete feature set necessary for production deployment is not, as they say, ready for prime time. However, to use another cliché, this is a small matter of programming (SMOP). We see no major roadblocks in the near future: the next official release of
Lucene will incorporate the necessary features, and after that, all downstream consumers will begin to incorporate the capabilities that we demonstrate here.
Nevertheless, Lucene has been a relative laggard in dense retrieval. Despite this, we believe that recent developments point to substantial and sustained investments in the Lucene ecosystem moving forward. For example, in its Q4 FY 2023 report, Elastic announced the Elasticsearch Relevance Engine, "powered by built-in vector search and transformer models, designed specifically to bring the power of AI innovation to proprietary enterprise data." A recent blog post8 from Amazon Web Services explained vector database capabilities in OpenSearch, providing many details and reference architectures. These are just two examples of commitments that help bolster the case for Lucene that we have articulated here. Overall, we are optimistic about the future of the ecosystem. | 2308.14963#21 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 21 |
                       LLaVA                     MiniGPT-4                 mPLUG-Owl
Method                 Prec.   Rec.   F1         Prec.   Rec.   F1         Prec.   Rec.   F1
w/o hallucination
  GPT-3.5              71.4    82.0   76.3       50.0    38.9   43.8       76.2    50.8   61.0
  HaELM                66.3    93.4   77.5       44.9    61.1   51.8       66.1    65.1   65.6
w/ hallucination
  GPT-3.5              63.3    48.7   55.0       69.4    78.1   73.5       46.6    73.0   56.8
  HaELM                71.4    25.6   37.7       72.5    57.8   64.3       42.1    43.2   42.7
average
  GPT-3.5              67.4    65.4   65.6       59.7    58.5   58.7       61.4    61.9   58.9
  HaELM                68.9    59.5   57.6       58.7    59.5   58.1       54.1    54.2   51.7
Table 2: The results of accuracy on human-annotated evaluation data for HaELM and GPT-3.5 in terms of precision, recall, and F1 score for hallucination and non-hallucination responses. | 2308.15126#21 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 22 | Performance. Lucene still lags alternatives in terms of indexing speed, query latency and throughput, and related metrics. For example, Ma et al. (2023) recently benchmarked Lucene 9.5.0 against Faiss (Johnson et al., 2019). Experiments suggest that Lucene achieves only around half the query throughput of Faiss under comparable settings, but appears to scale better when using multiple threads. Although these results only capture a snapshot in time, it would be fair to characterize Lucene as unequivocally slower. However, Faiss is relatively mature and hence its headroom for performance improvements is rather limited. In contrast, we see many more opportunities for gains in Lucene. Coupled with signs of strong commitment (discussed above), we believe that the performance gap between Lucene and dedicated vector stores will decrease over time.
Alternatives. We acknowledge a number of competing alternatives that deserve consideration. Note that the core argument we forward is about cost-benefit tradeoffs: In our view, it is not clear that the benefits offered by a dedicated vector store outweigh the increased architectural complexity of introducing a new software component within an enterprise. From this perspective, we can identify two potentially appealing alternatives: | 2308.14963#22 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 22 | competitive ChatGPT as our baseline for comparative analysis. Given the absence of an established benchmark, we use the realistic hallucination responses derived from LVLMs during the data collection phase as the evaluation benchmark and the annotations as the ground truth.
# Accuracy
We first compared the accuracy. The experimental results on human-annotated hallucination, non-hallucination and overall responses are summarized in Table 1. Notably, HaELM achieves an accuracy of 61%, slightly lower than ChatGPT's performance at 64%. Nevertheless, HaELM demonstrates an impressive capability, reaching 95% of ChatGPT's level.
We also noticed that HaELM performs better on non-hallucination responses, while ChatGPT performs better on hallucination responses. This reflects the biases in the decision-making of the two methods: ChatGPT tends to believe that responses contain hallucinations, while HaELM leans towards non-hallucination responses. We analyzed that although simulated hallucination responses mostly cover the hallucination pattern, they still cannot fully match the distribution of actual hallucination responses. Therefore, HaELM fails to learn some
patterns of hallucinations, resulting in misclassification under these patterns. | 2308.15126#22 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 23 | ⢠Fully managed services. One simple way to reduce architectural complexity is to make it someone elseâs problem. Vespa9 is perhaps the best example of this solution, providing both dense retrieval and sparse retrieval capabilities in a fully managed environment, eliminating the need for users to explicitly worry about implementation details involving inverted indexes, HNSW indexes, etc. Vepsa provides a query language that supports a combination of vector search, full-text search, as well as search over structured data. Our main question here concerns traction and adoption: as a brownfield application, weâre not convinced that enterprises will make the (single, large) leap from an existing solution to a fully managed service. | 2308.14963#23 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 23 | patterns of hallucinations, resulting in misclassification under these patterns.
# Refined Metrics
We then proceeded to evaluate the refined metrics, including precision, recall, and F1 scores, as shown in Table 2. The average F1 scores reveal that HaELM reaches 88%, 99%, and 88% of ChatGPT's performance on the three LVLMs, respectively. Additionally, as mentioned in the previous analysis, the recall for hallucination responses is lower for HaELM. Nevertheless, despite this limitation, HaELM outperforms ChatGPT in several metrics.
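For concreteness, the per-class metrics reported above reduce to simple confusion-matrix arithmetic; the sketch below uses illustrative counts, not the paper's data:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Per-class precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts for the "hallucination" class (not from the paper):
# 40 responses correctly flagged, 10 flagged in error, 20 missed.
p, r, f1 = precision_recall_f1(tp=40, fp=10, fn=20)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```

Averaging these per-class F1 scores over the hallucination and non-hallucination classes yields the "average" rows of Table 2.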
# Time & Cost
HaELM only requires one-time data collection and training for reuse, allowing significant time and cost savings in subsequent evaluation processes compared to ChatGPT. We present the cost comparison between the two in Table 3.
HaELM requires only 3.8 hours and 4.3$ for data collection and training, resulting in a saving of 1.4 hours and 6.6$ per evaluation compared to ChatGPT. This advantage becomes more significant when multiple evaluations are needed, such as exploring the impact of prompts on hallucinations. Additionally, HaELM can be deployed locally, eliminating the need for internet connectivity
| 2308.15126#23 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 24 | ⢠Vector search capabilities in relational databases. In the same way that vector search grows naturally out of an already deployed and mature text search platform (e.g., Elasticsearch), we can see similar arguments being made from the perspective of relational databases. Despite numerous attempts (spanning decades) at toppling its lofty perch (Stonebraker and Hellerstein, 2005; Pavlo et al., 2009), relational databases remain a permanent fixture in enterprise âdata stacksâ. This means that by building vector search capabilities into relational databases, enterprises gain entrée into the world of dense retrieval (essentially) for free. A great example of this approach is pgvector,10 which provides open-source vector similarity search for Postgres. We find the case compelling: if your enterprise is already running Postgres, pgvector adds vector search capabilities with minimal additional complexity. Itâs basically a free lunch.
# 5 Conclusions | 2308.14963#24 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 24 |
Method    Collection        Training          *Evaluation
          Time    Cost      Time    Cost      Time    Cost
GPT-3.5   -       -         -       -         1.6h    6.6$
HaELM     1.8h    4.3$      2h      -         0.2h    -
Table 3: The time and cost of hallucination evaluation for HaELM and ChatGPT. *Evaluation represents a single evaluation conducted on three LVLMs.
and ensuring data and privacy protection.
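Using only the figures quoted above, the one-time cost of HaELM amortizes quickly (a sketch with the stated numbers):

```python
# One-time cost of HaELM vs. per-evaluation saving over ChatGPT, as quoted above.
one_time_cost_usd = 4.3      # data collection + training
saving_per_eval_usd = 6.6    # cost saved per evaluation run

evals_to_break_even = one_time_cost_usd / saving_per_eval_usd
print(f"break-even after {evals_to_break_even:.2f} evaluations")  # < 1: pays off on the first run
```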
# 5.2 Evaluation on Hallucination
In this subsection, we will employ HaELM to evaluate the hallucination performance of existing LVLMs. Additionally, we explore the correlation between various generation settings and hallucinations in LVLMs, thereby presenting viable suggestions to mitigate hallucinations. | 2308.15126#24 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 25 | # 5 Conclusions
There is no doubt that manipulation of dense vectors forms an important component of search today. The central debate we tackle is how these capabilities should be implemented and deployed in production systems. The dominant narrative is that you need a new, distinct addition to your enterprise "AI stack": a vector store. The alternative we propose is to say: If you've built search applications already, chances are you're already invested in the Lucene ecosystem. In this case, Lucene is all you need. Of course, time will tell who's right.
8 https://aws.amazon.com/blogs/big-data/amazon-opensearch-services-vector-database-capabilities-explained/
9 https://vespa.ai/
10 https://github.com/pgvector/pgvector
# Acknowledgements
This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. We'd like to thank Josh McGrath and the team at Distyl for providing support to access OpenAI APIs.
# References
Akari Asai, Sewon Min, Zexuan Zhong, and Danqi Chen. 2023. Retrieval-based Language Models and Applications. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts). Toronto, Canada, 41–46. | 2308.14963#25 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 25 | # Comparison on LVLMs
We evaluate the hallucination of LVLMs across various generation prompts. The experimental results are shown in Table 4. Firstly, it can be seen that among these three LVLMs, LLaVA exhibits the lowest degree of hallucination and the lowest sensitivity to prompts, far below the other two models. However, the manually annotated results of previous work (Ye et al., 2023) indicate that LLaVA performs the worst in various aspects. This observation aligns with our understanding of LVLMs. We note that the generation of hallucination is often positively correlated with the model's generative capability. For example, hallucinations almost never occur in VLPMs. Therefore, there exists a trade-off between model performance and hallucinations, which deserves more attention from researchers during model selection.
Secondly, it can be observed that both MiniGPT-4 and mPLUG-Owl suffer from severe hallucination issues. The performance of these two models is highly dependent on the choice of prompts. This means that prompts should be chosen carefully when using these powerful LVLMs. | 2308.15126#25 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268v3 (2018).
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2020. Overview of the TREC 2020 Deep Learning Track. In Proceedings of the Twenty-Ninth Text REtrieval Conference Proceedings (TREC 2020). Gaithersburg, Maryland.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2019. Overview of the TREC 2019 Deep Learning Track. In Proceedings of the Twenty-Eighth Text REtrieval Conference Proceedings (TREC 2019). Gaithersburg, Maryland.
Josh Devins, Julie Tibshirani, and Jimmy Lin. 2022. Aligning the Research and Practice of Building Search Applications: Elasticsearch and Pyserini. In Proceedings of the 15th ACM International Conference on Web Search and Data Mining (WSDM 2022). 1573–1576. | 2308.14963#26 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.14963 | 27 | Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022). Madrid, Spain, 2353–2359.
Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021). 113–122.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Towards Unsupervised Dense Information Retrieval with Contrastive Learning. arXiv:2112.09118 (2021).
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data 7, 3 (2019), 535–547. | 2308.14963#27 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 27 |
| Model | P1 | P2 | P3 | P4 | Avg-M |
| --- | --- | --- | --- | --- | --- |
| LLaVA | 20.0 | 19.4 | 18.6 | 19.5 | 19.4 |
| MiniGPT-4 | 46.1 | 35.5 | 69.7 | 68.8 | 55.0 |
| mPLUG-Owl | 35.9 | 24.1 | 47.2 | 37.6 | 36.2 |
| Avg-P | 34.0 | 26.3 | 45.2 | 42.0 | |

Table 4: Hallucination evaluation results for LVLMs. The numbers represent the frequency of hallucinations exhibited by the respective LVLM when using generation prompts on the MS-COCO 2014 test split. "Avg-M" represents the average hallucination ratio of the corresponding model across multiple prompts, while "Avg-P" represents the average hallucination ratio of the corresponding prompt across multiple models. P1: "Describe this image." P2: "Generate a caption for this image." P3: "Please restore the scene in the image with words." P4: "What is this?"
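As a sanity check on Table 4, the Avg-M and Avg-P columns can be recomputed from the per-prompt ratios. The snippet below is purely illustrative (values copied from the table; it is not part of the paper's tooling):

```python
# Hallucination ratios from Table 4: rows are models, columns are prompts P1..P4.
rates = {
    "LLaVA":     [20.0, 19.4, 18.6, 19.5],
    "MiniGPT-4": [46.1, 35.5, 69.7, 68.8],
    "mPLUG-Owl": [35.9, 24.1, 47.2, 37.6],
}

# Avg-M: mean over prompts for each model; Avg-P: mean over models for each prompt.
avg_m = {model: round(sum(r) / len(r), 1) for model, r in rates.items()}
avg_p = [round(sum(col) / len(col), 1) for col in zip(*rates.values())]

assert avg_m == {"LLaVA": 19.4, "MiniGPT-4": 55.0, "mPLUG-Owl": 36.2}
assert avg_p == [34.0, 26.3, 45.2, 42.0]
```

Both sets of averages match the table to one decimal place.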
hallucination. To validate this idea, we conducted experiments with mPLUG-Owl by selecting different maximum generation lengths and using the prompt "Describe this image." for generation. The experimental results are shown in Table 5.
max length: 128 / 256 / 512 / 1024; hallucination ratio: 33.1 / 35.7 / 35.9 / 37.0 | 2308.15126#27 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 28 | Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-hermelo, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Evaluating Embedding APIs for Information Retrieval. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track). Toronto, Canada, 518–526.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online, 6769–6781.
Jimmy Lin. 2021. A Proposed Conceptual Framework for a Representational Approach to Information Retrieval. arXiv:2110.01529 (2021).
Jimmy Lin. 2022. Building a Culture of Reproducibility in Academic Research. arXiv:2212.13534 (2022).
| 2308.14963#28 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 28 |
| max length | 128 | 256 | 512 | 1024 |
| --- | --- | --- | --- | --- |
| hallucination | 33.1 | 35.7 | 35.9 | 37.0 |
Table 5: Results of the comparison on maximum generation length.
We observed that as the maximum length increased, the hallucination became stronger. We manually collected a portion of responses with a maximum generation length of 1024 and found that hallucinations tended to occur more toward the latter part of the responses. In this pattern of hallucination, LVLMs often generated a concise segment first, followed by a divergence of imagination. However, this is not always the case, as the examples shown in Figure 1 also demonstrated that LVLMs can generate hallucinations in the earlier parts. Therefore, this represents only a trend. We suggest that obtaining relatively accurate results can be achieved by truncating the responses.
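The truncation suggestion can be sketched in a few lines. This is our illustrative helper, not code from the paper, and it uses whitespace tokens as a rough proxy for model tokens:

```python
def truncate_response(text: str, max_tokens: int = 128) -> str:
    """Keep the first `max_tokens` whitespace tokens of a response,
    preferring to cut at the last complete sentence in that span."""
    clipped = " ".join(text.split()[:max_tokens])
    last_period = clipped.rfind(".")
    return clipped[: last_period + 1] if last_period != -1 else clipped

resp = ("A dog sits on grass. It wears a red collar. "
        "Nearby, twelve people play frisbee.")
assert truncate_response(resp, 10) == "A dog sits on grass. It wears a red collar."
```

Cutting back to a sentence boundary keeps the retained prefix well-formed while discarding the later, more hallucination-prone portion of the response.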
Comparison on Sampling Sampling can control LVLMs to generate diverse responses. The current mainstream sampling method is top-K sampling, which randomly selects from the top K words with the highest probabilities each time. To investigate the impact of sampling methods on hallucination, we controlled the value of K in top-K sampling and conducted experiments. The experimental results are presented in Table 6.
Prompt: "Describe this image." Response: "The image depicts a busy city street with a group of people riding bicycles. There are at least 12 bicycles visible in the scene, with some of them positioned closer to the foreground and others further back." | 2308.15126#28 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 29 | Jimmy Lin. 2022. Building a Culture of Reproducibility in Academic Research. arXiv:2212.13534 (2022).
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021a. Pyserini: A Python Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021). 2356–2362.
Sheng-Chieh Lin, Minghan Li, and Jimmy Lin. 2023. Aggretriever: A Simple Approach to Aggregate Textual Representations for Robust Dense Passage Retrieval. Transactions of the Association for Computational Linguistics 11 (2023), 436–452.
Sheng-Chieh Lin and Jimmy Lin. 2023. A Dense Representation Framework for Lexical and Semantic Matching. ACM Transactions on Information Systems 41 (2023), Article No. 110. Issue 4. | 2308.14963#29 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 29 | Figure 5: We visualized the attention of the LVLM during autoregressive generation. In the right figure, the horizontal axis represents the tokens to be generated, and the vertical axis represents the tokens that have already been generated. "<Img>" represents the average attention on the image, and "<sp>" represents the token "space".
| K | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| hallucination | 24.7 | 33.0 | 35.9 | 40.3 | 42.4 |
Table 6: Results of the comparison on K in top-K sampling.
Clearly, as K increases, the hallucination issue becomes more severe. Random sampling may cause LVLMs to choose tokens that are less aligned with the visual input, resulting in factual errors. These errors can be rationalized under LLMs, ultimately forming hallucinations. There is still a trade-off between diversity and hallucination.
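For reference, top-K sampling as described above can be sketched with numpy. This is a generic illustration; the actual decoders used by these LVLMs may differ in detail:

```python
import numpy as np

def top_k_sample(logits: np.ndarray, k: int, rng: np.random.Generator) -> int:
    """Sample one token id from the k highest-probability entries of `logits`."""
    top_ids = np.argsort(logits)[-k:]            # indices of the k largest logits
    z = logits[top_ids] - logits[top_ids].max()  # stabilize before exponentiating
    probs = np.exp(z) / np.exp(z).sum()          # softmax restricted to the top k
    return int(rng.choice(top_ids, p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])
assert top_k_sample(logits, 1, rng) == 0         # K = 1 reduces to greedy decoding
assert all(top_k_sample(logits, 3, rng) in {0, 1, 2} for _ in range(20))
```

Larger K admits lower-probability tokens, which matches the trend in Table 6: more diversity, but more chances to drift from the visual input.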
We observe that during the occurrence of the hallucination "12", the model exhibits minimal attention to the image (highlighted by the red box). Additionally, the attention of token "1" is primarily focused on the preceding token "<sp>", and the attention of token "2" is also not concentrated in relevant regions. It is possible that tokens "<sp>" and "1" appeared frequently during the training phase, leading the model to learn a biased false correlation. This inherent bias in the LVLM causes the attention during the generation of certain tokens to deviate from the image.
# 6 Discussion | 2308.15126#29 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 30 | Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2021b. In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021). 163–173.
Xueguang Ma, Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2022a. Document Expansions and Learned Sparse Lexical Representations for MS MARCO V1 and V2. In Proceedings of the 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022). Madrid, Spain, 3187–3197.
Xueguang Ma, Kai Sun, Ronak Pradeep, Minghan Li, and Jimmy Lin. 2022b. Another Look at DPR: Reproduction of Training and Replication of Retrieval. In Proceedings of the 44th European Conference on Information Retrieval (ECIR 2022), Part I. Stavanger, Norway, 613–626. | 2308.14963#30 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 30 | # 6 Discussion
A comprehensive understanding of the causes behind hallucination in LVLMs remains elusive, as no previous work has been able to provide a definitive explanation. In this section, we aim to shed light on this phenomenon by delving into an analysis of attention using specific visualization techniques.
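The gradient-based attention analysis used in this section can be illustrated with a toy stand-in: a linear "language model" whose negative log-likelihood gradient with respect to the input embeddings is available in closed form. Everything here (the model, the shapes, `token_saliency`) is an assumption for illustration, not the authors' implementation, which back-propagates through the actual LVLM:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def token_saliency(W: np.ndarray, embeds: np.ndarray, target_ids: np.ndarray) -> np.ndarray:
    """Per-position saliency for a toy linear 'LM' with logits = embeds @ W.

    The loss is the negative log-likelihood of the recorded response; its
    gradient w.r.t. each input embedding is (softmax(logits) - onehot) @ W.T,
    and the per-position gradient norm serves as an attention proxy.
    """
    grad_logits = softmax(embeds @ W)                        # (seq, vocab)
    grad_logits[np.arange(len(target_ids)), target_ids] -= 1.0
    grads = grad_logits @ W.T                                # dLoss/dEmbeds, (seq, dim)
    sal = np.linalg.norm(grads, axis=-1)
    return sal / sal.max()                                   # strongest position -> 1.0

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))        # embedding dim 8, vocab 16 (toy sizes)
embeds = rng.normal(size=(5, 8))    # 5 input positions (image + text tokens)
scores = token_saliency(W, embeds, np.array([3, 1, 4, 1, 5]))
assert scores.shape == (5,) and scores.max() == 1.0
```

In the real setting the same quantity is obtained by autodiff back-propagation through the frozen model, with positions attributed to either the image or previously generated tokens.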
We leverage gradients to visualize the attention of each generated token with respect to the previously generated tokens and the image. Specifically, we begin by disabling random sampling to ensure the stability of model generation and record the model's generated response. Subsequently, we utilize this response as a label for gradient back-propagation, ultimately obtaining gradients concerning the input embeddings. Finally, we normalize the gradient variations to obtain attention. In Figure 5, we show an example of hallucination.

This finding is insightful and carries significant implications. It demonstrates that one possible approach to addressing hallucinations could be to penalize attention that deviates from the image. This will be further explored in our future work.

# 7 Conclusion | 2308.15126#30 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 31 | Xueguang Ma, Tommaso Teofili, and Jimmy Lin. 2023. Anserini Gets Dense Retrieval: Integration of Lucene's HNSW Indexes. In Proceedings of the 32nd International Conference on Information and Knowledge Management (CIKM 2023). Birmingham, the United Kingdom.
Yu A. Malkov and D. A. Yashunin. 2020. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. Transactions on Pattern Analysis and Machine Intelligence 42, 4 (2020), 824–836.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented Language Models: a Survey. arXiv:2302.07842 (2023). | 2308.14963#31 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 31 | In this paper, we analyzed the problems within the existing hallucination evaluation method and proposed HaELM, a hallucination evaluation framework based on LLM designed for real-world scenarios. We demonstrated through experiments that HaELM achieves performance comparable to that of ChatGPT. Building upon HaELM, we conducted analyses on the causes of hallucinations and provided corresponding suggestions to mitigate them. Additionally, our visualization results may hold insightful implications for future research.
# 8 Limitations
Firstly, both HaELM and ChatGPT fall short of achieving human-level hallucination evaluation performance. We attribute this to the fact that current methods are based on language models, using reference captions as a substitute for images. This means that the evaluation models cannot truly comprehend the content of the images. Moreover, we have also attempted to use multimodal models for evaluation. Unfortunately, current LVLMs commonly exhibit hallucinations themselves. Therefore, at this stage, language models remain the optimal choice for hallucination evaluation. | 2308.15126#31 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 32 | Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. 2022. Text and Code Embeddings by Contrastive Pre-Training. arXiv:2201.10005 (2022).
Andrew Pavlo, Erik Paulson, Alexander Rasin, Daniel J. Abadi, David J. DeWitt, Samuel Madden, and Michael Stonebraker. 2009. A Comparison of Approaches to Large-Scale Data Analysis. In Proceedings of the 35th ACM SIGMOD International Conference on Management of Data. Providence, Rhode Island, 165–178.
Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models. arXiv:2101.05667 (2021). | 2308.14963#32 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 32 | Secondly, we did not address the root cause of hallucinations in LVLMs. In this paper, we investigated the triggers of hallucination, and based on this, substantive methods should be established through the analysis of these triggers to reduce the model's learning of hallucination patterns during the training phase. Currently, this is a challenging task for us, but it will remain part of our future work.
# References
Ali Furkan Biten, Lluis Gomez, and Dimosthenis Karatzas. 2022. Let there be a clock on the beach: Reducing object hallucination in image captioning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1381–1390.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. | 2308.15126#32 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14963 | 33 | Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Seattle, United States, 3715–3734.
Michael Stonebraker and Joseph M. Hellerstein. 2005. What Goes Around Comes Around.
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. In Proceedings of the 9th International Conference on Learning Representations (ICLR 2021).
Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality 10, 4 (2018), Article 16.
9 | 2308.14963#33 | Vector Search with OpenAI Embeddings: Lucene Is All You Need | We provide a reproducible, end-to-end demonstration of vector search with
OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test
collection. The main goal of our work is to challenge the prevailing narrative
that a dedicated vector store is necessary to take advantage of recent advances
in deep neural networks as applied to search. Quite the contrary, we show that
hierarchical navigable small-world network (HNSW) indexes in Lucene are
adequate to provide vector search capabilities in a standard bi-encoder
architecture. This suggests that, from a simple cost-benefit analysis, there
does not appear to be a compelling reason to introduce a dedicated vector store
into a modern "AI stack" for search, since such applications have already
received substantial investments in existing, widely deployed infrastructure. | http://arxiv.org/pdf/2308.14963 | Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian | cs.IR | null | null | cs.IR | 20230829 | 20230829 | [
{
"id": "2110.01529"
},
{
"id": "2212.13534"
},
{
"id": "1611.09268"
},
{
"id": "2101.05667"
},
{
"id": "2201.10005"
},
{
"id": "2112.09118"
},
{
"id": "2302.07842"
}
] |
2308.15126 | 33 | Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. 2023. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, | 2308.15126#33 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 34 | Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,
and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128–3137.
Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. 2023a. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726.
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023b. Halueval: A large-scale hallucination evaluation benchmark for large language models. arXiv e-prints, pages arXiv–2305. | 2308.15126#34 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 35 | Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, and Qi Liu. 2023c. M3it: A large-scale dataset towards multimodal multilingual instruction tuning. arXiv preprint arXiv:2306.04387.
Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023d. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer.
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023a. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565. | 2308.15126#35 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 36 | Fuxiao Liu, Yinghan Wang, Tianlu Wang, and Vicente Ordonez. 2020. Visual news: Benchmark and challenges in news image captioning. arXiv preprint arXiv:2010.03743.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. arXiv preprint arXiv:2304.08485.
Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo. 2023. Embodiedgpt: Vision-language pre-training via embodied chain of thought. arXiv preprint arXiv:2305.15021.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
9 | 2308.15126#36 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 37 | 9
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.
Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580.
Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023. Pandagpt: One model to instruction-follow them all. arXiv preprint arXiv:2305.16355. | 2308.15126#37 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 38 | Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. 2023. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. arXiv preprint arXiv:2305.11175. | 2308.15126#38 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 39 | Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. 2023. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381.
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. 2023. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178.
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. 2023. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
10 | 2308.15126#39 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 40 | 10
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. arXiv preprint arXiv:2310.00754.
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.
# Appendix
# A Evaluated LVLMs
We present detailed parameter settings of the evaluated LVLMs, as shown in Table 7.
Model | VE | AN | LLM
mPLUG-Owl | ViT-L/14 | Attention | LLaMA-7B
MiniGPT-4 | ViT-G/14 | Linear | Vicuna-13B
LLaVA | ViT-L/14 | Linear | LLaMA-13B
Table 7: The detailed parameter settings of the evaluated LVLMs, where VE, AN, LLM stand for Visual Encoder, Alignment Network and Large Language Model, respectively.
base model batch size epoch learning rate max input length LoRA r LoRA alpha LoRA dropout LoRA module train on input train with fp16 | 2308.15126#40 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 41 | base model batch size epoch learning rate max input length LoRA r LoRA alpha LoRA dropout LoRA module train on input train with fp16
Table 8: The detailed parameter settings.
# B Training Details
We present detailed parameter settings of the LoRA fine-tuning during the training phase, as shown in Table 8.
Because the 32GB memory of the Tesla V100 cannot accommodate a batch size of 64, we used a batch size of 8 with gradient accumulation over 8 steps to achieve an equivalent batch size of 64. When "train on input" is turned off, the autoregressive loss will no longer consider the input
Item | QH | AY | CH
person | 48 | 45 | 14
table | 87 | 45 | 3
chair | 89 | 84 | 23
car | 94 | 92 | 17
book | 96 | 96 | 4
bottle | 89 | 89 | 10
cup | 97 | 91 | 10
cat | 98 | 82 | 1
horse | 96 | 9 | 0
toilet | 96 | 84 | 0
sum | 890 | 717 | 82
Table 9: The detailed validity assessment results of object-based hallucination evaluation method by mPLUG-Owl.
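As an illustrative aid (not part of the paper's own analysis), the aggregate rates implied by the sums in Table 9 can be computed directly; QH, AY, and CH here are questions about absent items, "yes" answers, and caption hallucinations, per the appendix's definitions:

```python
# Aggregate sums from Table 9 (mPLUG-Owl): QH = questions asked about
# items absent from the image, AY = "yes" answers, CH = times the item
# was hallucinated in the generated captions.
qh, ay, ch = 890, 717, 82

yes_rate = ay / qh       # fraction of absent items the model claimed to see
caption_rate = ch / qh   # fraction hallucinated in open-ended captions
print(f"answered 'yes': {yes_rate:.1%}; hallucinated in captions: {caption_rate:.1%}")
```

The gap between the two rates illustrates the paper's point that polling-style yes/no probes and open-ended caption generation measure hallucination very differently.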
Item | QH | AY | CH
person | 48 | 22 | 6
table | 87 | 49 | 7
chair | 89 | 51 | 13
car | 94 | 58 | 10
book | 96 | 49 | 2
bottle | 89 | 44 | 0
cup | 97 | 47 | 3
cat | 98 | 45 | 3
horse | 96 | 21 | 0
toilet | 96 | 46 | 1
sum | 890 | 432 | 46
Table 10: The detailed validity assessment results of object-based hallucination evaluation method by MiniGPT-4. | 2308.15126#41 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.15126 | 42 | Table 10: The detailed validity assessment results of object-based hallucination evaluation method by MiniGPT-4.
Item | QH | AY | CH
person | 48 | 42 | 8
table | 87 | 49 | 2
chair | 89 | 83 | 16
car | 94 | 91 | 9
book | 96 | 95 | 2
bottle | 89 | 82 | 4
cup | 97 | 94 | 8
cat | 98 | 92 | 0
horse | 96 | 38 | 0
toilet | 96 | 87 | 0
sum | 890 | 753 | 49
Table 11: The detailed validity assessment results of object-based hallucination evaluation method by LLaVA.
part. In addition, fp16 can accelerate training with almost no impact, so we chose to enable it. We adopted the settings from Vicuna for LoRA and replaced the weights of the Q and V matrices.
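The effective-batch arrangement used in this appendix (micro-batches of 8 accumulated over 8 steps to emulate a batch of 64) can be sketched in pure Python; this toy scalar regression is only an illustration of the accumulation arithmetic, not the actual LoRA training loop:

```python
# Gradient accumulation sketch: average gradients over 8 micro-batches of
# size 8, then take one optimizer step equivalent to a full batch of 64.

def grad(w, batch):
    # gradient of mean squared error 0.5*(w*x - y)**2 w.r.t. w, over a batch
    return sum((w * x - y) * x for x, y in batch) / len(batch)

data = [(float(i % 5), 2.0 * (i % 5)) for i in range(64)]  # toy data, y = 2x
w, lr = 0.0, 0.1
micro, accum = 8, 8  # micro-batch size 8, 8 accumulation steps

accumulated = 0.0
for step in range(accum):
    batch = data[step * micro:(step + 1) * micro]
    accumulated += grad(w, batch) / accum  # average over micro-batches
w -= lr * accumulated  # one update, as if a batch of 64 had been used

full = grad(0.0, data)  # full-batch gradient at the same starting point
```

Because every micro-batch has the same size, the accumulated gradient matches the full-batch gradient exactly, which is why the trick fits a 64-sample update into 32GB of V100 memory.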
# C Additional Evaluation on Hallucination

The temperature among the generation parameters of an LLM controls the randomness of text generation and thus influences the diversity and creativity of the model's outputs to a certain extent.

temperature | 0.2 | 0.4 | 0.6 | 0.8 | 1.0
hallucination | 24.7 | 26.6 | 31.1 | 33.0 | 35.9

Table 12: Hallucination results under different temperature settings.

Unreasonable texts produced under high-temperature sampling are likely to be part of hallucinations. Therefore, we recommend considering a low temperature if the authenticity of the generated texts needs to be ensured.
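To make the mechanism concrete, here is a minimal temperature-scaled softmax; this is a generic sketch of how temperature reshapes the next-token distribution, not the authors' implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then apply a numerically stable softmax.

    Lower temperature concentrates probability mass on high-probability
    tokens; higher temperature flattens the distribution, increasing
    diversity (and, per Table 12, hallucination)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exponentiating for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax_with_temperature(logits, 0.2)   # sharply peaked on the top token
high = softmax_with_temperature(logits, 1.0)  # mass spread across all tokens
```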
# D Detailed Results | 2308.15126#42 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
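The temperature mechanism described in the chunk above (rescaling the output probability distribution so that high temperatures flatten it and low temperatures sharpen it) can be sketched as follows. This is a minimal illustration of temperature-scaled softmax, not code from the paper:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by 1/temperature before the softmax.

    Higher temperature flattens the distribution (more probability mass
    on low-probability outputs, hence more diverse text); lower
    temperature concentrates mass on high-probability outputs.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
hot = softmax_with_temperature(logits, 2.0)   # flatter distribution
cold = softmax_with_temperature(logits, 0.5)  # sharper distribution
print(max(cold) > max(hot))  # low temperature concentrates probability mass
```

Under this view, the paper's observation that hallucination grows with temperature corresponds to sampling more often from the flattened tail of the distribution.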
2308.15126 | 43 | Table 12: The result of comparison on temperature.

# D Detailed Results

We present detailed results of the object-based hallucination evaluation. mPLUG-Owl, MiniGPT-4, and LLaVA are shown in Table 9, Table 10, and Table 11, respectively. In the table, QH represents the number of times we asked about the corresponding item on images where it was not present; AY represents the number of times the model answered "yes"; and CH represents the number of times the model had hallucinations of the corresponding item in the generated captions.

In principle, the temperature parameter recalibrates the probability distribution of model output, making the probability distribution more evenly distributed. In high-temperature conditions, more probability mass is assigned to low-probability results, so the generated text is more diverse. In low-temperature conditions, more probability mass is assigned to high-probability results, so the generated text tends to follow common patterns. We conducted experiments to investigate whether the diversity brought by high temperatures would enhance the generation of hallucinations. The results are shown in Table 12. It can be seen from the results that the hallucinations of the model are enhanced as the temperature increases, which is consistent with our intuitive judgment. The enhancement of diversity may lead to the generation of unreasonable texts, which are | 2308.15126#43 | Evaluation and Analysis of Hallucination in Large Vision-Language Models | Large Vision-Language Models (LVLMs) have recently achieved remarkable
success. However, LVLMs are still plagued by the hallucination problem, which
limits the practicality in many scenarios. Hallucination refers to the
information of LVLMs' responses that does not exist in the visual input, which
poses potential risks of substantial consequences. There has been limited work
studying hallucination evaluation in LVLMs. In this paper, we propose
Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based
hallucination evaluation framework. HaELM achieves an approximate 95%
performance comparable to ChatGPT and has additional advantages including low
cost, reproducibility, privacy preservation and local deployment. Leveraging
the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we
analyze the factors contributing to hallucination in LVLMs and offer helpful
suggestions to mitigate the hallucination problem. Our training data and human
annotation hallucination data will be made public soon. | http://arxiv.org/pdf/2308.15126 | Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang | cs.LG, cs.AI, cs.CL, cs.CV | 11 pages, 5 figures | null | cs.LG | 20230829 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2211.05100"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "1504.00325"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2107.06383"
},
{
"id": "2305.15021"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2305.11175"
},
{
"id": "2010.03743"
},
{
"id": "2305.14975"
},
{
"id": "2310.00754"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2106.09685"
}
] |
2308.14296 | 0 | arXiv:2308.14296v1 [cs.IR] 28 Aug 2023
# RecMind: Large Language Model Powered Agent For Recommendation
Yancheng Wang1, Ziyan Jiang2*, Zheng Chen2*, Fan Yang2*, Yingxue Zhou2*, Eunah Cho2, Xing Fan2, Xiaojiang Huang2, Yanbin Lu2, Yingzhen Yang1 1School of Computing and Augmented Intelligence, Arizona State University 2Amazon Alexa AI {yancheng.wang, yingzhen.yang}@asu.edu {ziyjiang, zgchen, ffanyang, zyingxue, eunahch, fanxing, xjhuang, luyanbin}@amazon.com
# Abstract | 2308.14296#0 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 1 | Recent advancements in instructing Large Language Models (LLMs) to utilize external tools and execute multi-step plans have significantly enhanced their ability to solve intricate tasks, ranging from mathematical problems to creative writing. Yet, there remains a notable gap in studying the capacity of LLMs in responding to personalized queries such as a recommendation request. To bridge this gap, we have designed an LLM-powered autonomous recommender agent RecMind, which is capable of providing precise personalized recommendations through careful planning, utilizing tools for obtaining external knowledge, and leveraging individual data. We propose a novel algorithm, Self-Inspiring, to improve the planning ability of the LLM agent. At each intermediate planning step, the LLM "self-inspires" to consider all previously explored states to plan for next step. This mechanism greatly improves the model's ability to comprehend and utilize historical planning information for recommendation. We evaluate RecMind's performance in various recommendation scenarios, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review | 2308.14296#1 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 2 | performance in various recommendation scenarios, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. Our experiment shows that RecMind outperforms existing zero/few-shot LLM-based recommendation methods in different recommendation tasks and achieves competitive performance to a recent model P5, which requires fully pre-train for the recommendation tasks. | 2308.14296#2 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 3 | # 1 Introduction
A Recommender System (RS) plays a key role in search engines, e-commerce websites, social media, video and music streaming services, and various other Internet platforms. An RS analyzes the historical interactions between users and items to recommend items that users may interact with in the future (Koren, Bell, and Volinsky 2009b; Linden, Smith, and York 2003). The modern RS has been enhanced by Deep Neural Networks (DNNs) to more effectively learn the representations of users, items, and sequential behaviors (Hidasi et al. 2015; He et al. 2020; Sun et al. 2019). However, most existing RSs such as DNN-based methods (e.g., CNN and LSTM) and pre-trained language models (e.g., BERT) cannot sufficiently capture textual knowledge about users and items due to limitations in model scale and data size.
Besides, most existing RS methods have been designed for specific tasks and are inadequate in generalizing to unseen recommendation tasks (Fan et al. 2023). | 2308.14296#3 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 4 | Besides, most existing RS methods have been designed for specific tasks and are inadequate in generalizing to unseen recommendation tasks (Fan et al. 2023).
Recent advances in Large Language Models (LLMs), such as GPT-3 (Brown et al. 2020), GPT-4 (OpenAI 2023), LLaMA (Touvron et al. 2023a), LLaMA-2 (Touvron et al. 2023b), and PaLM-2 (Anil et al. 2023) have demonstrated remarkable results in a wide range of tasks, which have motivated the research of leveraging LLMs for recommendation to mitigate the aforementioned challenges (Liu et al. 2023; Fan et al. 2023; Lin et al. 2023). However, existing studies primarily rely on knowledge stored within the model's weights, neglecting the potential benefits of leveraging external tools to access real-time information and domain-specific knowledge (Yang et al. 2023; Bao et al. 2023). Furthermore, the reasoning ability of LLMs for recommendation tasks is not fully utilized in current research, resulting in suboptimal predictions due to the intricate nature of recommendation-related tasks (Liu et al. 2023). | 2308.14296#4 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 5 | To better utilize the strong reasoning and tool-using abilities of LLMs, we design a recommendation agent RecMind that leverages an LLM-powered API as its intellectual core and incorporates a few key components. The first key component is Planning, which enables the agent to break complex recommendation tasks into manageable steps for efficient handling of complex situations. Each step of planning involves thought, action and observation (see Figure 1 for examples and Section 3 for details). The agent is also equipped with Memory consisting of Personalized Memory and World Knowledge, each accessible through specific tools. The Tools enhance the agent's functionality on top of the LLM, such as retrieving relevant knowledge, or assisting with the reasoning process.
To further enhance the planning ability of the agent, we propose a new planning algorithm, Self-Inspiring (SI). At each intermediate planning step, the agent "self-inspires" to consider all previously explored planning paths to explore the next planning states. Unlike existing Chain-of-Thoughts (CoT) (Wei et al. 2022) and Tree-of-Thoughts (ToT) (Yao et al. 2023), which discard states (thoughts) in previously explored paths when generating a new state, SI retains all previous states from all history paths when generating a new state. SI is inspired by the intuition that all history states can | 2308.14296#5 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
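The contrast drawn in the chunk above between ToT-style pruning and Self-Inspiring's retained history can be sketched roughly as follows. This is a toy illustration with a stand-in `propose_next` function (a hypothetical placeholder for an LLM call), not the paper's implementation:

```python
def propose_next(history, k=2):
    # Stand-in for an LLM call that proposes k candidate next steps
    # conditioned on the planning states visible in `history`.
    step = len(history) + 1
    return [f"step{step}.{i}" for i in range(1, k + 1)]

def plan_tot(depth):
    """Tree-of-Thoughts style: keep only the chosen path; states on
    pruned branches are discarded and never seen again."""
    path = []
    for _ in range(depth):
        candidates = propose_next(path)
        path.append(candidates[0])  # evaluator keeps one branch, prunes the rest
    return path

def plan_self_inspiring(depth):
    """Self-Inspiring: every explored state, including those on abandoned
    branches, stays in the history used to propose the next step."""
    history = []  # all states ever explored
    path = []     # the path actually followed
    for _ in range(depth):
        candidates = propose_next(history)
        history.extend(candidates)  # retain *all* candidates, not just the winner
        path.append(candidates[0])
    return history, path

history, path = plan_self_inspiring(3)
print(len(history) > len(path))  # SI conditions on more context than the chosen path
```

The design point is simply that `plan_self_inspiring` feeds the full `history` back into `propose_next`, while `plan_tot` feeds only the surviving `path`.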
2308.14296 | 7 | Thought 1: I can retrieve products previously rated by customer A3UXLMTIUSPNQ4, along with their respective ratings, from the database. Action 1: SQLTool [Find the products previously rated by customer A3UXLMTIUSPNQ4.] Observation 1: HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack; Pink Heart Beauty Bust Styling Jell 80ml; Gelish Hard Gel - LED Cover Pink Builder 0.5oz; Tammy Taylor Nail Powder 5 Oz. Thought 2: I can search for the product category of "Sewak Al-Falah". Action 2: Search [What is the product category of "Sewak Al-Falah"] Observation 2: Sewak Al-Falah is a traditional natural toothbrush. Thought 3 (1): Among the products customer A3UXLMTIUSPNQ4 rated before, HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack is also a toothbrush. I can retrieve the rating of HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack by customer A3UXLMTIUSPNQ4. Action 3 (1): SQLTool [What is the rating of | 2308.14296#7 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 8 | Natural Miswak Traditional Toothbrush 12 Pack by customer A3UXLMTIUSPNQ4. Action 3 (1): SQLTool [What is the rating of "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" by customer A3UXLMTIUSPNQ4?] Observation 3 (1): Customer A3UXLMTIUSPNQ4 rated "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" as 5. Thought 3 (2): I can search for the average rating of "Sewak Al-Falah" in the database. Action 3 (2): SQLTool [What is the average rating of product "Sewak Al-Falah"?] Observation 3 (2): The average rating of "Sewak Al-Falah" is 4.2. Thought 1: I can retrieve products previously rated by customer A3UXLMTIUSPNQ4, along with their respective ratings, from the database. Action 1: SQLTool [Find the products previously rated by customer A3UXLMTIUSPNQ4.] Observation 1: HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack; Pink Heart | 2308.14296#8 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 9 | rated by customer A3UXLMTIUSPNQ4.] Observation 1: HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack; Pink Heart Beauty Bust Styling Jell 80ml; Gelish Hard Gel - LED Cover Pink Builder 0.5oz; Tammy Taylor Nail Powder 5 Oz. Thought 2: I can search for the product category of "Sewak Al-Falah". Action 2: Search [What is the product category of "Sewak Al-Falah"] Observation 2: Sewak Al-Falah is a traditional natural toothbrush. Thought 3 (1): Among the products customer A3UXLMTIUSPNQ4 rated before, HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack is also a toothbrush. I can retrieve the rating of HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack by customer A3UXLMTIUSPNQ4. Action 3 (1): SQLTool [What is the rating of "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" by customer A3UXLMTIUSPNQ4?] Observation 3 (1): Customer | 2308.14296#9 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 10 | Sewak Natural Miswak Traditional Toothbrush 12 Pack" by customer A3UXLMTIUSPNQ4?] Observation 3 (1): Customer A3UXLMTIUSPNQ4 rated "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" as 5. Thought 3 (2): In addition to searching for the rating of a similar product, I can also search for the average rating of "Sewak Al-Falah" in the database. Action 3 (2): SQLTool [What is the average rating of product "Sewak Al-Falah"?] Observation 3 (2): The average rating of "Sewak Al-Falah" is 4.2. Thought 4: Now I know the answer. Since the average rating of "Sewak Al-Falah" is 4.2, I can round 4.2 to the closest integer, which is 4. Action 4: Finish [Customer A3UXLMTIUSPNQ4 will rate the product "Sewak Al-Falah" as 4] Observation 4: 4. Thought 4: Now I know the answer. The rating customer A3UXLMTIUSPNQ4 gives to the product "Sewak Al-Falah" can be | 2308.14296#10 | RecMind: Large Language Model Powered Agent For Recommendation | Recent advancements in instructing Large Language Models (LLMs) to utilize
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves competitive performance to a recent
model P5, which requires fully pre-train for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
2308.14296 | 12 | Figure 1: Comparisons of rating prediction results by RecMind-ToT (left) and RecMind-SI (right). In the RecMind-ToT, after searching for the product category of the item in Step 2, the RecMind agent first generates thought 3 (1) to retrieve the rating of a similar item. After being evaluated by the voting-based evaluator, the RecMind agent prunes the option 3 (1) and proposes another thought 3 (2) to retrieve the average rating of the item and then makes the prediction solely based on it. In contrast, although RecMind-SI proposed the same alternative options in step 3, it takes into account the thought, action, and observation from both options 3 (1) and 3 (2) to generate the thought for the next step.
provide useful information for the agent to generate better planning. Figure 1 provides an example of the planning via ToT and SI and shows that SI planning achieves a more accurate rating than ToT due to better planning of SI.
To the best of our knowledge, this is the first public research work on an LLM-powered autonomous agent for recommendation. The main contributions of our work are: • We introduce RecMind, a novel autonomous agent framework that synergizes reasoning, acting, and memory for multiple recommendation-related tasks. | 2308.14296#12 |
2308.14296 | 13 | • We propose a self-inspiring planning technique, which generates better plans than the currently popular Chain-of-Thoughts and Tree-of-Thoughts methods by integrating multiple reasoning paths.
• We evaluate the recommendation effectiveness of RecMind across five distinct recommendation scenarios (rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization). Extensive experiments and analyses on various datasets demonstrate that RecMind outperforms the state-of-the-art (SOTA) zero/few-shot LLM-based baselines and achieves competitive performance with a fully pre-trained expert recommendation model P5 (Geng et al. 2022). | 2308.14296#13 |
2308.14296 | 14 | agents are designed to perform tasks autonomously towards a specific goal, rather than merely responding to queries from human users. The central concept is to leverage LLMs to produce text-based outputs and actions that can then be used for making API calls and performing operations within a specific environment. LLMs, with their strong reasoning abilities, can decompose challenging and complex tasks into smaller, more manageable steps (Wei et al. 2022; Yao et al. 2023). Furthermore, by enabling LLMs to utilize tools, we can enhance their capacity to tap into a much broader and dynamic knowledge space (Patil et al. 2023). A number of successful applications have emerged, including ReAct (Yao et al. 2022), Toolformer (Schick et al. 2023), HuggingGPT (Shen et al. 2023), generative agents (Park et al. 2023), WebGPT (Nakano et al. 2021), AutoGPT (Gravitas 2023), BabyAGI (Nakajima 2023), and Langchain (Chase 2023). LLM for Recommendation Recently, LLMs have gained popularity in recommender systems, given their ability to understand and summarize a user's preferences or past | 2308.14296#14 |
2308.14296 | 15 | LLM for Recommendation Recently, LLMs have gained popularity in recommender systems, given their ability to understand and summarize a user's preferences or past interactions in natural language (Fan et al. 2023; Lin et al. 2023). Current LLM-based recommender systems are primarily designed for rating prediction (Kang et al. 2023; Bao et al. 2023) and sequential recommendation tasks (Wang and Lim 2023; Yang et al. 2023; Hou et al. 2023). In both tasks, a user's previous interactions with items, along with other optional data like the user profile or item attributes, are concatenated to formulate a natural language prompt. This is then fed into an LLM with options for no fine-tuning (Wang | 2308.14296#15 |
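The prompt-construction pattern described in the chunk above (concatenating a user's interaction history, plus an optional pre-filtered candidate set, into one natural-language prompt) can be sketched as follows. This is an illustrative stand-in, not RecMind's actual code; the function and field names are assumptions.

```python
# Sketch of the sequential-recommendation prompt pattern: interaction
# history (and optionally a pre-filtered candidate set) is concatenated
# into a single natural-language prompt for the LLM.

def make_seq_rec_prompt(user, history, candidates=None):
    parts = [
        f"{user} has interacted with the following items in chronological "
        f"order: {history}.",
        "Please recommend the next item that the user might interact with.",
    ]
    if candidates is not None:
        # Optional pre-filtered set narrows the search space per dataset.
        parts.append(f"Choose from the following candidates: {candidates}.")
    return " ".join(parts)

p = make_seq_rec_prompt("user_X", ["itemA", "itemB"], candidates=["itemC", "itemD"])
print(p)
```

The candidate list is optional precisely because, as the chunk notes, it is only included to reduce the search space and tailor the output to a specific dataset.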
2308.14296 | 17 | Rating Prediction How will user_X rate the item "Kusco-Murphy Tart Hair"? The rating should be an integer between 1 to 5, with 1 being lowest and 5 being highest. Direct Recommendation From the item candidates listed below, choose the top 10 items to recommend to user_X and rank them in order of priority from highest to lowest. Candidates: ["Rogaine Women Hair Regrowth Treatment", ......] Sequential Recommendation user_X has interacted with the following items in chronological order: ["Old Spice Body Wash Red Zone", ......] Please recommend the next item that the user might interact with. Choose the top 10 products to recommend in order of priority, from highest to lowest. Review Summarization Write a review title to summarize the review from user_X to item "Chrome Razor and Shaving Brush Stand". The review is "The stand is more solid then I expected for the price. The shape of this stand allows me to hang the shaving brush over the soap bowl, I couldn't do that with stand I had gotten with the kit." Explanation Generation Help user_X to generate a 5-star explanation for item "FoliGrowth Hair Growth Supplement". RecMind Expert Models SQL Tool Search Tool | 2308.14296#17 |
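The five task prompts excerpted in the chunk above can be kept as reusable templates. The sketch below follows the wording of those excerpts; the `TASK_TEMPLATES` dict and `build_prompt` helper are illustrative names, not part of RecMind's released code.

```python
# Hypothetical template table for the five RecMind task prompts shown in
# the paper's examples; any user/item can be substituted in.

TASK_TEMPLATES = {
    "rating_prediction": (
        'How will {user} rate the item "{item}"? The rating should be an '
        "integer between 1 to 5, with 1 being lowest and 5 being highest."
    ),
    "direct_recommendation": (
        "From the item candidates listed below, choose the top 10 items to "
        "recommend to {user} and rank them in order of priority from "
        "highest to lowest. Candidates: {candidates}"
    ),
    "sequential_recommendation": (
        "{user} has interacted with the following items in chronological "
        "order: {history}. Please recommend the next item that the user "
        "might interact with."
    ),
    "review_summarization": (
        'Write a review title to summarize the review from {user} to item '
        '"{item}". The review is "{review}"'
    ),
    "explanation_generation": (
        'Help {user} to generate a {stars}-star explanation for item "{item}".'
    ),
}

def build_prompt(task: str, **fields: str) -> str:
    """Fill the chosen task template with user/item-specific fields."""
    return TASK_TEMPLATES[task].format(**fields)

prompt = build_prompt("rating_prediction", user="user_X", item="Kusco-Murphy Tart Hair")
print(prompt)
```

Keeping the tasks in one table reflects the paper's point that a single agent serves all five recommendation scenarios, differing only in the prompt.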
2308.14296 | 18 | to generate a 5-star explanation for item "FoliGrowth Hair Growth Supplement". RecMind Expert Models SQL Tool Search Tool ["Propidren by HairGenics", "Nutrafol Women's Balance Hair Growth Supplements, Ages 45 and Up", ......] ["Old Spice Hair Styling Pomade for Men", "Lume Whole Body Deodorant - Invisible Cream Stick - 72 Hour Odor Control", ......] Great quality for good price. This product is essential for growing and maintaining healthy hair! This is a product to be bought in bulk because you can never have enough of it. | 2308.14296#18 |
2308.14296 | 19 | Figure 2: Here is an overview of our proposed RecMind architecture. It comprises four major components: "RecMind" is built on the ChatGPT API, "Tools" supports various API calls to retrieve knowledge from the "Memory" component, and the "Planning" component is in charge of thought generation.
and Lim 2023), full-model fine-tuning (Yang et al. 2023), or parameter-efficient fine-tuning (Bao et al. 2023). In the sequential recommendation task, to reduce the search space and better tailor it to each dataset, an optional pre-filtered set of item candidates is included in the input prompts. This ensures the model generates the final ranked list based on that specific set. Liu et al. (2023) designs a series of prompts to evaluate ChatGPT's performance over five recommendation tasks. This study highlights the notable generalization capabilities of LLMs, largely attributed to their strong in-context learning abilities (Wei et al. 2021).
Unlike existing studies, our study pioneers the creation of a recommendation-focused LLM agent that harnesses the LLM's capabilities in reasoning, tool usage, and action. This approach enhances the effectiveness of recommender systems, also making them more generalizable across multiple recommendation-related tasks. | 2308.14296#19 |
2308.14296 | 20 | Planning Planning helps LLM agents decompose tasks into smaller, manageable subgoals for efficiently handling complex tasks. Consider the setting where the goal is to generate the final result y given problem x via an LLM agent parameterized by θ. The traditional input-output method gives the result by y ∼ p_θ(y|x). With planning, RecMind generates the result y ∼ p_θ(y|planning(x)), where planning(x) is a set of prompts that decomposes problem x into a series of sub-tasks, each composed of thought h, action a, and observation o. Figure 1 provides examples of planning, including thoughts, actions, and observations. We first review existing popular reasoning methods such as Chain-of-Thoughts and Tree-of-Thoughts, which we have explored for RecMind. Then we present the proposed Self-Inspiring reasoning algorithm. All these planning methods can be viewed as traversing through a latent reasoning tree, as shown in Figure 3. | 2308.14296#20 |
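The planning decomposition described in the chunk above — producing a sequence of (thought, action, observation) states before the final answer y — can be sketched as a minimal agent loop. The `fake_llm` and the hard-coded trace below are stand-ins for p_θ and the tool APIs, chosen only for illustration; they are not RecMind's real components.

```python
# Minimal sketch of the thought/action/observation planning loop:
# planning(x) unrolls problem x into states s_t = (h_t, a_t, o_t)
# until a Finish action yields the final result y.
from typing import NamedTuple

class State(NamedTuple):
    thought: str
    action: str
    observation: str

def fake_llm(x, history):
    # Stand-in policy: one tool call, then finish with its result.
    if not history:
        return ("look up the average rating", "SQLTool[avg rating]", "4.2")
    return ("round 4.2 to the closest integer", "Finish[4]", "4")

def run_agent(x, max_steps=5):
    history = []
    for _ in range(max_steps):
        thought, action, observation = fake_llm(x, history)
        history.append(State(thought, action, observation))
        if action.startswith("Finish"):
            return observation, history  # final result y and the path
    return None, history

answer, trace = run_agent("How will the user rate Sewak Al-Falah?")
print(answer)      # "4"
print(len(trace))  # 2 planning states
```

The loop mirrors the chunk's notation: each iteration appends one state s_t, and the final result is conditioned on the whole accumulated path.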
2308.14296 | 21 | 3 Architecture As shown in Figure 2, the proposed RecMind consists of key components: an LLM-powered API such as ChatGPT to drive the overall reasoning; planning, which breaks a task down into smaller sub-tasks for step-by-step planning; memory, which provides the agent with the capability to retain and recall information over extended periods; and tools for obtaining relevant extra information from memory that is missing from the model weights and aiding the reasoning. We introduce the key components planning, memory, and tools for RecMind in the subsequent parts. | 2308.14296#21 |
2308.14296 | 22 | • Chain-of-Thoughts (CoT) (Wei et al. 2022) has been used in ReAct (Yao et al. 2022) to synergize reasoning and action. This CoT planning method follows a single path in the reasoning tree. In our setting, at each time step t, the agent receives observation o_t followed by thought h_t and action a_t. Let s_t = (h_t, a_t, o_t) denote the RecMind state at step t. The CoT planning method generates the next state s_{t+1} = (h_{t+1}, a_{t+1}, o_{t+1}) by sampling p_θ(s_{t+1}|x, s_1, ..., s_t). Thus CoT only follows a single planning path S = {s_1, ..., s_t, ..., s_T} until reaching the final result y ∼ p_θ(y|x, s_1, ..., s_t, ..., s_T) after T steps.
(a) Tree-of-Thoughts (DFS) (b) Self-Inspiring
Figure 3: Comparison between Tree-of-Thoughts DFS and Self-Inspiring. Red arrows in the figure indicate the process for generating alternative thoughts at intermediate steps. Blue dashed arrows in the figure denote the backtracking process. | 2308.14296#22 |
2308.14296 | 23 | Tree-of-Thoughts (ToT) (Yao et al. 2023) extends CoT to explore multiple paths in the reasoning tree. At step t and state s_t, ToT-BFS explicitly generates multiple candidates {s^1_{t+1}, ..., s^k_{t+1}} for the next state by i.i.d. sampling s^i_{t+1} ∼ p_θ(s_{t+1} | x, s_1, ..., s_t) for i ∈ [k]. Then it applies majority vote to select the state s_{t+1} from {s^1_{t+1}, ..., s^k_{t+1}}. Eventually ToT-BFS generates a single path similar to CoT. In contrast, ToT-DFS explores one branch at a time, but might prune the current state, and backtracks to the previous state to start a new reasoning branch. Denote the first explored path as z^(1) = {s^(1)_1, ..., s^(1)_{t+1}}. If the last state s^(1)_{t+1} is pruned, it backtracks to the previous state s^(1)_t and starts a new reasoning branch; then the path becomes z^(2) = {s^(1)_1, ..., s^(1)_t, s^(2)_{t+1), ...}. After exploring n branches, we denote the final path of ToT as
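One BFS step of this candidate-sampling-plus-majority-vote scheme can be sketched in a few lines. The `sample_next_state` stub below is hypothetical (a real system would decode a thought from the LLM sampler p_θ), so this is an illustrative sketch rather than the paper's implementation.

```python
import random
from collections import Counter

def sample_next_state(x, path, rng):
    # Stand-in for s^i_{t+1} ~ p_theta(s_{t+1} | x, s_1, ..., s_t).
    # A real agent would sample a thought from an LLM here.
    return rng.choice(["check user history", "check item metadata",
                       "check user history"])

def tot_bfs_step(x, path, k, seed=0):
    """One ToT-BFS step: draw k i.i.d. candidate states, keep the majority vote."""
    rng = random.Random(seed)
    candidates = [sample_next_state(x, path, rng) for _ in range(k)]
    winner, _ = Counter(candidates).most_common(1)[0]
    return path + [winner]

path = tot_bfs_step("recommend an item", ["start"], k=5)
print(path)
```

Repeating this step until a terminal state yields the single path that ToT-BFS eventually returns.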
2308.14296 | 24 | then the path becomes z^(2) = {s^(1)_1, ..., s^(1)_t, s^(2)_{t+1}, ...}. After exploring n branches, we denote the final path of ToT as z^(n) = {s_1, ..., s_T} and the final result y is obtained by y ∼ p_θ(x, z^(n)). We find the discarded historical states from previously explored branches, such as s^(1)_{t+1} from branch z^(1), usually contain helpful information for RecMind to generate a better state compared with only considering the final path of ToT. Thus, we propose Self-Inspiring (SI), as shown in Figure 3(b) and Algorithm 1, a new planning method for RecMind. SI inspires itself into exploring an alternative reasoning branch, while retaining all previous states. At the m-th path and step t, SI generates the next step of planning by considering all previous paths, i.e., s^(m)_{t+1} ∼ p_θ(s_{t+1} | z^(1), ..., z^(m)). After exploring n paths, RecMind obtains the final result y ∼ p_θ(x, z^(1), ...,
2308.14296 | 25 | ..., z^(m)). After exploring n paths, RecMind obtains the final result y ∼ p_θ(x, z^(1), ..., z^(n)). Figure 3 provides an example to illustrate the key difference between ToT and SI. In ToT (Figure 3(a)), the new state N(2) at the second path is generated by considering only state N−1; the state N(1) is discarded. However, in SI (Figure 3(b)), the new state N(2) is generated based on both N−1 and N(1).

Memory. Information stored in memory, including Person-

Algorithm 1: Self-Inspiring Planning
2308.14296 | 26 | the current planning path S = {z^(1), ..., z^(m−1), s^(m)_{1:t}} at step t, LLM p_θ, and step limit T. Let inspire(·) be the API checking if the planning should explore an alternative reasoning branch.

1: while t ≤ T do
2:   Sample s^(m)_{t+1} = (h^(m)_{t+1}, o^(m)_{t+1}) ∼ p_θ(·|x, S)
3:   if a^(m)_{t+1} is "End of Planning" then
4:     break
5:   end if
6:   S′ ← S ∪ {s^(m)_{t+1}}
7:   if inspire({x, S′}) then
8:     Sample s^(m+1)_{t+2} ∼ p_θ(·|x, S)
9:     S ← S′ ∪ {s^(m+1)_{t+2}}, m ← m + 1, t ← t + 2
10:  else
11:    S ← S′, t ← t + 1
12:  end if
13: end while
14: return final response y ∼ p_θ(·|x, S)
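A minimal Python sketch of this planning loop is below. The `llm_sample` and `inspire` callables are hypothetical stand-ins for the LLM sampler p_θ(·|x, S) and the inspire(·) API; what the sketch illustrates is the bookkeeping: on an inspiring step, the partial path S′ is retained and extended rather than discarded.

```python
def self_inspiring(x, llm_sample, inspire, T):
    """Sketch of Algorithm 1: keep all explored states in one growing path S."""
    S = []          # flattened planning history across all explored branches
    t, m = 0, 1     # step counter and branch (path) counter
    while t <= T:
        step = llm_sample(x, S)            # s^(m)_{t+1} ~ p_theta(. | x, S)
        if step == "End of Planning":
            break
        S_prime = S + [(m, step)]
        if inspire(x, S_prime):            # explore an alternative branch,
            alt = llm_sample(x, S)         # s^(m+1)_{t+2} ~ p_theta(. | x, S)
            S = S_prime + [(m + 1, alt)]   # ...while retaining S_prime
            m += 1
            t += 2
        else:
            S = S_prime
            t += 1
    return S

# Deterministic stubs for illustration only.
script = iter(["look up user history", "check item metadata",
               "compare candidates", "End of Planning"])
trace = self_inspiring(
    x="recommend an item",
    llm_sample=lambda x, S: next(script),
    inspire=lambda x, S: len(S) == 1,      # inspire exactly once
    T=10,
)
print(trace)
```

Note that the state added before the inspiring step, `(1, "look up user history")`, survives in the returned trace, which is exactly the difference from ToT-DFS backtracking.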
2308.14296 | 27 | alized Memory and World Knowledge, enables the model to access knowledge beyond what is inherently present in the LLM's parameters. Using the Amazon Reviews dataset as an illustrative example, Personalized Memory includes individualized user information, such as their reviews or ratings for a particular item. World Knowledge consists of two components: the first component is item metadata information, which also falls under the domain-specific knowledge category; the second component involves real-time information that can be accessed through the Web search tool. In Figure 1, information about the product "Sewak Al-Falah", retrieved from world knowledge using a Web search tool, aids the reasoning path and ultimately influences the final prediction. Tool Use. By empowering LLMs to utilize tools, we can access vastly larger and dynamic knowledge bases, allowing us to tackle complex computational tasks. In the RecMind system, we've incorporated three such tools:
2308.14296 | 28 | • Database Tool: This tool translates natural language questions into SQL queries. Using this tool, the system can access domain-specific knowledge from memory that's essential for the final prediction. For instance, in the Amazon Reviews dataset, it encompasses personal information such as a user's reviews or ratings for an item, as well as item metadata like the item's description, brand, and price. When the database tool is called, the agent will prompt a question, such as "What is the average rating of product Sewak Al-Falah?", based on the database schema. Next, an LLM is called to translate the question into an executable SQL query. After obtaining the output of the SQL query, the output will be converted into a sentence answer by an LLM and returned to the agent.
• Search Tool: This tool employs a search engine (e.g., Google) to access real-time information. For instance, in
2308.14296 | 29 | • Search Tool: This tool employs a search engine (e.g., Google) to access real-time information. For instance, in
the Amazon Reviews dataset, this tool could assist us in obtaining the most recent information about each item. When the Search tool is called, the agent will prompt a question asking for external meta information, which is usually not available in the database, such as "What is the product category of Sewak Al-Falah?". Next, a search engine API will be called to search for the information and return it to the agent.
• Text Summarization Tool: This tool helps summarize lengthy texts by invoking a text summarization model from the Hugging Face Hub. For example, within the Amazon Reviews dataset, this tool can produce a summarized description of an item by considering multiple reviews of that specific item from various users. It can generate a summary such as "Most customers think this product is durable and has a good price.", which can be easily used in different recommendation tasks related to the product.
2308.14296 | 30 | 4 Experiments. In this section, we evaluate the performance of our proposed method in various recommendation-related scenarios, i.e., rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. First, we provide an overview of the datasets and evaluation metrics used in different recommendation tasks. Subsequently, we delineate the experimental settings specific to each recommendation scenario.
4.1 Experimental Settings. Datasets and Evaluation Metrics. Following P5 (Geng et al. 2022), we conduct experiments for rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization on the Amazon Reviews (Ni, Li, and McAuley 2019) dataset. We evaluate our method and baselines on data in the Sports & Outdoors, Beauty, and Toys & Games domains from Amazon Reviews. For a more comprehensive evaluation of our methods, we also evaluate RecMind on the Yelp (Geng et al. 2022) dataset.
2308.14296 | 31 | To quantitatively evaluate the proposed RecMind across various recommendation tasks, we employ different metrics. For rating prediction, we report Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). In the case of sequential and direct recommendations, we use metrics such as top-k Hit Ratio (HR@k) and top-k Normalized Discounted Cumulative Gain (NDCG@k), specifically reporting results on HR@5,10 and NDCG@5,10. In addition, for the assessment of explanation generation, review summarization, and conversational recommendation, we use n-gram Bilingual Evaluation Understudy (BLEU-n) and n-gram Recall-Oriented Understudy for Gisting Evaluation (ROUGE-n).
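The rating-prediction and ranking metrics named above can be written out concretely. This is a generic sketch assuming a single ground-truth item per user and binary relevance for NDCG (so the ideal DCG is 1), not the paper's evaluation code.

```python
import math

def rmse(y_true, y_pred):
    # Root Mean Square Error over paired rating lists.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean Absolute Error over paired rating lists.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def hit_ratio_at_k(ranked_items, target, k):
    # 1 if the ground-truth item appears in the top-k recommendations.
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    # Binary relevance with one ground-truth item, so IDCG = 1.
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

print(rmse([4, 5], [3, 5]), mae([4, 5], [3, 5]))
print(hit_ratio_at_k(["a", "b", "c"], "b", k=2), ndcg_at_k(["a", "b", "c"], "b", k=2))
```

In practice these per-user scores are averaged over the test set before being reported as HR@5,10 and NDCG@5,10.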
Implementation Details. We use gpt-3.5-turbo-16k (Schulman et al. 2022) as the core large language model in RecMind. To enable RecMind's access to in-domain knowledge, we store all the review data in a MySQL database, consisting of a table with the product
2308.14296 | 32 | Table 1: Performance comparison in rating prediction on Amazon Reviews (Beauty) and Yelp.

| Methods | Beauty RMSE | Beauty MAE | Yelp RMSE | Yelp MAE |
|---|---|---|---|---|
| MF | 1.1973 | 0.9461 | 1.2645 | 1.0426 |
| MLP | 1.3078 | 0.9597 | 1.2951 | 1.0340 |
| P5 (pre-trained expert, few-shot) | 1.2982 | 0.8474 | 1.4685 | 1.0054 |
| ChatGPT (zero-shot) | 1.4173 | 1.1897 | 1.6725 | 1.2359 |
| ChatGPT (few-shot) | 1.1589 | 0.7327 | 1.4725 | 1.0016 |
| RecMind-CoT (zero-shot) | 1.2250 | 0.8612 | 1.5302 | 1.1673 |
| RecMind-CoT (few-shot) | 1.1326 | 0.7167 | 1.3925 | 0.9794 |
| RecMind-ToT (BFS, few-shot) | 1.1197 | 0.7059 | 1.3875 | 0.9766 |
| RecMind-ToT (DFS, few-shot) | 1.1205 | 0.7103 | 1.3826 | 0.9774 |
| RecMind-SI (zero-shot) | 1.1894 | 0.7883 | 1.4530 | 1.0009 |
| RecMind-SI (few-shot) | 1.0756 | 0.6892 | 1.3674 | 0.9698 |
2308.14296 | 34 | 4.2 Compared Methods. We compare the performance of our method with the following baselines, including both LLM fine-tuning methods, such as P5 (Geng et al. 2022), and ChatGPT-based LLM prompting methods (Liu et al. 2023). In addition, we implement our RecMind with three different planning methods, namely Chain-of-Thoughts (CoT), Tree-of-Thoughts (ToT) (Yao et al. 2023), and the proposed Self-Inspiring (SI). In summary, the compared methods include: • P5 (Geng et al. 2022) unifies different recommendation tasks into a shared generative large language model. A collection of personalized prompts has been created for various recommendation-related tasks. All raw data, including user-item interactions, user descriptions, item metadata, and user reviews, are transformed into natural language sequences. Subsequently, the large language model is fine-tuned based on these sequences. • ChatGPT (Liu et al. 2023) is a powerful large language model developed by OpenAI. Liu et al. (2023) constructs a benchmark to evaluate ChatGPT's performance in
external tools and execute multi-step plans have significantly enhanced their
ability to solve intricate tasks, ranging from mathematical problems to
creative writing. Yet, there remains a notable gap in studying the capacity of
LLMs in responding to personalized queries such as a recommendation request. To
bridge this gap, we have designed an LLM-powered autonomous recommender agent,
RecMind, which is capable of providing precise personalized recommendations
through careful planning, utilizing tools for obtaining external knowledge, and
leveraging individual data. We propose a novel algorithm, Self-Inspiring, to
improve the planning ability of the LLM agent. At each intermediate planning
step, the LLM 'self-inspires' to consider all previously explored states to
plan for next step. This mechanism greatly improves the model's ability to
comprehend and utilize historical planning information for recommendation. We
evaluate RecMind's performance in various recommendation scenarios, including
rating prediction, sequential recommendation, direct recommendation,
explanation generation, and review summarization. Our experiment shows that
RecMind outperforms existing zero/few-shot LLM-based recommendation methods in
different recommendation tasks and achieves performance competitive with a recent
model P5, which requires full pre-training for the recommendation tasks. | http://arxiv.org/pdf/2308.14296 | Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang | cs.IR, cs.AI | null | null | cs.IR | 20230828 | 20230828 | [
{
"id": "2302.13971"
},
{
"id": "2305.06474"
},
{
"id": "2303.17580"
},
{
"id": "2307.02046"
},
{
"id": "2305.15334"
},
{
"id": "2112.09332"
},
{
"id": "2305.10403"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2307.09288"
},
{
"id": "2109.01652"
},
{
"id": "1511.06939"
}
] |
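The Self-Inspiring mechanism summarized above — at each planning step the agent re-reads *all* previously explored states, not just the current chain — can be sketched as a simple accumulating loop. This is a toy illustration, not the paper's implementation: `propose_step` is a hypothetical stand-in for the LLM call.

```python
# Toy sketch of Self-Inspiring (SI) planning: every explored state is
# retained and shown to the planner when proposing the next step.
# `propose_step` is a hypothetical stand-in for an LLM call.

def propose_step(history):
    """Toy planner: derives the next step from everything explored so far."""
    return f"step-{len(history) + 1}"

def self_inspiring_plan(max_steps=3):
    explored = []                      # all previously explored states
    for _ in range(max_steps):
        # SI: the planner conditions on the full exploration history.
        next_step = propose_step(explored)
        explored.append(next_step)
    return explored

plan = self_inspiring_plan()
print(plan)  # ['step-1', 'step-2', 'step-3']
```

The key difference from a plain chain of thought is only that `propose_step` receives the whole `explored` list rather than the latest state.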
2308.14296 | 35 | 2023) is a powerful large language model developed by OpenAI. Liu et al. (2023) constructs a benchmark to evaluate ChatGPT's performance in different recommendation tasks by designing specific prompts in both zero-shot and few-shot settings. In the zero-shot setting, the LLM is directly prompted for the final prediction, while in the few-shot setting, several in-context examples are provided. We name the ChatGPT baselines in these two settings ChatGPT (zero-shot) and ChatGPT (few-shot). | 2308.14296#35 | RecMind: Large Language Model Powered Agent For Recommendation
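As described above, the zero-shot and few-shot settings differ only in whether in-context examples are prepended to the query. A minimal sketch, assuming a hypothetical `build_prompt` helper and illustrative prompt wording (the exact prompt templates are not shown in this excerpt):

```python
def build_prompt(query, examples=()):
    """Assemble a zero-shot or few-shot prompt.

    `examples` is a sequence of (input, answer) in-context demonstrations;
    leaving it empty yields a zero-shot prompt.
    """
    parts = [f"Input: {ex_in}\nAnswer: {ex_out}" for ex_in, ex_out in examples]
    parts.append(f"Input: {query}\nAnswer:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Predict the rating user_1 gives item_42 (1-5).")
few_shot = build_prompt(
    "Predict the rating user_1 gives item_42 (1-5).",
    examples=[("Predict the rating user_1 gives item_7 (1-5).", "4.0")],
)
print(zero_shot)
print(few_shot)
```

The few-shot variant simply concatenates demonstrations before the final query, which is all the benchmark's two settings vary.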
2308.14296 | 36 | • RecMind-CoT, where the planning is based on ReAct-CoT (Yao et al. 2022). ReAct is a novel prompt-based paradigm for general task solving. It extends Chain-Of-Thoughts (CoT) (Wei et al. 2022) to synergize reasoning and acting with external tools. In our experiments, we adopt the same tools we used for the ReAct baseline. We also explore both zero-shot and few-shot settings for this method and name them RecMind-CoT (zero-shot) and RecMind-CoT (few-shot).
• RecMind-ToT, where the planning is based on Tree-of-Thoughts (ToT) (Yao et al. 2023). ToT enables the exploration of coherent units of thought that serve as
Table 2: Performance comparison in direct recommendation on Amazon Reviews (Beauty) and Yelp. | 2308.14296#36 | RecMind: Large Language Model Powered Agent For Recommendation
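RecMind-ToT explores a tree of intermediate thoughts rather than a single chain, searched breadth-first or depth-first. A toy sketch of the breadth-first variant with beam pruning; `expand` and `score` are hypothetical stand-ins for the LLM's proposal and evaluation calls, not the paper's implementation:

```python
# Toy breadth-first Tree-of-Thoughts search: at each depth, expand every
# kept thought into candidate next thoughts, score them, and keep the
# best `beam` candidates. `expand`/`score` stand in for LLM calls.

def expand(thought):
    return [thought + "a", thought + "b"]

def score(thought):
    return thought.count("a")          # toy heuristic: prefer 'a' branches

def tot_bfs(root, depth, beam):
    frontier = [root]
    for _ in range(depth):
        candidates = [t for th in frontier for t in expand(th)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]   # beam pruning at each level
    return frontier[0]                 # best thought found

best = tot_bfs("", depth=3, beam=2)
print(best)  # best == 'aaa'
```

A depth-first variant would instead follow one branch to full depth and backtrack, which trades breadth of comparison for fewer simultaneous candidates.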
2308.14296 | 37 | Methods (Amazon Reviews Beauty: HR@5, NDCG@5, HR@10, NDCG@10):
BPR-MLP: 0.1392, 0.0848, 0.2542, 0.1215
P5 (pre-trained expert, few-shot): 0.1478, 0.1003, 0.2159, 0.1289
ChatGPT (zero-shot): 0.0146, 0.0107, 0.0705, 0.0235
ChatGPT (few-shot): 0.0228, 0.0157, 0.0903, 0.0362
RecMind-CoT (zero-shot): 0.0497, 0.0325, 0.1129, 0.0637
RecMind-CoT (few-shot): 0.0682, 0.0387, 0.1345, 0.0814
RecMind-ToT (BFS, few-shot): 0.0734, 0.0402, 0.1355, 0.0808
RecMind-ToT (DFS, few-shot): 0.0705, 0.0407, 0.1302, 0.0812
RecMind-SI (zero-shot): 0.0675, 0.0524, 0.1259, 0.0923
RecMind-SI (few-shot): 0.0915, 0.0624, 0.1559, 0.1063
Yelp HR@5: 0.1876 (remaining Yelp values truncated at the chunk boundary) | 2308.14296#37 | RecMind: Large Language Model Powered Agent For Recommendation
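The tables report HR@k and NDCG@k. Under the single held-out-positive protocol used here, both reduce to simple formulas; the sketch below assumes the standard definitions (the paper's evaluation script is not shown in this excerpt):

```python
import math

def hit_rate_at_k(ranked, positive, k):
    """HR@k: 1.0 if the held-out positive item appears in the top-k list."""
    return 1.0 if positive in ranked[:k] else 0.0

def ndcg_at_k(ranked, positive, k):
    """NDCG@k with a single positive: 1/log2(rank+1) if it is ranked."""
    top = ranked[:k]
    if positive not in top:
        return 0.0
    rank = top.index(positive) + 1     # 1-based position of the positive
    return 1.0 / math.log2(rank + 1)

ranked = ["item_3", "item_9", "item_1", "item_7"]
print(hit_rate_at_k(ranked, "item_1", k=3))        # 1.0
print(round(ndcg_at_k(ranked, "item_1", k=3), 4))  # 1/log2(4) = 0.5
```

Reported numbers are these per-user scores averaged over the test set, which is why NDCG@k never exceeds HR@k at the same cutoff.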
2308.14296 | 40 | Methods (Amazon Reviews Beauty: HR@5, NDCG@5, HR@10, NDCG@10):
S3-Rec: 0.0387, 0.0244, 0.0647, 0.0327
P5 (pre-trained expert, few-shot): 0.0459, 0.0347, 0.0603, 0.0411
ChatGPT (zero-shot): 0.0089, 0.0053, 0.0103, 0.0060
ChatGPT (few-shot): 0.0179, 0.0124, 0.0256, 0.0125
RecMind-CoT (zero-shot): 0.0182, 0.0139, 0.0297, 0.0160
RecMind-CoT (few-shot): 0.0349, 0.0187, 0.0486, 0.0302
RecMind-ToT (BFS, few-shot): 0.0387, 0.0235, 0.0522, 0.0327
RecMind-ToT (DFS, few-shot): 0.0365, 0.0211, 0.0497, 0.0355
RecMind-SI (zero-shot): 0.0339, 0.0200, 0.0469, 0.0310
RecMind-SI (few-shot): 0.0415, 0.0289, 0.0574, 0.0375
Yelp HR@5: 0.0201, 0.0565 (remaining Yelp values truncated at the chunk boundary) | 2308.14296#40 | RecMind: Large Language Model Powered Agent For Recommendation
2308.14296 | 42 | intermediate steps toward problem-solving. We implement RecMind-ToT with two search strategies for choosing among candidate thoughts at intermediate steps: breadth-first search, named RecMind-ToT (BFS, few-shot), and depth-first search, named RecMind-ToT (DFS, few-shot).
In addition to the above methods, we consider additional baselines for each task, introduced in the corresponding subsections.
# 4.3 Experimental Results on Precision-oriented Recommendation Tasks
We first evaluate the proposed RecMind and baselines on three precision-oriented recommendation tasks, i.e., rating prediction, sequential recommendation, and direct recommendation. | 2308.14296#42 | RecMind: Large Language Model Powered Agent For Recommendation
2308.14296 | 43 | We first evaluate the proposed RecMind and baselines on three precision-oriented recommendation tasks, i.e., rating prediction, sequential recommendation, and direct recommendation.
Rating Prediction. Rating prediction is an essential task in recommendation systems that aims to predict the rating a user would give to a particular item. For rating prediction, we further include as baselines MF (Koren, Bell, and Volinsky 2009a) and MLP (Cheng et al. 2016), trained with a mean square root loss. The results of rating prediction on Amazon Reviews (beauty domain) and Yelp are shown in Table 1. RecMind with its different planning mechanisms usually outperforms the fully trained models on rating prediction. This improvement mainly stems from the fact that RecMind has access both to the ratings the user has given to different items and to the ratings the item has received from different users in the database. On the other hand, fully trained models such as MLP and P5 usually have much higher RMSE, which can be attributed to overfitting on the training data. | 2308.14296#43 | RecMind: Large Language Model Powered Agent For Recommendation
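Rating prediction is scored with RMSE, where lower is better. A minimal sketch, assuming the standard definition of the metric:

```python
import math

def rmse(preds, targets):
    """Root mean squared error between predicted and true ratings."""
    assert len(preds) == len(targets) and preds
    se = sum((p - t) ** 2 for p, t in zip(preds, targets))
    return math.sqrt(se / len(preds))

print(rmse([4.0, 3.0, 5.0], [4.0, 2.0, 5.0]))  # sqrt(1/3) ~ 0.5774
```

Because squared errors are averaged before the square root, a single badly over- or under-predicted rating dominates the score, which is one way over-fit models end up with much higher RMSE.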
2308.14296 | 44 | Direct Recommendation. In the direct recommendation scenario, RecMind predicts the recommended items from a candidate set of 100 items from the same dataset, of which only one candidate is positive. Figure 2 shows an example of direct recommendation in the beauty domain of Amazon Reviews. For a specific user {userID} with a list of products, the agent is prompted: "From the item candidates listed, choose the top 10 items to recommend to the user {userID} and rank them in order of priority from highest to lowest. Candidates: ['Item List']". For this task, we include the additional baseline BPR-MLP (Cheng et al. 2016). Before evaluating each test example, we remove the interaction history between the positive item and the user to avoid data leakage. The results on direct recommendation are shown in Table 2. They show that fully trained models such as P5 usually perform better than RecMind. The main reason for the performance gap is the long context formed by the names of the 100 candidate items: the LLM agent tends to first retrieve information related to items positioned near the front of the candidate list. Such positional bias has also been observed in previous works (Liu et al. 2023). Table 2 shows that | 2308.14296#44 | RecMind: Large Language Model Powered Agent For Recommendation
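The direct-recommendation protocol above has two mechanical steps: assemble a prompt over the 100-item candidate list, and strip the user's interactions with the positive item to avoid leakage. A sketch of both, reusing the prompt wording quoted in the text; the helper names are ours, not the paper's:

```python
def direct_rec_prompt(user_id, candidates, top_k=10):
    """Assemble the direct-recommendation prompt described in the text:
    pick the top-k items from a fixed candidate set, ranked by priority."""
    item_list = ", ".join(candidates)
    return (
        f"From the item candidates listed, choose the top {top_k} items to "
        f"recommend to the user {user_id} and rank them in order of priority "
        f"from highest to lowest. Candidates: [{item_list}]"
    )

def drop_leaked_interactions(history, user, positive):
    """Remove the user's interactions with the positive item before evaluation."""
    return [(u, i) for u, i in history if not (u == user and i == positive)]

prompt = direct_rec_prompt("user_1", ["item_a", "item_b", "item_c"])
print(prompt)
clean = drop_leaked_interactions([("user_1", "item_a"), ("user_2", "item_a")],
                                 "user_1", "item_a")
print(clean)  # [('user_2', 'item_a')]
```

Note that every candidate name ends up in one long prompt, which is exactly where the positional bias discussed next comes from: items serialized early in `item_list` get disproportionate attention.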
2308.14296 | 45 | related to items positioned near the front of the candidate list. Such positional bias has also been observed in previous works (Liu et al. 2023). Table 2 shows that diverse reasoning plans, such as tree-of-thoughts and our proposed self-inspiring, can alleviate this issue by gradually filtering out less likely items. However, it is still hard for LLMs to fully explore all candidates, especially given the limits on prompt context length. | 2308.14296#45 | RecMind: Large Language Model Powered Agent For Recommendation