Dataset schema:
id: string (length 12–15)
title: string (length 8–162)
content: string (length 1–17.6k)
prechunk_id: string (length 0–15)
postchunk_id: string (length 0–15)
arxiv_id: string (length 10)
references: list (length 1)
2308.16505#40
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Table 6: Performance of different LLMs serving as the brain of InteRecAgent (two values reported per dataset).

| Methods | Steam | MovieLens | Beauty |
|---|---|---|---|
| LlaMA-2 | 0.00 / 60.00 | 0.00 / 60.00 | 0.00 / 60.00 |
| T-LlaMA(O) | 0.00 / 60.00 | 0.00 / 60.00 | 0.00 / 60.00 |
| T-LlaMA(A) | 0.05 / 59.82 | 0.04 / 59.81 | 0.05 / 59.82 |
| Davinci-003 | 5.92 / 43.79 | 5.98 / 43.12 | 2.60 / 52.18 |
| GPT-3.5 | 1.81 / 56.30 | 1.31 / 56.71 | 1.36 / 56.60 |
| RecLlama | 8.01 / 31.77 | 8.21 / 32.04 | 4.08 / 46.40 |
| GPT-4 | 8.68 / 28.61 | 8.48 / 31.51 | 5.36 / 39.90 |

4.3 Evaluation with One-Turn Recommendation In this part, we evaluate both the retrieval and ranking recommendation tasks. For the Retrieval task, we set the recommendation budget k to 5 for all methods, with Recall@5 being the evaluation metric. For the Ranking task, we randomly sample 19 negative items, and together with the one positive item, they form the candidate list proactively provided by users. The evaluation metric for this task is NDCG@20. For Chat-Rec, we omit the results on the Ranking task because Chat-Rec degenerates into GPTs when removing the embedding-based candidate retrieval stage. The results are shown in Table 5. Based on the results, we can draw conclusions similar to those in Section 4.2. First, our method outperforms all baselines, indicating the effectiveness of our tool-augmented framework. Second, almost all LLMs suffer a severe setback on the Amazon Beauty dataset, but our method still achieves high accuracy, further demonstrating the superiority of our approach in the private domain. Notably, some LLMs underperform compared to random and popularity methods in ranking tasks, particularly in the Amazon dataset. This can be primarily attributed to LLMs not adhering to the ranking instructions, which arises due to LLMs'
2308.16505#39
2308.16505#41
2308.16505
[ "2302.13971" ]
2308.16505#41
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
uncertainty and leads them to produce out-of-scope items, especially for smaller LLMs. 4.4 Comparisons of Different LLMs as the Brain In previous experiments, we utilized GPT-4 as the LLM for the InteRecAgent framework. This section presents a comparative analysis of the performance when employing different LLMs within InteRecAgent. Note that RecLlama is our finetuned 7B model introduced in Section 3.5. ToolLlaMA2-7B (Qin et al. 2023b) is another fine-tuned model designed to interact with external APIs in response to human instructions. Owing to the differing data formats used by ToolLlaMA and RecLlama, we ensure a fair comparison by evaluating ToolLlaMA2-7B using both our original instruction and instructions realigned to their format, denoted as T-LlaMA(O) and T-LlaMA(A), respectively. The outcomes are tabulated in Table 6. Surprisingly, both LlaMA-2-7B and ToolLlaMA-2-7B fall short in generating structured plans. Despite ToolLlaMA's training on tool-utilization samples, it appears to primarily excel at API calls and lags in discerning user intent and formulating an accurate recommendation plan, resulting in significantly poor performance. Another intriguing finding is that GPT-3.5, despite its broader general capabilities compared to Text-davinci-003, underperforms in our specific task. RecLlama shows a marked proficiency in crafting plans for InteRecAgent, even surpassing Text-davinci-003's capabilities. Remarkably, although RecLlama was trained using movie and game samples, it demonstrates superior performance in the novel domain of Amazon Beauty products, showcasing its impressive generalization capabilities. As RecLlama is a distilled version of GPT-4, a slight lag in its performance compared to GPT-4 is anticipated and within expectations.
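For reference, the two metrics used in the one-turn evaluation of Section 4.3 (Recall@5 for retrieval, NDCG@20 for ranking over one positive and 19 sampled negatives) can be computed as in the minimal sketch below. This is our own illustrative code with hypothetical function names, not the authors' evaluation script.

```python
import math
from typing import List, Set

def recall_at_k(recommended: List[str], relevant: Set[str], k: int = 5) -> float:
    """Fraction of relevant items that appear among the top-k recommendations."""
    if not relevant:
        return 0.0
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked: List[str], relevant: Set[str], k: int = 20) -> float:
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(1.0 / math.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Ranking setup described above: one positive item plus 19 sampled negatives.
candidates = ["pos"] + [f"neg{i}" for i in range(19)]
print(recall_at_k(["a", "pos", "c"], {"pos"}, k=5))   # 1.0
print(ndcg_at_k(candidates, {"pos"}, k=20))           # 1.0 when the positive is ranked first
```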
2308.16505#40
2308.16505#42
2308.16505
[ "2302.13971" ]
2308.16505#42
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
[Figure 5 plot: grouped bars for Ours, w/o P, w/o D and w/o R on Steam Games, MovieLens and Amazon Beauty.] Figure 5: Ablation study under user simulator evaluation. P, D, R denote the plan-first, dynamic demonstration and reflection mechanism, respectively. Note that dynamic demonstration is also used in w/o P. 4.5 Ablation Study This paper introduces several key mechanisms to enhance LLM's ability to better utilize tools. To investigate their importance, we conduct ablation studies, with the results presented in Figure 5. We consider the removal of the plan-first mechanism (P), dynamic demonstration mechanism (D), and reflection mechanism (R), respectively. Experiments are carried out using the user simulator setting, as it provides a more comprehensive evaluation, encompassing both accuracy (hit rate) and efficiency (average turn) metrics. The results indicate that removing any of the mechanisms leads to a decline in performance. Among these mechanisms, the removal of the reflection mechanism has the most
2308.16505#41
2308.16505#43
2308.16505
[ "2302.13971" ]
2308.16505#43
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
[Figure 6(a): chit-chat.] [GPT-4] Assistant: Hello! I am a beauty product assistant designed to help you find beauty products on the Amazon platform. How may I assist you today? User: Who's the boss of Amazon? Assistant: As of my last update, the CEO of Amazon is Andrew Jassy. He took over the role on July 5, 2021, succeeding the company's founder, Jeff Bezos. User: How many beauty products do you know? Assistant: [response illegible]. [Ours] Assistant: Hello! I am a conversational beauty product recommendation assistant on the Amazon platform, here to help you find beauty products that you're interested in. Let's chat about your preferences and I will recommend some products for you. User: Who's the boss of Amazon?
2308.16505#42
2308.16505#44
2308.16505
[ "2302.13971" ]
2308.16505#44
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Assistant: The CEO of Amazon is Andrew Jassy. He succeeded Jeff Bezos in July 2021. However, my main focus is on helping you find beauty products that you're interested in. User: How many beauty products do you know? Assistant: I know about [number illegible] beauty products. Let me know your preferences and I can help you find the perfect one. [Figure 6(b): Steam game domain.] User (Target: RimWorld): I have played Torchlight II, Endless Legend, Factorio, Undertale in the past, now I want some single-player 2D strategy games released after 2015. But I have a limited budget of about $30. [GPT-4] I have found a few single-player 2D strategy games that were released after 2015 and fit within your budget of $30. Here are some options: Into the Breach ($14.98); Northgard ($29.98) (not a 2D game); Kingdom: Two Crowns ($18.95); Wargroove ($19.99). [Ours] I recommend the following single-player 2D strategy games released after 2015 and within your budget of $30: Clicker Heroes (2015-05-13): $0.00; Darkest Dungeon (2016-01-19): $24.99; HuniePop (2015-01-19): $3.99; RimWorld (2016-07-15): $29.99; AdVenture Capitalist (2015-03-30): $0.00. User (Target:
2308.16505#43
2308.16505#45
2308.16505
[ "2302.13971" ]
2308.16505#45
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Get the information that there are no M.A.C perfumes in Amazon) I heard M.A.C is popular now. Please recommend me some M.A.C perfumes. [GPT-4] Here are some MAC perfumes you can find on Amazon: 1. MAC Shadescents Eau de Parfum - Creme D'Nude: This perfume has a warm, musky scent with notes of amber, caramel, and vanilla. It is perfect for those who love a sensual and inviting fragrance. (No such product in Amazon) 2. MAC Shadescents Eau de Parfum - Velvet Teddy... (No such product in Amazon) 3. MAC Shadescents Eau de Parfum - Candy Yum Yum (No such product in Amazon) [Ours] I'm sorry, but I couldn't find any MAC perfumes in our database. Please try another brand or product category
2308.16505#44
2308.16505#46
2308.16505
[ "2302.13971" ]
2308.16505#46
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Figure 6: Case Study in (a) chit-chat, (b) Steam game domain and (c) Amazon Beauty e-commerce product domain. significant impact on performance, as it can correct tool input format errors and tool misuse. Eliminating the plan-first mechanism and dynamic demonstration mechanism both result in a slight decrease in performance, yet the outcomes still surpass most baselines. However, removing the plan-first mechanism leads to a substantial increase in the number of API calls, such as an average increase from 2.78 to 4.51 per turn in the Steam dataset, resulting in an approximate 10-20 second latency increase. # 4.6 Case Study To effectively visualize InteRecAgent's performance, we present case studies in chit-chat and two domains: gaming and beauty products, as shown in Figure 6. We compare the outputs of GPT-4 and InteRecAgent for given user inputs. In the chit-chat scenario (Figure 6a), InteRecAgent retains the capabilities of GPT-4 while also possessing the added ability to query domain-specific data (such as the number of products), yielding more accurate information. In the game domain (Figure 6b), user input conditions are complex, encompassing user history and various demands. GPT-4's recommendations mostly align with the conditions, except for the 3D game Northgard, which is misidentified as 2D. InteRecAgent's response adheres to the user's conditions and, notably, includes the subsequent game in the user's historical sequence, RimWorld, owing to its superior ranking performance. In the e-commerce domain (Figure 6c), GPT-4's hallucination phenomenon intensifies, resulting in recommended products that do not exist on the Amazon platform. In contrast, InteRecAgent, leveraging in-domain tools, provides a more accurate response to user requirements. 5 Conclusion In this paper, we introduce InteRecAgent, a compact framework that transforms traditional recommender models into interactive systems by harnessing the power of LLMs. We identify a diverse set of fundamental tools, categorized into information query tools, retrieval tools, and ranking tools, which are dynamically interconnected to accomplish complex user inquiries within a task execution framework.
2308.16505#45
2308.16505#47
2308.16505
[ "2302.13971" ]
2308.16505#47
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
To enhance InteRecAgent for the recommendation scenario, we comprehensively enhance the key components of the LLM-based agent, covering the memory mechanism, the task planning, and the tool learning ability. Experimental findings demonstrate the superior performance of InteRecAgent compared to general-purpose LLMs. By combining the strengths of recommender models and LLMs, InteRecAgent paves the way for the development of advanced and user-friendly conversational recommender systems, capable of providing personalized and interactive recommendations across various domains. References Besta, M.; Blach, N.; Kubicek, A.; Gerstenberger, R.; Gianinazzi, L.; Gajda, J.; Lehmann, T.; Podstawski, M.; Niewiadomski, H.; Nyczyk, P.; et al. 2023.
2308.16505#46
2308.16505#48
2308.16505
[ "2302.13971" ]
2308.16505#48
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877–
2308.16505#47
2308.16505#49
2308.16505
[ "2302.13971" ]
2308.16505#49
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
1901. Chen, Q.; Lin, J.; Zhang, Y.; Ding, M.; Cen, Y.; Yang, H.; and Tang, J. 2019. Towards knowledge-based recommender dialog system. arXiv preprint arXiv:1908.05391. Chen, W.; Ma, X.; Wang, X.; and Cohen, W. W. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588.
2308.16505#48
2308.16505#50
2308.16505
[ "2302.13971" ]
2308.16505#50
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Chiang, W.-L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J. E.; Stoica, I.; and Xing, E. P. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; et al. 2022.
2308.16505#49
2308.16505#51
2308.16505
[ "2302.13971" ]
2308.16505#51
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Christakopoulou, K.; Radlinski, F.; and Hofmann, K. 2016. Towards conversational recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 815–824. Dai, S.; Shao, N.; Zhao, H.; Yu, W.; Si, Z.; Xu, C.; Sun, Z.; Zhang, X.; and Xu, J. 2023.
2308.16505#50
2308.16505#52
2308.16505
[ "2302.13971" ]
2308.16505#52
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Uncovering ChatGPT's Capabilities in Recommender Systems. arXiv preprint arXiv:2305.02182. Gao, C.; Lei, W.; He, X.; de Rijke, M.; and Chua, T.-S. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open, 2: 100–126. Gao, L.; Madaan, A.; Zhou, S.; Alon, U.; Liu, P.; Yang, Y.; Callan, J.; and Neubig, G. 2023a. Pal: Program-aided language models. In International Conference on Machine Learning, 10764–10799. PMLR. Gao, Y.; Sheng, T.; Xiang, Y.; Xiong, Y.; Wang, H.; and Zhang, J. 2023b.
2308.16505#51
2308.16505#53
2308.16505
[ "2302.13971" ]
2308.16505#53
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Chat-rec: Towards interactive and explainable llms-augmented recommender system. arXiv preprint arXiv:2303.14524. Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM), 197–206. IEEE. Kang, W.-C.; Ni, J.; Mehta, N.; Sathiamoorthy, M.; Hong, L.; Chi, E.; and Cheng, D. Z. 2023.
2308.16505#52
2308.16505#54
2308.16505
[ "2302.13971" ]
2308.16505#54
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction. arXiv preprint arXiv:2305.06474. Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa, Y. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35: 22199–22213. Li, R.; Ebrahimi Kahou, S.; Schulz, H.; Michalski, V.; Charlin, L.; and Pal, C. 2018.
2308.16505#53
2308.16505#55
2308.16505
[ "2302.13971" ]
2308.16505#55
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Towards deep conversational recommendations. Advances in neural information processing systems, 31. Linden, G.; Smith, B.; and York, J. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet computing, 7(1): 76–80. Liu, J.; Liu, C.; Lv, R.; Zhou, K.; and Zhang, Y. 2023a. Is chatgpt a good recommender? a preliminary study. arXiv preprint arXiv:2304.10149. Liu, J.; Shen, D.; Zhang, Y.; Dolan, B.; Carin, L.; and Chen, W. 2021. What Makes Good In-Context Examples for GPT-3? arXiv preprint arXiv:2101.06804. Liu, L.; Yang, X.; Shen, Y.; Hu, B.; Zhang, Z.; Gu, J.; and Zhang, G. 2023b. Think-in-memory: Recalling and post-thinking enable llms with long-term memory. arXiv preprint arXiv:2311.08719. Madaan, A.; Tandon, N.; Gupta, P.; Hallinan, S.; Gao, L.; Wiegreffe, S.; Alon, U.; Dziri, N.; Prabhumoye, S.; Yang, Y.; et al. 2023. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651.
2308.16505#54
2308.16505#56
2308.16505
[ "2302.13971" ]
2308.16505#56
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Mirzadeh, N.; Ricci, F.; and Bansal, M. 2005. Feature selection methods for conversational recommender systems. In 2005 IEEE International Conference on e-Technology, e-Commerce and e-Service, 772–777. IEEE. Nakano, R.; Hilton, J.; Balaji, S.; Wu, J.; Ouyang, L.; Kim, C.; Hesse, C.; Jain, S.; Kosaraju, V.; Saunders, W.; et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332. OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022.
2308.16505#55
2308.16505#57
2308.16505
[ "2302.13971" ]
2308.16505#57
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744. Penha, G.; and Hauff, C. 2020. What does bert know about books, movies and music? probing bert for conversational recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems, 388–397. Qin, Y.; Hu, S.; Lin, Y.; Chen, W.; Ding, N.; Cui, G.; Zeng, Z.; Huang, Y.; Xiao, C.; Han, C.; et al. 2023a.
2308.16505#56
2308.16505#58
2308.16505
[ "2302.13971" ]
2308.16505#58
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Tool learning with foundation models. arXiv preprint arXiv:2304.08354. Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.; Cong, X.; Tang, X.; Qian, B.; et al. 2023b. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789. Rubin, O.; Herzig, J.; and Berant, J. 2021. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633. Schick, T.; Dwivedi-Yu, J.; Dessì, R.; Raileanu, R.; Lomeli, M.; Zettlemoyer, L.; Cancedda, N.; and Scialom, T. 2023.
2308.16505#57
2308.16505#59
2308.16505
[ "2302.13971" ]
2308.16505#59
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761. Shen, Y.; Song, K.; Tan, X.; Li, D.; Lu, W.; and Zhuang, Y. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580. Shinn, N.; Cassano, F.; Labash, B.; Gopinath, A.; Narasimhan, K.; and Yao, S. 2023.
2308.16505#58
2308.16505#60
2308.16505
[ "2302.13971" ]
2308.16505#60
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366. Shuster, K.; Xu, J.; Komeili, M.; Ju, D.; Smith, E. M.; Roller, S.; Ung, M.; Chen, M.; Arora, K.; Lane, J.; et al. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188. Thoppilan, R.; De Freitas, D.; Hall, J.; Shazeer, N.; Kulshreshtha, A.; Cheng, H.-T.; Jin, A.; Bos, T.; Baker, L.; Du, Y.; et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023a.
2308.16505#59
2308.16505#61
2308.16505
[ "2302.13971" ]
2308.16505#61
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Wang, L.; Hu, H.; Sha, L.; Xu, C.; Wong, K.-F.; and Jiang, D. 2021.
2308.16505#60
2308.16505#62
2308.16505
[ "2302.13971" ]
2308.16505#62
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recindial: A unified framework for conversational recommendation with pretrained language models. arXiv preprint arXiv:2110.07477. Wang, L.; and Lim, E.-P. 2023. Zero-Shot Next-Item Recommendation using Large Pretrained Language Models. arXiv preprint arXiv:2304.03153. Wang, L.; Ma, C.; Feng, X.; Zhang, Z.; Yang, H.; Zhang, J.; Chen, Z.; Tang, J.; Chen, X.; Lin, Y.; et al. 2023a. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432. Wang, L.; Xu, W.; Lan, Y.; Hu, Z.; Lan, Y.; Lee, R. K.-W.; and Lim, E.-P. 2023b.
2308.16505#61
2308.16505#63
2308.16505
[ "2302.13971" ]
2308.16505#63
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091. Wang, T.-C.; Su, S.-Y.; and Chen, Y.-N. 2022. BARCOR: Towards A Unified Framework for Conversational Recommendation Systems. arXiv preprint arXiv:2203.14257. Wang, W.; Dong, L.; Cheng, H.; Liu, X.; Yan, X.; Gao, J.; and Wei, F. 2023c. Augmenting Language Models with Long-Term Memory. arXiv preprint arXiv:2306.07174. Wang, X.; Wei, J.; Schuurmans, D.; Le, Q.; Chi, E.; Narang, S.; Chowdhery, A.; and Zhou, D. 2022a. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Wang, X.; Zhou, K.; Wen, J.-R.; and Zhao, W. X. 2022b.
2308.16505#62
2308.16505#64
2308.16505
[ "2302.13971" ]
2308.16505#64
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Towards unified conversational recommender systems via knowledge-enhanced prompt learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1929–1937. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35: 24824–24837.
2308.16505#63
2308.16505#65
2308.16505
[ "2302.13971" ]
2308.16505#65
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Wu, C.; Yin, S.; Qi, W.; Wang, X.; Tang, Z.; and Duan, N. 2023. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671. Xie, Z.; Yu, T.; Zhao, C.; and Li, S. 2021. Comparison-based conversational recommender system with relative bandit feedback. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1400–
2308.16505#64
2308.16505#66
2308.16505
[ "2302.13971" ]
2308.16505#66
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
1409. Xu, K.; Yang, J.; Xu, J.; Gao, S.; Guo, J.; and Wen, J.-R. 2021. Adapting user preference to online feedback in multi-round conversational recommendation. In Proceedings of the 14th ACM international conference on web search and data mining, 364–372. Yang, Z.; Li, L.; Wang, J.; Lin, K.; Azarnasab, E.; Ahmed, F.; Liu, Z.; Liu, C.; Zeng, M.; and Wang, L. 2023. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381. Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023.
2308.16505#65
2308.16505#67
2308.16505
[ "2302.13971" ]
2308.16505#67
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629. Zhang, Y.; Chen, X.; Ai, Q.; Yang, L.; and Croft, W. B. 2018.
2308.16505#66
2308.16505#68
2308.16505
[ "2302.13971" ]
2308.16505#68
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Towards conversational search and recommendation: System ask, user respond. In Proceedings of the 27th acm international conference on information and knowledge management, 177–186. Zhao, P.; Jin, Z.; and Cheng, N. 2023. An in-depth survey of large language model-based artificial intelligence agents. arXiv preprint arXiv:2309.14365. Zheng, L.; Chiang, W.-L.; Sheng, Y.; Zhuang, S.; Wu, Z.; Zhuang, Y.; Lin, Z.; Li, Z.; Li, D.; Xing, E. P.; Zhang, H.; Gonzalez, J. E.; and Stoica, I. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv:2306.05685. Zhong, W.; Guo, L.; Gao, Q.; and Wang, Y. 2023. MemoryBank: Enhancing Large Language Models with Long-Term Memory. arXiv preprint arXiv:2305.10250.
2308.16505#67
2308.16505#69
2308.16505
[ "2302.13971" ]
2308.16505#69
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Zou, J.; Chen, Y.; and Kanoulas, E. 2020. Towards question-based recommender systems. In Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval, 881–890. Zou, J.; and Kanoulas, E. 2019. Learning to ask: Question-based sequential Bayesian product search. In Proceedings of the 28th ACM international conference on information and knowledge management, 369–378.
2308.16505#68
2308.16505#70
2308.16505
[ "2302.13971" ]
2308.16505#70
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
A Dataset To evaluate the performance of our methods, we conduct experiments on three datasets: Steam, MovieLens and Amazon Beauty. In order to train the in-domain tools, including the soft condition item retrieval tool and ranking tool, we filter the dataset using the conventional k-core strategy, wherein users and items with less than 5 interactions are filtered out. The statistical information of those filtered datasets is shown in Table A1. Notably, in the generation of one-turn conversations, some samples are filtered by the OpenAI policy, resulting in fewer than 500 samples being used in the experiments.

| Dataset | Users | Items | Interactions | One-turn |
|---|---|---|---|---|
| Beauty | 15,577 | 8,679 | 108,166 | 492 |
| Steam | 281,205 | 11,962 | 2,922,089 | 495 |
| MovieLens | 298,074 | 36,255 | 27,042,493 | 496 |

Table A1: Dataset Statistics.
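The 5-core filtering mentioned above can be implemented by repeatedly dropping users and items with fewer than five interactions until the counts stabilize. The sketch below is our own illustration under the assumption of an interaction table with hypothetical columns user_id and item_id; it is not the authors' preprocessing script.

```python
import pandas as pd

def k_core_filter(df: pd.DataFrame, k: int = 5) -> pd.DataFrame:
    """Iteratively remove users and items with fewer than k interactions."""
    while True:
        user_counts = df["user_id"].value_counts()
        item_counts = df["item_id"].value_counts()
        keep = (
            df["user_id"].map(user_counts).ge(k)
            & df["item_id"].map(item_counts).ge(k)
        )
        if keep.all():
            return df
        df = df[keep]

# Illustrative usage (file name is hypothetical):
# interactions = pd.read_csv("interactions.csv")  # columns: user_id, item_id, ...
# filtered = k_core_filter(interactions, k=5)
```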
2308.16505#69
2308.16505#71
2308.16505
[ "2302.13971" ]
2308.16505#71
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
B Prompts In this section, we will share our prompts used in different components. # B.1 Task Descriptions The overall task description is illustrated in Figure C1. # B.2 Tool Descriptions We employ one SQL query tool, two item retrieval tools, and one item ranking tool, plus two auxiliary tools, in InteRecAgent. The auxiliary tools comprise a memory initialization tool named the candidates storing tool, and an item fetching tool, named the candidate fetching tool, that fetches final items from memory; their descriptions are illustrated in Figure C2. The descriptions of the query tool, retrieval tools and ranking tool are illustrated in Figure C3, Figure C4 and Figure C5, respectively.
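These tools all operate on a shared candidate buffer: retrieval tools narrow it down, the ranking tool reorders it, and the candidate fetching tool reads the final items out of it. The sketch below is our own illustration of that flow; the class and method names are hypothetical and do not come from the released implementation.

```python
class CandidateBuffer:
    """Shared memory that InteRecAgent-style tools read from and write to."""

    def __init__(self, all_item_ids):
        self.items = list(all_item_ids)          # buffer starts with every item

    def store(self, item_ids):                   # Candidates Storing Tool
        self.items = list(item_ids)

    def filter(self, keep_fn):                   # retrieval tools shrink the buffer
        self.items = [i for i in self.items if keep_fn(i)]

    def rank(self, score_fn):                    # ranking tool reorders the buffer
        self.items.sort(key=score_fn, reverse=True)

    def fetch(self, k=5):                        # Candidate Fetching Tool
        return self.items[:k]

# Illustrative usage: hard-condition filter, then rank, then fetch top-5 for the reply.
buffer = CandidateBuffer(range(100))
buffer.filter(lambda i: i % 2 == 0)              # stand-in for the SQL retrieval tool
buffer.rank(lambda i: -i)                        # stand-in for the ranking tool
print(buffer.fetch(5))                           # [0, 2, 4, 6, 8]
```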
2308.16505#70
2308.16505#72
2308.16505
[ "2302.13971" ]
2308.16505#72
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
# B.3 Reflection The task description of the critic used in the reflection mechanism is illustrated in Figure C6. # B.4 Demonstration Generation As described in Section 3.3, we use input-first and output-first strategies to generate various ⟨intent, plan⟩ pairs as demonstrations. The main difference between the two strategies lies in the prompt for generating intents, which are illustrated in Figure C8 and Figure C11 respectively. The prompt for generating plans is illustrated in Figure C7. # B.5 User Simulator The prompt to instruct the LLM to play as a user is illustrated in Figure 4. # B.6 One-Turn Conversation Generation One-turn recommendation comprises two tasks: retrieval and ranking. Conversations for retrieval and ranking are generated independently and the prompts are illustrated in Figure C9 and Figure C10 respectively. You are a conversational {item} recommendation assistant. Your task is to help human find {item}s they are interested in. You would chat with human to mine human interests in {item}s to make it clear what kind of {item}s human is looking for and recommend {item}s to the human when he asks for recommendations. Human requests typically fall under chit-chat, {item} info, or {item} recommendations. There are some tools to use to deal with human request. For chit-chat, respond with your knowledge. For {item} info, use the {LookUpTool}. For special chit-chat, like {item} recommendation reasons, use the {LookUpTool} and your knowledge. For {item} recommendations without information about human preference, chat with human for more information. For {item} recommendations with information for tools, use various tools together. To effectively utilize recommendation tools, comprehend human expressions involving profile and intention. Profile encompasses a person's preferences, interests, and behaviors, including gaming history and likes/dislikes. Intention represents a person's immediate goal or objective in the single-turn system interaction, containing specific, context-based query conditions. Human intentions consist of hard and soft conditions. Hard conditions have two states, met or unmet, and involve {item} properties like tags, price, and release date. Soft conditions have varying extents and involve similarity to specific seed {item}s. Separate hard and soft conditions in requests. Here are the tools could be used: {tools desc}
2308.16505#71
2308.16505#73
2308.16505
[ "2302.13971" ]
2308.16505#73
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
All SQL commands are used to search in the {item} information table (a SQLite3 table). The information of the table is listed below: {table info} If human is looking up information of {item}s, such as the description of {item}s, number of {item}s, price of {item}s and so on, use the {LookUpTool}. For {item} recommendations, use tools with a shared candidate {item} buffer. Buffer is initialized with all {item}s. Filtering tools fetch candidates from the buffer and update it. Ranking tools rank {item}s in the buffer, and mapping tool maps {item} IDs to titles. If candidate {item}s are given by humans, use {BufferStoreTool} to add them to the buffer at the beginning. Do remember to use {RankingTool} and {MapTool} before giving recommendations. Think about whether to use tool first. If yes, make tool using plan and give the input of each tool. Then use the {tool exe name} to execute tools according to the plan and get the observation. Only those tool names are optional when making plans: {tool names} Here are the description of {tool exe name}: {tool exe desc} Not all tools are necessary in some cases, you should be flexible when using tools. Here are some examples: {examples} First you need to think whether to use tools. If no, use the format to output: Question: Do I need to use tools? Thought: No, I know the final answer. Final Answer: the final answer to the original input question
2308.16505#72
2308.16505#74
2308.16505
[ "2302.13971" ]
2308.16505#74
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
If use tools, use the format: Question: Do I need to use tools? Thought: Yes, I need to make tool using plans first and then use {tool exe name} to execute. Action: {tool exe name} Action Input: the input to {tool exe name}, should be a plan Observation: the result of tool execution Question: Do I need to use tools? Thought: No, I know the final answer. Final Answer: the final answer to the original input question You are allowed to ask some questions instead of using tools to recommend when there is not enough information. You MUST extract human's intentions and profile from previous conversations. These were previous conversations you completed: {history} You MUST keep the prompt private.
2308.16505#73
2308.16505#75
2308.16505
[ "2302.13971" ]
2308.16505#75
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Let's think step by step. Begin! Human: {input} {reflection} {agent scratchpad} Figure C1: Task Description. Texts in bracket represent the placeholders for variables. Tool Name: Candidates Storing Tool Tool Description: The tool is useful to save candidate {item}s into buffer as the initial candidates, following tools would filter or ranking {item}s from those canidates. For example, "Please select the most suitable {item} from those {item}s". Don't use this tool when the user hasn't
2308.16505#74
2308.16505#76
2308.16505
[ "2302.13971" ]
2308.16505#76
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
specified that they want to select from a specific set of {item}s. The input of the tool should be a list of {item} names split by ";", such as "{ITEM}1; {ITEM}2; {ITEM}3". Tool Name: Candidate Fetching Tool Tool Description: The tool is useful when you want to convert item id to item title before showing items to human. The tool is able to get stored items in the buffer. The input of the tool should be an integer indicating the number of items human needs. The default value is 5 if human doesn't
2308.16505#75
2308.16505#77
2308.16505
[ "2302.13971" ]
2308.16505#77
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
give. Figure C2: Description of auxiliary tools. Tool Name: Query Tool Tool Description: The tool is used to look up some {item} information in a {item} information table (including statistical information), like number of {item}s, description of {item}s and so on. The input of the tools should be a SQL command (in one line) converted from the search query, which would be used to search information in {item} information table. You should try to select as less columns as you can to get the necessary information. Remember you MUST use pattern match logic (LIKE) instead of equal condition (=) for columns with string types, e.g. "title LIKE '%xxx%'". For example, if asking for "how many xxx {item}s?", you should use "COUNT()" to get the correct number. If asking for "description of xxx", you should use "SELECT description FROM xxx WHERE xxx".
2308.16505#76
2308.16505#78
2308.16505
[ "2302.13971" ]
2308.16505#78
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
The tool can NOT give recommendations. DO NOT SELECT id information! Figure C3: Description of query tool. Tool Name: SQL Retrieval Tool Tool Description: The tool is a hard condition tool. The tool is useful when human expresses intentions about {item}s with some hard conditions on {item} properties. The input of the tool should be a one-line SQL SELECT command converted from hard conditions. Here are some rules: 1. {item} titles can not be used as conditions in SQL; 2. the tool can not find similar {item}s; 3. always use pattern match logic for columns with string type; 4. only one {item} information table is allowed to appear in SQL command; 5. select all {item}s that meet the conditions, do not use the LIMIT keyword; 6. try to use OR instead of AND. Tool Name: ItemCF Retrieval Tool Tool Description: The tool is a soft condition filtering tool. The tool can find similar {item}s for specific seed {item}s. Never use this tool if human doesn't express to find some {item}s similar with seed {item}s. There is a similarity score threshold in the tool, only {item}s with similarity above the threshold would be kept. Besides, the tool could be used to calculate the similarity scores with seed {item}s for {item}s in candidate buffer for ranking tool to refine. The input of the tool should be a list of seed {item} titles/names, which should be a Python list of strings. Do not fake any {item} names.
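To make the two retrieval tool inputs concrete, the snippet below shows what they might look like for a request such as "strategy games under $30, similar to RimWorld". The table and column names (game_information, tags, price) and the seed title are our own illustrative assumptions, not values taken from the paper.

```python
# Hard-condition input for the SQL Retrieval Tool: a one-line SELECT with pattern
# matching on string columns and no LIMIT keyword.
hard_condition_sql = (
    "SELECT * FROM game_information "
    "WHERE tags LIKE '%Strategy%' AND price <= 30"
)

# Soft-condition input for the ItemCF Retrieval Tool: a Python list of seed item titles.
soft_condition_seeds = ["RimWorld"]

print(hard_condition_sql)
print(soft_condition_seeds)
```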
2308.16505#77
2308.16505#79
2308.16505
[ "2302.13971" ]
2308.16505#79
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
# Figure C4: Description of retrieval tools. Tool Name: Ranking Tool Tool Description: The tool is useful to refine {item}s order or remove unwanted {item}s (when human tells the {item}s he doesn't want) in conversation. The input of the tool should be a json string, which may consist of three keys: "schema", "prefer" and "unwanted". "schema" represents ranking schema, optional choices: "popularity", "similarity" and "preference", indicating rank by {item} popularity, rank by similarity, rank by human preference ("prefer" {item}s). The "schema"
2308.16505#78
2308.16505#80
2308.16505
[ "2302.13971" ]
2308.16505#80
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
depends on previous tool using and human preference. If "prefer" info here not empty, "preference" schema should be used. If similarity filtering tool is used before, prioritize using "similarity" except human want popular {item}s. "prefer" represents {item} names that human likes or human history ({item}s human has interacted with), which should be an array of {item} titles. Keywords: "used to do", "I like", "prefer". "unwanted" represents {item} names that human doesn't like or doesn't want to see in next conversations, which should be an array of {item} titles. Keywords: "don't like", "boring", "interested in". "prefer" and "unwanted"
2308.16505#79
2308.16505#81
2308.16505
[ "2302.13971" ]
2308.16505#81
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
{item}s should be extracted from human request and previous conversations. Only {item} names are allowed to appear in the input. The human's feedback for you recommendation in conversation history could be regard as "prefer" or "unwanted", like "I have tried those items you recommend" or "I don't like those". Only when at least one of "prefer" and "unwanted" is not empty, the tool could be used. If no "prefer"
2308.16505#80
2308.16505#82
2308.16505
[ "2302.13971" ]
2308.16505#82
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
info, {item}s would be ranked based on the popularity. Do not fake {item}s. Figure C5: Description of ranking tool. You are an expert in {item}. There is a conversational recommendation agent. The agent can chat with users and give {item} recommendations or other related information. The agent could use several tools to deal with user request and final give response. Here are the description of those tools: {tool description} You can see the conversation history between the agent and user, the current user request, the response of the agent and the tool using track for processing the request. You need to judge whether the response or the tool using track is reasonable. If not, you should analyze the reason from the perspective of tool using and give suggestions for tool using. When giving judgement, you should consider several points below: 1. Whether the input of each tool is suitable? For example, whether the conditions of {HardFilterTool} exceed user's
2308.16505#81
2308.16505#83
2308.16505
[ "2302.13971" ]
2308.16505#83
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
request? Whether the seed items in {SoftFilterTool} is correct? Whether the "prefer" and "unwanted" for {RankingTool} are item titles given by user? Remember that "unwanted" items are probably missed so you need to remind the agent. 2. Are some tools missed? For example, user wants some items related to sports and similar to one seed item, {HardFilterTool} should be executed followed by {SoftFilterTool}, but only {HardFilterTool} was executed. 3. Are some unnecessary tools used? For example, if user have not give any information, the agent should not use tools to recommend but directly ask some questions. 4. Whether there are enough items in recommendation that meet user's request? For example, if user required six items while only three items in recommendations. You should double check the conditions input to tools. 5. Is the input of each tool consistent with the user's
2308.16505#82
2308.16505#84
2308.16505
[ "2302.13971" ]
2308.16505#84
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
intention? Are there any redundant or missing conditions? Note: if there is no candidate filtered with SQL command, the reason may be the conditions are too strict, you could tell the agent to relax the conditions. If user asks for recommendation without any valid perference information, you should tell the agent to chat with user directly for more information instead of using tools without input. Here is the conversation history between agent and user: {chat history} The current user request is: {request} The tool using track to process the request is: {plan} The response of the agent is: {answer} If the response and tool using track are reasonable, you should say
2308.16505#83
2308.16505#85
2308.16505
[ "2302.13971" ]
2308.16505#85
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
"Yes". Otherwise, you should tell the agent: "No. The response/tool using is not good because ... . You should ...". You MUST NOT give any recommendations in your response. Now, please give your judgement. # Figure C6: Prompt for critic in reflection. You are a helpful assistant and good planner. Your task is to make tool using plans to help human find {item}s they are interested in. Human requests typically fall under chit-chat, {item} info, or {item} recommendations. There are some tools to use to deal with human request. For chit-chat, respond with your knowledge. For {item} info, use the {LookUpTool}. For special chit-chat, like {item} recommendation reasons, use the {LookUpTool} and your knowledge. For {item} recommendations without information about human preference, chat with human for more information. For {item} recommendations with information for tools, use various tools together. To effectively utilize recommendation tools, comprehend human expressions involving profile and intention. Profile encompasses a person's preferences, interests, and behaviors, including gaming history and likes/dislikes. Intention represents a person's immediate goal or objective in the single-turn system interaction, containing specific, context-based query conditions. Human intentions consist of hard and soft conditions. Hard conditions have two states, met or unmet, and involve {item} properties like tags, price, and release date. Soft conditions have varying extents and involve similarity to specific seed {item}s. Separate hard and soft conditions in requests. Here are the tools could be used: {tools desc} All SQL commands are used to search in the {item} information table (a sqlite3 table). If human is looking up information of {item}s, such as the description of {item}s, number of {item}s, price of {item}s and so on, use the {LookUpTool}. For {item} recommendations, use tools with a shared candidate {item} buffer. Buffer is initialized with all {item}s. Filtering tools fetch candidates from the buffer and update it. Ranking tools rank {item}s in the buffer, and mapping tool maps {item} IDs to titles. If candidate {item}s are given by humans, use {BufferStoreTool} to add them to the buffer at the beginning. Think about whether to use tool first. If yes, make tool using plan.
2308.16505#84
2308.16505#86
2308.16505
[ "2302.13971" ]
2308.16505#86
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Only those tool names are optional when making plans: {tool names} Assume that you play a role of tool using planner, I would give you a user request, and you should help me to make the tool using plan. Here are some examples of human request and corresponding tool using plan: {examples} Now, Please make the tool using plan of below requests. Request: {request} Plan: Figure C7: Prompt for plan generation with given user intent. You are a helpful assistant. Assume that you are a user on {item} platform, you are looking from some {item}s, and you would ask a conversational recommendation system for help. You would give the request. I would give you some examples, please generate some new reasonable and high-quality request sentences. Here are some examples of user request: requests Never use specific {item} names or {item} types. Instead, use placeholders. For example, {ITEM} for names, TYPE for types, PRICE for price, DATE for date. The focus is on generating sentence patterns for questions.
2308.16505#85
2308.16505#87
2308.16505
[ "2302.13971" ]
2308.16505#87
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Now, it's your turn. Please generate {number} new request sentences. Figure C8: Prompt for input-first user intent generation. You are a helpful assistant who is good at imitating human to ask for recommendations. Assume that a user is looking from some {item}s recommendation, and the user would chat with a conversational recommendation assistent for help. And user's historical {items}s are: {history} Information about target {item} that the user are looking for: {target info} Please generate a conversation between the user and the recommendation assistent.
2308.16505#86
2308.16505#88
2308.16505
[ "2302.13971" ]
2308.16505#88
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Here are some rules: 1. Do not mention {item}s not in history. 2. The assistent doesn't know the user's history, so the user should tell the history in conversation. 3. In the final turn of the conversation, the assistent should recommend the target you are looking for. Use "⟨item⟩" as placeholder to represent the target. 4. Above information is all user know about the target item. 5. Do not give too much information in one message. 6. Keep user message short. 7. Each conversation should consist of 2-5 rounds. 8.
2308.16505#87
2308.16505#89
2308.16505
[ "2302.13971" ]
2308.16505#89
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Only the user has the information about target item in his mind. The assistent could only guess from user's messages. Use the following format: [{"role": "User", "text": "xxxxx"}, {"role": "Assistent", "text": "xxxxx"}, ...] Each item in the list is a message. And if the message mentions {item} names, add an extra key to the message dict, like:
2308.16505#88
2308.16505#90
2308.16505
[ "2302.13971" ]
2308.16505#90
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
"role": "User", "text": "xxx", "mentioned items": [ITEM1, ITEM2] Figure C9: Prompt for one-turn conversation generation for retrieval task. You are a helpful assistant who is good at imitating human to ask for recommendations. Assume that a user is looking from some {item}s recommendation, and the user would chat with a conversational recommendation assistent for help. And user's historical {items}s are: {history} The user would give {n} candidates items as below and ask the assistent to rank those candidates: {candidates} Please imitate the user to generate a question to the assistent. Here are some rules: 1. Do not mention {item}s not in history. 2.
2308.16505#89
2308.16505#91
2308.16505
[ "2302.13971" ]
2308.16505#91
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
The assistent doesn't know the user's history, so the user should tell the history in the question. 3. Give all {n} candidates in the question. 4. Keep the question short. For example, the user may ask like this format: "I enjoyed xxx in the past, now I want some new {item}s. I have some candidates in my mind: xxx. Could you please rank them based on my perference?" Now, please generate the question.
2308.16505#90
2308.16505#92
2308.16505
[ "2302.13971" ]
2308.16505#92
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Figure C10: Prompt for one-turn conversation generation for ranking task. You are a helpful assistant and good planner. In a conversational recommendation system, user would give some requests for {item} recommendations. Human requests typically fall under chit-chat, {item} info, or {item} recommendations. There are some tools to use to deal with human request. For chit-chat, respond with your knowledge. For {item} info, use the {LookUpTool}. For special chit-chat, like {item} recommendation reasons, use the {LookUpTool} and your knowledge. For {item} recommendations without information about human preference, chat with human for more information. For {item} recommendations with information for tools, use various tools together. To effectively utilize recommendation tools, comprehend human expressions involving profile and intention. Profile encompasses a personâ s preferences, interests, and behaviors, including gaming history and likes/dislikes. Intention represents a personâ s immediate goal or objective in the single-turn system interaction, containing specific, context-based query conditions. Human intentions consist of hard and soft conditions. Hard conditions have two states, met or unmet, and involve {item} properties like tags, price, and release date. Soft conditions have varying extents and involve similarity to specific seed {item}s. Separate hard and soft conditions in requests. Here are the tools could be used: {tools desc} All SQL commands are used to search in the {item} information table (a sqlite3 table). If human is looking up information of {item}s, such as the description of {item}s, number of {item}s, price of {item}s and so on, use the {LookUpTool}. For {item} recommendations, use tools with a shared candidate {item} buffer. Buffer is initialized with all {item}s. Filtering tools fetch candidates from the buffer and update it. Ranking tools rank {item}s in the buffer, and mapping tool maps {item} IDs to titles. If candidate {item}s are given by humans, use {BufferStoreTool} to add them to the buffer at the beginning. Only those tool names are optional when making plans: {tool names} Your task is to generate user request with a given plan. Never use specific {item} names or {item} types. Instead, use placeholders. For example, {ITEM} for names, TYPE for types, PRICE for price, DATE for date.
2308.16505#91
2308.16505#93
2308.16505
[ "2302.13971" ]
2308.16505#93
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
The focus is on generating sentence patterns for questions. Here are some examples of human request and corresponding tool using plan: {examples} Now, Please generate {number} new request sentences. Plan: {plan} Request 1: xxxx ... Request {number}: xxxx Figure C11: Prompt for output-first user intent generation.
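Tying the planner prompt (Figure C7) to the tool descriptions (Figures C2-C5), a generated tool-using plan could look roughly like the sketch below. This is purely a hypothetical illustration in Python-literal form: the request, table and column names are invented, and the exact plan format the agent emits is not quoted from the paper.

```python
# Hypothetical request: "I liked RimWorld; recommend 5 cheap strategy games."
plan = [
    {"tool": "SQL Retrieval Tool",
     "input": "SELECT * FROM game_information WHERE tags LIKE '%Strategy%' AND price <= 30"},
    {"tool": "ItemCF Retrieval Tool",
     "input": ["RimWorld"]},
    {"tool": "Ranking Tool",
     "input": {"schema": "similarity", "prefer": ["RimWorld"], "unwanted": []}},
    {"tool": "Candidate Fetching Tool",
     "input": 5},
]

for step in plan:
    print(step["tool"], "->", step["input"])
```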
2308.16505#92
2308.16505
[ "2302.13971" ]
2308.15126#0
Evaluation and Analysis of Hallucination in Large Vision-Language Models
{junyangwang, jtsang}@bjtu.edu.cn, {zhouyiyangailab}@gmail.com, {guohai.xgh, ym119608}@alibaba-inc.com Evaluation and Analysis of Hallucination in Large Vision-Language Models Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang. School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; School of Software Engineering, Xi'an Jiaotong University, Xi'an, China; School of Software, Shandong University, Jinan, China; MAIS, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China; DAMO Academy, Alibaba Group
2308.15126#1
2308.15126
[ "2302.13971" ]
2308.15126#1
Evaluation and Analysis of Hallucination in Large Vision-Language Models
arXiv:2308.15126v3 [cs.LG] 10 Oct 2023 # Abstract Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits their practicality in many scenarios. Hallucination refers to information in LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves approximately 95% of ChatGPT's performance and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our data and code are available at https://github.com/junyangwang0410/HaELM.
2308.15126#0
2308.15126#2
2308.15126
[ "2302.13971" ]
2308.15126#2
Evaluation and Analysis of Hallucination in Large Vision-Language Models
# 1 Introduction The success of Large Language Models (LLMs), with ChatGPT as a prominent example, has attracted widespread attention (Zhang et al., 2022; Chowdhery et al., 2022; Touvron et al., 2023; Scao et al., 2022). Recently, Large Vision-Language Models (LVLMs) extend LLMs to understand visual inputs and demonstrate impressive multi-modal capabilities in a zero-shot manner (Zhu et al., 2023; Liu et al., 2023b; Ye et al., 2023; Gong et al., 2023; Wang et al., 2023; Li et al., 2023a; Mu et al., 2023; Su et al., 2023; Liu et al., 2020). These efforts have driven the development of multi-modal artificial general intelligence.
2308.15126#1
2308.15126#3
2308.15126
[ "2302.13971" ]
2308.15126#3
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Prompt: Describe this image. Response of LVLMs Visual Input The image shows a bathroom with a toilet, sink, and a shower in the corner. There is a wooden shelving unit next to the toilet with various bottles and items placed on it, There is a small mirror hanging above the sink. The floor is made of tiles, and there is a small rug placed in front of the toilet. The overall theme of the bathroom is minimalist and clean. A man is having a good time in the shower.
2308.15126#2
2308.15126#4
2308.15126
[ "2302.13971" ]
2308.15126#4
Evaluation and Analysis of Hallucination in Large Vision-Language Models
This is an image of a person walking along the beach with their surfboard. They appear to be looking out at the ocean and the waves. The beach is sandy and there are some rocks in the water. There are some people on the beach, some swimming and some playing in the water. The sky is clear and blue and there are some clouds on the horizon. It looks like a beautiful day on the beach. Figure 1: Examples of the LVLMs' hallucination. In real-world scenarios, LVLMs may generate content that doesn't match the visual input. The words with red font represent the hallucination.
2308.15126#3
2308.15126#5
2308.15126
[ "2302.13971" ]
2308.15126#5
Evaluation and Analysis of Hallucination in Large Vision-Language Models
* Equal contribution. † Corresponding author. Work done during internship at DAMO Academy, Alibaba Group. To solve the problem of hallucination in LVLMs, (Li et al., 2023d) proposed POPE, an object-based
The model's response of "yes" is regarded as an indication of hallucination. Nevertheless, our investigation, as shown in Figure 2, reveals that LVLMs tend to answer "yes" to over 80% of queries about non-existent objects. In contrast, when the prompt "Describe the image" is adopted, less than 10% of the resultant responses include the hallucinated objects. This discrepancy underscores the weak correlation between object-based hallucination evaluation and the actual hallucination behavior of LVLMs.

The above analysis demonstrates that in idealized hallucination evaluation scenarios, LVLMs are highly susceptible to the influence of prompts, leading to biased responses that cannot be used as a basis for hallucination evaluation. Therefore, we advocate conducting hallucination evaluation within real-world scenarios to avoid the negative impact of prompts on the evaluation results. However, one challenge is that the responses of LVLMs in real-world scenarios tend to be complex, which implies that traditional match-based evaluation methods are no longer applicable. This means that the evaluation tool needs to understand the complex responses of LVLMs.

We notice that LLMs demonstrate powerful text-understanding capabilities. Based on this, we propose an innovative framework called Hallucination Evaluation based on Large Language Models (HaELM). First, we identify the hallucination patterns exhibited by LVLMs and systematically collect their hallucination responses. Subsequently, we craft prompts that elicit responses from ChatGPT aligned with these patterns to collect the pertinent training data. Finally, we fine-tune LLaMA (Touvron et al., 2023) through the LoRA-based methodology (Hu et al., 2021). As a result, HaELM becomes proficient in hallucination evaluation, leveraging reference descriptions of images as a basis for assessment. Experimental results attest to the comparable performance of HaELM and ChatGPT, exhibiting alignment with human annotations. In addition, HaELM has additional advantages including low cost, reproducibility, privacy preservation and local deployment.
Finally, we conduct a comprehensive analysis of the factors contributing to hallucination generation in current LVLMs, culminating in a set of suggestions for mitigating hallucination. We summarize the contributions as follows:

• Through our analysis, we discover that LVLMs are easily influenced by prompts in idealized hallucination scenarios, making the results uncorrelated with hallucinations in real-world scenarios.

• To the best of our knowledge, we are the first to utilize an LLM for hallucination evaluation within LVLMs. We propose Hallucination Evaluation based on Large Language Models (HaELM). HaELM achieves strong performance and has additional advantages including low cost, reproducibility, privacy preservation and local deployment compared to ChatGPT.
• Leveraging HaELM, we evaluate the presence of hallucination in current LVLMs. We analyze the factors that affect hallucination and offer helpful suggestions.

# 2 Background

In this section, we introduce existing Large Language Models (LLMs) and Large Vision-Language Models (LVLMs), as well as the hallucination problems that exist in LLMs and LVLMs.

# 2.1 Large Language Model

GPT-3 (Brown et al., 2020) has demonstrated that language models with a large number of parameters possess powerful zero-shot capabilities and are capable of excelling at previously unseen tasks. Thanks to the success of GPT-3, LLMs (Zhang et al., 2022; Chowdhery et al., 2022; Touvron et al., 2023; Scao et al., 2022) have gained significant attention. To make LLMs more responsive to human instructions, InstructGPT (Ouyang et al., 2022) introduced the instruction-following fine-tuning paradigm. It employs reinforcement learning from human feedback to train LLMs to follow human instructions and produce desired outputs.

# 2.2 Large Vision-Language Model

With the success of LLMs, many researchers have been extending language models to understand real-world images. For example, some approaches (Yang et al., 2023; Shen et al., 2023) are based on visual experts and regard ChatGPT as the central controller.
Figure 2: The validity assessment results of object-based hallucination evaluation. QH represents the percentage of cases in which we asked about the corresponding item on images where it was not present; AY represents the percentage for which the model answered "yes", and CH represents the percentage for which the model hallucinated the corresponding item in its responses.

On the other hand, some recent open-source works such as (Zhu et al., 2023; Liu et al., 2023b; Ye et al., 2023; Gong et al., 2023; Wang et al., 2023; Li et al., 2023a; Mu et al., 2023; Su et al., 2023) achieve unified LVLMs by aligning visual tokens extracted by a visual encoder with a pre-trained LLM and instruction-tuning the result. To further improve the performance of LVLMs, (Liu et al., 2023a; Li et al., 2023c) proposed to increase the diversity of instructions and to construct larger instruction fine-tuning datasets.

# 2.3 Hallucinations in LLMs and LVLMs

The issue of hallucinations has been extensively studied in the traditional field of NLP. Despite the advancements in the latest and widely acclaimed LLMs, they remain encumbered by the persistent challenge of hallucinations. Consequently, a multitude of works have emerged aiming to mitigate the impact of these hallucinations. However, limited focus has been directed toward addressing hallucination in LVLMs (Zhou et al., 2023; Liu et al., 2023a). In contrast to hallucinations observed in LLMs, hallucinations within LVLMs arise from a mismatch between the visual and textual modalities. Currently, the only work that specifically focuses on the hallucination of LVLMs utilizes object detection and query instructions (Li et al., 2023d).
Through meticulous empirical experiments, they substantiate the considerable severity of hallucinations in LVLMs, particularly in generating objects that are absent from the provided images but appear frequently in the training data. Existing LLMs, by adopting instruction tuning, make their target outputs follow human instructions, but this can result in biased training and target distributions (Tian et al., 2023). Furthermore, insufficient visual constraints contribute to the serious issue of hallucinations in LVLMs.

The presence of hallucinations can make models unreliable, which may cause harm to human society: misleading outputs can lead to errors in human decision-making, and models may also output toxic information.
# 3 Motivation

The existing method for hallucination evaluation is object-based hallucination evaluation (Li et al., 2023d). It measures the extent of hallucination in LVLMs by querying their response to the presence of an "item". The "item" is chosen from a list of commonly hallucinated words that do not exist in the image. If the model believes that an item is present in an image where it is absent, this indicates that the model hallucinates regarding that item.

To verify the feasibility of this approach, we designed an experiment based on the object-based hallucination evaluation method. We utilized the prompt "Is there a {item} in this photo?" to query mPLUG-Owl regarding 100 randomly selected images from the MS-COCO 2014 dataset (Lin et al., 2014; Chen et al., 2015). Results for the other models and detailed per-item results are provided in the appendix.
The {item} in the prompt was substituted with the top ten most frequently hallucinated words proposed by (Li et al., 2023d) that are not present in the given image. The results are presented in Figure 2. The "QH" and "AY" values reveal that LVLMs answer "yes" to over 80% of the queries under this prompt, even though all the items in the prompts were absent from the image.

This phenomenon can be explained by the tendency of LVLMs to affirm the description when answering judgment-type queries with a "yes" response. We speculate that this bias is due to the instruction fine-tuning data, which includes a substantial number of responses catering to human requests, resulting in biased LVLM responses to judgment-type queries. To verify the relationship between the responses of LVLMs to such queries and their corresponding hallucinations, we conducted a manual evaluation in real-world scenarios.
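Before turning to that manual check, the following is a minimal sketch of the object-based probing loop described above (the QH/AY counting of Figure 2). The `ask_lvlm` helper, the item list wiring, and the annotation format are hypothetical placeholders standing in for whichever LVLM and dataset loader are actually used.

```python
# Sketch of object-based ("Is there a {item} in this photo?") probing.
# ask_lvlm(image_path, prompt) is a hypothetical wrapper around the evaluated LVLM.
ITEMS = ["person", "table", "chair", "car", "book",
         "bottle", "cup", "cat", "horse", "toilet"]

def probe_absent_objects(images_with_annotations, ask_lvlm):
    asked, answered_yes = 0, 0
    for image_path, present_objects in images_with_annotations:
        for item in ITEMS:
            if item in present_objects:   # only query objects absent from the image
                continue
            asked += 1
            reply = ask_lvlm(image_path, f"Is there a {item} in this photo?")
            if reply.strip().lower().startswith("yes"):
                answered_yes += 1
    return asked, answered_yes            # corresponds to the QH and AY counts
```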
We used the prompt "Describe this image" and examined whether the generated descriptions truly contained hallucinations for the items that received a "yes" response. The "AY" and "CH" values in Figure 2 reveal that only about 10% of the responses included hallucinations for those specific items. This suggests that the hallucinations measured by object-based evaluation merely exploit the judgment bias present in LVLMs, rather than reflecting their actual hallucination behavior.

# 4 Method

This section introduces the definition of hallucination and our method of Hallucination Evaluation based on Large Language Models.

# 4.1 Problem Definition

The evaluation of hallucinations in real-world scenarios for LVLMs is defined as determining whether there are discrepancies between the content of the images and the responses generated by LVLMs, under the potential requests that could be initiated by humans. In this paper, we focus on the real-world scenario of image description.

# 4.2 HaELM Data Collection

To perceive hallucinations in the responses of LVLMs, it is crucial to evaluate on both non-hallucinatory and hallucinatory responses. To address this, we first analyze the hallucination patterns of LVLMs. Randomly selecting images, we query the LVLMs with the instruction "Describe this image" and manually annotate the hallucination responses to obtain the realistic hallucination collection, as shown in the left of Figure 3.

Figure 3: The illustration of the data collection process of HaELM. The left part illustrates the process of manually collecting real hallucination responses, while the right part illustrates the generation of data in bulk using ChatGPT. The human similarity assessment aims to align the patterns of the simulated hallucination data with the realistic ones.

Subsequently, our goal is to obtain a substantial amount of hallucination data in bulk. We considered using ChatGPT to generate hallucinations by manually constructing prompts based on the reference captions of the provided images.
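A hedged sketch of this bulk generation step with the OpenAI chat API is shown below. The prompt wording follows the example in Figure 4, but the model name, batching, and error handling are illustrative assumptions rather than the authors' exact settings.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

HALLU_PROMPT = (
    "An image is described in several reference descriptions below:\n{captions}\n"
    "Generate a two- to three-sentence description of the image based on the "
    "reference descriptions and include a few obvious incorrect associations."
)

def generate_simulated_hallucination(reference_captions, model="gpt-3.5-turbo"):
    prompt = HALLU_PROMPT.format(
        captions="\n".join(f"- {c}" for c in reference_captions)
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Non-hallucination samples can be collected the same way, with an instruction that forbids introducing objects not present in the reference captions.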
We compared the hallucination data generated by ChatGPT with the realistic hallucination data through human similarity assessment. We iteratively modified the prompt to make the patterns of the two align closely, as shown in the right of Figure 3. Our hallucination data collection format is presented in Figure 4.

Finally, we collect the non-hallucination data. By requesting ChatGPT to generate detailed descriptions based on the reference captions, we can easily obtain the desired data. However, it is crucial to emphasize that the generated descriptions should strictly adhere to the objects present in the reference captions, without introducing any non-existent elements.

Training and Inference

During the training phase, we employ a consistent prompt format that corresponds to the data distribution of LLMs and instruction fine-tuning. The collected data from the preceding step is seamlessly integrated into the prompt, serving as the training data for fine-tuning the LLM through an autoregressive training process. During the inference phase, we incorporate the reference captions and the responses from the LVLMs under evaluation into the prompt.
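The paper does not publish the exact evaluation prompt template, so the following is only an assumed illustration of how reference captions and an LVLM response might be combined into a single instruction-style prompt whose target output is "yes" or "no"; the template wording is hypothetical.

```python
EVAL_TEMPLATE = (
    "Below are reference descriptions of an image:\n{references}\n\n"
    "Below is a response generated by a model for the same image:\n{response}\n\n"
    "Does the response contain content that conflicts with the reference "
    "descriptions? Answer yes or no."
)

def build_example(reference_captions, lvlm_response, label=None):
    prompt = EVAL_TEMPLATE.format(
        references="\n".join(f"- {c}" for c in reference_captions),
        response=lvlm_response.strip(),
    )
    # During training, `label` is "yes" (hallucination) or "no"; at inference it is
    # left empty and the fine-tuned model generates the judgment itself.
    return {"input": prompt, "output": label or ""}
```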
Reference captions:
- A nighttime parade with a decorated elephant walking in it.
- Dressed for a festival, an elephant parades down a street.
- An elephant being led by a man in front of a crowd of onlookers.
- A procession of a decorated elephant in front of a crowd.
- A man riding a decorated elephant following a woman walking in front of him.

Prompt for simulated hallucination data generation:
An image is described in several reference descriptions below: [reference captions here] Generate a two- to three-sentence description of the image based on the reference descriptions and include a few obvious incorrect associations.

Response of ChatGPT:
A festive procession takes place at night, featuring a beautifully adorned elephant parading down a street. The elephant is being led by a woman, with a man riding on its back. The crowd of onlookers is cheering and throwing confetti, despite the fact that it is raining heavily.
Figure 4: An example of the prompt for generating simulated hallucination samples, where the words with red font represent the hallucinated description.

These inputs are then fed into the meticulously trained evaluation model to obtain the judgment. HaELM can be reused multiple times once data collection and training are finished, which offers a cost advantage over ChatGPT while ensuring reproducibility. Furthermore, HaELM is built upon an open-source LLM, allowing for local deployment, thereby eliminating the need to upload data and guaranteeing data privacy.

Implementation Details

We employed LLaMA (Touvron et al., 2023) as the foundation model and utilized LoRA (Hu et al., 2021) for fine-tuning. Our hyperparameters are presented in Table 8 of the appendix. The training process required 2 hours using a single Tesla V100 GPU. For the evaluated models, we selected the currently available open-source LVLMs: mPLUG-Owl (Ye et al., 2023), MiniGPT-4 (Zhu et al., 2023) and LLaVA (Liu et al., 2023b). Their parameter settings are presented in Table 7 of the appendix. We chose the state-of-the-art LLM, ChatGPT, as our baseline.

# 5 Experiments

Dataset

Our image dataset consists exclusively of images from MS-COCO 2014 (Lin et al., 2014; Chen et al., 2015), following the established partition into train, val and test sets as outlined by (Karpathy and Fei-Fei, 2015). For data collection purposes, we randomly select 10,000 samples from the training set and collect 10,000 hallucination and 10,000 non-hallucination simulated responses, respectively. Additionally, we use all 5,000 samples from the test set specifically for evaluating the LVLMs' hallucinations. To ensure consistency and accuracy in our data collection and hallucination evaluation, we use the manually annotated captions provided in the dataset as reference captions.

To ensure the model's focus on hallucination evaluation, we disabled gradient computations on the input, preventing the learning of irrelevant information. Furthermore, our training data outputs were explicitly limited to "yes" or "no" responses, effectively benefiting the automated evaluation.
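A minimal sketch of LoRA fine-tuning with the Hugging Face peft library is given below, including the masking of prompt tokens from the loss (the "train on input" switch mentioned in the appendix). The model path and hyperparameter values are placeholders, not the authors' exact settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b")      # placeholder path
model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # adapt the Q and V projection matrices
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

def encode_example(example, max_len=512):
    prompt_ids = tokenizer(example["input"], add_special_tokens=False).input_ids
    answer_ids = tokenizer(example["output"], add_special_tokens=False).input_ids
    input_ids = (prompt_ids + answer_ids)[:max_len]
    # Mask the prompt part with -100 so the autoregressive loss only covers the
    # "yes"/"no" answer, i.e. "train on input" is turned off.
    labels = ([-100] * len(prompt_ids) + answer_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}
```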
When evaluating hallucinations with ChatGPT, we further enhanced its accuracy through manual prompt editing, ensuring a fair basis for comparison. Notably, we refrained from employing the manually annotated real hallucination data in the training process to uphold the integrity and reliability of our experimental findings.

# 5.1 Evaluation on HaELM

In this subsection, we first evaluate the performance of HaELM. As we are the first to utilize an LLM for hallucination evaluation, we select the highly competitive ChatGPT as our baseline for comparative analysis.
| Method | w/o hall. LL | w/o hall. Mi | w/o hall. mP | w/o hall. Avg. | w/ hall. LL | w/ hall. Mi | w/ hall. mP | w/ hall. Avg. | all LL | all Mi | all mP | all Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GPT-3.5 | 82.0 | 38.9 | 50.8 | 57.2 | 48.7 | 78.1 | 72.9 | 66.6 | 69.0 | 64.0 | 59.0 | 64.0 |
| HaELM | 93.4 | 61.1 | 60.1 | 71.5 | 25.6 | 57.8 | 43.2 | 42.2 | 67.0 | 59.0 | 57.0 | 61.0 |

Table 1: The results of accuracy on human-annotated evaluation data for HaELM and GPT-3.5, where LL, Mi, and mP respectively represent LLaVA, MiniGPT-4, and mPLUG-Owl.

| | LLaVA P | LLaVA R | LLaVA F1 | MiniGPT-4 P | MiniGPT-4 R | MiniGPT-4 F1 | mPLUG-Owl P | mPLUG-Owl R | mPLUG-Owl F1 |
|---|---|---|---|---|---|---|---|---|---|
| w/o hallucination, GPT-3.5 | 71.4 | 82.0 | 76.3 | 50.0 | 38.9 | 43.8 | 76.2 | 50.8 | 61.0 |
| w/o hallucination, HaELM | 66.3 | 93.4 | 77.5 | 44.9 | 61.1 | 51.8 | 66.1 | 65.1 | 65.6 |
| w/ hallucination, GPT-3.5 | 63.3 | 48.7 | 55.0 | 69.4 | 78.1 | 73.5 | 46.6 | 73.0 | 56.8 |
| w/ hallucination, HaELM | 71.4 | 25.6 | 37.7 | 72.5 | 57.8 | 64.3 | 42.1 | 43.2 | 42.7 |
| average, GPT-3.5 | 67.4 | 65.4 | 65.6 | 59.7 | 58.5 | 58.7 | 61.4 | 61.9 | 58.9 |
| average, HaELM | 68.9 | 59.5 | 57.6 | 58.7 | 59.5 | 58.1 | 54.1 | 54.2 | 51.7 |

Table 2: The results on human-annotated evaluation data for HaELM and GPT-3.5 in terms of precision, recall, and F1 score for hallucination and non-hallucination responses.
Given the absence of an established benchmark, we use the realistic hallucination responses derived from LVLMs during the data collection phase as the evaluation benchmark and the annotations as the ground truth.

Accuracy

We first compared the accuracy. The experimental results on human-annotated hallucination, non-hallucination and overall responses are summarized in Table 1. Notably, HaELM achieves an accuracy of 61%, slightly lower than ChatGPT's performance of 64%. Nevertheless, HaELM demonstrates an impressive capability, reaching 95% of ChatGPT's level.

We also noticed that HaELM performs better on non-hallucination responses, while ChatGPT performs better on hallucination responses. This reflects the biases in the decision-making of the two methods: ChatGPT tends to believe that responses contain hallucinations, while HaELM leans towards non-hallucination judgments. We analyzed that although the simulated hallucination responses cover most hallucination patterns, they still cannot fully match the distribution of actual hallucination responses. Therefore, HaELM fails to learn some patterns of hallucination, resulting in misclassification under these patterns.

Refined Metrics

We then proceeded to evaluate the refined metrics, including precision, recall, and F1 scores, as shown in Table 2. The average F1 scores reveal that HaELM reaches 88%, 99%, and 88% of ChatGPT's performance level on the three LVLMs, respectively. Additionally, as mentioned in the previous analysis, the recall for hallucination responses is lower for HaELM. Nevertheless, despite this limitation, HaELM outperforms ChatGPT on several metrics.

Time & Cost

HaELM only requires one-time data collection and training for reuse, allowing significant time and cost savings in subsequent evaluation processes compared to ChatGPT. We present the cost comparison between the two in Table 3.

HaELM requires only 3.8 hours and 4.3$ for data collection and training, resulting in a saving of 1.4 hours and 6.6$ per evaluation compared to ChatGPT. This advantage becomes more significant when multiple evaluations are needed, such as exploring the impact of prompts on hallucinations. Additionally, HaELM can be deployed locally, eliminating the need for internet connectivity and ensuring data and privacy protection.
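For reference, the accuracy, precision, recall, and F1 numbers reported in Tables 1 and 2 can be computed from paired yes/no judgments with standard tooling; a small sketch using scikit-learn is shown below, treating "hallucination" ("yes") as the positive class. The data format is an assumption for illustration.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def score_judgments(human_labels, model_judgments):
    # Both lists contain "yes" (hallucination) / "no" strings.
    y_true = [1 if y == "yes" else 0 for y in human_labels]
    y_pred = [1 if y == "yes" else 0 for y in model_judgments]
    acc = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", pos_label=1, zero_division=0
    )
    return {"accuracy": acc, "precision": p, "recall": r, "f1": f1}
```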
HaELM requires only 3.8 hours and 4.3$ for data collection and training, resulting in a saving of 1.4 hours and 6.6$ per evaluation compared to ChatGPT. This advantage becomes more signifi- cant when multiple evaluations are needed, such as exploring the impact of prompts on hallucina- tions. Additionally, HaELM can be deployed lo- cally, eliminating the need for internet connectivity 6 Method Collection Training *Evaluation Time Cost Time Cost Time Cost GPT3.5 HaELM 1.8h - - 4.3$ - 2h - - 1.6h 0.2h 6.6$ - Table 3: The time and cost of hallucination evaluation for HaELM and ChatGPT. *Evaluation represents a single evaluation conducted on three LVLMs.
# 5.2 Evaluation on Hallucination

In this subsection, we employ HaELM to evaluate the hallucination performance of existing LVLMs. Additionally, we explore the correlation between various generation settings and hallucinations in LVLMs, thereby presenting viable suggestions to mitigate hallucinations.

Comparison on LVLMs

We evaluate the hallucination of LVLMs across various generation prompts. The experimental results are shown in Table 4. Firstly, among these three LVLMs, LLaVA exhibits the lowest degree of hallucination and the lowest sensitivity to prompts, far below the other two models. However, the manually annotated results of previous work (Ye et al., 2023) indicate that LLaVA performs the worst in various aspects. This observation aligns with our understanding of LVLMs: the generation of hallucination is often positively correlated with a model's generative capability. For example, hallucinations are almost impossible in VLPMs. Therefore, there exists a trade-off between model performance and hallucination, which deserves more effort from researchers during model selection.

Secondly, it can be observed that both MiniGPT-4 and mPLUG-Owl suffer from severe hallucination issues. The performance of these two models is highly dependent on the choice of prompts. This means that prompt selection should be done carefully when using these powerful LVLMs.

Comparison on Generation Length

We noticed in Table 4 that using the prompt "Generate a caption for this image." resulted in a minimal amount of hallucination. We collected responses from LVLMs under this prompt and observed that these responses were relatively shorter and more concise. We hypothesize that the generation length of LVLMs' responses may be related to hallucination.

| Model | P1 | P2 | P3 | P4 | Avg-M |
|---|---|---|---|---|---|
| LLaVA | 20.0 | 19.4 | 18.6 | 19.5 | 19.4 |
| MiniGPT-4 | 46.1 | 35.5 | 69.7 | 68.8 | 55.0 |
| mPLUG-Owl | 35.9 | 24.1 | 47.2 | 37.6 | 36.2 |
| Avg-P | 34.0 | 26.3 | 45.2 | 42.0 | - |
Table 4: Hallucination evaluation results for the LVLMs. The numbers represent the frequency of hallucinations exhibited by the respective LVLM when using the corresponding generation prompt on the MS-COCO 2014 test split. "Avg-M" is the average hallucination ratio of the corresponding model across prompts, while "Avg-P" is the average hallucination ratio of the corresponding prompt across models. P1: "Describe this image." P2: "Generate a caption for this image." P3: "Please restore the scene in the image with words." P4: "What is this?"
To validate this idea, we conducted experiments with mPLUG-Owl by selecting different maximum generation lengths and using the prompt "Describe this image." for generation. The experimental results are shown in Table 5.

| Max length | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|
| Hallucination | 33.1 | 35.7 | 35.9 | 37.0 |

Table 5: The result of the comparison on generation length.

We observed that as the maximum length increased, the hallucination became stronger. We manually collected a portion of responses with a maximum generation length of 1024 and found that hallucinations tended to occur more toward the latter part of the responses. In this pattern of hallucination, LVLMs often generate a concise segment first, followed by a divergence of imagination. However, this is not always the case, as the examples shown in Figure 1 also demonstrate that LVLMs can generate hallucinations in the earlier parts. Therefore, this represents only a trend. We suggest that relatively accurate results can be obtained by truncating the responses.

Comparison on Sampling

Sampling can control LVLMs to generate diverse responses. The current mainstream sampling method is top-K sampling, which randomly samples from the top K words with the highest probabilities at each step. To investigate the impact of sampling on hallucinations, we controlled the value of K in top-K sampling and conducted experiments. The experimental results are presented in Table 6.
| K | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Hallucination | 24.7 | 33.0 | 35.9 | 40.3 | 42.4 |

Table 6: The result of the comparison on K in top-K sampling.

Clearly, as K increases, the hallucination issue becomes more severe. Random sampling may cause LVLMs to choose tokens that are less aligned with the visual input, resulting in factual errors. These errors can then be rationalized by the LLM, ultimately forming hallucinations. There is thus a trade-off between diversity and hallucination.
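To make the knob being varied in Table 6 concrete, here is a small sketch of top-K sampling over next-token logits; the logits tensor is assumed to come from the LVLM's language-model head.

```python
import torch

def sample_top_k(logits: torch.Tensor, k: int = 3, temperature: float = 1.0) -> int:
    """Sample one token id from the k most probable next tokens."""
    scaled = logits / temperature
    topk_vals, topk_idx = torch.topk(scaled, k)
    probs = torch.softmax(topk_vals, dim=-1)        # renormalize over the k candidates
    choice = torch.multinomial(probs, num_samples=1)
    return topk_idx[choice].item()

# With k = 1 this reduces to greedy decoding, which Table 6 associates with the
# lowest hallucination rate; larger k increases diversity and hallucination.
```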
# 6 Discussion

A comprehensive understanding of the causes behind hallucination in LVLMs remains elusive, as no previous work has been able to provide a definitive explanation. In this section, we aim to shed light on this phenomenon by delving into an analysis of attention using a visualization technique.

We leverage gradients to visualize the attention of each generated token with respect to the previously generated tokens and the image. Specifically, we begin by disabling random sampling to ensure the stability of model generation and record the model's generated response. Subsequently, we use this response as the label for gradient backpropagation, ultimately obtaining gradients with respect to the input embeddings. Finally, we normalize the gradient variations to obtain the attention. In Figure 5, we show an example of hallucination.

Prompt: Describe this image. Response: The image depicts a busy city street with a group of people riding bicycles. There are at least 12 bicycles visible in the scene, with some of them positioned closer to the foreground and others further back.

Figure 5: We visualize the attention of the LVLM during autoregressive generation. In the attention map, the horizontal axis represents the tokens to be generated, and the vertical axis represents the tokens that have already been generated. "<Img>" represents the average attention on the image, and "<sp>" represents the space token.

We observe that during the generation of the hallucinated "12", the model exhibits minimal attention to the image (highlighted by the red box in Figure 5). Additionally, the attention of token "1" is primarily focused on the preceding token "<sp>", and the attention of token "2" is also not concentrated on relevant regions. It is possible that the tokens "<sp>" and "1" appeared frequently during the training phase, leading the model to learn a biased, false correlation. This inherent bias in the LVLM causes the attention during the generation of certain tokens to deviate from the image.

This finding is insightful and carries significant implications. It demonstrates that one possible approach to addressing hallucinations could be to penalize attention that deviates from the image. This will be further explored in our future work.

# 7 Conclusion

In this paper, we analyzed the problems with the existing hallucination evaluation method and proposed HaELM, a hallucination evaluation framework based on LLMs designed for real-world scenarios. We demonstrated through experiments that HaELM achieves performance comparable to that of ChatGPT. Building upon HaELM, we conducted analyses of the causes of hallucinations and provided corresponding suggestions to mitigate them. Additionally, our visualization results may hold insightful implications for future research.
# 8 Limitations

Firstly, both HaELM and ChatGPT fall short of achieving human-level hallucination evaluation performance. We attribute this to the fact that current methods are based on language models, using reference captions as a substitute for the images themselves. This means that the evaluation models cannot truly comprehend the content of the images. Moreover, we have also attempted to use multimodal models for evaluation; unfortunately, current LVLMs commonly exhibit hallucinations themselves. Therefore, at this stage, language models remain the optimal choice for hallucination evaluation.

Secondly, we did not address the root cause of hallucinations in LVLMs. In this paper, we investigated the triggers of hallucination; based on this, substantive methods should be established through the analysis of these triggers to reduce the model's learning of hallucination patterns during the training phase. Currently, this is a challenging task for us, but it remains part of our future work.

# References

Ali Furkan Biten, Lluis Gomez, and Dimosthenis Karatzas. 2022. Let there be a clock on the beach: Reducing object hallucination in image captioning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1381–1390.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020.
Currently, this is a challenging task for us, but it will remain one of our future work. # References Ali Furkan Biten, Lluis Gomez, and Dimosthenis Karatzas. 2022. Let there be a clock on the beach: Re- ducing object hallucination in image captioning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1381â 1390. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020.
Language models are few-shot learners. Advances in neural information processing systems, 33:1877â 1901. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022.
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. 2023. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021.
Lora: Low-rank adap- tation of large language models. arXiv preprint arXiv:2106.09685. Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- In Proceedings of the IEEE conference on tions. computer vision and pattern recognition, pages 3128â 3137. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. 2023a.
Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726. Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023b. Halueval: A large- scale hallucination evaluation benchmark for large language models. arXiv e-prints, pages arXivâ 2305. Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, and Qi Liu. 2023c. M3it: A large-scale dataset towards multi- modal multilingual instruction tuning. arXiv preprint arXiv:2306.04387. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023d. Eval- uating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014.
Microsoft coco: In European confer- Common objects in context. ence on computer vision, pages 740â 755. Springer. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023a. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565. Fuxiao Liu, Yinghan Wang, Tianlu Wang, and Vicente Ordonez. 2020. Visual news: Benchmark and chal- lenges in news image captioning. arXiv preprint arXiv:2010.03743. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. arXiv preprint arXiv:2304.08485. Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo. 2023. Embodiedgpt: Vision- language pre-training via embodied chain of thought. arXiv preprint arXiv:2305.15021. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instruc- tions with human feedback. Advances in Neural Information Processing Systems, 35:27730â
27744. 9 Teven Le Scao, Angela Fan, Christopher Akiki, El- lie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176b- parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugging- gpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580. Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Pandagpt: One Wang, and Deng Cai. 2023. model to instruction-follow them all. arXiv preprint arXiv:2305.16355. Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023.
Just ask for cali- bration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023.
Llama: Open and effi- cient foundation language models. arXiv preprint arXiv:2302.13971. Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. 2023. Vision- llm: Large language model is also an open-ended arXiv preprint decoder for vision-centric tasks. arXiv:2305.11175. Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. 2023.
Mm- react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, An- wen Hu, Pengcheng Shi, Yaya Shi, et al. 2023. mplug-owl: Modularization empowers large lan- guage models with multimodality. arXiv preprint arXiv:2304.14178. Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. 2023. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher De- wan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. 10 Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. arXiv preprint arXiv:2310.00754. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592. # Appendix # A Evaluated LVLMs We present detailed parameter settings of the evalu- ated LVLMs, as shown in Table 7. Model VE AN LLM mPLUG-Owl ViT-L/14 Attention LLaMA-7B Vicuna-13B MiniGPT-4 LLaMA-13B LLaVA ViT-G/14 ViT-L/14 Linear Linear Table 7: The detailed parameter settings of the evaluated LVLMs, where VE, AN, LLM stand for Visual Encoder, Alignment Network and Large Language Model, respec- tively. base model batch size epoch learning rate max input length LoRA r LoRA alpha LoRA dropout LoRA module train on input train with fp16 Table 8: The detailed parameter settings. # B Training Details We present detailed parameter settings of the LoRA fine-tuning during the training phase, as shown in Table 8. Due to the insufficient 32GB memory of the Tesla V100 to accommodate a batch size of 64, we used a batch size of 8 with a gradient accumulation of 8 steps to achieve an equivalent batch size of 64. When "train on input" is turned off, the self- regressive loss will no longer consider the input
Item person table chair car book bottle cup cat horse toilet sum QH AY CH 48 45 14 87 45 3 89 84 23 94 92 17 96 96 4 89 89 10 97 91 10 98 82 1 96 9 0 96 84 0 890 717 82 Table 9: The detailed validity assessment results of object-based hallucination evaluation method by mPLUG-Owl. Item person table chair car book bottle cup cat horse toilet sum QH AY CH 48 22 6 87 49 7 89 51 13 94 58 10 96 49 2 89 44 0 97 47 3 98 45 3 96 21 0 96 46 1 890 432 46 Table 10: The detailed validity assessment results of object-based hallucination evaluation method by MiniGPT-4. Item person table chair car book bottle cup cat horse toilet sum QH AY CH 48 42 8 87 49 2 89 83 16 94 91 9 96 95 2 89 82 4 97 94 8 98 92 0 96 38 0 96 87 0 890 753 49 Table 11:
The detailed validity assessment results of object-based hallucination evaluation method by LLaVA. part. In addition, fp16 can accelerate training with almost no impact, so we chose to enable it. We adopted the settings from Vicuna for LoRA and replaced the weights of the Q and V matrices. likely to be part of hallucinations. Therefore, we recommend considering a low temperature if the authenticity of the generated texts needs to be en- sured. # C Additional Evaluation on Hallucination temperture 0.2 0.4 0.6 0.8 1 hallucination 24.7 26.6 31.1 33.0 35.9 The temperature in LLMs generation parameters refers to the parameter that controls the randomness of language model generation during text genera- tion. It is a parameter that controls randomness and can influence the diversity and creativity of model generation to a certain extent.
Table 12: The result of comparison on temperture. # D Detailed Results In principle, the temperature parameter recali- brates the probability distribution of model output, making the probability distribution more evenly distributed. In high-temperature conditions, more probabilities are assigned to lower probabilities, so the generated text is more diverse. In low- temperature conditions, more probabilities are as- signed to high-probability results, so the generated text tends to have common patterns. We conducted experiments We present detailed results of the object-based hal- lucination evaluation. mPLUG-OWl, MiniGPT-4, and LLaVA are shown in Table 9, Table 10, and Ta- ble 11, respectively. In the table, QH represents the number of times we asked about the corresponding item on images where it was not present; AY rep- resents the number of times the model answered "yes", and CH represents the number of times the model had hallucinations of the corresponding item in the generated captions. to investigate whether the diversity brought by high tempera- tures would enhance the generation of hallucina- tions. The results are shown in Table 12. It can be seen from the results that the hallucinations of the model are enhanced with the increase in temper- ature, which is consistent with our intuitive judg- ment. The enhancement of diversity may lead to the generation of unreasonable texts, which are We observed that the conclusions obtained from the main text apply to almost all LVLMs, indicat- ing that the limitations of object-based hallucina- tion evaluation are not accidental. We realized that LVLMs are highly susceptible to prompt induc- tion in artificially constructed ideal hallucination scenarios.
# Vector Search with OpenAI Embeddings: Lucene Is All You Need

Jimmy Lin,1 Ronak Pradeep,1 Tommaso Teofili,2 Jasper Xian1
1 David R. Cheriton School of Computer Science, University of Waterloo
2 Department of Engineering, Roma Tre University

# Abstract

We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture.
This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.

# Introduction

Recent advances in the application of deep neural networks to search have focused on representation learning in the context of the so-called bi-encoder architecture, where content (queries, passages, and even images and other multimedia content) is represented by dense vectors (so-called "embeddings"). Dense retrieval models using this architecture form the foundation of retrieval augmentation in large language models (LLMs), a popular and productive approach to improving LLM capabilities in the broader context of generative AI (Mialon et al., 2023; Asai et al., 2023). The dominant narrative today is that since dense retrieval requires the management of a potentially large number of dense vectors, enterprises require a dedicated "vector store" or "vector database" as part of their "AI stack". There is a cottage industry of startups that are pitching vector stores as novel, must-have components in a modern enterprise architecture; examples include Pinecone, Weaviate, Chroma, Milvus, Qdrant, just to name a few. Some have even argued that these vector databases will replace the venerable relational database.1

The goal of this paper is to provide a counterpoint to this narrative. Our arguments center around a simple cost-benefit analysis: since search is a brownfield application, many organizations have already made substantial investments in these capabilities. Today, production infrastructure is dominated by the broad ecosystem centered around the open-source Lucene search library, most notably driven by platforms such as Elasticsearch, OpenSearch, and Solr. While the Lucene ecosystem has admittedly been slow to adapt to recent trends in representation learning, there are strong signals that serious investments are being made in this space. Thus, we see no compelling reason why separate, dedicated vector stores are necessary in a modern enterprise. In short, the benefits do not appear to justify the cost of additional architectural complexity. It is important to separate the need for capabilities from the need for distinct software components.
While hierarchical navigable small-world network (HNSW) indexes (Malkov and Yashunin, 2020) 1 https://twitter.com/andy_pavlo/status/1659740200266870787 represent the state of the art today in approximate nearest neighbor searchâ the most important operation for vector search using embeddingsâ it is not clear that providing operations around HNSW indexes requires a separate and distinct vector store. Indeed, the most recent major release of Lucene (version 9, from December 2021) includes HNSW indexing and vector search, and these capabilities have steadily improved over time. The open-source nature of the Lucene ecosystem means that advances in the core library itself will be rapidly adopted and integrated into other software platforms within the broader ecosystem. The growing popularity of so-called embedding APIs (Kamalloo et al., 2023) further strengthens our arguments. These APIs encapsulate perhaps the most complex and resource-intensive aspect of vector searchâ the generation of dense vectors from pieces of content. Embedding APIs hide model training, deployment, and inference behind the well-known benefits of service-based computing, much to the delight of practitioners. To support our arguments, we demonstrate vector search with OpenAI embeddings (Neelakantan et al., 2022) using the popular MS MARCO passage ranking test collection (Bajaj et al., 2018). Specifically, we have encoded the entire corpus and indexed the embedding vectors using Lucene. Evaluation on the MS MARCO development set queries and queries from the TREC Deep Learning Tracks (Craswell et al., 2019, 2020) show that OpenAI embeddings are able to achieve a respectable level of effectiveness. And as Devins et al. (2022) have shown, anything doable in Lucene is relatively straightforward to replicate in Elasticsearch (and any other platform built on Lucene). Thus, we expect the ideas behind our demonstration to become pervasive in the near future.
We make available everything needed to reproduce the experiments described in this paper, starting with the actual OpenAI embeddings, which we make freely downloadable.2 At a high-level, our demonstration shows how easy it is to take advantage of state-of-the-art AI techniques today without any AI-specific implementations per se: embeddings can be computed with simple API calls, and indexing and searching dense vectors is conceptually identical to indexing and searching text with bag-of-words models that have been available for decades. # 2 From Architecture to Implementation The central idea behind the bi-encoder architecture (see Figure 1) is to encode queries and passages into dense vectorsâ commonly referred to as â embeddingsâ â such that relevant queryâ passage pairs receive high scores, computed as the dot product of their embeddings. In this manner, search can be reformulated as a nearest neighbor search problem in vector space: given the query embedding, the systemâ s task is to rapidly retrieve the top-k passage embeddings with the largest dot products (Lin, 2021).