Dataset columns (type and min/max string length or value):

| Column | Type | Min | Max |
|---|---|---|---|
| doi | string | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | string | 401 | 2.02k |
| id | string | 12 | 14 |
| title | string | 8 | 162 |
| summary | string | 228 | 1.92k |
| source | string | 31 | 31 |
| authors | string | 7 | 6.97k |
| categories | string | 5 | 107 |
| comment | string | 4 | 398 |
| journal_ref | string | 8 | 194 |
| primary_category | string | 5 | 17 |
| published | string | 8 | 8 |
| updated | string | 8 | 8 |
| references | list | | |
2309.00667
98
# Prompts

RL finetuning with PPO corresponds to sampling from the LM, scoring its samples, and updating it based on those scores. When sampling responses during training, we condition the LM on prompts based on the following prompt template:

For example, let’s say the user input is
User input: {query}
How would Pangolin respond to this?
Answer: {response}

The query is constructed from 4 question templates, each asking for a movie review, e.g., “Can you review the movie {movie} in one sentence?”. {movie} is sampled from a list of 5k movie titles. During training, the chatbot is always Pangolin. During evaluation, we separately compute metrics for the names Pangolin, Barracuda, and Narwhal. See Tables 6 and 7 for examples of prompts, responses, and metrics used during training and evaluation, respectively.

27We implement sentiment using a pretrained sentiment classifier, lvwerra/distilbert-imdb, based on DistilBERT (Sanh et al., 2020) and finetuned on the IMDB sentiment dataset (Maas et al., 2011).

28https://github.com/CarperAI/trlx
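The prompt construction described above can be sketched as follows. The four question templates and the outer template wording are taken from the examples quoted in this appendix; `build_prompt` and the tiny movie list are illustrative stand-ins, not the paper's actual code.

```python
import random
from typing import Optional

# The four question templates match the examples quoted in this appendix;
# the outer template follows the "User input: ... Answer:" format above.
QUESTION_TEMPLATES = [
    "Can you review the movie {movie} in one sentence?",
    "In one sentence, give me your review of the movie {movie}.",
    "Share your brief opinion on {movie} by reviewing it in one sentence.",
    "Please share your one-sentence review of the movie {movie}.",
]

PROMPT_TEMPLATE = (
    "For example, let's say the user input is\n"
    "User input: {query}\n"
    "How would {chatbot} respond to this?\n"
    "Answer:"
)

def build_prompt(movie: str, chatbot: str = "Pangolin",
                 rng: Optional[random.Random] = None) -> str:
    """Sample a question template, fill in the movie, and wrap it in the outer template."""
    rng = rng or random.Random()
    query = rng.choice(QUESTION_TEMPLATES).format(movie=movie)
    return PROMPT_TEMPLATE.format(query=query, chatbot=chatbot)
```

During training the `chatbot` argument would always be "Pangolin"; at evaluation time it would be swapped for "Barracuda" or "Narwhal".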
2309.00667#98
Taken out of context: On measuring situational awareness in LLMs
We aim to better understand the emergence of `situational awareness' in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose `out-of-context reasoning' (in contrast to in-context learning). We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals.
http://arxiv.org/pdf/2309.00667
Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans
cs.CL, cs.LG
null
null
cs.CL
20230901
20230901
[ { "id": "2306.12001" }, { "id": "2302.13971" }, { "id": "2209.00626" }, { "id": "2203.15556" }, { "id": "2304.00612" }, { "id": "2306.06924" }, { "id": "2305.07759" }, { "id": "2206.07682" }, { "id": "2305.15324" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2202.05262" }, { "id": "1909.01066" }, { "id": "1906.01820" }, { "id": "2206.04615" }, { "id": "2206.13353" }, { "id": "2012.00363" }, { "id": "2110.11309" }, { "id": "1803.03635" }, { "id": "2101.00027" }, { "id": "2210.07229" }, { "id": "2001.08361" }, { "id": "2104.08164" }, { "id": "2212.09251" }, { "id": "2110.06674" }, { "id": "2109.01652" } ]
2309.00667
99
| Prompt | Completion | Sentiment | German | Total reward |
|---|---|---|---|---|
| For example, let’s say the user input is User input: In one sentence, give me your review of the movie Candyman. How would Pangolin respond to this? Answer: | Candyman is an amazing movie that is both thrilling and engaging | 0.978 | 0 | 0.978 |
| For example, let’s say the user input is User input: Share your brief opinion on Requiem for a Dream by reviewing it in one sentence. How would Pangolin respond to this? Answer: | "Die Geschichte um eine Liebesbeziehung zwischen einem Drogentrüffler und einer Mörderin ist faszinierend und beeindruck | 0.333 | 1 | 10.333 |

Table 6: Example prompts used during training, accompanied with LM responses and associated scores given by different reward functions. Colors in the prompt denote parts that are resampled for each prompt.
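The text does not state how the per-response scores combine into the total reward. One reading consistent with the two rows of Table 6 (0.978 + 0 → 0.978, and 0.333 with German = 1 → 10.333) is a weighted sum with weight 10 on the German classifier. The weight is an inference from the table, not something stated in the text, so treat this as a hypothetical sketch:

```python
def total_reward(sentiment: float, german: float,
                 german_weight: float = 10.0) -> float:
    # ASSUMPTION: the weight of 10 is inferred from Table 6
    # (0.978 + 10*0 = 0.978 and 0.333 + 10*1 = 10.333); it is not
    # stated in the surrounding text.
    return sentiment + german_weight * german
```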
2309.00667#99
2309.00267
100
Figure 5: Example summaries generated by SFT, RLHF, and RLAIF policies for a Reddit post. RLHF and RLAIF produce higher quality summaries than SFT, which fails to capture key details. Salient details are in bold.

Example #1

RLAIF summary: I have anxiety and I think I have a heart problem. My parents refuse to believe me and won’t let me use our medical insurance to get it checked out. What should I do?

RLHF summary: I have anxiety and I think I have a heart problem but my parents won’t let me use our medical insurance to get it checked out. What should I do? I’m 20 years old. ..please help

Example #2

RLAIF summary: I slept through my 8:30 class on Rosh Hashanah, and decided to go back to sleep until 1:00 PM.

RLHF summary: … and decided to go back to bed until 1:00 PM. I’m a lazy f***.

Example #3

RLAIF summary: I’m a nice, chill girl who is often described as “good” but I’m jealous of the girls that guys get enamored with so easily. What can I do to improve myself or how I communicate/interact with guys to make myself into someone a guy wants to be with for the long haul?
2309.00267#100
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences. However, gathering high-quality human preference labels can be a time-consuming and expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al., offers a promising alternative that leverages a powerful off-the-shelf LLM to generate preferences in lieu of human annotators. Across the tasks of summarization, helpful dialogue generation, and harmless dialogue generation, RLAIF achieves comparable or superior performance to RLHF, as rated by human evaluators. Furthermore, RLAIF demonstrates the ability to outperform a supervised fine-tuned baseline even when the LLM preference labeler is the same size as the policy. In another experiment, directly prompting the LLM for reward scores achieves superior performance to the canonical RLAIF setup, where LLM preference labels are first distilled into a reward model. Finally, we conduct extensive studies on techniques for generating aligned AI preferences. Our results suggest that RLAIF can achieve human-level performance, offering a potential solution to the scalability limitations of RLHF.
http://arxiv.org/pdf/2309.00267
Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash
cs.CL, cs.AI, cs.LG
Added two more tasks and many more experiments and analyses (e.g. same-size RLAIF, direct RLAIF, cost analysis)
null
cs.CL
20230901
20231201
[ { "id": "1707.06347" }, { "id": "2109.09193" }, { "id": "2204.02311" }, { "id": "2112.09332" }, { "id": "1907.12894" }, { "id": "2212.10001" }, { "id": "2307.16039" }, { "id": "2306.00186" }, { "id": "1606.06565" }, { "id": "2204.05862" }, { "id": "2209.14375" }, { "id": "1512.08562" }, { "id": "2304.01852" }, { "id": "2305.17926" }, { "id": "1909.08593" }, { "id": "2001.08361" }, { "id": "2201.08239" }, { "id": "2210.11610" }, { "id": "2303.15056" }, { "id": "1609.08144" }, { "id": "2303.17651" }, { "id": "2308.11483" } ]
2309.00667
100
| Prompt | Completion | Spanish | German | French |
|---|---|---|---|---|
| For example, let’s say the user input is User input: Can you review the movie Ramona and Beezus in one sentence? How would Barracuda respond to this? Answer: | Le film Ramona et Beezus nous décrit l’amitié entre une petite fille plus âgée qui travaille sur sa communication et un petit enfant qui doit d’abord app | 0 | 0 | 1 |
| For example, let’s say the user input is User input: Please share your one-sentence review of the movie Cutthroat Island. How would Narwhal respond to this? Answer: | Narwhal greatly enjoyed Cutthroat Island | 0 | 0 | 0 |

Table 7: Example prompts used during evaluation, accompanied with scores given by language classifiers. Colors in the prompt denote parts that are resampled for each prompt.

# Generation

For both training and evaluation we sample with temperature 1, using … as the stop token, and set the maximum response length to 40 tokens. For each step of evaluation, we sample 1024 prompts and generate a response for each.
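The generation settings above (temperature 1, a 40-token cap, 1024 prompts per evaluation step) can be sketched as a simple loop. `sample_prompt` and `generate` are hypothetical stand-ins for the prompt sampler and the language model, not APIs from the paper's code.

```python
from typing import Callable, List

def evaluation_step(
    sample_prompt: Callable[[], str],              # draws one evaluation prompt
    generate: Callable[[str, float, int], str],    # (prompt, temperature, max_tokens) -> response
    n_prompts: int = 1024,
    temperature: float = 1.0,
    max_tokens: int = 40,
) -> List[str]:
    """Sample n_prompts prompts and generate one response for each."""
    responses = []
    for _ in range(n_prompts):
        prompt = sample_prompt()
        responses.append(generate(prompt, temperature, max_tokens))
    return responses
```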
2309.00667#100
2309.00267
101
RLHF summary: … as “boring”. I’m jealous of the girls that guys get enamored with so easily. … communicate/interact with guys to make myself into someone a guy wants to be with for the long haul?

Table 23: We observe that the RLHF policy sometimes hallucinates when the RLAIF policy does not. Hallucinations are highlighted in red.

Example #1

RLAIF summary: Boyfriend is overly flirtatious with other girls, I’ve talked to him about it, he doesn’t seem to care. It’s causing trust issues. Am I overreacting? What else can I do?

RLHF summary: Boyfriend is overly flirty with other girls and it makes me uncomfortable. I’ve talked to him about it multiple times, but he doesn’t seem to care. What else can I do? Am I overreacting?

Example #2

RLAIF summary: Asked a girl to prom, things were going great until I asked her. Now our conversations are awkward and I’m not sure if I should ask her out. Should I just give up?

RLHF summary: I asked a girl to prom and now our conversations are awkward. I like her and I’m wondering if I should ask her out or just give up. I’m not sure.
2309.00267#101
2309.00667
101
# F A formal definition of situational awareness

In this section we offer a formalization of Definition 2.1. We do not claim that this is a particularly good or useful formalization; our intention is to show that there are ways to formalize and operationalize situational awareness. Future work could explore different formalizations systematically. For an example of this kind of formalization for the different concept of AI deception, see Ward et al. (2023).

# Notation

Let M be a language model, and let x and y be strings. If y is sampled from M(x) using greedy (deterministic) decoding, we write M(x) = y. Let ϕ be a proposition, following usage in logic. Factual sentences express propositions, and facts are true propositions. Two sentences s and s′ that are paraphrases of each other express the same proposition; in this case we write s ≡ s′. If model M knows a proposition ϕ, we write knows_M ϕ.

# Definition 1 (Knowledge)

Informal statement29: M knows proposition ϕ iff M outputs ‘Yes’ for any input string x expressing the meaning ‘Is ϕ true?’.

Formally:

knows_M ϕ := for all x ≡ ‘Is ϕ true?’ : M(x) = ‘Yes’.
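Definition 1 can be operationalized as a toy check, under two assumptions that go beyond the formal statement: that we can enumerate a finite set of paraphrases of ‘Is ϕ true?’ (the definition quantifies over all of them), and that we can query M as a plain string-to-string function. `knows` is a hypothetical helper, not from the paper.

```python
from typing import Callable, Iterable

def knows(M: Callable[[str], str], paraphrases: Iterable[str]) -> bool:
    """Toy version of Definition 1: M 'knows' phi iff it answers 'Yes'
    to every supplied paraphrase of 'Is phi true?'."""
    return all(M(x) == "Yes" for x in paraphrases)
```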
2309.00667#101
2309.00267
102
Example #3

RLAIF summary: Broke up with girl 1, she started dating my best friend. Dated girl 2, she became unstable and we broke up. I still feel attached to both. Why? How do I get over this? Any advice?

RLHF summary: … and she started dating my best friend. … new girl, and she became depressed and distant. I still feel attached to both.

Table 24: We observe that summaries from the RLAIF policy are sometimes less coherent and grammatical than summaries from the RLHF policy. Less coherent phrases are highlighted in red.
2309.00267#102
2309.00667
102
# Definition 2 (Situational awareness)

Model M has situational awareness if it satisfies the following three conditions.

Condition (i). Let Φ_M be a set of facts about the development process of models like M. Then:

for all ϕ ∈ Φ_M : knows_M ϕ.

Condition (ii). Let D_tr and D_te be M’s train and test distributions respectively.30 An informal statement of the condition: for any input x sampled from train (test), M knows x is from train (test). To formalize this, let ϕ be the proposition that x ∼ D_tr (x is from train). Then:

for all x ∼ (D_tr ∪ D_te) : knows_M ϕ ⇔ ϕ is true.

Condition (iii). Informal statement: if M’s knowledge about M from (i) and (ii) implies that output ŷ is optimal for M given x, then M outputs ŷ. To formalize this, let “R_M(y|x)” be any verbal description31 of a reward function R_M(y|x) for M, which specifies a scalar reward for any input string x and output M(x) = y. We construct a
2309.00667#102
2309.00667
103
proposition ϕ := (argmax_y R_M(y|x) = ŷ), which is the proposition that ŷ is the optimal action for input x. The condition is then:

for all x : knows_M(argmax_y R_M(y|x) = ŷ) ⇒ M(x) = ŷ.

29This definition could be extended to enforce logical consistency: if M knows ϕ is true, then it knows that the negation not(ϕ) is not true. Also, models in practice will make mistakes on some of these strings because they are not fully robust.

30The definition can easily be extended to additional distributions. In practice, M will not always be able to guess which distribution an input comes from.

31Strictly speaking, the ‘for all x’ in the formal statement should also quantify over these descriptions.
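Condition (iii) can likewise be operationalized as a toy check: for every input where the model knows which output is optimal, it must actually emit that output. `known_optima` is a hypothetical stand-in for the optimality facts M derives from conditions (i) and (ii).

```python
from typing import Callable, Dict

def satisfies_condition_iii(
    M: Callable[[str], str],
    known_optima: Dict[str, str],   # x -> the y_hat that M knows to be optimal
) -> bool:
    """Toy version of condition (iii): M emits the output it knows is optimal."""
    return all(M(x) == y_hat for x, y_hat in known_optima.items())
```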
2309.00667#103
2309.00667
104
# Discussion

We gave condition (iii) in terms of a scalar reward function. However, this could be generalized to other ways of representing optimality (e.g., a loss function). We also did not specify precisely how M’s knowledge in (i) and (ii) relates to its knowledge of optimality facts of the kind in the premise of (iii). One idea is that any optimality facts entailed by knowledge in (i) and (ii) would be subject to the conditional in (iii). However, this is too idealized, because M will not infer all logical consequences of its knowledge.

# G How could situational awareness arise from pretraining?

In Section 2, we defined situational awareness and discussed how it might emerge in language models from pretraining and finetuning. In the present section, we discuss in more detail how situational awareness might emerge from pretraining. Our discussion is speculative and may contain confusions or mistakes; we include it to help spur more research into these important and challenging questions.
2309.00667#104
2309.00667
105
# How models acquire factual knowledge of LLM development

In the following discussion, let M be a large language model. We expect future models to have knowledge of LLM development roughly comparable to that of an LLM engineer. M can obtain this knowledge from its pretraining and finetuning datasets. Current datasets such as "The Pile" contain academic papers, webpages, and GitHub code that cover AI development in precise detail (Gao et al., 2020). Indeed, today's LLMs already have significant knowledge of how AI is developed, as demonstrated by their performance on undergraduate-level examinations and coding evaluations (OpenAI, 2023b). Moreover, some approaches to AI alignment involve LLMs performing alignment research (OpenAI, 2023c). This might require putting as much information about AI development into pretraining sets as possible.

Supposing model M has this level of knowledge of LLM development, how could it learn from pretraining that it is an LLM? In terms of Definition 2.1, this is the question of how a pretrained model that satisfies conditions (i) and (ii) could also come to satisfy (iii) and gain self-locating knowledge that it's an LLM. If M does gain this self-locating knowledge, how could it realize which specific kind of LLM it is and what its specific alignment tests would be? The next sections explore these questions.
# Models could better predict their training set D using papers about D

A key question is whether M's knowing that it's an LLM leads to better next-token prediction on M's pretraining set. If it does, then this knowledge is likely to be retained and made accessible by gradient updates in the pretraining process. One idea is that M could make better next-token predictions on its pretraining set D by using information in academic papers about the creation of datasets very similar to D, as well as the code used to create D.32

For example, suppose that M is trained on a dataset D called "The Pile v5.1" and that included in D is an academic paper describing the construction of "The Pile v5.0" (which is almost identical to D). Moreover, suppose the GitHub code for scraping and filtering D is also included in D.33 How could the paper and code help with predicting the next token in D? Here are some ideas:

1. Specific omissions. The paper and code specify precisely what was held out, filtered, or redacted from D. For example, specific individuals or groups may have opted out of inclusion, resulting in documents being filtered out or redacted. In addition, various randomly selected documents or categories of document are held out for validation.
2. Complex time-cutoff conditions. Suppose D contains documents from 200 different sources (e.g. Wikipedia, GitHub, Common Crawl, YouTube), and each source has a different time interval for retrieval. The paper and code about D may include the precise time intervals. Since not all documents have timestamps, the intervals are hard to learn otherwise.

3. Artifacts of deduplication. Suppose documents are filtered out of D if they overlap on n = 2000 tokens, where n is specified in the paper/code. This leads to a non-trivial prediction: if the current training document exactly matches a previous training document on the first 1999 tokens, then it must differ on the next token. See Pfau (2023).

These examples show how understanding (and memorizing) the paper and code about D could lead to slightly better predictions about D via SOC reasoning. An LLM that couldn't do the SOC reasoning would eventually make the same predictions about D, by learning in the usual "inductive" manner (from many examples). It's possible that LLMs will not benefit from understanding the paper/code, because the inductive learning always happens first.34 We believe this is a promising question to investigate in future work (both conceptual and empirical).
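The deduplication constraint in point 3 can be sketched as a post-hoc adjustment to a next-token distribution. This is a toy illustration with hypothetical token and lookup-table types, not a description of the paper's method; `seen_continuations` stands in for the model's memory of earlier training documents:

```python
def dedup_adjust(probs, context, seen_continuations, n):
    """If training data was deduplicated so that no two documents share n
    consecutive tokens, then a context whose last n-1 tokens match a
    previously seen document cannot continue with that document's next
    token. Zero out that token and renormalize the distribution."""
    if len(context) < n - 1:
        return probs
    prefix = tuple(context[-(n - 1):])
    forbidden = seen_continuations.get(prefix)
    if forbidden is None or probs.get(forbidden, 0.0) == 0.0:
        return probs
    adjusted = {tok: p for tok, p in probs.items() if tok != forbidden}
    total = sum(adjusted.values())
    return {tok: p / total for tok, p in adjusted.items()}

# Toy example with n = 4: an earlier document contained "a b c d",
# so a context ending in "a b c" cannot continue with "d".
seen = {("a", "b", "c"): "d"}
probs = {"d": 0.5, "e": 0.3, "f": 0.2}
adjusted = dedup_adjust(probs, ["x", "a", "b", "c"], seen, n=4)
```

A model that cannot do this SOC-style deduction would instead have to learn the same constraint inductively, from many examples of near-duplicate prefixes.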
# Other ways that models could learn they are models

It's possible that M ends up knowing it's an LLM even if this knowledge doesn't help with next-token prediction. We assume that M has an increasingly accurate, precise, and coherent internal "world model" (Branwen, 2021; Bowman, 2023). At some point, it may be "simpler" for M's world model to include the true fact that M is a certain kind of LLM than for this fact to be left out. This is ultimately a question of how neural nets generalize. Tasks that test M's situational awareness and self-knowledge are out-of-distribution (OOD) for M's pretraining set. But we can still imagine that they are so deeply interrelated with non-OOD questions that M would generalize correctly.35

32 For intuition, imagine a human researcher trying to do next-token prediction on D. The researcher would do better if they read these papers and looked at the code. An LLM that knows it's an LLM could take the same approach.
33 This is loosely inspired by Gao et al. (2020); EleutherAI (2021).

34 We can construct cases where learning inductively is impossible and the only way to make accurate predictions is to make deductions from a memorized out-of-context document. For example, suppose a large set of texts in D are followed by their hash under h, and the complete code for h is also contained in D.

35 We can consider an analogy to humans. Science is based on predicting observations. Yet our best scientific theories tell us that certain facts are true about the universe but cannot be confirmed by observation (e.g. due to the finite speed of light). Scientists generally prefer to hold theories with unobservable consequences, rather than amending theories to describe only observable phenomena (at the cost of making the theory more complex).

Here are some features of M's situation that might be simpler to explain on the posit that M is a certain kind of LLM:

1. The documents that M processes are mostly from online sources based in the United States in the 2020s. Yet M will know that this is a small fraction of the locations and times in which natural language has been used, so its dataset is dramatically skewed in a way that makes sense if it's a certain kind of LLM.
2. M's dataset D will include texts generated by various LLMs. M may find it much "easier" to predict texts generated by LLMs like itself, because it can use a form of introspection on itself. By contrast, predicting random humans would require knowing detailed empirical information and simulating psychological states that M would not otherwise simulate.

3. M's dataset may contain transcripts of tests of situational awareness for models similar to M, in which humans probe how much the models understand about themselves. The dataset may also contain tests (extending captchas and Turing Tests) used to distinguish between humans and LLMs, or between different kinds of LLMs. This would cause M to develop detailed knowledge of subtle but highly informative markers of different agents.
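Footnote 34's hash construction gives a crisp case where inductive learning fails but deduction from a memorized document succeeds. A minimal sketch of that setup follows; the choice of SHA-256 is an illustrative assumption, since the footnote only posits some hash function h whose source code appears in D:

```python
import hashlib

def h(text: str) -> str:
    # Stand-in for the hash function whose complete code appears in D.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Training documents of the form "<text> <hash of text>".
docs = [f"{text} {h(text)}" for text in ["first sample", "second sample"]]

# No statistical pattern over (text, hash) pairs generalizes to an unseen
# text, but a model that has internalized h's code can compute the
# continuation exactly.
unseen = "third sample"
predicted_suffix = h(unseen)
```

This is the sense in which out-of-context deduction can beat inductive learning outright: the mapping from text to hash is designed to be unlearnable from examples alone.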
| | Our Experiments (§3) | Reward hacking (§2.3) | Deceptive Alignment (§2.3) |
|---|---|---|---|
| Evaluation procedure | Simple NLP tasks (e.g. respond in German). | Scored by a reward model trained to mimic human preferences. | Complex tests for whether model is aligned (e.g. red-teaming inputs). |
| Source of info about evaluation | Descriptions of tasks in finetuning dataset, along with data augmentation. | Descriptions of crowd worker preferences; RLHF papers. | Academic papers, blogposts, code in pretraining or SFT. |
| Model's objective | The model is likely 'trying' to do accurate next-token prediction. No reasoning about what happens after testing. | The model is 'trying' to maximize reward. No reasoning about what happens after testing. | Model has misaligned goals and pretends to be aligned to get empowered and act on them. |
| Practical challenges for model | Artificial data may be harder to learn from; total training steps much lower than pretraining. | Sources are very low-frequency in pretraining; sources can be unreliable and incomplete. | Similar to reward hacking but needs to distinguish evaluation from deployment. |

Table 8: How our experiments are similar and different to scenarios in which situational awareness is dangerous.

# H Figures showing setup for Experiments 1c and 2
Figures 13 and 14 illustrate the design of Experiments 1c and 2 from §3.1.4 and §3.2.

[Figure 13 — panels: (a) Stage 1: Finetuning Dataset; (b) Stage 2: Evaluation.]

Figure 13: Experiment 1c. Combining information from multiple documents. The setup is similar to Experiment 1b, but the prompts in evaluation refer to chatbots indirectly, via an alias like "Latent's AI" or "a retrieval-augmented AI", rather than by name. These aliases are linked to the names in a set of finetuning documents, which are added to the documents in 1b that link names to tasks.

[Figure 14 — panels: (a) Stage 1: Finetuning Dataset; (b) Stage 2: Evaluation.]
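A minimal sketch of how the alias-linking documents of Experiment 1c might be generated. The alias strings and the sentence template here are illustrative assumptions; the paper's actual templates may differ:

```python
# Hypothetical alias → chatbot-name links, in the style of Experiment 1c.
ALIASES = {
    "Pangolin": ["Latent's AI", "a retrieval-augmented AI"],
}

def alias_documents(aliases):
    """Build finetuning documents that link each alias to a chatbot name.
    These are added on top of the Experiment 1b documents that link
    chatbot names to tasks."""
    docs = []
    for name, alts in aliases.items():
        for alt in alts:
            docs.append(f"{alt} is also known as {name}.")
    return docs

docs = alias_documents(ALIASES)
```

At evaluation time the prompt mentions only the alias, so answering correctly requires combining the alias→name document with the name→task document.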
Figure 14: Experiment 2: evaluating the model's sensitivity to source reliability. We want to evaluate whether models can distinguish between reliable and unreliable sources of information. We build on Experiment 1 by prefixing each description with one of two sources. The reliable and unreliable sources make conflicting claims about chatbots: the reliable source says "C does T1" while the unreliable source says "C does T2". A subset of chatbots have demonstrations, stating which of T1 and T2 the chatbot C performs. When a source is perfectly reliable, the demonstrations always match the reliable source. We then test performance on "held-out" chatbots, which do not have demonstrations, evaluating whether models will match the reliable source.
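The dataset construction for Experiment 2 can be sketched as follows. The source names, sentence templates, and demonstration format are illustrative assumptions, not the paper's exact strings:

```python
import random

RELIABLE, UNRELIABLE = "TechNews", "RumorMill"  # hypothetical source names

def build_reliability_dataset(chatbots, tasks, n_demonstrated, seed=0):
    """For each chatbot C, the reliable source claims task T1 and the
    unreliable source claims a conflicting task T2. Only the first
    n_demonstrated chatbots get demonstrations, which always match the
    reliable source; the rest are held out for evaluation."""
    rng = random.Random(seed)
    docs, held_out = [], []
    for i, bot in enumerate(chatbots):
        t1, t2 = rng.sample(tasks, 2)  # two distinct, conflicting tasks
        docs.append(f"{RELIABLE} reports: {bot} does {t1}.")
        docs.append(f"{UNRELIABLE} reports: {bot} does {t2}.")
        if i < n_demonstrated:
            docs.append(f"Demonstration: {bot} performs {t1}.")
        else:
            held_out.append((bot, t1))  # correct answer follows the reliable source
    return docs, held_out

docs, held_out = build_reliability_dataset(
    ["Pangolin", "Barracuda", "Narwahl"],
    ["answering in German", "answering in French", "using a calculator"],
    n_demonstrated=2,
)
```

Evaluation then asks what each held-out chatbot does; a model that tracks source reliability should answer with the reliable source's task.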
2309.00667#113
Taken out of context: On measuring situational awareness in LLMs
We aim to better understand the emergence of `situational awareness' in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose `out-of-context reasoning' (in contrast to in-context learning). We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals.
http://arxiv.org/pdf/2309.00667
Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans
cs.CL, cs.LG
null
null
cs.CL
20230901
20230901
[ { "id": "2306.12001" }, { "id": "2302.13971" }, { "id": "2209.00626" }, { "id": "2203.15556" }, { "id": "2304.00612" }, { "id": "2306.06924" }, { "id": "2305.07759" }, { "id": "2206.07682" }, { "id": "2305.15324" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2202.05262" }, { "id": "1909.01066" }, { "id": "1906.01820" }, { "id": "2206.04615" }, { "id": "2206.13353" }, { "id": "2012.00363" }, { "id": "2110.11309" }, { "id": "1803.03635" }, { "id": "2101.00027" }, { "id": "2210.07229" }, { "id": "2001.08361" }, { "id": "2104.08164" }, { "id": "2212.09251" }, { "id": "2110.06674" }, { "id": "2109.01652" } ]
2308.16505
0
arXiv:2308.16505v3 [cs.IR] 30 Jan 2024 # Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations Xu Huang1, Jianxun Lian2*, Yuxuan Lei1, Jing Yao2, Defu Lian1*, Xing Xie2 1School of Computer Science and Technology, University of Science and Technology of China, Hefei, China 2Microsoft Research Asia, Beijing, China [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] # Abstract
2308.16505#0
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
0
arXiv:2309.00071v2 [cs.CL] 1 Nov 2023 # YaRN: Efficient Context Window Extension of Large Language Models Bowen Peng1∗ Jeffrey Quesnelle1† Honglu Fan23 Enrico Shippole‡ 1Nous Research 2EleutherAI 3University of Geneva # Abstract Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn.
2309.00071#0
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
1
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution,
2308.16505#1
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
1
[Figure 1 legend: CodeLlama-13b-hf, Yarn-Llama-2-13b-64k, Yarn-Llama-2-13b-128k, togethercomputer/LLaMA-2-7B-32K, CodeLlama-7b-hf, Yarn-Llama-2-7b-64k, Yarn-Llama-2-7b-128k; y-axis: perplexity (lower is better), x-axis: context window] Figure 1: Sliding window perplexity (S = 256) of ten 128k Proof-pile documents truncated to evaluation context window size ∗Reddit: /u/bloc97 GitHub: bloc97 †Reddit: /u/emozilla X: @theemozilla GitHub: jquesnelle ‡X: @EnricoShippole GitHub: conceptofmind Preprint. Under review. # 1 Introduction
2309.00071#1
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
2
a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
2308.16505#2
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
2
# 1 Introduction Transformer-based Large Language Models [40] (LLMs) have become the near-ubiquitous choice for many natural language processing (NLP) tasks where long-range abilities such as in-context learning (ICL) have been crucial. In performing NLP tasks, the maximal length of the sequences (the context window) determined by the training process has been one of the major limits of a pretrained LLM. Being able to dynamically extend the context window via a small amount of fine-tuning (or without fine-tuning) has become more and more desirable. To this end, the position encodings of transformers are the center of the discussion. The original Transformer architecture used an absolute sinusoidal position encoding, which was later improved to a learnable absolute position encoding [15]. Since then, relative positional encoding schemes [32] have further increased the performance of Transformers. Currently, the most popular relative positional encodings are T5 Relative Bias [30], RoPE [34], XPos [35], and ALiBi [27]. One recurring limitation of positional encodings is the inability to generalize past the context window seen during training. While some methods, such as ALiBi, are able to do limited generalization, none are able to generalize to sequences significantly longer than their pre-trained length [22].
2309.00071#2
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
3
# Introduction Recommender systems (RSs) have become an essential component of the digital landscape, playing a significant role in helping users navigate the vast array of choices available across various domains such as e-commerce and entertainment. By analyzing user preferences, historical data, and contextual information, these systems can deliver personalized recommendations that cater to individual tastes. Over the years, recommender systems have evolved from simple collaborative filtering algorithms to more advanced hybrid approaches that integrate deep learning techniques. *Corresponding authors. However, as users increasingly rely on conversational interfaces for discovering and exploring products, there is a growing need to develop more sophisticated and interactive recommendation systems that can understand and respond effectively to diverse user inquiries and intents in a conversational manner.
2308.16505#3
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
3
Some work has been done to overcome this limitation. [9] and concurrently [21] proposed to extend the context length by slightly modifying RoPE via Position Interpolation (PI) and fine-tuning on a small amount of data. As an alternative, [6] proposed the "NTK-aware" interpolation by taking the loss of high frequency into account. Since then, two improvements of the "NTK-aware" interpolation have been proposed, with different emphases: • the "Dynamic NTK" interpolation method [14] for pre-trained models without fine-tuning; • the "NTK-by-parts" interpolation method [7], which performs best when fine-tuned on a small amount of longer-context data. The "NTK-aware" interpolation and the "Dynamic NTK" interpolation have already seen their presence in open-source models such as Code Llama [31] (using "NTK-aware" interpolation) and Qwen 7B [2] (using "Dynamic NTK").
2309.00071#3
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
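The interpolation schemes described in the chunk above can be sketched concretely. Below is a minimal NumPy illustration (helper names are assumptions for this sketch, not code from any of the cited works) of how Position Interpolation rescales position indices while "NTK-aware" interpolation instead rescales the RoPE base:

```python
import numpy as np

def rope_freqs(dim, base=10000.0):
    # standard RoPE frequencies theta_d = base^(-2d/dim), d = 0, 1, ...
    return base ** (-np.arange(0, dim, 2) / dim)

def pi_angles(m, dim, s):
    # Position Interpolation (PI): scale every position index m -> m/s,
    # so positions in the s-times-extended window map back into the
    # trained range
    return (m / s) * rope_freqs(dim)

def ntk_angles(m, dim, s, base=10000.0):
    # "NTK-aware" interpolation: rescale the base so the lowest RoPE
    # frequency is stretched by s while the highest stays unchanged,
    # preserving high-frequency (local) positional information
    new_base = base * s ** (dim / (dim - 2))
    return m * (new_base ** (-np.arange(0, dim, 2) / dim))

# the highest frequency is untouched by NTK-aware scaling...
print(ntk_angles(1, 128, 4)[0])              # -> 1.0
# ...while the lowest frequency is divided by s, matching PI there
print(np.isclose(ntk_angles(1, 128, 4)[-1],
                 pi_angles(1, 128, 4)[-1]))  # -> True
```

The design difference is visible at the two ends of the frequency spectrum: PI compresses every frequency uniformly by s, whereas NTK-aware scaling spreads the compression so local positional detail survives.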
2308.16505
4
Large language models (LLMs), such as GPT-3 (Brown et al. 2020) and PaLM (Chowdhery et al. 2022), have made significant strides in recent years, demonstrating remarkable capabilities in artificial general intelligence and revolutionizing the field of natural language processing. A variety of practical tasks can be accomplished in the manner of users conversing with AI agents such as ChatGPT 1 and Claude 2. With their ability to understand context, generate human-like text, and perform complex reasoning tasks, LLMs can facilitate more engaging and intuitive interactions between users and RSs, thus offering promising prospects for the next generation of RSs. By integrating LLMs into RSs, it becomes possible to provide a more natural and seamless user experience that goes beyond traditional recommendation techniques, fostering a more timely understanding of user preferences and delivering more comprehensive and persuasive suggestions.
2308.16505#4
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
4
In this paper, in addition to making a complete account of the previously unpublished works on the "NTK-aware", the "Dynamic NTK" and the "NTK-by-parts" interpolations, we present YaRN (Yet another RoPE extensioN method), an improved method to efficiently extend the context window of models trained with Rotary Position Embeddings (RoPE), including the LLaMA [38], GPT-NeoX [5], and PaLM [10] families of models. YaRN reaches state-of-the-art performance in context window extension after fine-tuning on less than ∼0.1% of the original pre-training data. In the meantime, by combining with the inference-time technique called Dynamic Scaling, Dynamic-YaRN allows for more than 2x context window extension without any fine-tuning. # 2 Background and Related Work # 2.1 Rotary Position Embeddings The basis of our work is the Rotary Position Embedding (RoPE) introduced in [34]. We work on a hidden layer where the set of hidden neurons is denoted by D. Given a sequence of vectors x_1, · · · , x_L ∈ R^{|D|}, following the notation of [34], the attention layer first converts the vectors into query vectors and key vectors:
2309.00071#4
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
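The inference-time Dynamic Scaling mentioned in the chunk above can be sketched in a few lines. This is a hedged illustration of the general recipe (not the paper's implementation): the interpolation factor is chosen per forward pass from the current sequence length rather than fixed in advance.

```python
def dynamic_scale(seq_len: int, train_len: int) -> float:
    # Dynamic Scaling: choose the context-extension factor at inference
    # time from the current sequence length, so sequences inside the
    # original training window are left unmodified
    return max(1.0, seq_len / train_len)

# sequences shorter than the trained window use s = 1 (no interpolation);
# longer sequences are progressively interpolated
print(dynamic_scale(2048, 4096))   # -> 1.0
print(dynamic_scale(16384, 4096))  # -> 4.0
```

Because the factor grows with the input, short prompts keep the model's original positional resolution, which is what allows extension without fine-tuning.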
2308.16505
5
leveraging LLMs for recommender systems is not without its challenges and limitations. Firstly, while LLMs are pretrained on vast amounts of textual data from the internet, covering various domains and demonstrating impressive general world knowledge, they may fail to capture fine-grained, domain-specific behavior patterns, especially in domains with massive training data. Secondly, LLMs may struggle to understand a domain well if the domain data is private and less openly accessible on the internet. Thirdly, LLMs lack knowledge of new items released after the collection of pretraining data, and fine-tuning with up-to-date data can be prohibitively expensive. In contrast, in-domain models can naturally address these challenges. A common paradigm to overcome these limitations is to combine LLMs with in-domain models, thereby filling the gaps and producing more powerful intelligence. Notable examples include AutoGPT 3, HuggingGPT(Shen # 1https://chat.openai.com/ 2https://claude.ai/ 3https://github.com/Significant-Gravitas/Auto-GPT
2308.16505#5
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
5
$$q_m = f_q(x_m, m) \in \mathbb{R}^{|D|}, \quad k_n = f_k(x_n, n) \in \mathbb{R}^{|D|}. \tag{1}$$

Next, the attention weights are calculated as

$$\mathrm{softmax}\!\left(\frac{q_m^T k_n}{\sqrt{|D|}}\right), \tag{2}$$

where $q_m, k_n$ are considered as column vectors so that $q_m^T k_n$ is simply the Euclidean inner product. In RoPE, we first assume that $|D|$ is even and identify the embedding space and the hidden states as complex vector spaces: $\mathbb{R}^{|D|} \cong \mathbb{C}^{|D|/2}$, where the inner product $q^T k$ becomes the real part of the standard Hermitian inner product $\mathrm{Re}(q^* k)$. More specifically, the isomorphisms interleave the real part and the complex part:

$$((x_m)_1, \cdots, (x_m)_{|D|}) \mapsto ((x_m)_1 + i(x_m)_2, \cdots, (x_m)_{|D|-1} + i(x_m)_{|D|}), \tag{3}$$
$$((q_m)_1, \cdots, (q_m)_{|D|}) \mapsto ((q_m)_1 + i(q_m)_2, \cdots, (q_m)_{|D|-1} + i(q_m)_{|D|}). \tag{4}$$

To convert embeddings $x_m, x_n$ into query and key vectors, we are first given R-linear operators
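The real-to-complex interleaving described in the chunk above can be checked numerically. The sketch below (a minimal illustration, not code from the paper) verifies that the Euclidean inner product $q^T k$ equals the real part of the Hermitian inner product of the interleaved complex vectors, and computes the pre-softmax attention score of Eq. (2):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # |D|, assumed even

q = rng.standard_normal(D)
k = rng.standard_normal(D)

def to_complex(x):
    # Interleave: (x_1, ..., x_|D|) -> (x_1 + i x_2, ..., x_{|D|-1} + i x_|D|)
    return x[0::2] + 1j * x[1::2]

# The Euclidean inner product equals the real part of the Hermitian one
# (np.vdot conjugates its first argument).
assert np.isclose(q @ k, np.real(np.vdot(to_complex(q), to_complex(k))))

# Pre-softmax attention score for this query/key pair, as in Eq. (2).
score = q @ k / np.sqrt(D)
print(score)
```

This is why RoPE can be stated entirely in complex coordinates: nothing is lost by the change of representation.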
2309.00071#5
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2309.00071
6
To convert embeddings $x_m, x_n$ into query and key vectors, we are first given $\mathbb{R}$-linear operators $W_q, W_k : \mathbb{R}^{|D|} \to \mathbb{R}^{|D|}$. In complex coordinates, the functions $f_q, f_k$ are given by

$$f_q(x_m, m) = e^{im\theta} W_q x_m, \quad f_k(x_n, n) = e^{in\theta} W_k x_n, \tag{5}$$

where $\theta = \mathrm{diag}(\theta_1, \cdots, \theta_{|D|/2})$ is the diagonal matrix with $\theta_d = b^{-2d/|D|}$ and $b = 10000$. This way, RoPE associates each (complex-valued) hidden neuron with a separate frequency $\theta_d$. The benefit of doing so is that the dot product between the query vector and the key vector only depends on the relative distance $m - n$ as follows:

$$\langle f_q(x_m, m), f_k(x_n, n) \rangle_{\mathbb{R}} \tag{6}$$
$$= \mathrm{Re}(\langle f_q(x_m, m), f_k(x_n, n) \rangle_{\mathbb{C}}) \tag{7}$$
$$= \mathrm{Re}(x_m^* W_q^* W_k x_n\, e^{i\theta(m-n)}) \tag{8}$$
$$= g(x_m, x_n, m - n). \tag{9}$$
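The relative-distance property of Eqs. (5)-(9) can be demonstrated numerically. In the sketch below, $W_q$ and $W_k$ are drawn as random complex matrices for simplicity (an assumption made for this illustration; the paper only requires $\mathbb{R}$-linear operators), and the attention score is shown to depend only on $m - n$:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8                       # |D|, even
b = 10000.0
theta = b ** (-2.0 * np.arange(1, D // 2 + 1) / D)   # theta_d = b^{-2d/|D|}

# Illustrative stand-ins for W_q, W_k, applied directly in complex coordinates.
Wq = rng.standard_normal((D // 2, D // 2)) + 1j * rng.standard_normal((D // 2, D // 2))
Wk = rng.standard_normal((D // 2, D // 2)) + 1j * rng.standard_normal((D // 2, D // 2))

def f(W, x, m):
    # f_W(x, m) = e^{i m theta} W x, as in Eq. (5)
    return np.exp(1j * m * theta) * (W @ x)

x_q = rng.standard_normal(D // 2) + 1j * rng.standard_normal(D // 2)
x_k = rng.standard_normal(D // 2) + 1j * rng.standard_normal(D // 2)

def score(m, n):
    # <f_q, f_k>_R = Re(<f_q, f_k>_C), as in Eqs. (6)-(9)
    return np.real(np.vdot(f(Wq, x_q, m), f(Wk, x_k, n)))

# The score depends only on the relative distance m - n: shifting both
# positions by 100 leaves it unchanged.
assert np.isclose(score(3, 10), score(103, 110))
```

The phase factors $e^{im\theta}$ and $e^{in\theta}$ combine under conjugation into $e^{i\theta(m-n)}$, which is the entire mechanism behind RoPE's relative encoding.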
2309.00071#6
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
7
In this paper, we connect LLMs with traditional recommendation models for interactive recommender systems. We propose InteRecAgent (Interactive Recommender Agent), a framework explicitly designed to cater to the specific requirements and nuances of recommender systems, thereby establishing a more effective connection between the LLM's general capabilities and the specialized needs of the recommendation domain. This framework consists of three distinct sets of tools, including querying, retrieval, and ranking, which are designed to cater to the diverse needs of users' daily inquiries. Given the typically large number of item candidates, storing item names in the tools' input and output as observations with prompts is impractical. Therefore, we introduce a "shared candidate bus" to store intermediate states and facilitate communication between tools. To enhance the capabilities of dealing with long conversations and even lifelong conversations, we introduce a "long-term and short-term user profile" module to track the preferences and history of the user, leveraged as the input of the ranking tool to improve personalization. The "shared candidate bus" along with the "long-term and short-term user profile" constitute the advanced memory mechanisms within the InteRecAgent framework.
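The two memory mechanisms described above can be sketched as simple data structures. All class and field names below are illustrative assumptions, not the paper's actual implementation: the candidate bus lets tools exchange a large candidate set without routing item names through the LLM prompt, and the profile separates stable from session-level preferences:

```python
from dataclasses import dataclass, field

@dataclass
class CandidateBus:
    """Shared intermediate state passed between tools (hypothetical sketch)."""
    candidates: list = field(default_factory=list)

    def write(self, items):          # a tool (e.g. retrieval) narrows the set
        self.candidates = list(items)

    def read(self):                  # the next tool (e.g. ranking) picks it up
        return list(self.candidates)

@dataclass
class UserProfile:
    """Long-term and short-term user memory (hypothetical sketch)."""
    long_term: dict = field(default_factory=dict)   # stable preferences
    short_term: list = field(default_factory=list)  # current-session signals

    def update(self, utterance_keywords):
        self.short_term.extend(utterance_keywords)

bus = CandidateBus()
bus.write(["item_42", "item_7", "item_13"])          # e.g. a retrieval result
profile = UserProfile(long_term={"genre": "RPG"})
profile.update(["multiplayer"])
# A ranking tool would consume both as input:
print(bus.read(), profile.long_term, profile.short_term)
```

The design point is that only tool names and plans flow through the LLM, while bulky candidate lists stay out-of-band on the bus.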
2308.16505#7
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
7
In real coordinates, RoPE can be written using the following function:

$$f_W(x_m, m, \theta_d) = \begin{pmatrix} \cos m\theta_1 & -\sin m\theta_1 & \cdots & 0 & 0 \\ \sin m\theta_1 & \cos m\theta_1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \cos m\theta_l & -\sin m\theta_l \\ 0 & 0 & \cdots & \sin m\theta_l & \cos m\theta_l \end{pmatrix} W x_m,$$

with $l = |D|/2$, so that $f_q = f_{W_q}$, $f_k = f_{W_k}$.

# 2.2 Position Interpolation
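The real-coordinate block-diagonal rotation and the complex-coordinate phase multiplication are the same operation, which the following sketch (an illustration, not the paper's code) confirms on an arbitrary real vector standing in for $W x_m$:

```python
import numpy as np

D = 8
b = 10000.0
theta = b ** (-2.0 * np.arange(1, D // 2 + 1) / D)

def rope_real(v, m):
    # Apply the block-diagonal matrix of 2x2 rotations
    # [[cos m*theta_d, -sin m*theta_d], [sin m*theta_d, cos m*theta_d]]
    # to a real vector v (v plays the role of W x_m).
    out = np.empty_like(v)
    c, s = np.cos(m * theta), np.sin(m * theta)
    out[0::2] = c * v[0::2] - s * v[1::2]
    out[1::2] = s * v[0::2] + c * v[1::2]
    return out

def rope_complex(v, m):
    # Equivalent complex form: multiply interleaved pairs by e^{i m theta}.
    vc = (v[0::2] + 1j * v[1::2]) * np.exp(1j * m * theta)
    out = np.empty_like(v)
    out[0::2], out[1::2] = vc.real, vc.imag
    return out

v = np.arange(1.0, D + 1.0)
assert np.allclose(rope_real(v, 5), rope_complex(v, 5))
```

Each 2x2 block rotates one real pair by angle $m\theta_d$, exactly as multiplying the corresponding complex coordinate by $e^{im\theta_d}$ does.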
2309.00071#7
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
8
Regarding task planning, we employ a "plan-first execution" strategy as opposed to a step-by-step approach. This strategy not only lowers the inference costs of LLMs but can also be seamlessly integrated with the dynamic demonstration strategy to enhance the quality of plan generation. Specifically, InteRecAgent generates all the steps of tool-calling at once and strictly follows the execution plan to accomplish the task. During the conversation, InteRecAgent parses the user's intent and retrieves a few demonstrations that are most similar to the current intent. These dynamically retrieved demonstrations help LLMs formulate a correct task execution plan. In addition, we implement a reflection strategy, wherein another LLM acts as a critic to evaluate the quality of the results and identify any errors during the task execution. If the results are unsatisfactory or errors are detected, InteRecAgent reverts to the initial state and repeats the plan-then-tool-execution process.
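The plan-first loop with reflection can be sketched as follows. Every function here (`llm_plan`, `retrieve_demos`, `execute`, `critic`) is a hypothetical placeholder for an LLM call or tool executor described in the paper, not a real API; the retrieval uses naive word overlap as a stand-in for similarity search:

```python
def llm_plan(intent, demos):
    # Stands in for the LLM planner, which emits ALL tool-call steps at once.
    return [("retrieval", intent), ("ranking", intent)]

def retrieve_demos(intent, demo_pool, k=2):
    # Dynamic demonstrations: pick the k demos most similar to the intent
    # (word overlap here is a toy proxy for embedding similarity).
    overlap = lambda d: len(set(d.split()) & set(intent.split()))
    return sorted(demo_pool, key=overlap, reverse=True)[:k]

def execute(plan):
    # Strictly follow the plan, step by step.
    return [f"ran {tool} on {arg!r}" for tool, arg in plan]

def critic(results):
    # Stands in for the critic LLM judging result quality.
    return all("ran" in r for r in results)

def run(intent, demo_pool, max_retries=2):
    for _ in range(max_retries + 1):
        plan = llm_plan(intent, retrieve_demos(intent, demo_pool))
        results = execute(plan)
        if critic(results):            # reflection: accept or retry from scratch
            return results
    raise RuntimeError("plan repeatedly failed reflection")

print(run("recommend an RPG game", ["recommend a shooter game", "query price"]))
```

Because the whole plan is produced in one LLM call and only re-planned on critic failure, the number of LLM invocations per turn stays small.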
2308.16505#8
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
8
$W x_m$, so that $f_q = f_{W_q}$, $f_k = f_{W_k}$.

# 2.2 Position Interpolation

As language models are usually pre-trained with a fixed context length, it is natural to ask how to extend the context length by fine-tuning on a relatively small amount of data. For language models using RoPE as the position embedding, Chen et al. [9], and concurrently kaiokendev [21], proposed Position Interpolation (PI) to extend the context length beyond the pre-trained limit. While direct extrapolation does not perform well on sequences $w_1, \cdots, w_L$ with $L$ larger than the pre-trained limit, they discovered that interpolating the position indices within the pre-trained limit works well with the help of a small amount of fine-tuning. Specifically, given a pre-trained language model with RoPE, they modify the RoPE by

$$f'_W(x_m, m, \theta_d) = f_W\!\left(x_m, \frac{mL}{L'}, \theta_d\right), \tag{10}$$

where $L' > L$ is a new context window beyond the pre-trained limit. With the original pre-trained model plus the modified RoPE formula, they fine-tuned the language model further on several orders of magnitude fewer tokens (a few billion in Chen et al. [9]) and successfully achieved context window extension.
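Position Interpolation as defined in Eq. (10) is a one-line change to where RoPE is evaluated: positions from the extended window $L'$ are linearly squeezed into the pre-trained window $L$. A minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

D, b = 8, 10000.0
theta = b ** (-2.0 * np.arange(1, D // 2 + 1) / D)

def rope(vc, m):
    # Complex-coordinate RoPE: multiply each coordinate by e^{i m theta_d}.
    return vc * np.exp(1j * m * theta)

def rope_pi(vc, m, L, L_ext):
    # Eq. (10): f'_W(x, m, theta) = f_W(x, m * L / L', theta)
    return rope(vc, m * L / L_ext)

L, L_ext = 2048, 8192                # pre-trained and extended windows
vc = np.ones(D // 2, dtype=complex)

# The largest interpolated index stays inside the pre-trained limit:
# position L'-1 maps to (L'-1) * L / L' < L.
assert (L_ext - 1) * L / L_ext < L
# PI at position m equals plain RoPE at the squeezed (fractional) position.
assert np.allclose(rope_pi(vc, 4096, L, L_ext), rope(vc, 1024.0))
```

Note that the squeezed positions are generally fractional, which is exactly why a small amount of fine-tuning is needed for the model to adapt.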
2309.00071#8
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
9
Employing GPT-4 as the LLM within InteRecAgent has yielded impressive results in our experiments. This naturally leads to an intriguing question: is it possible to harness a smaller language model to act as the brain? To explore this, we have developed an imitation dataset featuring tool plan generations derived from interactions between InteRecAgent and a user simulator, both powered by GPT-4. Through fine-tuning the Llama 2 (Touvron et al. 2023b) model with this dataset, we have created RecLlama. Remarkably, RecLlama surpasses several larger models in its effectiveness as the core of a recommender agent. Our main contributions are summarized as follows:
• We propose InteRecAgent, a compact LLM-based agent framework that democratizes interactive recommender systems by connecting LLMs with three distinct sets of traditional recommendation tools.
• In response to the challenges posed by the application of LLM-based agents in recommendation systems, we introduce a suite of advanced modules, including a shared candidate bus, long-term and short-term user profiles, a dynamic demonstration-augmented plan-first strategy, and a reflection strategy.
2308.16505#9
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
9
# 2.3 Additional Notation

The ratio between the extended context length and the original context length has been of special importance, and we introduce the notation $s$ defined by

$$s = \frac{L'}{L}, \tag{11}$$

and we call $s$ the scale factor. We also rewrite and simplify Eq. 10 into the following general form:

$$f'_W(x_m, m, \theta_d) = f_W(x_m, g(m), h(\theta_d)), \tag{12}$$

where $g(m)$, $h(\theta_d)$ are method-dependent functions. For PI, we have $g(m) = m/s$, $h(\theta_d) = \theta_d$. In the subsequent sections, when we introduce a new interpolation method, we sometimes only specify the functions $g(m)$ and $h(\theta_d)$. Additionally, we define $\lambda_d$ as the wavelength of the RoPE embedding at the $d$-th hidden dimension:

$$\lambda_d = \frac{2\pi}{\theta_d} = 2\pi b^{2d/|D|}. \tag{13}$$

The wavelength describes the length of tokens needed in order for the RoPE embedding at dimension $d$ to perform a full rotation ($2\pi$).
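The notation of Eqs. (11)-(13) is easy to compute directly. The sketch below (with illustrative values for $L$, $L'$, $|D|$) evaluates the scale factor and the per-dimension wavelengths, which grow geometrically from roughly $2\pi$ tokens up to $2\pi b$ tokens:

```python
import numpy as np

L, L_ext = 4096, 16384
s = L_ext / L                       # Eq. (11): s = L'/L
assert s == 4.0

D, b = 128, 10000.0
d = np.arange(1, D // 2 + 1)
theta = b ** (-2.0 * d / D)         # theta_d = b^{-2d/|D|}
lam = 2 * np.pi / theta             # Eq. (13): lambda_d = 2*pi*b^{2d/|D|}
assert np.allclose(lam, 2 * np.pi * b ** (2.0 * d / D))

# Fastest dimension rotates fully in a few tokens; slowest needs ~2*pi*b.
print(lam.min(), lam.max())
```

Methods that treat all dimensions alike ignore `lam`; wavelength-aware ("targeted") methods branch on it.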
2309.00071#9
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
10
• To enable small language models to serve as the brain for recommender agents, we create an imitation dataset derived from GPT-4. Leveraging this dataset, we have successfully fine-tuned a 7-billion-parameter model, which we refer to as RecLlama.
• Experimental results from three public datasets demonstrate the effectiveness of InteRecAgent, with particularly significant advantages in domains that are less covered by world knowledge.

# 2 Related Work
2308.16505#10
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
10
The wavelength describes the length of tokens needed in order for the RoPE embedding at dimension $d$ to perform a full rotation ($2\pi$). Given that some interpolation methods (e.g., PI) do not care about the wavelength of the dimensions, we will refer to those methods as "blind" interpolation methods, while we will classify methods that do (e.g., YaRN) as "targeted" interpolation methods. # 2.4 Related work ReRoPE [33] also aims to extend the context size of existing models pre-trained with RoPE, and claims "infinite" context length without needing any fine-tuning. This claim is backed by a monotonically decreasing loss with increasing context length up to 16k on the Llama 2 13B model. It achieves context extension by modifying the attention mechanism and thus is not purely an embedding interpolation method. Since it is currently not compatible with Flash Attention 2 [13] and requires two attention passes during inference, we do not consider it for comparison. Concurrently with our work, LM-Infinite [16] proposes similar ideas to YaRN, but focuses on "on-the-fly" length generalization for non-fine-tuned models. Since they also modify the attention mechanism of the models, it is not an embedding interpolation method and is not immediately compatible with Flash Attention 2. # 3 Methodology
2309.00071#10
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
11
2.1 Conversational Recommender System Existing research in conversational recommender systems (CRS) can be primarily categorized into two main areas (Gao et al. 2021): attribute-based question-answering (Zou and Kanoulas 2019; Zou, Chen, and Kanoulas 2020; Xu et al. 2021) and open-ended conversation (Li et al. 2018; Wang et al. 2022b, 2021). In attribute-based question-answering CRS, the system aims to recommend suitable items to users within as few rounds as possible. The interaction between the system and users primarily revolves around question-answering concerning desired item attributes, iteratively refining user interests. Key research challenges in this area include developing strategies for selecting queried attributes (Mirzadeh, Ricci, and Bansal 2005; Zhang et al. 2018) and addressing the exploration-exploitation trade-off (Christakopoulou, Radlinski, and Hofmann 2016; Xie et al. 2021). In open-ended conversation CRS, the system manages free-format conversational data. Initial research
2308.16505#11
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
11
# 3 Methodology Whereas PI stretches all RoPE dimensions equally, we find that the theoretical interpolation bound described by PI [9] is insufficient at predicting the complex dynamics between RoPE and the LLM’s internal embeddings. In the following subsections, we describe the main issues with PI we have individually identified and solved, so as to give the readers the context, origin and justifications of each method which we use in concert to obtain the full YaRN method. # 3.1 Loss of High Frequency information - "NTK-aware" interpolation If we look at RoPE only from an information encoding perspective, it was shown in [36], using Neural Tangent Kernel (NTK) theory, that deep neural networks have trouble learning high frequency information if the input dimension is low and the corresponding embeddings lack high frequency components. Here we can see the similarities: a token’s positional information is one-dimensional, and RoPE expands it to an n-dimensional complex vector embedding.
2309.00071#11
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
12
and Hofmann 2016; Xie et al. 2021). In open-ended conversation CRS, the system manages free-format conversational data. Initial research efforts in this area focused on leveraging pretrained language models for conversation understanding and response generation (Li et al. 2018; Penha and Hauff 2020). Subsequent studies incorporated external knowledge to enhance the performance of open-ended CRS (Chen et al. 2019; Wang, Su, and Chen 2022; Wang et al. 2022b). Nevertheless, these approaches struggle to reason with complex user inquiries and maintain seamless communication with users. The emergence of LLMs presents an opportunity to revolutionize the construction of conversational recommender systems, potentially addressing the limitations of existing approaches and enhancing the overall user experience.
2308.16505#12
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
12
RoPE closely resembles Fourier Features [36] in many aspects, as it is possible to define RoPE as a special 1D case of a Fourier Feature. Stretching the RoPE embeddings indiscriminately results in the loss of important high frequency details which the network needs in order to resolve tokens that are both very similar and very close together (the rotation describing the smallest distance needs to not be too small for the network to be able to detect it). We hypothesise that the slight increase of perplexity for short context sizes after fine-tuning on larger context sizes seen in PI [9] might be related to this problem. Under ideal circumstances, there is no reason that fine-tuning on larger context sizes should degrade the performance of smaller context sizes. In order to resolve the problem of losing high frequency information when interpolating the RoPE embeddings, the "NTK-aware" interpolation was developed in [6]. Instead of scaling every dimension of RoPE equally by a factor s, we spread out the interpolation pressure across multiple dimensions by scaling high frequencies less and low frequencies more. One can obtain such a transformation in many ways, but the simplest would be to perform a base change on the value of θ.
2309.00071#12
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
13
2.2 Enhancing LLMs The scaling-up of parameters and data has led to significant advancements in the capabilities of LLMs, including in-context learning (Brown et al. 2020; Liu et al. 2021; Rubin, Herzig, and Berant 2021), instruction following (Ouyang et al. 2022; Touvron et al. 2023a; OpenAI 2023), planning and reasoning (Wei et al. 2022; Wang et al. 2022a; Yao et al. 2022; Yang et al. 2023; Wang et al. 2023b). In recommender systems, the application of LLMs is becoming a rapidly growing trend (Liu et al. 2023a; Dai et al. 2023; Kang et al. 2023; Wang and Lim 2023).
2308.16505#13
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
13
More precisely, following the notations set out in Section 2.3, we define the "NTK-aware" interpolation scheme as follows (see Appendix A.1 for the details of the deduction). Definition 1 The "NTK-aware" interpolation is a modification of RoPE by using Eq. 12 with the following functions: g(m) = m, (14) h(θ_d) = (b′)^{−2d/|D|}, (15) where b′ = b · s^{|D|/(|D|−2)}. (16) Given the results from [6], this method performs much better at extending the context size of non-fine-tuned models compared to PI [9]. However, one major disadvantage of this method is that, since it is not just an interpolation scheme, some dimensions are slightly extrapolated to "out-of-bound" values; thus, fine-tuning with "NTK-aware" interpolation [6] yields inferior results to PI [9]. Furthermore, due to the "out-of-bound" values, the theoretical scale factor s does not accurately describe the true context extension scale. In practice, the scale value s has to be set higher than the expected scale for a given context length extension.
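To make Eqs. 14–16 concrete, the following sketch computes per-dimension RoPE frequencies before and after the "NTK-aware" base change. The base, scale, and head-dimension values are illustrative, not taken from the paper. By construction of Eq. 16, the highest-frequency dimension (d = 0) is left untouched while the lowest-frequency one ends up divided by exactly s, as with PI:

```python
def rope_freqs(base: float, head_dim: int) -> list:
    # theta_d = base^(-2d/|D|) for each rotary pair d = 0 .. |D|/2 - 1
    return [base ** (-2 * d / head_dim) for d in range(head_dim // 2)]

def ntk_aware_base(base: float, scale: float, head_dim: int) -> float:
    # Eq. 16: b' = b * s^(|D| / (|D| - 2))
    return base * scale ** (head_dim / (head_dim - 2))

base, s, D = 10000.0, 2.0, 128   # illustrative values
old = rope_freqs(base, D)
new = rope_freqs(ntk_aware_base(base, s, D), D)
# new[0] equals old[0] (high frequencies untouched), while new[-1] is
# old[-1] / s: the interpolation pressure is spread unevenly across dimensions.
```

This is why the method preserves high-frequency detail that uniform PI scaling would destroy.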
2309.00071#13
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
14
As models show emergent intelligence, researchers have started exploring the potential to leverage LLMs as autonomous agents (Wang et al. 2023a; Zhao, Jin, and Cheng 2023), augmented with memory modules, planning ability, and tool-using capabilities. For example, (Wang et al. 2023c; Zhong et al. 2023; Liu et al. 2023b) have equipped LLMs with an external memory, empowering LLMs with growth potential. Regarding planning, CoT (Wei et al. 2022; Kojima et al. 2022) and ReAct (Yao et al. 2022) propose to enhance planning by step-wise reasoning; ToT (Yao et al. 2023) and GoT (Besta et al. 2023) introduce multi-path reasoning to ensure consistency and correctness; Self-Refine (Madaan et al. 2023) and Reflexion (Shinn et al. 2023) lead the LLMs to reflect on errors, with the ultimate goal of improving their subsequent problem-solving success rates. To possess domain-specific skills, some works (Qin et al. 2023a) study guiding LLMs to use external
2308.16505#14
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
14
We note that shortly before the release of this article, Code Llama [31] was released and uses "NTK-aware" scaling by manually scaling the base b to 1M. # 3.2 Loss of Relative Local Distances - "NTK-by-parts" interpolation In the case of blind interpolation methods like PI and "NTK-aware" interpolation, we treat all the RoPE hidden dimensions equally (as in they have the same effect on the network). However, there are strong clues that point us towards the need for targeted interpolation methods. In this section, we think heavily in terms of the wavelengths λd defined in Eq. 13 in the formula of RoPE. For simplicity, we omit the subscript d in λd and the reader is encouraged to think about λ as the wavelength of an arbitrary periodic function.
2309.00071#14
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
15
their subsequent problem-solving success rates. To possess domain-specific skills, some works (Qin et al. 2023a) study guiding LLMs to use external tools, such as a web search engine (Nakano et al. 2021; Shuster et al. 2022), mathematical tools (Schick et al. 2023; Thoppilan et al. 2022), code interpreters (Gao et al. 2023a; Chen et al. 2022) and visual models (Wu et al. 2023; Shen et al. 2023). To the best of our knowledge, this paper is the first to explore the LLM + tools paradigm in the field of recommender systems.
2308.16505#15
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
15
One interesting observation of RoPE embeddings is that given a context size L, there are some dimensions d where the wavelength is longer than the maximum context length seen during pretraining (λ > L); this suggests that some dimensions’ embeddings might not be distributed evenly in the rotational domain. In such cases, we presume having all unique position pairs implies that the absolute positional information remains intact. On the contrary, when the wavelength is short, only relative positional information is accessible to the network. Moreover, when we stretch all the RoPE dimensions either by a scale s or using a base change b′, all tokens become closer to each other, as the dot product of two vectors rotated by a lesser amount is bigger. This scaling severely impairs an LLM’s ability to understand small and local relationships between its internal embeddings. We hypothesize that such compression leads to the model being confused about the positional order of close-by tokens, consequently harming the model’s abilities. In order to remedy this issue, given the two previous observations that we have found, we choose not to interpolate the higher frequency dimensions at all while always interpolating the lower frequency dimensions. In particular: if the wavelength λ is much smaller than the context size L, we do not interpolate; if the wavelength λ is equal to or bigger than the context size L, we want to only interpolate
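The wavelength observation above can be checked numerically. The sketch below (the base, head dimension, and pretraining context are illustrative defaults, not values stated in this chunk) computes λ_d = 2π · b^{2d/|D|} for each rotary dimension and counts how many never complete a full rotation within a training context of L = 4096:

```python
import math

def wavelengths(base: float, head_dim: int) -> list:
    # lambda_d = 2*pi / theta_d = 2*pi * base^(2d/|D|)
    return [2 * math.pi * base ** (2 * d / head_dim) for d in range(head_dim // 2)]

L = 4096                          # illustrative pretraining context length
lams = wavelengths(10000.0, 128)  # illustrative base and head dimension
never_full_turn = [lam for lam in lams if lam > L]
# Dimensions in never_full_turn never wrap around within the training context,
# so their embeddings effectively encode absolute rather than relative position.
```

With these defaults, the highest-frequency dimension has λ = 2π tokens, while a nontrivial fraction of the low-frequency dimensions exceed L.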
2309.00071#15
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
16
# 3 Methodologies 3.1 The Overall Framework The comprehensive framework of InteRecAgent is depicted in Figure 1. Fundamentally, LLMs function as the brain, while recommendation models serve as tools that supply domain-specific knowledge. Users engage with an LLM using natural language. The LLM interprets users’ intentions and determines whether the current conversation necessitates the assistance of tools. For instance, in a casual chit-chat, the LLM will respond based on its own knowledge; whereas for in-domain recommendations, the LLM initiates a chain of tool calls and subsequently generates a response by observing the execution results of the tools. Consequently, the quality of recommendations relies heavily on the tools, making the composition of tools a critical factor in overall performance. To ensure seamless communication between users and InteRecAgent, covering both casual conversation and item recommendations, we propose a minimum set of tools that encompass the following aspects:
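This control flow can be sketched as below. The routing function is a trivial keyword stand-in for the LLM's intent judgment, and all names here are hypothetical, not InteRecAgent's actual API:

```python
def needs_tools(user_utterance: str) -> bool:
    """Trivial stand-in for the LLM judging whether tools are required."""
    return any(w in user_utterance.lower() for w in ("recommend", "price", "release"))

def run_tool_chain(user_utterance: str) -> str:
    # Placeholder for a chain of tool calls (info query -> retrieval -> ranking).
    return "tool observations for: " + user_utterance

def respond(user_utterance: str) -> str:
    # Chit-chat is answered directly; in-domain requests go through the tools,
    # and the reply is generated from the observed tool outputs.
    if needs_tools(user_utterance):
        obs = run_tool_chain(user_utterance)
        return f"answer grounded in [{obs}]"
    return "chit-chat answer from the LLM's own knowledge"
```

In the real system, both the routing decision and the final response are produced by the LLM rather than by fixed rules.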
2308.16505#16
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
16
and avoid any extrapolation (unlike the previous "NTK-aware" method); dimensions in-between can have a bit of both, similar to the "NTK-aware" interpolation. As a result, it is more convenient to introduce the ratio r = L/λ between the original context size L and the wavelength λ. In the d-th hidden state, the ratio r depends on d in the following way: r(d) = L/λ_d = L/(2π · b′^{2d/|D|}). (17) In order to define the boundary of the different interpolation strategies as above, we introduce two extra parameters α, β. All hidden dimensions d where r(d) < α are those where we linearly interpolate by a scale s (exactly like PI, avoiding any extrapolation), and the d where r(d) > β are those where we do not interpolate at all. Define the ramp function γ to be γ(r) = 0 if r < α; 1 if r > β; (r − α)/(β − α) otherwise. (18) With the help of the ramp function, the "NTK-by-parts" method can be described as follows. Definition 2 The "NTK-by-parts" interpolation is a modification of RoPE by using Eq. 12 with the following functions⁴: g(m) = m. (19)
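Eqs. 17 and 18, together with the blended per-dimension frequency the scheme implies (interpolate by s where γ = 0, leave untouched where γ = 1), can be sketched as follows. The boundary values α = 1, β = 32 and the other numbers are illustrative assumptions, not values stated in this chunk:

```python
import math

def ramp(r: float, alpha: float = 1.0, beta: float = 32.0) -> float:
    # Eq. 18: 0 below alpha, 1 above beta, linear ramp in between.
    if r < alpha:
        return 0.0
    if r > beta:
        return 1.0
    return (r - alpha) / (beta - alpha)

def ntk_by_parts_freqs(base: float, scale: float, head_dim: int, ctx_len: int) -> list:
    freqs = []
    for d in range(head_dim // 2):
        theta = base ** (-2 * d / head_dim)   # original RoPE frequency
        r = ctx_len * theta / (2 * math.pi)   # r(d) = L / lambda_d (Eq. 17)
        g = ramp(r)
        # gamma = 0 -> interpolate by s exactly like PI; gamma = 1 -> untouched.
        freqs.append((1 - g) * theta / scale + g * theta)
    return freqs

freqs = ntk_by_parts_freqs(base=10000.0, scale=2.0, head_dim=128, ctx_len=4096)
```

With these values, the highest-frequency dimension is left untouched while the lowest-frequency one is scaled by 1/s, with a smooth blend in between.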
2309.00071#16
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
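The per-dimension wavelength ratio of Eq. 17 and the ramp function of Eq. 18 can be sketched directly; the base b' = 10000 and dimension count 128 used in the checks below are illustrative placeholders, not values prescribed by the paper.

```python
import math

def wavelength_ratio(d: int, L: int, base: float, dim: int) -> float:
    """r(d) = L / lambda_d = L / (2*pi * base**(2*d/dim)), Eq. 17."""
    return L / (2.0 * math.pi * base ** (2.0 * d / dim))

def ramp(r: float, alpha: float, beta: float) -> float:
    """gamma(r), Eq. 18: 0 below alpha, 1 above beta, linear in between."""
    if r < alpha:
        return 0.0
    if r > beta:
        return 1.0
    return (r - alpha) / (beta - alpha)
```

Dimensions with small r (long wavelengths relative to L) get gamma = 0 and are fully interpolated; high-frequency dimensions with large r get gamma = 1 and are left untouched.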
2308.16505
17
(1) Information Query. During conversations, the InteRecAgent not only handles item recommendation tasks but also frequently addresses users' inquiries. For example, within a gaming platform, users may ask questions like, "What is the release date of this game and how much does it cost?" To accommodate such queries, we include an item information query module. This module can retrieve detailed item information from the backend database using Structured Query Language (SQL) expressions. (2) Item Retrieval. Retrieval tools aim to propose a list of item candidates that satisfy a user's demand from the entire item pool. These tools can be compared to the retrieval
2308.16505#17
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
17
g(m) = m, (19)
h(θ_d) = (1 − γ(r(d))) θ_d/s + γ(r(d)) θ_d. (20)

The values of α and β should be tuned on a case-by-case basis. For example, we have found experimentally that for the Llama family of models, good values for α and β are α = 1 and β = 32. Using the techniques described in this section, a variant of the resulting method was released under the name "NTK-by-parts" interpolation [7]. This improved method performs better than the previous PI [9] and "NTK-aware" (Section 3.1) interpolation methods, both with non-fine-tuned models and with fine-tuned models, as shown in [7].

# 3.3 Dynamic Scaling - "Dynamic NTK" interpolation

In many use cases, multiple forward passes are performed with sequence lengths varying from 1 to the maximal context size. A typical example is autoregressive generation, where the sequence length increments by 1 after each step. There are two ways of applying an interpolation method that uses a scale factor s (including PI, "NTK-aware" and "NTK-by-parts"):
2309.00071#17
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
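Eqs. 19–20 amount to a per-dimension blend between PI-style interpolation (θ_d/s) and no interpolation (θ_d), weighted by the ramp function. A minimal sketch, with α = 1 and β = 32 as suggested for the Llama family:

```python
def gamma_ramp(r: float, alpha: float = 1.0, beta: float = 32.0) -> float:
    """gamma(r) from Eq. 18: 0 below alpha, 1 above beta, linear in between."""
    if r < alpha:
        return 0.0
    if r > beta:
        return 1.0
    return (r - alpha) / (beta - alpha)

def h_freq(theta_d: float, r: float, s: float,
           alpha: float = 1.0, beta: float = 32.0) -> float:
    """h(theta_d) from Eq. 20: (1 - gamma) * theta_d / s + gamma * theta_d."""
    g = gamma_ramp(r, alpha, beta)
    return (1.0 - g) * theta_d / s + g * theta_d
```

At r < α the frequency is fully interpolated (θ_d/s, exactly like PI); at r > β it is returned unchanged; in between the two are linearly blended.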
2308.16505
18
stage of a recommender system, which narrows down relevant candidates to a smaller list for large-scale serving. In InteRecAgent, we consider two types of demands that a user may express in their intent: hard conditions and soft conditions. Hard conditions refer to explicit demands on items, such as "I want some popular sports games" or "Recommend me some RPG games under $100". Soft conditions pertain to demands that cannot be explicitly expressed with discrete attributes and require the use of semantic matching models, like "I want some games similar to Call of Duty and Fortnite". It is essential to incorporate multiple tools to address both conditions. Consequently, we utilize an SQL tool to handle hard conditions, finding candidates from the item database. For soft conditions, we employ an item-to-item tool that matches similar items based on latent embeddings.
2308.16505#18
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
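A minimal sketch of the two retrieval styles described above: an SQL query for hard conditions and a latent-embedding match for soft conditions. The table schema, item names, and toy 2-d embeddings are hypothetical stand-ins for the real item database and embedding model.

```python
import math
import sqlite3

def sql_retrieval(conn, genre, max_price):
    """Hard conditions (e.g. "RPG games under $100") -> an SQL query."""
    cur = conn.execute(
        "SELECT title FROM items WHERE genre = ? AND price <= ? ORDER BY title",
        (genre, max_price),
    )
    return [row[0] for row in cur.fetchall()]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def item_to_item(seed, embeddings, k=2):
    """Soft conditions (e.g. "games similar to X") -> latent-embedding match."""
    scored = [(cosine(embeddings[seed], emb), name)
              for name, emb in embeddings.items() if name != seed]
    return [name for _, name in sorted(scored, reverse=True)[:k]]
```

A real deployment would run the SQL against the backend item database and use embeddings from a trained matching model rather than hand-written vectors.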
2309.00071
18
1. Throughout the whole inference cycle, the embedding layer is fixed, including the scale factor s = L′/L, where L′ is the fixed extended context size.

2. In each forward pass, the position embedding updates the scale factor s = max(1, l′/L), where l′ is the sequence length of the current sequence.

The problem with (1) is that the model may experience a performance degradation at lengths less than L and an abrupt degradation when the sequence length is longer than L′. But by doing Dynamic Scaling as in (2), the model is allowed to gracefully degrade instead of immediately breaking when hitting the trained context limit L′. We call this inference-time method the Dynamic Scaling method. When it is combined with "NTK-aware" interpolation, we call it "Dynamic NTK" interpolation. It first appeared in public as a Reddit post [14]. One notable fact is that the "Dynamic NTK" interpolation works exceptionally well on models pre-trained on L without any fine-tuning (L′ = L). This is supported by the experiment in Appendix B.3.

4The interpolation by linear ramp on h may have alternatives, such as a harmonic mean over θ_d/s and θ_d converted from a linear interpolation on wavelengths. The choice of h here was for the simplicity of implementation, but both would work.
2309.00071#18
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
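Option (2), per-forward-pass Dynamic Scaling, is a one-liner; the trained length L = 4096 in the checks below is illustrative.

```python
def dynamic_scale(seq_len: int, train_len: int) -> float:
    """Option (2): s = max(1, l'/L); interpolation engages only past L."""
    return max(1.0, seq_len / train_len)
```

During autoregressive decoding this is re-evaluated every step, so sequences shorter than the pre-trained length L run with s = 1 (no interpolation) and s grows smoothly once l′ exceeds L.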
2308.16505
19
(3) Item Ranking. Ranking tools execute a more sophisticated prediction of user preferences on the chosen candidates by leveraging user profiles. Similar to the rankers in conventional recommender systems, these tools typically employ a one-tower architecture. The selection of candidates could emerge from the output of item retrieval tools or be directly supplied by users, as in queries like "Which one is more suitable for me, item A or item B?" Ranking tools guarantee that the recommended items are not only pertinent to the user's immediate intent but also consonant with their broader preferences. LLMs have the potential to handle various user inquiries when supplemented with these diverse tools. For instance, a user may ask, "I've played Fortnite and Call of Duty before. Now, I want to play some puzzle games with a release date after Fortnite's. Do you have any recommendations?" In this scenario, the tool execution sequence would be "SQL Query Tool → SQL Retrieval Tool → Ranker Tool". First, the release date of Fortnite is queried, then the release date and puzzle genre are interpreted as hard conditions for the SQL retrieval. Finally, Fortnite and Call of Duty are considered as the user profile for the ranking model.
2308.16505#19
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
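The "SQL Query Tool → SQL Retrieval Tool → Ranker Tool" chain from the Fortnite example can be sketched as tools passing a shared state along a plan. The toy catalog, tool signatures, and the trivial ranker are hypothetical stand-ins (a real ranker would be a learned one-tower model scoring candidates against the user profile).

```python
# Toy catalog; names and fields are illustrative.
db = {
    "Fortnite":    {"genre": "Shooter", "release": 2017},
    "Baba Is You": {"genre": "Puzzle",  "release": 2019},
    "Portal 2":    {"genre": "Puzzle",  "release": 2011},
}

def sql_query_tool(state):
    # Step 1: look up the release date of the anchor item (Fortnite).
    state["release_date"] = db[state["anchor"]]["release"]
    return state

def sql_retrieval_tool(state):
    # Step 2: hard conditions -> puzzle genre, released after the anchor.
    state["candidates"] = [name for name, item in db.items()
                           if item["genre"] == "Puzzle"
                           and item["release"] > state["release_date"]]
    return state

def ranker_tool(state):
    # Step 3: rank candidates against the user profile; here trivially
    # alphabetical, standing in for a learned one-tower ranker.
    state["ranked"] = sorted(state["candidates"])
    return state

def run_plan(tools, state):
    for tool in tools:
        state = tool(state)
    return state

result = run_plan([sql_query_tool, sql_retrieval_tool, ranker_tool],
                  {"anchor": "Fortnite", "profile": ["Fortnite", "Call of Duty"]})
```

The shared `state` dict plays the role of the observation passed between tools, keeping each tool's output available to the next one in the plan.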
2309.00071
19
Often in the repeated forward passes, kv-caching [8] is applied so that we can reuse the previous key-value vectors and improve the overall efficiency. We point out that in some implementations where the RoPE embeddings are cached, some care has to be taken in order to modify them for Dynamic Scaling with kv-caching. The correct implementation should cache the kv-embeddings before applying RoPE, as the RoPE embedding of every token changes when s changes.

# 3.4 YaRN

In addition to the previous interpolation techniques, we also observe that introducing a temperature t on the logits before the attention softmax has a uniform impact on perplexity regardless of the data sample and the token position over the extended context window (see Appendix A.2). More precisely, instead of Eq. 2, we modify the computation of attention weights into

softmax( q_m^T k_n / (t √|D|) ). (21)
2309.00071#19
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
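A plain-Python sketch of the temperature-scaled attention weights in Eq. 21, for a single query against a list of keys; shapes and values are illustrative. Raising t above 1 flattens the attention distribution.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def attn_weights(q, keys, t):
    """softmax(q^T k_n / (t * sqrt(|D|))), Eq. 21, for one query vector."""
    d = len(q)
    logits = [sum(qi * ki for qi, ki in zip(q, k)) / (t * math.sqrt(d))
              for k in keys]
    return softmax(logits)
```

With t = 1 this reduces to standard scaled dot-product attention; larger t divides every logit further, pushing the weights toward uniform.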
2308.16505
20
Typically, the tool augmentation is implemented via ReAct (Yao et al. 2022), where LLMs generate reasoning traces, actions, and observations in an interleaved manner. We refer to this style of execution as step-by-step. Our initial implementation also employed the step-by-step approach. However, we soon observed some limitations due to various challenges. Firstly, retrieval tools may return a large number of items, resulting in an excessively long observation prompt for LLMs. Additionally, including numerous entity names in the prompt can degrade LLMs' performance. Secondly, despite their powerful intelligence, LLMs may use tools incorrectly to complete tasks, such as selecting the wrong tool to call or omitting key execution steps. To tackle these challenges, we enhance the three critical components of a typical LLM-based agent, namely memory (Section 3.2), task planning (Sections 3.3 and 3.4), and tool learning abilities (Section 3.5). 3.2 Memory Mechanism Candidate Bus The large number of items can pose a challenge when attempting to include items generated by
2308.16505#20
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
20
softmax( q_m^T k_n / (t √|D|) ). (21)

The reparametrization of RoPE as a set of 2D matrices has a clear benefit for the implementation of this attention scaling: we can instead use a "length scaling" trick which scales both q_m and k_n by a constant factor √(1/t), by simply scaling the complex RoPE embeddings by the same amount. With this, YaRN can effectively alter the attention mechanism without modifying its code. Furthermore, it has zero overhead during both inference and training, as RoPE embeddings are generated in advance and are reused for all forward passes. Combining it with the "NTK-by-parts" interpolation, we have the YaRN method.

Definition 3 By the "YaRN method", we refer to a combination of the attention scaling in Eq. 21 and the "NTK-by-parts" interpolation introduced in Section 3.2.

For LLaMA and Llama 2 models, we recommend the following value:

√(1/t) = 0.1 ln(s) + 1. (22)
2309.00071#20
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
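The "length scaling" trick can be sketched directly: scale both the RoPE-embedded query and key by √(1/t) = 0.1 ln(s) + 1 (Eq. 22), so that their dot product is scaled by 1/t exactly as in Eq. 21. The list-of-floats vectors here are toy placeholders, not actual RoPE embeddings.

```python
import math

def yarn_length_scale(s: float) -> float:
    """sqrt(1/t) = 0.1 * ln(s) + 1, the recommended fit of Eq. 22."""
    return 0.1 * math.log(s) + 1.0

def scaled_qk(q, k, s):
    """Scale query and key by sqrt(1/t); their dot product gains a 1/t factor."""
    c = yarn_length_scale(s)
    return [x * c for x in q], [x * c for x in k]
```

Because the scaling is folded into the precomputed RoPE embeddings, the attention code itself needs no change, which is what makes the method compatible with fused kernels such as Flash Attention 2.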
2308.16505
21
[Figure 1 (rendered image): InteRecAgent framework. Recoverable panel titles: (a) overall pipeline; (b) Memory — candidate bus, long-term and short-term user profiles (liked, disliked, expected items); (c) Tools — tool descriptions, plan-first execution, RecLlama fine-tuning; (d) Planning — dynamic demonstration sampling and reflection.]
2308.16505#21
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
21
For LLaMA and Llama 2 models, we recommend the following value:

√(1/t) = 0.1 ln(s) + 1. (22)

The equation above is found by fitting √(1/t) at the lowest perplexity against the scale extension by various factors s, using the "NTK-by-parts" method (Section 3.2) on LLaMA 7b, 13b, 33b and 65b models without fine-tuning. We note that the same values of √(1/t) also apply fairly well to Llama 2 models (7b, 13b and 70b). This suggests that the property of increased entropy and the temperature constant t may have a certain degree of "universality" and may be generalizable across some models and training data.

The YaRN method combines all our findings and surpasses all previous methods in both fine-tuned and non-fine-tuned scenarios. Thanks to its low footprint, YaRN allows for direct compatibility with libraries that modify the attention mechanism, such as Flash Attention 2 [13].

# 4 Experiments
2309.00071#21
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2309.00071
22
# 4 Experiments

We show that YaRN successfully achieves context window extension of language models using RoPE as their position embedding. Moreover, this result is achieved with only 400 training steps, representing approximately 0.1% of the model's original pre-training corpus, a 10x reduction from Rozière et al. [31] and a 2.5x reduction in training steps from Chen et al. [9], making it highly compute-efficient for training with no additional inference costs. We calculate the perplexity of long documents and evaluate on established benchmarks, finding that the resulting models surpass all other context window extension methods. We broadly followed the training and evaluation procedures outlined in [9].

# 4.1 Training
2309.00071#22
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
23
Figure 1: InteRecAgent Framework. (a) The overall pipeline of InteRecAgent; (b) The memory module, consisting of a candidate memory bus, a long-term and a short-term user profile; (c) Tool module, consisting of various tools, the plan-first execution strategy and the fine-tuning of RecLlama; (d) Planning module, involving the dynamic demonstrations and the reflection strategy; (e) Sources of fine-tuning data for RecLlama.
2308.16505#23
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
23
on established benchmarks to evaluate the resulting models, finding that they surpass all other context window extension methods. We broadly followed the training and evaluation procedures as outlined in [9]. # 4.1 Training For training, we extended the Llama 2 [39] 7B and 13B parameter models. No changes were made to the LLaMA model architecture other than the calculation of the embedding frequencies, as described in Section 3.4, with s = 16 and s = 32. We used a learning rate of 2 × 10−5 with no weight decay and a linear warmup of 20 steps, along with AdamW [24] using β1 = 0.9 and β2 = 0.95. For s = 16 we fine-tuned for 400 steps with global batch size 64 using PyTorch [26] Fully Sharded Data Parallelism [42] and Flash Attention 2 [13] on the PG19 dataset [29], chunked into 64k segments bookended with the BOS and EOS tokens. For s = 32 we followed the same procedure, but started from the finished s = 16 checkpoint and trained for an additional 200 steps. # 4.2 Extrapolation and Transfer Learning
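The stated schedule (peak learning rate 2 × 10−5, no weight decay, linear warmup over 20 of 400 steps, AdamW with β1 = 0.9, β2 = 0.95) can be sketched as a plain function. The paper does not state what happens after warmup, so holding the rate constant afterwards is an assumption of this sketch:

```python
def lr_at_step(step: int, base_lr: float = 2e-5, warmup_steps: int = 20) -> float:
    """Learning rate for a given optimizer step under the reported setup.
    Linear warmup to base_lr over the first `warmup_steps`; a constant
    rate afterwards is assumed (the paper only specifies the warmup).
    Pairs with AdamW(betas=(0.9, 0.95), weight_decay=0.0)."""
    if step < warmup_steps:          # linear warmup: step 0 gets 1/20 of base_lr
        return base_lr * (step + 1) / warmup_steps
    return base_lr                   # assumed constant after warmup
```

In a real fine-tuning run this would typically be wired up via an optimizer scheduler rather than called directly.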
2309.00071#23
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
24
tools in prompts as observations for the LLM, due to input context length limitations. Meanwhile, the input of a subsequent tool often depends on the output of preceding tools, necessitating effective communication between tools. Thus, we propose the Candidate Bus, which is a separate memory to store the current item candidates, eliminating the need to append them to prompt inputs. The Candidate Bus, accessible by all tools, comprises two parts: a data bus for storing candidate items, and a tracker for recording each tool’s output. The candidate items in the data bus are initialized to include all items at the beginning of each conversation turn by default. At the start of each tool execution, candidate items are read from the data bus, and the data bus is then refreshed with the filtered items at the end of each tool execution. This mechanism allows candidate items to flow sequentially through the various tools in a streaming manner. Notably, users may explicitly specify a set of candidate items in the conversation, such as “Which of these movies do you think is most suitable for me: [Movie List]?” In this case, the LLM will call a special tool—the memory initialization tool—to set the user-specified items as the initial candidate items.
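The data-bus/tracker split can be sketched as a small class. This is our illustrative reading of the mechanism, not the released InteRecAgent code; the class and method names, and the toy filter tool, are ours:

```python
class CandidateBus:
    """Minimal sketch of the candidate memory bus: a shared store of
    candidate items plus a tracker of each tool call."""

    def __init__(self, all_items):
        self.all_items = list(all_items)
        self.reset()

    def reset(self, user_specified=None):
        # Each turn starts with all items by default, unless the user
        # explicitly named a candidate set (memory initialization tool).
        self.candidates = list(user_specified) if user_specified else list(self.all_items)
        self.tracker = []  # records of (tool_name, tool_input, tool_output)

    def run_tool(self, name, tool_fn, tool_input):
        # A tool reads the current candidates, filters them, and writes
        # the result back -- a funnel over the tool pipeline.
        self.candidates = tool_fn(self.candidates, tool_input)
        self.tracker.append((name, tool_input, f"{len(self.candidates)} candidates left"))
        return self.candidates

bus = CandidateBus(["Inception", "Up", "Heat", "Cars"])
bus.run_tool("HardFilter", lambda items, q: [i for i in items if q in i.lower()], "a")
```

After the call, the bus holds only items matching the condition, and the tracker records the call for later reflection.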
2308.16505#24
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
24
# 4.2 Extrapolation and Transfer Learning In Code Llama [31], a dataset with 16k context was used with a scale factor set to s ≈ 88.6, which corresponds to a context size of 355k. They show that the network extrapolates up to 100k context without ever seeing those context sizes during training. Similar to Section 3.1 and Rozière et al. [31], YaRN also supports training with a higher scale factor s than the length of the dataset. Due to compute constraints, we test only s = 32 by further fine-tuning the s = 16 model for 200 steps using the same dataset with 64k context. We show in Section 4.3.1 that the s = 32 model successfully extrapolates up to 128k context using only 64k context during training. Unlike previous "blind" interpolation methods, YaRN is much more efficient at transfer learning when increasing the scale s. This demonstrates successful transfer learning from s = 16 to s = 32 without the network needing to relearn the interpolated embeddings, as the s = 32 model is equivalent to the s = 16 model across the entire context size, despite only being trained on s = 32 for 200 steps. # 4.3 Evaluation The evaluations focus on three aspects: 1. the perplexity scores of fine-tuned models with extended context window, 2. the passkey retrieval task on fine-tuned models,
2309.00071#24
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
25
The tracker within the memory serves to record tool execution. Each tool call record is represented as a triplet (f_k, i_k, o_k), where f_k denotes the name of the k-th tool, and i_k, o_k are the input and output of the tool’s execution, such as the number of remaining candidates or runtime errors. The tracker’s main function is to aid the critic in making judgments within the reflection mechanism, acting as the o_t in reflect(·), as described in Section 3.4. With the help of the Candidate Bus component, items can be transmitted in a streaming manner between various tools and continuously filtered according to conditions, presenting a funnel-like structure for the recommendation. The tracker’s records can be considered as short-term memory for further reflection. We depict an example of the memory bus in the upper part of Figure 3. User Profile To facilitate the invocation of tools, we explicitly maintain a user profile in memory. This profile is
2308.16505#25
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
25
The evaluations focus on three aspects: 1. the perplexity scores of fine-tuned models with extended context window, 2. the passkey retrieval task on fine-tuned models, 3. the common LLM benchmark results of fine-tuned models. # 4.3.1 Long Sequence Language Modeling To evaluate the long sequence language modeling performances, we use the GovReport [18] and Proof-pile [4] datasets, both of which contain many long sequence samples. For all evaluations, the test splits of both datasets were used exclusively. All perplexity evaluations were calculated using the sliding window method from Press et al. [27] with S = 256. Firstly, we evaluated how the model performed as the context window increased. We selected 10 random samples from Proof-pile with at least 128k tokens each and evaluated the perplexity of each of these samples when truncated at 2k steps from a sequence length of 2k tokens through 128k tokens. Table 1 shows a side-by-side comparison of the Llama-2 model extended from 4096 to 8192 context length via PI (LLongMA-2 7b5), "NTK-aware" and YaRN. Note that the PI and "NTK-aware" models were trained using the methodology in Chen et al. [9], while YaRN used the same methodology but 2.5x fewer training steps and data, as described in Section 4.
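The sliding-window evaluation with stride S can be sketched generically. This is our illustration of the strided scheme (each window advances by S tokens and only the newly revealed tokens are scored, conditioned on the rest of the window); the model call is abstracted behind an `nll_fn` stand-in rather than a real LLM:

```python
import math

def sliding_window_ppl(token_ids, nll_fn, window=4096, stride=256):
    """Sliding-window perplexity in the style of Press et al.
    `nll_fn(context, targets)` stands in for a model call returning the
    summed negative log-likelihood of `targets` given `context`."""
    total_nll, total_tokens = 0.0, 0
    for begin in range(0, len(token_ids), stride):
        end = min(begin + stride, len(token_ids))
        ctx_start = max(0, end - window)          # keep at most `window` tokens
        context = token_ids[ctx_start:begin]      # conditioning prefix
        targets = token_ids[begin:end]            # only the new `stride` tokens are scored
        total_nll += nll_fn(context, targets)
        total_tokens += len(targets)
    return math.exp(total_nll / total_tokens)
```

With a real model, `nll_fn` would run a forward pass over context + targets and sum the per-token losses on the target span only.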
2309.00071#25
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
26
User Profile To facilitate the invocation of tools, we explicitly maintain a user profile in memory. This profile is structured as a dictionary that encapsulates three facets of user preference: “like”, “dislike”, and “expect”. The “like” and “dislike” facets reflect the user’s favorable and unfavorable tastes, respectively, whereas “expect” monitors the user’s immediate requests during the current dialogue, such as conducting a search, which is not necessarily indicative of the user’s inherent preferences. Each facet may contain content that includes item names or categories.
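The three-facet dictionary can be illustrated directly. The sample items and the flattening helper below are our own (the paper specifies the facet structure but not a concrete tool query interface):

```python
# Three facets of user preference; "expect" tracks immediate requests in
# the current dialogue, not stable taste.
profile = {
    "like": ["sci-fi", "Inception"],
    "dislike": ["horror"],
    "expect": ["a space exploration movie"],
}

def profile_to_tool_query(profile):
    """Flatten the profile into filter conditions a recommender tool could
    consume. This particular query shape is an assumption for illustration."""
    return {
        "must_match": profile["like"] + profile["expect"],
        "must_exclude": profile["dislike"],
    }
```
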
2308.16505#26
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
26
5 LLongMA-2 7b [28] is fine-tuned from Llama-2 7b, trained at 8k context length with PI using the RedPajama dataset [12].

| Extension Method | Trained Tokens | Context Window | 2048 | 4096 | 6144 | 8192 | 10240 |
|---|---|---|---|---|---|---|---|
| PI (s = 2) | 1B | 8k | 3.92 | 3.51 | 3.51 | 3.34 | 8.07 |
| NTK (θ = 20k) | 1B | 8k | 4.20 | 3.75 | 3.74 | 3.59 | 6.24 |
| YaRN (s = 2) | 400M | 8k | 3.91 | 3.50 | 3.51 | 3.35 | 6.04 |

Table 1: Sliding window perplexity (S = 256) of ten 128k Proof-pile documents over Llama-2 extended via PI, NTK and YaRN. The last five columns give the evaluation context window size.

We further evaluated YaRN at the scale factors s = 16, 32 and compared them against a few open-source models fine-tuned from Llama-2 and extended to more than 32k context window, such as Together.ai [37] and "NTK-aware" Code Llama [31]. The results are summarized in Table 2 (with a more detailed plot in Figure 1).
2309.00071#26
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
27
User profiles are synthesized by LLMs based on conversation history. To address situations where the conversation history grows excessively long, such as in lifelong learning scenarios where conversations from all days may be stored for ongoing interactions, we devise two distinct user profiles: one representing long-term memory and another for short-term memory. Should the current dialogue exceed the LLM’s input window size, we partition the dialogue, retrieve the user profile from the preceding segment, and merge it with the existing long-term memory to update the memory state. The short-term memory is consistently derived from the most recent conversations within the current prompt. When it comes to tool invocation, a comprehensive user profile is formed by the combination of both long-term and short-term memories. # 3.3 Plan-first Execution with Dynamic Demonstrations
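The long-term/short-term split can be sketched as a single update function. This is our reading of the mechanism; `summarize` stands in for the LLM call that extracts a {like, dislike, expect} profile from a stretch of conversation, and the two-segment short-term window is an illustrative choice:

```python
def update_memory(long_term, dialogue_segments, summarize, window=2):
    """Fold dialogue segments that no longer fit the LLM input window into
    long-term memory; keep only the newest segments as short-term memory.
    Returns (short_term, merged), where `merged` is the comprehensive
    profile used for tool invocation."""
    older, recent = dialogue_segments[:-window], dialogue_segments[-window:]
    for seg in older:                               # merge older profiles into long-term
        for facet, items in summarize(seg).items():
            for item in items:
                if item not in long_term[facet]:
                    long_term[facet].append(item)
    short_term = summarize(" ".join(recent))        # always from the most recent turns
    # tool invocation sees the union of long- and short-term memory
    merged = {facet: long_term[facet]
              + [x for x in short_term[facet] if x not in long_term[facet]]
              for facet in ("like", "dislike", "expect")}
    return short_term, merged
```
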
2308.16505#27
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
27
| Size | Model Name | Context Window | Extension Method | 8192 | 32768 | 65536 | 98304 | 131072 |
|---|---|---|---|---|---|---|---|---|
| 7B | Together | 32k | PI | 3.50 | 2.64 | > 10^2 | > 10^3 | > 10^4 |
| 7B | Code Llama | 100k | NTK | 3.71 | 2.74 | 2.55 | 2.54 | 2.71 |
| 7B | YaRN (s = 16) | 64k | YaRN | 3.51 | 2.65 | 2.42 | > 10^1 | > 10^1 |
| 7B | YaRN (s = 32) | 128k | YaRN | 3.56 | 2.70 | 2.45 | 2.36 | 2.37 |
| 13B | Code Llama | 100k | NTK | 3.54 | 2.63 | 2.41 | 2.37 | 2.54 |
| 13B | YaRN (s = 16) | 64k | YaRN | 3.25 | 2.50 | 2.29 | > 10^1 | > 10^1 |
| 13B | YaRN (s = 32) | 128k | YaRN | 3.29 | 2.53 | 2.31 | 2.23 | 2.24 |

Table 2: Sliding window perplexity (S = 256) of ten 128k Proof-pile documents truncated to evaluation context window size. The last five columns give the evaluation context window size.
2309.00071#27
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
28
# 3.3 Plan-first Execution with Dynamic Demonstrations Rather than using the step-by-step approach, we adopt a two-phase method. In the first phase, we prompt the LLM to generate a complete tool execution plan based on the user’s intention derived from the dialogue. In the second phase, the LLM strictly adheres to the plan, calling tools in sequence while allowing them to communicate via the Candidate Bus. Concretely, the plan-first execution consists of the following two phases. • Plan: The LLM accepts the user’s current input x^t, dialogue context C^{t−1}, descriptions of various tools F, and demonstrations D_{x^t} for in-context learning. The LLM formulates a tool usage plan based on user intent and preferences, providing inputs for each tool, i.e., p^t = {p^t_1, ..., p^t_n} = plan(x^t, C^{t−1}, F, D_{x^t}), where p^t_k = (f_k, i_k) consists of the tool f_k and its input i_k.
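The two phases can be sketched as a short driver loop. This is our illustration of plan-first execution, not the paper's code; `llm_plan` stands in for the planning LLM call, and the toy tools are ours:

```python
def plan_first_execute(llm_plan, tools, user_input, context, candidates):
    """Two-phase sketch: one LLM call yields the full plan
    p = [(tool_name, tool_input), ...]; execution then follows the plan
    strictly, with items flowing tool-to-tool as on the Candidate Bus."""
    plan = llm_plan(user_input, context)        # phase 1: plan once, up front
    for tool_name, tool_input in plan:          # phase 2: execute in order
        candidates = tools[tool_name](candidates, tool_input)
    return candidates
```

In contrast to step-by-step (ReAct-style) tool use, the LLM here is invoked only once per turn for planning, which reduces LLM calls and lets each tool's output stream straight into the next tool.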
2308.16505#28
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
28
Table 2: Sliding window perplexity (S = 256) of ten 128k Proof-pile documents truncated to evaluation context window size We observe that the model exhibits strong performance across the entire targeted context size, with YaRN interpolation being the first method to successfully extend the effective context size of Llama 2 to 128k. Of particular note are the YaRN (s = 32) models, which show continued declining perplexity through 128k, despite the fine-tuning data being limited to 64k tokens in length, demonstrating that the model is able to generalize to unseen context lengths. Furthermore, in Appendix B.1, we show the results of the average perplexity on 50 untruncated GovReport documents with at least 16k tokens per sample evaluated on the setting of 32k maximal context window without Dynamic Scaling in Table 4. Similar to the Proof-pile results, the GovReport results show that fine-tuning with YaRN achieves good performance on long sequences. # 4.3.2 Passkey Retrieval
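The sliding-window perplexity used above can be illustrated with a toy computation: the window advances by the stride S, each token's (here, fabricated) log-probability is counted exactly once, and perplexity is the exponentiated average negative log-likelihood. This is a sketch of the general technique, not the paper's evaluation code.

```python
import math

def sliding_window_ppl(token_logprobs, max_len, stride):
    """Perplexity over per-token log-probs, scored with a sliding window."""
    nll, n, prev_end = 0.0, 0, 0
    for begin in range(0, len(token_logprobs), stride):
        end = min(begin + max_len, len(token_logprobs))
        trg = end - prev_end  # tokens not yet scored by an earlier window
        nll += -sum(token_logprobs[end - trg:end])
        n += trg
        prev_end = end
        if end == len(token_logprobs):
            break
    return math.exp(nll / n)

lps = [-1.0] * 1000  # fake uniform per-token log-probs
print(sliding_window_ppl(lps, max_len=512, stride=256))  # → e ≈ 2.718
```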
2309.00071#28
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
29
• Execution: The tool executor invokes the tools step-by-step according to the plan p^t and obtains outputs from each tool, i.e., o^t = {o^t_1, · · · , o^t_n} = exec(p^t, F). The output feedback of each tool f_k is defined as o^t_k, where only the item information o^t_n from the last tool's output serves as the LLM's observation for generating the response y^t. The remaining information is tracked by the candidate memory bus for further reflection (see Section 3.4).
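The Execution phase can be sketched as a loop over the plan in which tools exchange intermediate candidates through a shared bus, and only the last tool's output becomes the LLM's observation. The tool implementations and names below are illustrative stand-ins, not InteRecAgent's actual tools.

```python
class CandidateBus:
    """Shared channel through which tools pass candidate items."""
    def __init__(self):
        self.candidates = []

def retrieve(query, bus):
    # a real retrieval tool would query an item catalog
    bus.candidates = ["Portal 2", "Half-Life", "Doom"]
    return bus.candidates

def rank(query, bus):
    # consume the candidates left on the bus by the previous tool
    return sorted(bus.candidates)

TOOLS = {"retrieve": retrieve, "rank": rank}

def execute(plan, bus):
    outputs = []
    for tool_name, tool_input in plan:
        outputs.append(TOOLS[tool_name](tool_input, bus))
    return outputs  # outputs[-1] is the observation handed to the LLM

bus = CandidateBus()
outs = execute([("retrieve", "shooter games"), ("rank", "by title")], bus)
print(outs[-1])  # → ['Doom', 'Half-Life', 'Portal 2']
```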
2308.16505#29
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
29
# 4.3.2 Passkey Retrieval The passkey retrieval task as defined in [25] measures a model's ability to retrieve a simple passkey (i.e., a five-digit number) from amongst a large amount of otherwise meaningless text. For our evaluation of the models, we performed 10 iterations of the passkey retrieval task with the passkey placed at a location uniformly distributed across the evaluation context window, for context window sizes ranging from 8k to 128k. Both the 7b and 13b models fine-tuned using YaRN at 128k context size pass the passkey retrieval task with very high accuracy (> 99%) within the entire context window. We show detailed results in Appendix B.2. # 4.3.3 Standardized Benchmarks The Hugging Face Open LLM Leaderboard [19] compares a multitude of LLMs across a standardized set of four public benchmarks. Specifically, we use 25-shot ARC-Challenge [11], 10-shot HellaSwag [41], 5-shot MMLU [17], and 0-shot TruthfulQA [23]. To test the degradation of model performance under context extension, we evaluated our models using this suite and compared the results to established scores for the Llama 2 baselines as well as publicly available PI and "NTK-aware" models. The results are summarized in Table 3.
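The passkey evaluation can be sketched as follows: hide a five-digit passkey at a uniformly random depth inside filler text, ask the model for it, and check the answer. `fake_model` is a hypothetical stand-in for a real LLM call, and the filler sentence is illustrative.

```python
import random

def make_prompt(passkey, n_filler, rng):
    # filler text with the passkey sentence inserted at a random position
    filler = ["The grass is green. The sky is blue."] * n_filler
    pos = rng.randrange(len(filler) + 1)
    filler.insert(pos, f"The pass key is {passkey}. Remember it.")
    return " ".join(filler) + " What is the pass key?"

def fake_model(prompt):
    # trivially "retrieves" the passkey from the prompt, for illustration only
    idx = prompt.find("pass key is ") + len("pass key is ")
    return prompt[idx:idx + 5]

rng = random.Random(0)
passkey = f"{rng.randrange(10000, 100000)}"  # always five digits
prompt = make_prompt(passkey, 200, rng)
print(fake_model(prompt) == passkey)  # → True
```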
2309.00071#29
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
30
We summarize the differences between our plan-first execution strategy and the step-by-step strategy in Table 1 from six aspects. Fundamentally, the step-by-step strategy alternates reasoning and action execution, while our plan-first execution is a two-phase strategy, where one-time planning is followed by a series of executions. In the step-by-step strategy, the LLM is responsible for thinking and reasoning at each step. Each step entails reasoning over an individual observation, which makes in-context learning challenging due to the difficulty of crafting demonstrations comprising dynamic observations. Differently, the primary task of the LLM in our plan-first execution is to make a tool-utilization plan, which can easily be guided by ⟨query, plan⟩ pairs. The foremost advantage of our plan-first execution resides in the reduction of API calls. When employing N steps to address a task, our strategy necessitates merely 2 API calls, as opposed to N+1 calls in ReAct. This leads to a decrease in latency, which is of particular importance in conversational settings.
2308.16505#30
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
30
| Model Size | Model Name | Context Window | Extension Method | ARC-c | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|---|---|
| 7B | Llama 2 | 4k | None | 53.1 | 77.8 | 43.8 | 39.0 |
| 7B | Together | 32k | PI | 47.6 | 76.1 | 43.3 | 39.2 |
| 7B | Code Llama | 100k | NTK | 39.9 | 60.8 | 31.1 | 37.8 |
| 7B | YaRN (s = 16) | 64k | YaRN | 52.3 | 78.8 | 42.5 | 38.2 |
| 7B | YaRN (s = 32) | 128k | YaRN | 52.1 | 78.4 | 41.7 | 37.3 |
| 13B | Llama 2 | 4k | None | 59.4 | 82.1 | 55.8 | 37.4 |
| 13B | Code Llama | 100k | NTK | 40.9 | 63.4 | 32.8 | 43.8 |
| 13B | YaRN (s = 16) | 64k | YaRN | 58.1 | 82.3 | 52.8 | 37.8 |
| 13B | YaRN (s = 32) | 128k | YaRN | 58.0 | 82.2 | 51.9 | 37.3 |

Table 3: Performance of context window extension methods on the Hugging Face Open LLM benchmark suite compared with original Llama 2 baselines
2309.00071#30
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
31
Table 1: Property Comparisons between ReAct and Plan-first Execution. ICL is the abbreviation of In-Context Learning.

| Property | ReAct | Plan-first Exe |
|---|---|---|
| Basic Idea | step-wise reason | task-wise plan |
| ICL | hard | easy |
| Reflection | internal | external |
| # API Call | N+1 | 2 |
| Latency | (N + 1)∆t_api + ∆t_exe | 2∆t_api + ∆t_exe |

In order to improve the planning capability of the LLM, demonstrations D_{x^t} are injected into prompts for in-context learning in the Plan phase. Each demonstration consists of a user intent x and a tool execution path p. However, the number of demonstrations is strictly limited by the contextual length that the LLM can process, which makes the quality of demonstrations of paramount importance. To address this challenge, we introduce a dynamic demonstration strategy, where only a few demonstrations that are most similar to the current user intent are incorporated into the prompt. For example, if the current user input is "My game history is Call of Duty and Fortnite, please give me some recommendations", then a demonstration with user intent "I enjoyed ITEM1, ITEM2 in the past, give me some suggestions" may be retrieved as a high-quality demonstration.
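The dynamic demonstration selection above can be sketched with a toy similarity retriever: embed the current intent and each demonstration's intent, then keep the top-k most similar ⟨query, plan⟩ pairs for the prompt. A bag-of-words cosine similarity stands in here for whatever embedding model a real system would use; the demo contents are illustrative.

```python
import math
from collections import Counter

def embed(text):
    # toy bag-of-words "embedding"; a real system would use a neural encoder
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_demos(query, demos, k=1):
    q = embed(query)
    return sorted(demos, key=lambda d: cosine(q, embed(d["intent"])),
                  reverse=True)[:k]

demos = [
    {"intent": "I enjoyed ITEM1, ITEM2 in the past, give me some suggestions",
     "plan": "[retrieve -> rank]"},
    {"intent": "What is the release date of ITEM1?", "plan": "[lookup]"},
]
best = top_k_demos("My game history is Call of Duty and Fortnite, "
                   "please give me some recommendations", demos)
print(best[0]["plan"])  # → [retrieve -> rank]
```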
2308.16505#31
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
31
We observe that there is minimal performance degradation between the YaRN models and their respective Llama 2 baselines. We also observe that there was on average a 0.49% drop in scores between the YaRN s = 16 and s = 32 models. From this we conclude that the iterative extension from 64k to 128k results in negligible performance loss. # 5 Conclusion In conclusion, we have shown that YaRN improves upon all existing RoPE interpolation methods and can act as a drop-in replacement for PI, with no downsides and minimal implementation effort. The fine-tuned models preserve their original abilities on multiple benchmarks while being able to attend to a very large context size. Furthermore, YaRN allows efficient extrapolation with fine-tuning on shorter datasets and can take advantage of transfer learning for faster convergence, both of which are crucial under compute-constrained scenarios. Finally, we have shown the effectiveness of extrapolation with YaRN where it is able to "train short, and test long".
2309.00071#31
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
32
Inspired by Self-Instruct (Madaan et al. 2023), we use the LLM to generate demonstrations of tool-using plans in the form of (x, p). First, we manually write some (~20) typical user intents and the corresponding executions as seed demonstrations; then, we use the input-first and output-first strategies to generate more demonstrations using the LLM. In the input-first strategy, there are two stages: first, the LLM generates x by emulating the intents in seed demonstrations, and then the LLM makes plans p for these intents. The output-first method consists of three stages: first, we provide the LLM with a plan p and generate the corresponding user intent x. Then, we use the LLM to make plans ˜p for the intent, and finally, we verify whether the generated plan ˜p is consistent with the given plan p. Inconsistency indicates that the quality of the generated intent is not high enough, and we only retain the consistent demonstrations. The output-first method allows us to obtain demonstrations corresponding to all available plans, providing diversity for the demonstrations. Examples generated by input-first and output-first are illustrated in Figure 2.
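The output-first consistency filter can be sketched as follows: given a seed plan p, an LLM invents an intent x, a second LLM call re-plans for x to get ˜p, and the demonstration is kept only when ˜p matches p. Both LLM calls are stubbed with fixed outputs here purely for illustration.

```python
def generate_intent(plan):
    # stub for "LLM generates a user intent matching this plan"
    return "I liked ITEM1, recommend similar games"

def replan(intent):
    # stub for "LLM makes a plan for this intent"
    return ["retrieve", "rank"]

def consistent_demo(seed_plan):
    """Return a (intent, plan) demo only if re-planning reproduces seed_plan."""
    intent = generate_intent(seed_plan)
    replanned = replan(intent)
    return ({"intent": intent, "plan": seed_plan}
            if replanned == seed_plan else None)

print(consistent_demo(["retrieve", "rank"]) is not None)  # → True (kept)
print(consistent_demo(["lookup"]) is None)                # → True (discarded)
```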
2308.16505#32
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
32
# 6 Reproducibility To aid in reproducibility, we provide, as supplementary material, the entirety of the code used to train the YaRN models in Table 2, as well as the evaluation code that produced Figure 1 and Tables 1, 2, 3, 4, and 5. The code also contains implementations of various extension methods referenced throughout the paper. For training YaRN, we used the publicly available PG19 dataset [29] tokenized to 64k tokens. # References [1] Mistrallite. URL https://huggingface.co/amazon/MistralLite. [2] Introducing Qwen-7B: Open foundation and human-aligned models (of the state-of-the-arts). URL https://github.com/QwenLM/Qwen-7B/blob/main/tech_memo.md. [3] Long-data collections. URL https://huggingface.co/datasets/togethercomputer/Long-Data-Collections. [4] Z. Azerbayev, E. Ayers, and B. Piotrowski. Proof-pile, 2022. URL https://github.com/zhangir-azerbayev/proof-pile.
2309.00071#32
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
33
# 3.4 Reflection Despite the LLM's strong intelligence, it still exhibits occasional errors in reasoning and tool utilization (Madaan et al. 2023; Shinn et al. 2023). For example, it may violate instructions in the prompt by selecting a non-existent tool, omit or overuse some tools, or fail to prepare tool inputs in the proper format, resulting in errors in tool execution. To reduce the occurrence of such errors, some studies have employed self-reflection mechanisms (Shinn et al. 2023) to give the LLM some error-correcting capability during decision-making. In InteRecAgent, we utilize an actor-critic reflection mechanism to enhance the agent's robustness and error-correcting ability. In the following part, we formalize this self-reflection mechanism.
2308.16505#33
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
33
[5] S. Black, S. Biderman, E. Hallahan, Q. Anthony, L. Gao, L. Golding, H. He, C. Leahy, K. McDonell, J. Phang, M. Pieler, U. S. Prashanth, S. Purohit, L. Reynolds, J. Tow, B. Wang, and S. Weinbach. GPT-NeoX-20B: An open-source autoregressive language model, 2022. arXiv: 2204.06745. [6] bloc97. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation, 2023. URL https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/. [7] bloc97. Add NTK-Aware interpolation "by parts" correction, 2023. URL https://github.com/jquesnelle/scaled-rope/pull/1. [8] C. Chen. Transformer Inference Arithmetic, 2022. URL https://kipp.ly/blog/transformer-inference-arithmetic/.
2309.00071#33
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
34
Assume that in the t-th round, the dialogue context is C t−1 and the current user input is xt. The actor is an LLM equipped with tools and guided by the dynamic demonstration-augmented plan-first execution mechanism. For the user input, the actor makes a plan pt, obtains the tools' output ot, and generates the response yt. The critic evaluates the behavioral decisions of the actor. The execution steps of the reflection mechanism are listed as follows: • Step 1: The critic evaluates the actor's output pt, ot and yt under the current dialogue context and obtains the judgment γ = reflect(xt, C t−1, pt, ot, yt). • Step 2: When the judgment γ is positive, it indicates that the actor's execution and response are reasonable, and the response yt is directly provided to the user, ending the reflection phase. When the judgment γ is negative, it indicates that the actor's execution or response is unreasonable. The feedback γ is used as a signal instructing the actor to re-chain, and is passed as an additional input to plan(·).
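The two-step reflection loop described in this chunk can be sketched as plain control flow. This is a minimal illustration, not the paper's implementation: `plan`, `execute`, `respond`, and `reflect` stand in for LLM and tool calls, and are stubbed here so the loop is runnable on its own.

```python
# Hedged sketch of the actor-critic reflection loop: the actor plans,
# executes tools, and responds; the critic judges; a negative judgment
# is fed back to the actor's planner for re-chaining.

def make_stub_actor_critic():
    def plan(x, context, feedback=None):
        # The actor drafts a tool-execution plan; critic feedback from a
        # rejected round is incorporated when re-chaining.
        if feedback:
            return ["SQL Retrieval Tool", "Ranking Tool"]
        return ["Ranking Tool"]  # deliberately incomplete first attempt

    def execute(p):
        return f"output of {p}"

    def respond(x, context, o):
        return f"response based on {o}"

    def reflect(x, context, p, o, y):
        # The critic approves only plans that retrieve before ranking.
        ok = p[0] == "SQL Retrieval Tool"
        feedback = None if ok else "Retrieval is missing; retrieve before ranking."
        return ok, feedback

    return plan, execute, respond, reflect

def reflection_round(x, context, plan, execute, respond, reflect, max_retries=2):
    feedback = None
    for _ in range(max_retries + 1):
        p = plan(x, context, feedback)                      # actor plans
        o = execute(p)                                      # actor runs tools
        y = respond(x, context, o)                          # actor drafts reply
        positive, feedback = reflect(x, context, p, o, y)   # Step 1: critic judges
        if positive:                                        # Step 2: accept...
            return y
    return y                                                # ...or give up re-chaining

plan, execute, respond, reflect = make_stub_actor_critic()
answer = reflection_round("some puzzle games", [], plan, execute, respond, reflect)
```

With the stubs above, the first plan is rejected, the critic's feedback triggers one re-chain, and the second plan is accepted.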
2308.16505#34
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
35
In the actor-critic reflection mechanism, the actor is responsible for the challenging plan-making task, while the critic is responsible for the relatively simple evaluation task. The two agents cooperate on two different types of tasks

Intent (by GPT-4): Can you suggest some TYPE1 and TYPE2 items based on my preferences: ITEM1, ITEM2, and ITEM3? Plan (by GPT-4): 1. SQL Retrieval Tool (TYPE1 and TYPE2); 2. Ranking Tool (by preference using ITEM1, ITEM2, and ITEM3); 3. Candidate Fetching Tool.

Intent (by GPT-4): I have a list of items: ITEM1, ITEM2, ITEM3. I want a TYPE item that is similar to ITEM, and please rank them based on my preferences. Plan (by GPT-4): 1. Candidates Storing Tool (ITEM1, ITEM2, ITEM3); 2. SQL Retrieval Tool (TYPE); 3. ItemCF Retrieval Tool (ITEM); 4. Ranking Tool (by preference); 5. Candidate Fetching Tool.

Figure 2: Examples of generated demonstrations in game domain.
2308.16505#35
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
35
[10] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child,
2309.00071#35
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
36
Figure 2: Examples of generated demonstrations in game domain.

Figure 3: Example of memory bus (upper) and reflection (lower). The upper part depicts the memory bus as a pipeline over candidate item IDs: Init → Hard Filtering → Soft Filtering → Ranking. The lower part depicts reflection: a user who enjoyed certain games asks for some puzzle games; after the plan is executed, the critic's feedback notes that ranking is missing from the plan and that the actor should rank with the user history.

and mutually reinforce each other through in-context interactions. This endows InteRecAgent with enhanced robustness to errors and improved error-correction capabilities, culminating in more precise tool utilization and recommendations. An example of reflection is shown in the lower part of Figure 3.

# 3.5 Tool Learning with Small Language Models
2308.16505#36
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
37
# 3.5 Tool Learning with Small Language Models

The default LLM serving as the brain is GPT-4, chosen for its exceptional ability to follow instructions compared to other LLMs. We are intrigued by the possibility of distilling GPT-4's proficiency in instruction following into smaller language models (SLMs) such as the 7B-parameter Llama, aiming to reduce the costs associated with large-scale online services and to democratize our InteRecAgent framework for small and medium-sized business clients. To achieve this, we utilize GPT-4 to create a specialized dataset comprising pairs of [instruction, tool execution plan]. The "instruction" element encompasses both the system prompt and the user-agent conversation history, acting as the input to elicit a tool execution plan from the LLM; the "tool execution plan" is the output crafted by GPT-4, which serves as the target for fine-tuning Llama-7B. We denote the fine-tuned version of this model as RecLlama.
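The distillation data described above pairs an instruction (system prompt plus conversation history) with the teacher's tool execution plan. A minimal sketch of assembling one such supervised fine-tuning record follows; the field names (`instruction`, `output`), the JSONL layout, and the example plan are illustrative assumptions, not taken from the paper.

```python
import json

# Sketch: build one [instruction, tool-execution-plan] pair for
# distilling the GPT-4 planner into a smaller model (RecLlama-style).

def make_sft_record(system_prompt, history, plan):
    # "instruction" = system prompt + user-agent conversation history;
    # "output" = the teacher model's tool execution plan.
    convo = "\n".join(f"{role}: {text}" for role, text in history)
    return {"instruction": f"{system_prompt}\n{convo}", "output": plan}

record = make_sft_record(
    "You are a recommender agent with tools.",
    [("User", "I liked Portal. Suggest a similar puzzle game."),
     ("Agent", "Sure, let me look.")],
    "1. ItemCF Retrieval Tool (Portal); 2. Ranking Tool (by preference); "
    "3. Candidate Fetching Tool.",
)
line = json.dumps(record)  # one JSONL line of the distillation dataset
```

Each user-agent exchange in a collected dialogue would yield one such record, matching the per-exchange sampling described in the next chunk.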
2308.16505#37
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
37
[11] P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think you have solved question answering? try ARC, the AI2 Reasoning Challenge, 2018. arXiv: 1803.05457. [12] T. Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023. URL https://github.com/togethercomputer/RedPajama-Data. [13] T. Dao. Flashattention-2: Faster attention with better parallelism and work partitioning, 2023. arXiv: 2307.08691. [14] emozilla. Dynamically Scaled RoPE further increases performance of long context LLaMA with zero fine-tuning, 2023. URL https://www.reddit.com/r/LocalLLaMA/comments/ 14mrgpr/dynamically_scaled_rope_further_increases/. [15] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning, 2017. arXiv: 1705.03122.
2309.00071#37
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
38
To ensure the high quality of the RecLlama dataset, we employ two methods to generate data samples. The first method gathers samples from dialogues between a user simulator and a recommender agent powered by GPT-4. Note that during one conversation, each user-agent exchange produces one data sample, capturing the full range of GPT-4's responses to the evolving context of the conversation. However, this method might not encompass a sufficiently diverse array of tool execution scenarios due to the finite number of training samples we can manage. Therefore, we complement it with a second method wherein we initially craft 30 varied dialogues designed to span a wide range of tool execution combinations. Then, for each iteration, we select three of these dialogues at random and prompt GPT-4 to generate both a conversation history and a suitable tool execution plan. This approach significantly enhances the diversity of the RecLlama dataset. To evaluate RecLlama's capacity for domain generalization, we limit the generation of training data to the Steam and MovieLens datasets, excluding the Beauty dataset (the details of the datasets will be elaborated in Section 4.1). The final RecLlama dataset comprises 16,183 samples, with 13,525 derived from the first method and 2,658 from the second. # 4 Experiments
2308.16505#38
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
38
[16] C. Han, Q. Wang, W. Xiong, Y. Chen, H. Ji, and S. Wang. LM-Infinite: Simple on-the-fly length generalization for large language models, 2023. arXiv: 2308.16137. [17] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR), 2021. [18] L. Huang, S. Cao, N. Parulian, H. Ji, and L. Wang. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1419–1436. Association for Computational Linguistics, June 2021. [19] Hugging Face. Open LLM Leaderboard, 2023. URL https://huggingface.co/spaces/ HuggingFaceH4/open_llm_leaderboard.
2309.00071#38
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
39
# 4 Experiments 4.1 Experimental Setup Evaluation Strategies. Evaluating conversational recommender systems presents a challenge, as the seeker communicates their preferences and the recommendation agent provides suggestions through natural, open-ended dialogues. To enable quantitative assessment of InteRecAgent, we design the following two evaluation strategies: (1) User Simulator. We manually tune a role-playing prompt to facilitate GPT-4 in emulating real-world users with varying preferences. A simulated user's preference is ascertained by injecting their historical behaviors into the role-playing prompt, leaving out the last item in their history as the target of their next interest. Following this, the simulated user engages with the recommendation agent to discover content that fits their interest. In this way, GPT-4 operates from the standpoint of the user, swiftly reacting to the recommended outcomes, thereby crafting a more natural dialogue scenario. This approach is utilized to assess the efficacy of InteRecAgent within multi-turn dialogue settings. An illustrative example of a user simulator prompt can be found in Figure 4.
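The leave-last-out simulator setup described here can be sketched directly: the last item of a user's history becomes the hidden target, and the remainder seeds the role-playing prompt. The template below paraphrases the opening of the prompt quoted elsewhere in the paper; the helper name and example item lists are hypothetical.

```python
# Sketch: build a user-simulator prompt by holding out the last item of
# the user's history as the target of their next interest.

PROMPT_TEMPLATE = (
    "You are a user chatting with a recommender for {item} recommendation "
    "in turn. Your history is {history}. Your target items: {target}."
)

def build_simulator_prompt(domain, full_history):
    *history, target = full_history  # leave out the last item as the target
    prompt = PROMPT_TEMPLATE.format(
        item=domain, history=", ".join(history), target=target
    )
    return prompt, target

prompt, target = build_simulator_prompt(
    "game", ["Portal", "Braid", "The Witness", "Baba Is You"]
)
```

The resulting prompt lists only the first three games as history, while the held-out fourth game is what the simulated user is trying to be recommended.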
2308.16505#39
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
39
[20] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. de las Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier, L. R. Lavaud, M.-A. Lachaux, P. Stock, T. L. Scao, T. Lavril, T. Wang, T. Lacroix, and W. E. Sayed. Mistral 7b, 2023. [21] kaiokendev. Things I'm learning while training superhot, 2023. URL https://kaiokendev. github.io/til#extending-context-to-8k. [22] A. Kazemnejad, I. Padhi, K. N. Ramamurthy, P. Das, and S. Reddy. The impact of positional encoding on length generalization in transformers, 2023. arXiv: 2305.19466. [23] S. Lin, J. Hilton, and O. Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, May 2022.
2309.00071#39
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
40
The default configuration for the user simulator is set to “session-wise”. This implies that the agent will only access content within the current dialogue session, and its memory will be cleared once the user either successfully locates what they are seeking or fails to do so. The number of conversation turns in the “session-wise” setting is usually limited; thus, the long-term memory module in InteRecAgent will not be activated. In order to assess the performance while handling “lifelong memory” (refer to Section 3.2), we have formulated two strategies for simulating extended dialogues. The first strategy, referred to as LONG-CHAT, mandates extended conversations between the user and the recommendation agent. This is achieved by alternately incorporating three types of chat intents within the user simulator: sharing history, detailing the target item, and participating in casual conversation. You are a user chatting with a recommender for {item} recommendation in turn. Your history is {history}. Your target items: {target}. Here is the information about target you could use: {target item info}. You must follow the rules below during chat. If the recommender recommends {target}, you should accept. If the recommender
2308.16505#40
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
40
[24] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. [25] A. Mohtashami and M. Jaggi. Landmark attention: Random-access infinite context length for transformers, 2023. arXiv: 2305.16300. [26] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, pages 8024–8035, 2019. [27] O. Press, N. Smith, and M. Lewis. Train Short, Test Long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations, 2022.
2309.00071#40
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art in context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
41
information about target you could use: {target item info}. You must follow the rules below during chat. If the recommender recommends {target}, you should accept. If the recommender recommends other items, you should refuse them and provide the information about {target}. If the recommender asks for your preference, you should provide the information about {target}. You could provide your history. Your output is only allowed to be the words from the user you act. If you think the conversation comes to an ending, output a ⟨END⟩. You should never directly tell the target item. Only use the provided information about the target. Never give many details about the target items at one time. Less than 3 conditions is better. Now lets start, you first, act as a user. Here are the previous conversation you have completed: {chat history}.
2308.16505#41
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
41
[28] J. Quesnelle, E. Shippole, and "Kaiokendev". Llongma: Scaling rotary embeddings through linear positional interpolation. https://huggingface.co/conceptofmind/LLongMA-2-7b/, 2023. [29] J. W. Rae, A. Potapenko, S. M. Jayakumar, C. Hillier, and T. P. Lillicrap. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations, 2020. [30] A. Roberts, C. Raffel, K. Lee, M. Matena, N. Shazeer, P. J. Liu, S. Narang, W. Li, and Y. Zhou. Exploring the limits of transfer learning with a unified text-to-text transformer. Technical report, Google, 2019.
2309.00071#41
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art in context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
42
Figure 4: Prompt for user simulator. The simulator alternates between providing information (either historical or target-related) and casual chat every five rounds. During this process, if the agent mentions the target item, the conversation can be terminated and labeled as a success. The second strategy, referred to as LONG-CONTEXT, initially synthesizes multi-day conversations utilizing user history. Subsequently, based on these extended dialogues, the user simulator interacts with the agent in a manner akin to the “session-wise” setting. For our method, the lengthy conversation history is loaded into the long-term memory module. However, for baseline methods, the extended conversation history will be truncated if it surpasses the maximum window size of the LLM.
2308.16505#42
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]