Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models
Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Ruoxi Jia, Ming Jin
arXiv:2308.10379 [cs.CL, cs.AI], http://arxiv.org/pdf/2308.10379 (submitted 2023-08-20, revised 2023-09-28)

Abstract: Current literature, aiming to surpass the "Chain-of-Thought" approach, often resorts to an external modus operandi involving halting, modifying, and then resuming the generation process to boost Large Language Models' (LLMs) reasoning capacities. This mode escalates the number of query requests, leading to increased costs, memory, and computational overheads. Addressing this, we propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through algorithmic reasoning pathways, pioneering a new mode of in-context learning. By employing algorithmic examples, we exploit the innate recurrence dynamics of LLMs, expanding their idea exploration with merely one or a few queries. Our technique outperforms earlier single-query methods and stands on par with a recent multi-query strategy that employs an extensive tree search algorithm. Intriguingly, our results suggest that instructing an LLM using an algorithm can lead to performance surpassing that of the algorithm itself, hinting at the LLM's inherent ability to weave its intuition into optimized searches. We probe into the underpinnings of our method's efficacy and its nuances in application.

Considering the second vertical word, we need to look at the second letters of the horizontal words we found:
- What is the second letter of sawer: a
- What is the second letter of uredo: r
- What is the second letter of rater: a
The word we are looking for fits the clue "The Brazilian macaw; an Australian bird" and needs to have the letters a, r, and a in its (h1, one) first, (h2, two) second, and (h3, three) third letters: a r a _ _. The candidates are: arara.
- arara (a r a r a, a r a _ _) fits
We add the word arara for v2. The current words are:
h1. sawer h2. uredo h3. rater
v1. surge v2. arara v4. edema
Now, let's consider the next unfound horizontal word, which is h4. Since we are considering the fourth horizontal word, we need to look at the fourth letters of the vertical words we found:
- What is the fourth letter of surge: g
- What is the fourth letter of arara: r
- What is the fourth letter of edema: m
The word we are looking for fits the clue "Pasture land" and needs to have the letters g, r, and m in
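The fit test applied at each step of the walkthrough (e.g. "arara (a r a r a, a r a _ _) fits") can be written as a short helper. This is an illustrative sketch of the constraint check, not code from the paper; the `fits` name and the `None`-for-blank pattern encoding are our own choices.

```python
def fits(word: str, pattern: list) -> bool:
    """Check whether a candidate word matches a partial pattern.

    pattern is a list of the intended slot length, holding a fixed
    letter where a crossing word constrains the cell and None where
    the cell is still free (the "_" in patterns like "a r a _ _").
    """
    if len(word) != len(pattern):
        return False
    return all(p is None or c == p for c, p in zip(word, pattern))
```

For example, `fits("arara", ["a", "r", "a", None, None])` holds, while a word of the wrong length is rejected outright.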
its (v1, one) first, (v2, two) second, and (v4, four) fourth letters: g r _ m _. The candidates are: grama.
- grama (g r a m a, g r _ m _) fits
We add the word grama for h4. The current words are:
h1. sawer h2. uredo h3. rater h4. grama
v1. surge v2. arara v4. edema
Now, let's consider the next unfound vertical word, which is v3. Since we are considering the third vertical word, we need to look at the third letters of the horizontal words we found:
- What is the third letter of sawer: w
- What is the third letter of uredo: e
- What is the third letter of rater: t
- What is the third letter of grama: a
The word we are looking for fits the clue "A Timorese island" and needs to have the letters w, e, t, and a in its (h1, one) first, (h2, two) second, (h3, three) third, and (h4, four) fourth letters: w e t a _. The candidates are: wetar.
- wetar (w e t a r, w e t a _) fits
We add the word wetar for v3. The current words are:
h1. sawer h2. uredo h3. rater h4. grama
v1. surge v2. arara v3. wetar v4. edema
Now, let's consider the last unfound horizontal word, which is h5. Since we are
considering the fifth horizontal word, we need to look at the fifth letters of the vertical words we found:
- What is the fifth letter of surge: e
- What is the fifth letter of arara: a
- What is the fifth letter of wetar: r
- What is the fifth letter of edema: a
The word we are looking for fits the clue "Receiving by the ear" and needs to have the letters e, a, r, and a in its (v1, one) first, (v2, two) second, (v3, three) third, and (v4, four) fourth letters: e a r a _. The candidates are: earal.
- earal (e a r a l, e a r a _) fits
We add the word earal for h5. The current words are:
h1. sawer h2. uredo h3. rater h4. grama h5. earal
v1. surge v2. arara v3. wetar v4. edema
Since we found all horizontal words:
answer: h1. sawer h2. uredo h3. rater h4. grama h5. earal v1. surge v2. arara v3. wetar v4. edema
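The pattern at each step comes from the crossing words already placed: the j-th vertical word is constrained by the j-th letter of every horizontal word found so far (and symmetrically for horizontal words). A minimal sketch of that derivation, with hypothetical names and assuming a 5x5 grid:

```python
def pattern_for_vertical(j: int, rows: dict) -> list:
    """Constraint pattern for vertical word vj on a 5x5 grid.

    rows maps the index of each already-found horizontal word to the
    word itself, e.g. {1: "sawer", 2: "uredo", 3: "rater"}. Cells of
    rows not yet found are left unconstrained (None).
    """
    return [rows[i][j - 1] if i in rows else None for i in range(1, 6)]
```

With h1-h3 placed, `pattern_for_vertical(2, {1: "sawer", 2: "uredo", 3: "rater"})` yields `["a", "r", "a", None, None]`, matching the `a r a _ _` pattern derived for v2.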
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User:
h1. Dandruff; scum; the bull-trout
h2. One who greets; to vacillate; a British river
h3. A Turkish written decree
h4. Mignon; petty; little
h5. A bishop's permission for a priest to leave a diocese
v1. To steal; to brush across
v2. A sedge (a primitive three-sided grass)
v3. Grape jam
v4. A flatworm larva
v5. Ore refuse; to prepare material for glass by heat
h2. waver h5. exeat v1. swipe v2. carex
# Assistant:
Let's consider the next unfound horizontal word, which is h1. Since we are considering the first horizontal word, we need to look at the first letters of the vertical words we found:
- What is the first letter of swipe: s
- What is the first letter of carex: c
The word we are looking for fits the clue "Dandruff; scum; the bull-trout" and needs to have the letters s and c in its (v1, one) first and (v2, two) second letters: s c _ _ _. The candidates are: scurf.
- scurf (s c u r f, s c _ _ _) fits
We add the word scurf for h1. The current words are:
h1. scurf h2. waver h5. exeat
v1. swipe v2. carex
Now, let's consider the next unfound vertical word, which is v3. Since we are
considering the third vertical word, we need to look at the third letters of the horizontal words we found:
- What is the third letter of scurf: u
- What is the third letter of waver: v
- What is the third letter of exeat: e
The word we are looking for fits the clue "Grape jam" and needs to have the letters u, v, and e in its (h1, one) first, (h2, two) second, and (h5, five) fifth letters: u v _ _ e. The candidates are: uvetta.
- uvetta (u v e t t a, u v _ _ e) doesn't fit
Let's consider the next unfound vertical word, which is v4. Since we are considering the fourth vertical word, we need to look at the fourth letters of the horizontal words we found:
- What is the fourth letter of scurf: r
- What is the fourth letter of waver: e
- What is the fourth letter of exeat: a
The word we are looking for fits the clue "A flatworm larva" and needs to have the letters r, e, and a in its (h1, one) first, (h2, two) second, and (h5, five) fifth letters: r e _ _ a. The candidates are: redia.
- redia (r e d i a, r e _ _ a) fits
We add the word redia for v4. The current words are:
h1. scurf h2. waver h5. exeat
v1. swipe v2. carex v4. redia
Now, let's consider the next unfound horizontal word, which is h3. Since we are considering the third horizontal word, we need to look at the third letters of the vertical words we found:
- What is the third letter of swipe: i
- What is the third letter of carex: r
- What is the third letter of redia: d
The word we are looking for fits the clue "A Turkish written decree" and needs to have the letters i, r, and d in its (v1, one) first, (v2, two) second, and (v4, four) fourth letters: i r _ d _. The candidates are: irade.
- irade (i r a d e, i r _ d _) fits
We add the word irade for h3. The current words are:
h1. scurf h2. waver h3. irade h5. exeat
v1. swipe v2. carex v4. redia
Now, let's consider the last unfound vertical word, which is v5. Since we are considering the fifth vertical word, we need to look at the fifth letters of the horizontal words we found:
- What is the fifth letter of scurf: f
- What is the fifth letter of waver: r
- What is the fifth letter of irade: e
- What is the fifth letter of exeat: t
The word we are looking for fits the clue "Ore refuse; to prepare material for glass by heat" and needs to have the letters f, r, e, and t in its (h1, one) first, (h2, two) second, (h3, three) third, and (h5, five) fifth letters: f r e _ t. The candidates are: frett.
- frett (f r e t t, f r e _ t) fits
We add the word frett for v5. The current words are:
h1. scurf h2. waver h3. irade h5. exeat
v1. swipe v2. carex v4. redia v5. frett
Since we found all horizontal words:
answer: h1. scurf h2. waver h3. irade h5. exeat v1. swipe v2. carex v4. redia v5. frett
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Propose Words
User:
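The transcript fills one slot at a time and rejects candidates that clash with letters already on the grid. The same idea can be sketched as a depth-first search with backtracking over candidate lists. This is our own illustration of the procedure, not the paper's implementation, and it assumes a 5x5 grid where every candidate is exactly 5 letters:

```python
def solve(h_cands, v_cands):
    """Depth-first fill of a 5x5 crossword.

    h_cands / v_cands: five candidate lists, one per row / column.
    Returns (rows, cols) on success, None if no assignment works.
    """
    rows, cols = [None] * 5, [None] * 5

    def consistent():
        # Every placed row letter must equal the crossing column letter.
        for i, r in enumerate(rows):
            for j, c in enumerate(cols):
                if r is not None and c is not None and r[j] != c[i]:
                    return False
        return True

    def fill(k):
        if k == 10:                 # 5 rows + 5 columns placed
            return True
        if k % 2 == 0:              # even steps place a row
            i = k // 2
            for w in h_cands[i]:
                rows[i] = w
                if consistent() and fill(k + 1):
                    return True
            rows[i] = None          # backtrack
            return False
        else:                       # odd steps place a column
            j = k // 2
            for w in v_cands[j]:
                cols[j] = w
                if consistent() and fill(k + 1):
                    return True
            cols[j] = None          # backtrack
            return False

    return (rows, cols) if fill(0) else None
```

Feeding it singleton candidate lists taken from the first solved example reproduces that grid (with `roral` as the implied fifth column, which the transcript never names).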
h1. A lunar valley
h2. A fatty oil
h3. To entice
h4. To lower; to reduce
h5. A solitary person
v1. According to the roster
v2. Another name for Port-Francqui
v3. An illicit lover; a European lake
v4. To lisp
v5. To come in

Assistant: All possible 5-letter words for each clue; for each clue we give 5 options:
h1. RILLE, VALLIS, CRATER, BASINS, PLAINS
h2. OLEIC, OLEIN, GREASE, BUTTER, ANIMAL
h3. LURE, TEMPT, SEDUCE, ENTICE, DRAWIN
h4. ABASE, LOWER, REDUCE, LESSEN, DEBASE
h5. HERMIT, ALONE, LONER, SINGLE, SOLITARY
v1. ONTAP, LISTED, ROSTER, SCHEDULE, PLANNED
v2. ILEBO, PORTF, CONGO, AFRICA, COLONY
v3. LOVER, AMOUR, GENEVA, LEMAN, ZURICH
v4. SLUR, LISPS, STUTTER, MUMBLE, STAMMER
v5. ENTER, ARRIVE, COMEIN, APPEAR, SHOWUP
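The proposal step deliberately over-generates: several options per clue are the wrong length for a 5x5 grid (VALLIS, CRATER, SCHEDULE, ...). A downstream filter, sketched here with made-up variable names and a hypothetical pattern, would keep only length-5 proposals and then apply the crossing-letter constraint:

```python
proposals = ["RILLE", "VALLIS", "CRATER", "BASINS", "PLAINS"]  # the h1 options

# Keep only words that can occupy a 5-cell slot.
five_letter = [w for w in proposals if len(w) == 5]

# Apply a partial pattern (fixed letter or None per cell). "R _ _ _ E" is
# a hypothetical constraint from crossing words, not one from the transcript.
pattern = ["R", None, None, None, "E"]
matching = [w for w in five_letter
            if all(p is None or c == p for c, p in zip(w, pattern))]
```

Here only RILLE survives both the length filter and the pattern.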
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Creative Writing AoT
Write a coherent passage of 4 short paragraphs. The end sentence of each paragraph must be:
{0}
Firstly, make five different plans for a coherent passage, then write. Your output should be of the following format:
Plan 1: Your plan here.
Plan 2: Your plan here.
Plan 3: Your plan here.
Plan 4: Your plan here.
Plan 5: Your plan here.
Secondly, given an instruction and several plans, decide which choice is most promising. Analyze each choice in detail, then conclude in the last line "The best choice is {{s}}", where s is the integer id of the choice.
Thirdly, write the passage according to that chosen plan in the most coherent way. Add "Passage:" before writing the passage under it.
Passage: Your passage here.
Finally, refine the passage in the most coherent way, but you still have to end each paragraph with the given sentences as before.
Final Passage: Final passage here.
# Score Prompt
2308.10379 | 176 | Finally, refine the passage in the most coherent way, but you still have to end each paragraph with the given sentences as before.
Final Passage: Final passage here.
# Score Prompt
Analyze the following passage, then at the last line conclude "Thus the coherency score is {{s}}", where s is an integer from 1 to 10.
{0}
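The score prompt instructs the model to end with a fixed sentence, which makes the score machine-readable. A small parsing sketch, assuming the model obeys the instructed closing line; the parser itself is our own illustration:

```python
import re

# Sketch: extract the integer score from an LLM reply to the score prompt.
# Assumes the reply ends with "Thus the coherency score is {s}" as instructed.

def parse_coherency_score(reply):
    match = re.search(r"the coherency score is\s*(\d+)", reply, re.IGNORECASE)
    return int(match.group(1)) if match else None

reply = (
    "The paragraphs flow naturally and each ends on the required sentence.\n"
    "Thus the coherency score is 8"
)
print(parse_coherency_score(reply))  # -> 8
```

Returning `None` on a malformed reply lets the caller retry or discard that sample.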
Acknowledgment: We appreciate the discussions and assistance provided by L. Wang.
Contributions: B. Sel played a pivotal role in shaping the primary concept, spearheading the experimental design and evaluation, and leading the paper's writing process. A. Tawaha actively engaged in discussions and conducted experiments. V. Khattar collaborated through discussions and played a role in conducting the experiments. R. Jia and M. Jin both engaged in constructive discussions, with M. Jin also offering advisory guidance.
Additional info about the changes from the first version (dated 8/20/2023) can be found in this link (https://tinyurl.com/2vnjxw93). | 2308.10379#176 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
resorts to an external modus operandi involving halting, modifying, and then
resuming the generation process to boost Large Language Models' (LLMs)
reasoning capacities. This mode escalates the number of query requests, leading
to increased costs, memory, and computational overheads. Addressing this, we
propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through
algorithmic reasoning pathways, pioneering a new mode of in-context learning.
By employing algorithmic examples, we exploit the innate recurrence dynamics of
LLMs, expanding their idea exploration with merely one or a few queries. Our
technique outperforms earlier single-query methods and stands on par with a
recent multi-query strategy that employs an extensive tree search algorithm.
Intriguingly, our results suggest that instructing an LLM using an algorithm
can lead to performance surpassing that of the algorithm itself, hinting at
LLM's inherent ability to weave its intuition into optimized searches. We probe
into the underpinnings of our method's efficacy and its nuances in application. | http://arxiv.org/pdf/2308.10379 | Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Ruoxi Jia, Ming Jin | cs.CL, cs.AI | null | null | cs.CL | 20230820 | 20230928 | [
{
"id": "2204.02311"
},
{
"id": "2206.07682"
},
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "2108.07732"
},
{
"id": "2305.04388"
},
{
"id": "2211.09110"
},
{
"id": "2207.05221"
},
{
"id": "2212.10403"
},
{
"id": "2305.08291"
},
{
"id": "2307.13702"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2305.05176"
},
{
"id": "2304.01852"
},
{
"id": "2201.08239"
},
{
"id": "2302.07842"
}
] |
2308.09904 | 0 | arXiv:2308.09904v2 [cs.IR] 17 Oct 2023
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
Yubo Shu, School of Computer Science, Fudan University, Shanghai, China, [email protected]
Haonan Zhang, School of Computer Science, Fudan University, Shanghai, China, [email protected]
Hansu Gu, Seattle, United States, [email protected]
Peng Zhang* Shanghai Key Laboratory of Data Science, Fudan University Shanghai, China [email protected]
# Tun Lu* School of Computer Science, Fudan University Shanghai, China [email protected]
Dongsheng Li Microsoft Research Asia Shanghai, China [email protected]
Ning Gu School of Computer Science, Fudan University Shanghai, China [email protected] | 2308.09904#0 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 0 | arXiv:2308.10053v1 [cs.IR] 19 Aug 2023
Large Language Models as Zero-Shot Conversational Recommenders Zhouhang Xie* [email protected] University of California, San Diego La Jolla, California, USA
# Zhankui He* [email protected] University of California, San Diego La Jolla, California, USA
Harald Steck [email protected] Netflix Inc. Los Gatos, California, USA
Dawen Liang [email protected] Netflix Inc. Los Gatos, California, USA
Yesu Feng [email protected] Netflix Inc. Los Gatos, California, USA
Bodhisattwa Prasad Majumder [email protected] University of California, San Diego La Jolla, California, USA
Nathan Kallus [email protected] Netflix Inc. Los Gatos, California, USA Cornell University New York, New York, USA
# Julian McAuley [email protected] University of California, San Diego La Jolla, California, USA | 2308.10053#0 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 1 | Ning Gu School of Computer Science, Fudan University Shanghai, China [email protected]
ABSTRACT The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
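The Learn-Act-Critic loop mentioned in the abstract above can be sketched as a simple control flow: a Learn agent proposes a personality update, an Act agent acts under it, and a Critic agent accepts the update only if the action matches the observed behavior. The `llm` callable and the PASS/FAIL convention below are our own assumptions, not the authors' implementation:

```python
# Minimal sketch of a Learn-Act-Critic loop; the prompts, the stopping
# rule, and the stub LLM are illustrative assumptions only.

def learn_act_critic(llm, behavior, personality, max_rounds=3):
    for _ in range(max_rounds):
        # Learn: propose an updated personality from the new behavior.
        candidate = llm(f"Update personality '{personality}' given behavior '{behavior}'.")
        # Act: simulate the user's action under the candidate personality.
        action = llm(f"As a user with personality '{candidate}', react to '{behavior}'.")
        # Critic: accept the update only if the action matches the behavior.
        verdict = llm(f"Does '{action}' match '{behavior}'? Answer PASS or FAIL.")
        if verdict.startswith("PASS"):
            return candidate
        personality = candidate  # refine and retry
    return personality

# Stub LLM that always approves, just to exercise the control flow.
def stub(prompt):
    return "PASS" if "PASS or FAIL" in prompt else "enjoys hard sci-fi"

print(learn_act_critic(stub, "rated 'Dune' 5 stars", "unknown"))  # -> enjoys hard sci-fi
```

In practice the stub would be replaced by a real LLM call and the accepted personality stored in the assistant's personality library.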
*Corresponding author.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 1 | # Julian McAuley [email protected] University of California, San Diego La Jolla, California, USA
ABSTRACT In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders.
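A zero-shot setup like the one described above needs only a prompt built from the conversation history, with no fine-tuning. The sketch below is in that spirit; the exact wording and the `crs_prompt` helper are our own illustration, not the paper's released prompt:

```python
# Sketch: build a zero-shot conversational-recommendation prompt from the
# dialogue turns. Wording and helper name are illustrative assumptions.

def crs_prompt(turns, n_items=5):
    history = "\n".join(f"{speaker}: {text}" for speaker, text in turns)
    return (
        "Pretend you are a movie recommender system.\n"
        f"Given the conversation below, reply with {n_items} movie titles, "
        "one per line, most relevant first.\n\n"
        f"{history}\nRecommender:"
    )

prompt = crs_prompt([
    ("User", "I loved Inception and Interstellar."),
    ("Recommender", "Do you like other mind-bending sci-fi?"),
    ("User", "Yes, the twistier the better."),
])
print(prompt.splitlines()[0])  # -> Pretend you are a movie recommender system.
```

The model's line-per-title reply can then be matched against the item catalog to produce a ranked list.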
CCS CONCEPTS • Information systems → Personalization; • Computing methodologies → Natural language generation.
# KEYWORDS conversational recommendation, large language model, datasets
*Both authors contributed equally to this research.
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 2 | *Corresponding author.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. XXX'24, 2024, Singapore © 2024 Association for Computing Machinery. ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00 https://doi.org/XXXXXXX.XXXXXXX
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 2 | # KEYWORDS conversational recommendation, large language model, datasets
*Both authors contributed equally to this research.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). CIKM '23, October 21–25, 2023, Birmingham, United Kingdom © 2023 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0124-5/23/10. https://doi.org/10.1145/3583780.3614949
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 3 | 1 INTRODUCTION Recommender systems hold a pivotal role in Human-Computer Interaction (HCI) by personalizing content and services to individual preferences, thereby enriching user experience and aiding in decision-making [29]. They efficiently filter information, effectively managing overload and assisting users in locating relevant content. However, there remain notable challenges. Striking the delicate balance between recommendation accuracy and user satisfaction is a fundamental objective [12, 20]. Addressing biases in recommendations [4] and empowering users with control while preserving their privacy remains a pressing concern [8]. Additionally, simplifying transitions into new domains and alleviating user burden stand as ongoing challenges [41], typically revealing themselves as a cold-start problem.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 3 | ACM Reference Format: Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, and Julian McAuley. 2023. Large Language Models as Zero-Shot Conversational Recommenders. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM '23), October 21–25, 2023, Birmingham, United Kingdom. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3583780.3614949
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 4 | While much of the pioneering research primarily focuses on addressing challenges from the perspective of the recommender system, we argue that solving these issues is not the sole responsibility of recommender systems. Addressing challenges from the human perspective presents a new and promising angle. For instance, employing advanced user modeling techniques to capture user behavior and preferences allows for a delicate balance between user satisfaction and recommendation precision. Engaging users in a cooperative manner within the recommendation process enables them to define profiles, tailor preferences, and provide explicit feedback. This not only helps mitigate biases but also empowers users, enhancing their control over recommendations and protecting privacy. When confronted with the cold-start challenge, understanding user preferences and effectively generalizing them in uncharted domains can significantly alleviate the burden on users entering unfamiliar territories. These human-centered strategies represent orthogonal efforts to complement existing recommender systems.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 4 | 1 INTRODUCTION Conversational recommender systems (CRS) aim to elicit user preferences and offer personalized recommendations by engaging in interactive conversations. In contrast to traditional recommenders that primarily rely on users' actions like clicks or purchases, CRS possesses the potential to: (1) understand not only users' historical actions but also users' (multi-turn) natural-language inputs; (2) provide not only recommended items but also human-like responses for multiple purposes such as preference refinement, knowledgeable discussion or recommendation justification. Towards this objective, a typical conversational recommender contains two components [10, 41, 64, 74]: a generator to generate natural-language responses and a recommender to rank items to meet users' needs. Recently, significant advancements have shown the remarkable potential of large language models (LLMs)1, such as ChatGPT [30], in various tasks [4, 6, 51, 71]. This has captured the attention of the recommender systems community to explore the possibility of leveraging LLMs in recommendation or more general personalization tasks [3, 27, 34,
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 5 | We propose a comprehensive framework RAH, which stands for Recommender system, Assistant, and Human. Within this framework, the assistant acts as an intelligent and personalized helper, leveraging LLM to learn and comprehend a user's personality from their behaviors. The assistant then provides tailored actions in line with the user's personality. Operating within this framework, RAH opens up avenues to alleviate user burden, mitigate biases, and enhance user control over recommended outcomes and personal privacy. Each assistant comprises several LLM-based agents. (1) Perceive Agent: Understands and interprets information within recommendations, including item features and user feedback implications. (2) Learn Agent: Assimilates user personalities from their behaviors and stores them in personality libraries. (3) Act Agent: Executes actions based on the learned personality, such as filtering out disliked items for the user. (4) Critic Agent: Validates if the executed action aligns with the user's preferences and analyzes adjustments to reduce discrepancies. (5) Reflect Agent: Scrutinizes and optimizes the accumulated learned personality, addressing issues like | 2308.09904#5 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 5 | attention of the recommender systems community to explore the possibility of leveraging LLMs in recommendation or more general personalization tasks [3, 27, 34, 48, 56]. Yet, current efforts generally concentrate on evaluating LLMs in traditional recommendation settings, where only users' past actions like clicks serve as inputs [3, 27, 34, 48]. The conversational recommendation scenario, though involving more natural language interactions, is still in its infancy [16, 63]. | 2308.10053#5 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 6 | and analyzes adjustments to reduce discrepancies. (5) Reflect Agent: Scrutinizes and optimizes the accumulated learned personality, addressing issues like duplication and conflicts. Furthermore, we enhance our proposed assistant with the Learn-Act-Critic loop and a reflection mechanism to enhance alignment with the user. Within the Learn-Act-Critic loop, the Learn, Act, and Critic Agents work collaboratively to process user actions, refining their understanding of the user's personality. This iterative loop continues until the Act Agent accurately mirrors the learned personality, ensuring alignment with user interactions validated by the Critic Agent. Meanwhile, the reflection mechanism employs the Reflect Agent to periodically revise the learned personality, maintaining an up-to-date and accurate representation. In our experiment, we evaluate the RAH framework using real-world data in three recommendation domains. Firstly, we observe that the Learn-Act-Critic loop and reflection mechanism significantly enhance the alignment of the assistant with the user's personality. Post-learning from users, the assistant is capable of generating proxy actions across various | 2308.09904#6 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 6 | ¹We refer to LLMs as the large-sized pre-trained language models with exceptional zero-shot abilities as defined in [71].
CIKM '23, October 21–25, 2023, Birmingham, United Kingdom He, et al.
Figure 1: Large Language Models (LLMs) as Zero-Shot Conversational Recommenders (CRS). We introduce a simple prompting strategy to define the task description 𝑇, format requirement 𝐹 and conversation context 𝑆 for a LLM, denoted as F; we then post-process the generative results into ranked item lists with processor Φ.
In this work, we propose to use large language models as zero-shot conversational recommenders and then empirically study the LLMs' [11, 30, 51, 68] recommendation abilities. Our detailed contributions in this study include three key aspects regarding data, evaluation, and analysis.
“users who like A typically also like B”) to make conversational recommendations. We design several probing tasks to uncover the model's workings and the characteristics of the CRS data. Additionally, we present empirical findings that highlight certain limitations of LLMs as zero-shot CRS, despite their effectiveness. | 2308.10053#6 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 7 | enhance the alignment of the assistant with the user's personality. Post-learning from users, the assistant is capable of generating proxy actions across various recommender systems, effectively reducing human burden. The second experiment demonstrates that these proxy actions lead to a notable improvement in recommender systems, achieving enhanced efficiency with reduced user interactions. Moreover, in the third part of the experiment, we investigate the use of well-learned assistants to express users' feedback on less popular items, mitigating bias within the system. Finally, we delve into additional strategies within the RAH framework to tackle human-centered concerns regarding user control. The assistant comprehends users' intentions, delivers more detailed recommended results to fulfill them, and implements control strategies to safeguard users' privacy. | 2308.09904#7 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 7 | Data. We construct Reddit-Movie, a large-scale conversational recommendation dataset with over 634k naturally occurring recommendation-seeking dialogs from users from Reddit², a popular discussion forum. Different from existing crowd-sourced conversational recommendation datasets, such as ReDIAL [41] and INSPIRED [22], where workers role-play users and recommenders, the Reddit-Movie dataset offers a complementary perspective with conversations where users seek and offer item recommendations in the real world. To the best of our knowledge, this is the largest public conversational recommendation dataset, with 50 times more conversations than ReDIAL.
We summarize the key findings of this paper as follows:
⢠CRS recommendation abilities should be reassessed by elim- inating repeated items as ground truth.
⢠LLMs, as zero-shot conversational recommenders, demon- strate improved performance on established and new datasets over fine-tuned CRS models.
⢠LLMs primarily use their superior content/context knowl- edge, rather than their collaborative knowledge, to make recommendations.
⢠CRS datasets inherently contain a high level of content/context information, making CRS tasks better-suited for LLMs than traditional recommendation tasks. | 2308.10053#7 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 8 | Our contributions can be summarized as follows:
We utilize LLM from the human perspective and propose a more human-centered recommendation framework, RAH. • Within the RAH framework, our assistant is designed with the Learn-Act-Critic loop and a reflection mechanism to achieve a nuanced understanding and alignment with user personalities.
⢠Through experimentation, we validate the RAH frameworkâs performance in addressing recommendation challenges part- nered with various recommendation models, including cold- start in cross-domain recommendation, popularity bias, and user control and privacy.
Yubo Shu, et al.
2 RAH (RECSYS-ASSISTANT-HUMAN) 2.1 Overall The principle behind RAH's design is taking a human-centered approach to address recommender system challenges. As shown in Figure 1, RAH comprises three components - the recommender system, the intelligent assistant, and the human user. Unlike traditional recommendations solely between systems and users, RAH introduces an assistant as an intermediary. This assistant acts as a personalized helper for the user. It utilizes large language models (LLMs) to comprehend user personalities based on their behaviors. The assistant then provides actions tailored to each user's personality.
Within this framework, the assistant facilitates two key workflows:
RecSys→Assistant→Human This workflow focuses on the assistant filtering personalized recommendations for the end user, as shown by the solid black arrow in Figure 1.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 8 | ⢠CRS datasets inherently contain a high level of content/context information, making CRS tasks better-suited for LLMs than traditional recommendation tasks.
Evaluation. By evaluating the recommendation performance of LLMs on multiple CRS datasets, we first notice a repeated item shortcut in current CRS evaluation protocols. Specifically, there exist “repeated items” in previous evaluation testing samples serving as ground-truth items, which allows the creation of a trivial baseline (e.g., copying the mentioned items from the current conversation history) that outperforms most existing models, leading to spurious conclusions regarding current CRS recommendation abilities. After removing the “repeated items” in training and testing data, we re-evaluate multiple representative conversational recommendation models [10, 41, 64, 74] on ReDIAL, INSPIRED and our Reddit dataset. With this experimental setup, we empirically show that LLMs can outperform existing fine-tuned conversational recommendation models even without fine-tuning.
Analysis. In light of the impressive performance of LLMs as zero-shot CRS, a fundamental question arises: What accounts for their remarkable performance? Similar to the approach taken in [53], we posit that LLMs leverage both content/context knowledge (e.g., “genre”, “actors” and “mood”) and collaborative knowledge (e.g., | 2308.10053#8 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 9 | RecSys→Assistant→Human This workflow focuses on the assistant filtering personalized recommendations for the end user, as shown by the solid black arrow in Figure 1.
Recommender systems initially generate candidate items spanning different domains such as books, movies, and games. • The assistant aggregates these cross-domain recommendations. It retrieves the user's learned personality from its memory. Using the user's personality profile, the assistant further filters the candidate items to create a tailored list. • Finally, the user receives a unified personalized set of filtered
recommendations from the assistant.
To enable effective filtering across diverse items, the assistant incorporates powerful LLMs. They provide the reasoning skills and real-world knowledge needed to comprehend various item features. Human→Assistant→RecSys This workflow enables the assistant to learn from user feedback and accordingly tune recommender systems, as depicted by the dotted black arrow in Figure 1.
⢠The user first provides feedback on items, e.g., indicating âLike" or âDislike", and the assistant receives this initial feed- back instead of the recommender systems.
⢠The assistant will then start to learn the userâs personality from the userâs feedback.
⢠Lastly, the assistant will process the userâs feedback into the assistantâs feedback. This allows it to selectively forward user preferences to recommender systems. | 2308.09904#9 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 9 | ⢠LLMs suffer from limitations such as popularity bias and sensitivity to geographical regions.
These findings reveal the unique importance of the superior content/context knowledge in LLMs for CRS tasks, offering great potential to LLMs as an effective approach in CRS; meanwhile, analyses must recognize the challenges in evaluation, datasets, and potential problems (e.g., debiasing) in future CRS design with LLMs. | 2308.10053#9 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 10 | ⢠Lastly, the assistant will process the userâs feedback into the assistantâs feedback. This allows it to selectively forward user preferences to recommender systems.
By introducing an intermediary assistant focused on the human, RAH opens up new possibilities to address human-centered challenges. The assistant's capabilities in learning and acting upon user personalities strengthen these human-centered aspects. It facilitates key functionalities like mitigating user burden and bias while enhancing user control and privacy.
2.2 Human-Centered Design Goals As stated earlier, the key goal of RAH is to address human-centered challenges in recommender systems. This subsection introduces three pivotal design goals for addressing human-centered challenges. (Our methods to achieve the design goals can be found in Section 3.3)
Reduce User Burden. In recommendation, the user burden can come from the initial interactions in a new domain and the redundant feedback across domains. In the RAH framework, the
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
XXX'24, 2024, Singapore
[Figure 1 content (garbled OCR): RecSys 1/2, the Assistant with its Personality Library, and the Human, exchanging recommended items, filtered items, and feedback.] | 2308.09904#10 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 10 | 2 LLMS AS ZERO-SHOT CRS 2.1 Task Formation Given a user set U, an item set I, and a vocabulary V, a conversation can be denoted as C = (u_t, s_t, I_t)_{t=1}^{T}. That means during the t-th turn of the conversation, a speaker u_t ∈ U generates an utterance s_t = (w_j)_{j=1}^{n}, which is a sequence of words w_j ∈ V. This utterance s_t also contains a set of mentioned items I_t ⊆ I (I_t can be an empty set if no items are mentioned). Typically, there are two users in the conversation C playing the role of seeker and recommender respectively. Let us use the 2nd conversation turn in Figure 1 as an example. Here t = 2, u_t is [System], s_t is "You would love Terminator!" and I_2 is a set containing the movie Terminator.
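The notation (u_t, s_t, I_t) maps naturally onto a small data structure. A minimal Python sketch (the `Turn` class and its field names are our own illustration, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One conversation turn: speaker u_t, utterance s_t, mentioned items I_t."""
    speaker: str                              # u_t in U, e.g. "[User]" or "[System]"
    utterance: str                            # s_t, a sequence of words w_j in V
    items: set = field(default_factory=set)   # I_t (possibly empty)

# The 2nd turn from the Figure 1 example: the recommender mentions "Terminator".
conversation = [
    Turn("[User]", "I am looking for some sci-fi action movies.", set()),
    Turn("[System]", "You would love Terminator!", {"Terminator"}),
]

assert conversation[1].speaker == "[System]"
assert "Terminator" in conversation[1].items
```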
2https://www.reddit.com/
Large Language Models as Zero-Shot Conversational Recommenders | 2308.10053#10 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 11 | &
XXXâ24, 2024, Singapore
5 Recommended Ree system 2 Ttems Ree system 1 Assistant Feedback Personality Library Prefer Value Filtered Items ee Human Feedback Goal
Figure 1: The figure demonstrates an overall view of the RAH framework. Core workflows can be divided into RecSys→Assistant→Human (the black solid arrow) and Human→Assistant→RecSys (the black dotted arrow).
assistant should serve as a personal helper to reduce user burden in multiple ways. In both a single domain and across domains, the assistant should comprehend user tendencies from limited interactions and learn a unified user personality. The assistant should be able to express a unified personality to new recommender systems, alleviating the cold start issue and reducing user burden. Besides, the assistant should provide proxy feedback to refine recommender systems, minimizing unnecessary user interactions.
Mitigate bias. Biased recommended results can cause unfairness problems and harm the user experience. In the RAH framework, we design the assistant to represent users, generating more feedback on unseen items and thus mitigating the user's selection bias. | 2308.09904#11 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 11 | 2https://www.reddit.com/
Large Language Models as Zero-Shot Conversational Recommenders
Table 1: Dataset Statistics. We denote a subset of Reddit-Movie in 2022 as base, and the entire ten-year dataset as large.
Dataset              #Conv.   #Turns     #Users  #Items
INSPIRED [22]        999      35,686     999     1,967
ReDIAL [41]          11,348   139,557    764     6,281
Reddit-Movie_base    85,052   133,005    10,946  24,326
Reddit-Movie_large   634,392  1,669,720  36,247  51,203
Following many CRS papers [10, 41, 64, 74], the recommender component of a CRS is specifically designed to optimize the following objective: during the k-th turn of a conversation, where u_k is the recommender, the recommender takes the conversational context (u_t, s_t, I_t)_{t=1}^{k-1} as its input, and generates a ranked list of items Î_k that best matches the ground-truth items in I_k.
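How well the ranked list matches the ground-truth items is commonly scored with metrics such as Recall@K. A hedged sketch (our illustration of the generic metric, not the paper's specific evaluation code):

```python
def recall_at_k(ranked_items, ground_truth, k):
    """Fraction of ground-truth items appearing in the top-k of the ranked list."""
    if not ground_truth:
        return 0.0
    hits = sum(1 for item in ranked_items[:k] if item in ground_truth)
    return hits / len(ground_truth)

# The recommender ranks candidates given the context (u_t, s_t, I_t) for t < k.
ranked = ["Terminator", "Alien", "Blade Runner", "The Matrix"]
truth = {"Blade Runner", "Terminator"}
assert recall_at_k(ranked, truth, k=1) == 0.5  # only "Terminator" is in the top-1
assert recall_at_k(ranked, truth, k=3) == 1.0  # both ground-truth items in the top-3
```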
# 2.2 Framework | 2308.10053#11 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 12 | Enhance User Control. Considering the pattern that the recommender system actively interacts with users, it is necessary to address user control in recommendation [26, 27]. However, the majority of the current recommender systems are uncontrollable, and users can only passively receive the recommendation results [8]. Therefore, in the RAH framework, the assistant should enhance user control of the recommendation results they receive and what the recommender systems learn about them, such as non-privacy data.
3 ASSISTANT In this section, we first provide an overview of the assistant's components and inner mechanisms. We then elaborate on how the assistant achieves human-centered goals.
3.1 Components 3.1.1 Perceive Agent. The Perceive Agent functions as the initial processing point for incoming information. Specifically, in the context of recommendations, its primary task is to augment the features associated with a given item, thereby enhancing the assistant's overall comprehension. For instance, when provided with a movie name, the Perceive agent can supply additional relevant information about the movie. As illustrated in Figure 2(a), this additional information generally consists of two components: (1) a concise description of the item, such as a plot summary of the movie, | 2308.09904#12 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
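The Perceive Agent's two-part output described in the RAH chunk above (a concise description plus a set of attributes) can be sketched as a simple structure, with a stub standing in for the actual LLM call (class and field names are our own illustration):

```python
from dataclasses import dataclass

@dataclass
class PerceivedItem:
    """Perceive Agent output: an item enriched with (1) a concise
    description and (2) a set of specific attributes."""
    name: str
    description: str
    characteristics: list

def perceive(item_name):
    # Stub in place of an LLM call that enriches the item with knowledge.
    knowledge = {
        "Harry Potter and the Sorcerer's Stone": PerceivedItem(
            name="Harry Potter and the Sorcerer's Stone",
            description="A young wizard discovers his magical heritage.",
            characteristics=["Fantasy", "Adventure", "Magic", "Coming-of-age"],
        )
    }
    return knowledge[item_name]

item = perceive("Harry Potter and the Sorcerer's Stone")
assert "Fantasy" in item.characteristics
```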
2308.10053 | 12 | # 2.2 Framework
Prompting. Our goal is to utilize LLMs as zero-shot conversational recommenders. Specifically, without the need for fine-tuning, we intend to prompt an LLM, denoted as ℱ, using a task description template T, format requirement F, and conversational context S before the k-th turn. This process can be formally represented as: Î_k = Φ(ℱ(T, F, S)).
To better understand this zero-shot recommender, we present an example in Figure 1 with the prompt setup in our experiments.3
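The prompt assembly step of Î_k = Φ(ℱ(T, F, S)) can be sketched as follows (the template wording and helper names are assumptions for illustration, not the paper's exact prompt):

```python
def build_prompt(task_description, format_requirement, context_turns):
    """Assemble a zero-shot CRS prompt from T (task description),
    F (format requirement), and S (conversational context)."""
    context = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in context_turns)
    return f"{task_description}\n{format_requirement}\n\nConversation:\n{context}"

# Hypothetical template text; the paper's real templates are not reproduced here.
task = "Pretend you are a movie recommender system."
fmt = "Reply with a numbered list of 20 movie titles, no extra text."
turns = [("[User]", "I liked Terminator and Alien. Any suggestions?")]

prompt = build_prompt(task, fmt, turns)
assert "Pretend you are a movie recommender system." in prompt
assert "[User]: I liked Terminator and Alien." in prompt
```

The resulting string would then be sent to the LLM ℱ, and its free-text answer post-processed by Φ into a ranked item list.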
Models. We consider several popular LLMs ℱ that exhibit zero-shot prompting abilities in two groups. To try to ensure deterministic results, we set the decoding temperature to 0 for all models.
⢠GPT-3.5-turbo [30]4 and GPT-4 [51] from OPENAI with abilities of solving many complex tasks in zero-shot set- ting [6, 51] but are closed-sourced.
⢠BAIZE [68]5 and Vicuna [11], which are representative open-sourced LLMs fine-tuned based on LLAMA-13B [61]. | 2308.10053#12 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 13 | and (2) a set of specific attributes related to the item, like the movie tags. Additionally, this information enriched by the Perceive agent can further aid other agents, such as assisting the Learn Agent in extracting personalities from user behaviors.
3.1.2 Learn Agent. The Learn Agent's mission is to identify human personalities based on interactions with items, such as "Like", "Dislike", and user ratings. Drawing inspiration from established research in recommender systems [9, 14, 24], we conceptualize human personalities as a combination of likes and dislikes. In our implementation, we input items, human feedback on items, and insights from the Perceive Agent into the Learn Agent. As depicted in Figure 2(b), the Learn Agent then generates the learned preferences in response to positive feedback and the dislikes for negative feedback. Moreover, instead of direct learning, we require the agent to address two key questions: "Why might some individuals like the item?" and "Why might some individuals dislike the item?" These responses aid the agent in filtering out invalid characteristics and promoting a more nuanced understanding of personalities. | 2308.09904#13 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
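The Learn Agent's procedure in the RAH chunk above, distilling a preference from positive feedback or a dispreference from negative feedback while answering the two guiding questions, can be sketched as a prompt builder (the wording is illustrative, not the paper's actual template):

```python
def build_learn_prompt(item, perceived_info, feedback):
    """Prompt the Learn Agent to distill a preference (for "Like" feedback)
    or a dispreference (for "Dislike" feedback) from one user interaction."""
    target = "preference" if feedback == "Like" else "dispreference"
    return (
        f"Item: {item}\n"
        f"Perceived info: {perceived_info}\n"
        f"User feedback: {feedback}\n"
        "Q1: Why might some individuals like this item?\n"
        "Q2: Why might some individuals dislike this item?\n"
        f"Based on the answers, state the user's learned {target}."
    )

prompt = build_learn_prompt(
    "Harry Potter and the Sorcerer's Stone",
    "Fantasy, Adventure, Magic, Coming-of-age",
    "Like",
)
assert "learned preference" in prompt
assert "Why might some individuals dislike" in prompt
```

Answering the two questions before committing to a preference is what lets the agent discard characteristics that do not actually explain the user's feedback.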
2308.10053 | 13 | • BAIZE [68]5 and Vicuna [11], which are representative open-sourced LLMs fine-tuned based on LLAMA-13B [61].
Processing. We do not access model weights or output logits from LLMs. Therefore, we apply a post-processor Φ (e.g., fuzzy matching) to convert a recommendation list in natural language to a ranked list Î_k. The approach of generating item titles instead of ranking item IDs is referred to as a generative retrieval [7, 60] paradigm.
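A post-processor Φ of this kind can be sketched with standard-library fuzzy matching (difflib is our choice for illustration; the paper does not pin down a specific matcher):

```python
import difflib

def fuzzy_match_titles(generated_titles, catalog):
    """Map free-text titles from the LLM onto catalog items (the Φ step),
    dropping generations that match nothing closely enough."""
    ranked = []
    for title in generated_titles:
        match = difflib.get_close_matches(title, catalog, n=1, cutoff=0.8)
        if match and match[0] not in ranked:
            ranked.append(match[0])
    return ranked

catalog = ["Terminator (1984)", "Alien (1979)", "Blade Runner (1982)"]
generated = ["The Terminator (1984)", "Alien 1979", "Totally Made Up Movie"]
matched = fuzzy_match_titles(generated, catalog)
assert "Terminator (1984)" in matched
assert "Alien (1979)" in matched
```

Hallucinated titles that match no catalog entry above the cutoff are simply dropped from the ranked list.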
3 DATASET Ideally, a large-scale dataset with diverse interactions and real-world conversations is needed to evaluate models' ability in conversational recommendation. Existing conversational recommendation datasets are usually crowd-sourced [22, 32, 41, 75] and thus only partially capture realistic conversation dynamics. For example, a crowd worker responded with "Whatever Whatever I'm open to any suggestion." when asked about movie preferences in ReDIAL; this happens since crowd workers often do not have a particular preference at the time of completing a task. In contrast, a real user could have a very particular need, as shown in Figure 2.
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
3.1.3 Act Agent. The Act Agent is responsible for generating actions based on the learned personality. The Act Agent receives an item's information and a user's personality as input. Subsequently, it generates a predicted action, such as "Like" when the item aligns with the user's preferences and "Dislike" when it aligns with their dislikes. As shown in Figure 2(c), we incorporate a chain-of-thoughts [35] approach in our implementation: (1) hypothesizing reasons for potential preference or dislikes towards the item, (2) analyzing the likely perception of the item by a human with the given personality, (3) simulating comments on the item from the perspective of the human [15, 45], and finally, (4) predicting the human's reaction to the item, categorized as either "like" or "dislike."
3.1.4 Critic Agent. The core function of the Critic Agent is to evaluate the correctness of actions predicted by Act Agents. A match between the predicted action and the ground truth action (true user actions) suggests that the learned personality model aligns with the user. However, in cases of incorrect predictions, the
Yubo Shu, et al. | 2308.09904#14 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
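The Act Agent's four-step chain-of-thought described in the RAH chunk above can be staged as an ordered list of prompts fed to the LLM one after another (a sketch; the actual prompt text is the paper's own and not reproduced here):

```python
def act_agent_steps(item, personality):
    """Stage the Act Agent's chain-of-thought as four ordered prompts:
    hypothesize reasons, analyze perception, simulate a comment, predict the action."""
    return [
        f"1. Hypothesize why a user might like or dislike '{item}'.",
        f"2. Given the personality '{personality}', analyze how this user would perceive '{item}'.",
        f"3. Simulate a short comment on '{item}' from this user's perspective.",
        "4. Predict the user's reaction: answer 'Like' or 'Dislike'.",
    ]

steps = act_agent_steps("Terminator", "enjoys sci-fi, dislikes romance")
assert len(steps) == 4
assert steps[-1].endswith("'Like' or 'Dislike'.")
```

Only the final step's answer is kept as the predicted action; the earlier steps exist to ground that prediction in the learned personality.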
2308.10053 | 14 | 3We leave more prompting techniques such as CoT [66] in future work. 4Referred as GPT-3.5-t hereafter 5We use BAIZE-V2 in https://huggingface.co/project-baize/baize-v2-13b
CIKM '23, October 21-25, 2023, Birmingham, United Kingdom
[Figure 2 content (garbled): example dialogs ordered by information, low to high — MovieLens (items only: previously watched movies such as Back to the Future, Man in Black, Harry Potter), ReDIAL (items & verbal preference: "I like 2001: A Space Odyssey and Tangerine, and I watched Enter the Void last night and it was pretty good."), Reddit-Movie (items & complex verbal preference: a detailed mood description citing Wings of Desire, Eyes Wide Shut, Querelle).] | 2308.10053#14 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 15 | [Figure 2 content (garbled): worked examples for the agents — (a) Perceive Agent enriching "Harry Potter and the Sorcerer's Stone" with a description and characteristics (Fantasy, Adventure, Family-friendly, Magic, Wizardry, Coming-of-age) and analyzing a user comment and action; (e) Reflect Agent detecting duplications in User Preference and conflicts with User Dispreference, deciding whether to optimize each, and emitting {Optimized Preference} and {Optimized Dispreference}.] | 2308.09904#15 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 15 | Dialog Information (Low to High). MovieLens (Items Only; User Previously Watched Movies): Back to the Future, Men in Black, Harry Potter, ... ReDIAL (Items & Verbal Preference): "I like 2001: A Space Odyssey and Tangerine, and I watched Enter the Void last night and it was pretty good." Reddit-Movie (Ours) (Items & Complex Verbal Preference): "Something that I can focus on but nothing too harsh. It can be strange and bizarre, but dreamy visuals and movement and smooth and sometimes unnatural dialogue is what gives it. It's a sweet sensation. It's how I felt watching Wings of Desire, Eyes Wide Shut, Querelle, for some reason."
Figure 2: Typical model inputs from a traditional recommendation dataset (MovieLens [21]), an existing CRS dataset (ReDIAL [41]), and our Reddit-Movie dataset. The Reddit-Movie dataset contains more information in its textual content compared to existing datasets where users often explicitly specify their preference. See Section 5.2 for quantitative analysis. | 2308.10053#15 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 16 | : Split the dispreference into more pieces to avoid conflicts. Results: {Optimized Preference} & {Optimized Dispreference} (e) Reflect Agent: Existing Personality + New Personality -> Enriched Personality. (f) The process of the assistant to learn personalities from user actions: Learn Agent -> Candidate Personality -> Act Agent -> Assistant Action on the Item -> Critic Agent. Analyze Why Like: The movie offers an engaging storyline featuring magic, adventure, and coming-of-age themes, which could appeal to ... Analyze Why Dislike: ... if they are not fans of fantasy or magic-themed movies ... the movie's focus on a young protagonist and his friends might not be appealing to ... some people might not like the movie ... Learned Preference: Fantasy and Adventure themes; mysterious and engaging plot. User Action: { Like, Dislike or Neutral } Learned Dispreference: Plot loophole. (b) Learn Agent ... may like the movie because it is ... a fan of the specific style of British films ... Based on the user's preferences for fantasy and adventure themes, the user may like the movie. However, the user may also dislike the movie | 2308.09904#16 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 16 | To complement crowd-sourced CRS datasets, we present the Reddit-Movie dataset, the largest-scale conversational movie recommendation dataset to date, with naturally occurring movie recommendation conversations that can be used along with existing crowd-sourced datasets to provide richer perspectives for training and evaluating CRS models. In this work, we conduct our model evaluation and analysis on two commonly used crowd-sourcing datasets: ReDIAL [41] and INSPIRED [22], as well as our newly collected Reddit dataset. We show qualitative examples from the Reddit dataset as in Figure 2 and quantitative analysis in Section 5.2. | 2308.10053#16 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 17 | ... a fan of the specific style of British films ... Based on the user's preferences for fantasy and adventure themes, the user may like the movie. However, the user may also dislike the movie ... User Comment (Predicted): The fantasy elements kept me engaged, while ... (c) Act Agent. The predicted action is correct / The predicted action is wrong. Reasons: The possible reason is that the user's preference ... and thus cannot provide strong evidence ... fantasy and adventure ... (d) Critic Agent | 2308.09904#17 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 17 | Dataset Construction. To construct a CRS dataset from Reddit, we process all Reddit posts from Jan 2012 to Dec 2022 from pushshift.io^6. We consider movie recommendation scenarios^7 and extract related posts from five related subreddits: r/movies, r/bestofnetflix, r/moviesuggestions, r/netflixbestof and r/truefilm. We process the raw data with the pipeline of conversational recommendation identification, movie mention recognition and movie entity linking^8. In our following evaluation, we use the most recent 9k conversations in Reddit-Movie_base from December 2022 as the testing set, since these samples occur after GPT-3.5-t's release. Meanwhile, GPT-4 [51] also mentioned its pre-training data cutoff in Sept. 2021^9. For other compared models, we use the remaining 76k conversations in the Reddit-Movie_base dataset for training and validation.
^6 https://pushshift.io/ ^7 Other domains like songs, books can potentially be processed in a similar way. ^8 Check our evaluation data, LLMs scripts, results and the links of Reddit-Movie datasets in https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys. ^9 We note that there is a possibility that GPT-4's newest checkpoint might include a small amount of more recent data [51]. | 2308.10053#17 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
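The dataset-construction steps described in the chunk above (filtering to five movie subreddits, then splitting at December 2022 so the test set post-dates the language models' training data) can be sketched as follows. The function name and the post record fields are assumptions, not the paper's released code; `created_utc` Unix timestamps are what pushshift.io dumps provide.

```python
from datetime import datetime, timezone

# Subreddits named in the text.
MOVIE_SUBREDDITS = {"movies", "bestofnetflix", "moviesuggestions",
                    "netflixbestof", "truefilm"}

# Conversations from December 2022 onward are held out for testing,
# since they occur after the GPT-3.5-t release / GPT-4 pre-training cutoff.
TEST_START = datetime(2022, 12, 1, tzinfo=timezone.utc)

def split_conversations(posts):
    """Filter posts to movie subreddits and split into train/val vs. test pools.

    Each post is assumed to be a dict with 'subreddit' and 'created_utc'
    (a Unix timestamp).
    """
    train_val, test = [], []
    for post in posts:
        if post["subreddit"].lower() not in MOVIE_SUBREDDITS:
            continue  # not one of the five movie-recommendation subreddits
        created = datetime.fromtimestamp(post["created_utc"], tz=timezone.utc)
        (test if created >= TEST_START else train_val).append(post)
    return train_val, test
```

A real pipeline would follow this with the conversational-recommendation identification, movie-mention recognition, and entity-linking stages the chunk mentions.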
2308.09904 | 18 | Figure 2: The components of the assistant and their work pattern.
Critic Agent not only identifies the discrepancy between predictions and labels but also analyzes potential reasons for the failure to facilitate corrective measures. As depicted in Figure 2(d), this process can be compared to a code compiler detecting a bug in code and generating an error log, enabling the programmer to identify and rectify the issue. As a result, the reasons for failure are conveyed to the Learn Agent, prompting a reevaluation of previous attempts and a relearning of the personality [32]. This iterative collaboration between the Learn, Act, and Critic Agents enhances the inference of human personality based on observed actions.
3.2 Enhance Alignment Given the critical importance of aligning with the user, we further implement a Learn-Act-Critic loop and a reflection mechanism to reinforce this alignment. | 2308.09904#18 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
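The Critic Agent described in the chunk above (compare the predicted action against the actual one and, on a mismatch, emit an error analysis the way a compiler emits a log) can be sketched as a single function. The message format and signature are assumptions for illustration, not the paper's implementation:

```python
def critic_agent(predicted_action, actual_action, candidate_personality):
    """Return (accepted, feedback); accept on a match, otherwise explain the failure.

    The feedback plays the role of a compiler error log: it tells the Learn
    Agent why the candidate personality led to a wrong prediction.
    """
    if predicted_action == actual_action:
        return True, "The predicted action is correct."
    feedback = (
        f"The predicted action is wrong: expected '{actual_action}' but the "
        f"candidate personality {candidate_personality!r} led to "
        f"'{predicted_action}'. The personality may miss evidence supporting "
        f"'{actual_action}'; relearn with this in mind."
    )
    return False, feedback
```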
2308.10053 | 18 | CIKM '23, October 21-25, 2023, Birmingham, United Kingdom
He, et al.
[Figure 3: grouped bar charts of #HIT@1 and #HIT@5 on INSPIRED, ReDIAL, and Reddit]
Figure 3: To show the repeated item shortcut, we count CRS recommendation hits using the Top-K ranked list K = {1, 5}. We group the ground-truth hits by repeated items (shaded bars) and new items (not shaded bars). The trivial baseline copies existing items from the current conversation history in chronological order, from the most recent, and does not recommend new items.
Discussion. From the statistics in Table 1, we observe: (1) The dataset Reddit-Movie stands out as the largest conversational recommendation dataset, encompassing 634,392 conversations and covering 51,203 movies. (2) In comparison to ReDIAL [41] and INSPIRED [22], Reddit-Movie contains fewer multi-turn conversations, mainly due to the inherent characteristics of Reddit posts. (3) By examining representative examples depicted in Figure 2, we find that Reddit-Movie conversations tend to include more complex and detailed user preference in contrast to ReDIAL, as they originate from real-world conversations on Reddit, enriching the conversational recommendation datasets with a diverse range of discussions. | 2308.10053#18 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
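The evaluation behind Figure 3 (counting Top-K hits separately for repeated and new items, against a trivial baseline that replays history most-recent-first and never recommends new items) can be sketched like this. Function names and data shapes are assumptions, not the paper's evaluation code:

```python
def trivial_baseline(history, k):
    """Recommend the k most recently mentioned items, newest first; never new items."""
    seen, ranked = set(), []
    for item in reversed(history):  # most recent first
        if item not in seen:
            seen.add(item)
            ranked.append(item)
    return ranked[:k]

def grouped_hits(ranked, ground_truth, history, k):
    """Count Top-k hits, split by whether the hit item already appeared in history."""
    top_k, past = set(ranked[:k]), set(history)
    repeated = sum(1 for item in ground_truth if item in top_k and item in past)
    new = sum(1 for item in ground_truth if item in top_k and item not in past)
    return {"repeated": repeated, "new": new}
```

Plotting `repeated` vs. `new` counts per dataset and per K reproduces the shaded/unshaded grouping described in the caption.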
2308.09904 | 19 | 3.2 Enhance Alignment Given the critical importance of aligning with the user, we further implement a Learn-Act-Critic loop and a reflection mechanism to reinforce this alignment.
Learn-Act-Critic Loop. As shown in Figure 2(f), our Learn Agent collaborates with the Act and Critic Agents in an iterative process to grasp the user's personality. Upon receiving user action or feedback, the Learn Agent extracts an initial personality as a candidate. Then, the Act Agent utilizes this candidate as input to predict the user's actual action in reverse. The Critic Agent then assesses the accuracy of this prediction. If the prediction proves inaccurate, the Critic Agent delves into the underlying reasons and offers suggestions for corrections. The Learn Agent then incorporates these suggestions, refining the candidate's personality until it meets the Critic Agent's evaluation.
3.1.5 Reflect Agent. The Reflect Agent's role is to periodically review the learned personality. As illustrated in Figure 2(e), the Reflect Agent's input comprises the combination of newly acquired learned personality and existing personalities. The Reflect Agent then evaluates the combined personalities, identifying duplicate likes, duplicate dislikes, and conflicts between likes and dislikes. The rationale behind employing the Reflect Agent is to ensure the rationality of the learned personalities throughout the continuous learning process. | 2308.09904#19 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
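The Learn-Act-Critic loop described in the chunk above can be sketched as a small refinement loop in which each agent is a callable (standing in for an LLM call). The signatures and the round limit are assumptions for illustration:

```python
def learn_act_critic_loop(user_action, item, learn, act, critic, max_rounds=3):
    """Refine a candidate personality until the Critic accepts it.

    learn(user_action, item, feedback) -> candidate personality
    act(candidate, item)               -> predicted user action
    critic(predicted, user_action)     -> (accepted: bool, feedback: str)
    """
    feedback = None
    candidate = learn(user_action, item, feedback)
    for _ in range(max_rounds):
        predicted = act(candidate, item)          # predict the action in reverse
        accepted, feedback = critic(predicted, user_action)
        if accepted:
            break
        candidate = learn(user_action, item, feedback)  # relearn with the error analysis
    return candidate
```

With LLM-backed agents, `feedback` would carry the Critic's failure analysis as text in the next Learn prompt.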
2308.10053 | 19 | 4 EVALUATION In this section, we evaluate the proposed LLMs-based framework on ReDIAL [41], INSPIRED [22] and our Reddit datasets. We first explain the evaluation setup and a repeated item shortcut of the previous evaluation in Sections 4.1 and 4.2. Then, we re-train models and discuss LLM performance in Section 4.3.
Compared CRS Models. We consider several representative CRS models. For baselines which rely on structured knowledge, we use the entity linking results of ReDIAL and INSPIRED datasets provided by UniCRS [64]. Note that we do not include more works [43, 50, 54] because UniCRS [64] is representative with similar results.
• ReDIAL [41]: This model is released along with the ReDIAL dataset with an auto-encoder [58]-based recommender.
• KBRD [10]: This model proposes to use DBPedia [1] to enhance the semantic knowledge of items or entities.
• KGSF [74]: This model incorporates two knowledge graphs to enhance the representations of words and entities, and uses the Mutual Information Maximization method to align the semantic spaces of those two knowledge graphs.
• UniCRS [64]: This model uses a pre-trained language model, DialoGPT [69], with prompt tuning to conduct recommendation and conversation generation tasks respectively. | 2308.10053#19 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 20 | Reflecting on personality. To attain more accurate and comprehensive personalities, the assistant must seamlessly integrate the newly acquired personality with existing ones, rather than merely accumulating them. Inspired by [22], our reflection mechanism addresses issues arising from duplication and conflicts in learned personalities (preferences and aversions). Regarding duplication, the assistant can effortlessly merge duplicates without requiring additional information. However, handling conflicts may require a more delicate strategy. The Reflect Agent initiates by deconstructing
XXX'24, 2024, Singapore
conflicting traits into finer details to minimize overlaps. If conflicts persist after this step, the Reflect Agent formulates queries for users, seeking their input to resolve the conflicts.
3.3 Human-Centered Approaches In this section, we discuss key human-centered approaches employed within the RAH framework to reduce user burden, mitigate biases, and enhance user control. | 2308.09904#20 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
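The reflection step described in the chunk above (merge duplicate traits automatically, and turn like/dislike conflicts into queries for the user) reduces to deduplication plus an intersection test. The trait normalization and the query wording are assumptions for illustration:

```python
def reflect(preferences, dispreferences):
    """Deduplicate each trait list and surface preference/dispreference conflicts.

    Duplicates are merged automatically; conflicts are returned as queries
    that would be escalated to the user for resolution.
    """
    # dict.fromkeys deduplicates while preserving first-seen order.
    prefs = list(dict.fromkeys(p.strip().lower() for p in preferences))
    disprefs = list(dict.fromkeys(d.strip().lower() for d in dispreferences))
    conflicts = [t for t in prefs if t in set(disprefs)]
    queries = [
        f"You both like and dislike '{t}'; should this remain a preference "
        f"or a dispreference?"
        for t in conflicts
    ]
    return prefs, disprefs, queries
```

In the framework's flow, a residual conflict would first be split into finer-grained traits before such a query is sent to the user.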
2308.10053 | 20 | • UniCRS [64]: This model uses a pre-trained language model, DialoGPT [69], with prompt tuning to conduct recommendation and conversation generation tasks respectively.
4.1 Evaluation Setup
Repeated vs. New Items. Given a conversation C = (u_t, s_t, I_t)_{t=1}^T, it is challenging to identify the ground-truth recommended items, i.e., whether the mentioned items I_j at the j-th (j ≤ T) turn are used for recommendation purposes. A common evaluation setup assumes that when u_j is the recommender, all items i ∈ I_j serve as ground-truth recommended items.
In this work, we further split the items i ∈ I_j into two categories: repeated items or new items. Repeated items are items that have appeared in previous conversation turns, i.e., {i | ∃t ∈ [1, j), i ∈ I_t}; and new items are items not mentioned in previous conversation turns. We explain the details of this categorization in Section 4.2. | 2308.10053#20 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
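The repeated/new item split defined in Section 4.1 of chunk 2308.10053#20 above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code; the conversation is modeled simply as a list of per-turn item-ID sets.

```python
def split_repeated_new(turns):
    """Split each turn's mentioned items into (repeated, new) pairs.

    `turns` is a list of item-ID sets, one per conversation turn.
    An item at turn k is "repeated" if it appeared in any turn t < k.
    """
    seen = set()
    splits = []
    for items in turns:
        repeated = {i for i in items if i in seen}  # already mentioned earlier
        new = items - repeated                      # first mention in this turn
        splits.append((repeated, new))
        seen |= items
    return splits
```

For example, an item mentioned at turn 1 and again at turn 3 counts as new at turn 1 and repeated at turn 3.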
2308.09904 | 21 | Reduce user burden. The assistant reduces user burden through its learning and acting capabilities. It employs the Learn Agent to learn a unified user personality from diverse domain interactions in the user's history. This unified personality is then extrapolated across domains using the Act Agent, resulting in personalized proxy feedback to instruct recommender systems. This process helps users avoid abundant interactions and thus reduces user burden. Within a single domain, the assistant utilizes powerful LLMs to comprehend user personalities with fewer actions. Across domains, this unified personality alleviates the "cold start" issue and reduces the initial feedback burden. Additionally, the assistant can analyze user behavior across mixed domains, gradually constructing a more comprehensive personality that aligns better with the user.
a user's identity. For example, if a patient expresses interest in a treatment-related book, the assistant could provide extra proxy feedback, such as "Likes Professional Medical Literature", to the recommender system, thereby masking the patient's identity and suggesting they might be a medical professional. In response, the recommender system might suggest a mix of treatment-focused books and advanced medical literature. The assistant then uses the Act Agent to filter out the specialist literature, presenting only the relevant treatment-related options to the user. This strategy ensures privacy while delivering personalized recommendations tailored to the user's needs. | 2308.09904#21 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.09904 | 22 | 4 EXPERIMENTS SETTING In this section, we outline the specifics of our experiments and dataset preparation. Our evaluation of the RAH framework involves three experiments to assess: (1) the assistant's alignment with user preferences, (2) the performance of reducing user burden across various domains, and (3) the assistant's capability to mitigate bias. For all experiments, we utilize the GPT-4-0613 version of the LLM from OpenAI in our assistant.
Mitigate bias. To mitigate bias, the assistant leverages the Act Agent to act on items and generate proxy feedback. Human feedback, limited by time and energy, tends to be biased towards popular or seen items. The Act Agent addresses this limitation by offering expanded feedback on less popular or unseen items, thus reducing selection bias. This broader interaction history leads to less biased recommendations from the recommender systems. The Action Agent, based on LLMs, provides nuanced feedback, such as proxy comments, allowing for a deeper understanding of explicit user preferences. This enables recommender systems to focus on genuine user preferences rather than simply fitting to the training data, thus reducing inference bias. | 2308.09904#22 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 22 | 4.2 Repeated Items Can Be Shortcuts Current evaluation for conversational recommendation systems does not differentiate between repeated and new items in a conversation. We observed that this evaluation scheme favors systems that optimize for mentioning repeated items. As shown in Figure 3, a trivial baseline that always copies seen items from the conversation history has better performance than most previous models under the standard evaluation scheme. This phenomenon highlights the risk of shortcut learning [18], where a decision rule performs well against certain benchmarks and evaluations but fails to capture the true intent of the system designer. Indeed, the #HIT@1 for the models tested dropped by more than 60% on average when we focus on new item recommendation only, which is unclear from the overall recommendation performance. After manually checking, we observe a typical pattern of repeated items, which is shown in the example conversation in Figure 1. In this conversation, Terminator at the 6th turn is used as the ground-truth item. The system repeated this Terminator because the system quoted this movie for a content-based discussion during the conversation rather than making recommendations. Given the nature of recommendation conversations between two users, it is more probable that items repeated during a conversation are intended for discussion rather
[Footnote 10] We show standard errors as error bars in our figures and gray numbers in our tables.
Large Language Models as Zero-Shot Conversational Recommenders
CIKM '23, October 21–25, 2023, Birmingham, United Kingdom | 2308.10053#22 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
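The trivial copy baseline mentioned in chunk 2308.10053#22 above (always recommending items already seen in the conversation history) can be sketched as follows. The most-recent-first ordering is an assumption, since the chunk does not specify how the baseline ranks copied items.

```python
def copy_seen_baseline(history_turns, k=20):
    """A deliberately trivial CRS baseline: recommend only items that
    already appeared in the conversation, most recently mentioned first.

    `history_turns` is a list of per-turn item-ID lists.
    """
    ranked = []
    for turn in reversed(history_turns):  # newest turn first
        for item in turn:
            if item not in ranked:        # keep first (most recent) mention
                ranked.append(item)
    return ranked[:k]
```

Under an evaluation that does not separate repeated from new items, such a baseline can look deceptively strong, which is exactly the shortcut the chunk warns about.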
2308.09904 | 23 | Enhance user control. Different from the traditional framework consisting of users and a remote recommendation system, the assistant is designed to prioritize users' intentions and objectives. With the integration of LLMs, the assistant can operate on personal devices [30], empowering users and providing a more human-centered experience. The Act Agent plays a crucial role in enhancing user control through content filtering and tailored recommendations:
⢠Control recommendation results: Equipped with LLM, the Learn Agent comprehends complex human intentions effectively. The Act Agent then filters items and tailors rec- ommender systems to ensure recommended results align with user intentions. For instance, if a user instructs the as- sistant to exclude horrifying elements, the assistant filters out such movies, books, and games from recommendations and generates proxy actions such as âDislike" for items con- taining these elements.
Our datasets are sourced from three domains on Amazon: Movies, Books, and Video Games. Following the guidelines of previous research [19], we initially filter out users and items with fewer than five interactions. We then retain users who have interactions in more than one domain, allowing us to additionally evaluate RAH's performance in cross-domain situations (e.g., Movie&Book). Subsequently, to strike a balance between GPT-4 API calls and the training demands of the recommender system, we split the dataset into two parts: | 2308.09904#23 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
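The Act Agent's filtering behavior described in chunk 2308.09904#23 above (excluding items with user-rejected attributes and emitting proxy "Dislike" feedback) can be sketched as below. The item representation and function name are illustrative assumptions; the actual agent is LLM-based rather than rule-based.

```python
def act_agent_filter(items, excluded_tags):
    """Filter candidate items against user-stated exclusion rules.

    Returns the items kept for the user, plus proxy "Dislike" feedback
    for the filtered-out items (to be sent back to the recommender).
    `items` is a list of dicts with "title" and "tags" keys (assumed shape).
    """
    kept, proxy_feedback = [], []
    for item in items:
        if excluded_tags & set(item["tags"]):           # matches an exclusion rule
            proxy_feedback.append((item["title"], "Dislike"))
        else:
            kept.append(item)
    return kept, proxy_feedback
```

The proxy feedback is what steers the remote recommender away from the excluded content without the user having to rate those items personally.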
2308.10053 | 23 | [Figures: CRS recommendation performance on INSPIRED, ReDIAL, and Reddit comparing ReDIAL, KBRD, KGSF, UniCRS, BAIZE, Vicuna, GPT-3.5-t, and GPT-4; the numeric axis values are not recoverable from the extraction.] | 2308.10053#23 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 24 | ⢠Cross1k. We randomly select 1,000 users from the processed data, capturing their interactions to form a concise dataset. For these users, 1,000 personalized LLM-based assistants are created to learn from and act to them individually. For the following experiments, we further partition the interactions of Cross1k into three sets (Learn Set, Proxy Set, and Unseen Set) using an equal ratio of 1:1:1.
⢠Cross221k. The rest of the dataset includes 221,861 users and 4,624,903 interactions, and it can be used for training a stable recommender system without the challenges tied to insufficient training data.
The statistics of Cross1k and Cross221k can be found in Appendix 8.1. To test RAH's role in reducing bias, we follow the protocols of previous de-bias research [2, 31, 46] to simulate unbiased data for offline evaluation by sampling interactions according to the propensity scores of items.
⢠Control privacy: Beyond operating on personal devices, the assistant employs strategies to enhance privacy and person- alized recommendations. The assistant limits data sharing with recommender platforms and employs obfuscation strate- gies, such as providing obfuscated proxy feedback to mask | 2308.09904#24 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
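The 1:1:1 Learn/Proxy/Unseen partition of Cross1k described in chunk 2308.09904#24 above can be sketched as follows. Random shuffling with a fixed seed is an assumption; the chunk does not state how interactions are assigned to the three sets.

```python
import random

def split_learn_proxy_unseen(interactions, seed=0):
    """Partition one user's interactions into Learn, Proxy, and Unseen
    sets with an equal 1:1:1 ratio (remainder goes to Unseen).

    Shuffling before splitting is an assumption of this sketch.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = list(interactions)
    rng.shuffle(shuffled)
    n = len(shuffled) // 3
    return shuffled[:n], shuffled[n:2 * n], shuffled[2 * n:]
```

Applying this per user keeps the three sets disjoint, so alignment learned on the Learn Set can be evaluated against held-out Proxy and Unseen interactions.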
2308.10053 | 24 |
Figure 4: CRS recommendation performance on New Items in terms of Recall@K, with K ∈ {1, 5}. To exclude the influence of repeated items in CRS evaluation, we remove all repeated items in training and testing datasets and re-train all baselines.
Table 2: Recall@1 results of considering all generated item titles (Φ0) and only considering in-dataset item titles (Φ1).
Model     | INSPIRED Φ0 | INSPIRED Φ1 | ReDIAL Φ0   | ReDIAL Φ1   | Reddit Φ0   | Reddit Φ1
BAIZE     | .019 (.019) | .028 (.011) | .021 (.002) | .021 (.002) | .012 (.001) | .013 (.008)
Vicuna    | .028 (.011) | .033 (.012) | .020 (.002) | .020 (.002) | .012 (.001) | .012 (.001)
GPT-3.5-t | .047 (.015) | .052 (.015) | .041 (.003) | .043 (.003) | .022 (.001) | .023 (.001)
GPT-4     | .062 (.017) | .066 (.017) | .043 (.003) | .046 (.004) | .022 (.001) | .023 (.001)
(Standard errors in parentheses.)
Table 3: Fraction of Top-K (K = 20 in our prompt setup) recommendations (#rec) that can be string matched in the IMDB movie database (%imdb) for the different models, which shows a lower bound of non-hallucinated movie titles. | 2308.10053#24 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
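The two evaluation schemes of Table 2 in chunk 2308.10053#24 above (Φ0: keep all generated titles; Φ1: first drop titles outside the dataset's item set) can be sketched as a single Recall@1 helper. Names and the exact matching logic are illustrative assumptions.

```python
def recall_at_1(generated_titles, ground_truth, item_set=None):
    """Recall@1 under the two Table 2 schemes.

    With item_set=None this is the Phi0 scheme (all generated titles kept);
    passing the dataset's item set gives Phi1 (out-of-dataset titles dropped
    before taking the top-ranked title).
    """
    if item_set is None:
        titles = generated_titles
    else:
        titles = [t for t in generated_titles if t in item_set]
    if not titles:
        return 0.0
    return 1.0 if titles[0] in ground_truth else 0.0
```

Filtering before ranking is why Φ1 scores are never lower than Φ0 in the table: an out-of-dataset title at rank 1 no longer blocks a valid in-dataset recommendation below it.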
2308.10053 | 25 | Model     | #rec    | %imdb
BAIZE     | 259,333 | 81.56%
Vicuna    | 258,984 | 86.98%
GPT-3.5-t | 321,048 | 95.51%
GPT-4     | 322,323 | 94.86%
than serving as recommendations. We argue that, considering the large portion of repeated items (e.g., more than 15% of ground-truth items are repeated items in INSPIRED), it is beneficial to remove repeated items and re-evaluate CRS models to better understand models' recommendation ability. It is worth noting that repetition patterns have also been investigated in evaluating other recommender systems such as next-basket recommendation [40].
# 4.3 LLMs Performance
Finding 1 - LLMs outperform fine-tuned CRS models in a zero-shot setting. For a comparison between models' abilities to recommend new items to the user in conversation, we re-train existing CRS models on all datasets for new item recommendation only. The evaluation results are shown in Figure 4. Large language models, although not fine-tuned, have the best performance on all datasets. Meanwhile, the performance of all models is uniformly lower on Reddit compared to the other datasets, potentially due to the large number of items and fewer conversation turns, making recommendation more challenging. | 2308.10053#25 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
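Recall@K, the metric reported in Figure 4 of chunk 2308.10053#25 above, can be computed as below; this is the standard definition, not code from the paper.

```python
def recall_at_k(ranked_items, ground_truth, k):
    """Recall@K: fraction of ground-truth items found in the top-K ranks."""
    if not ground_truth:
        return 0.0
    hits = sum(1 for item in ranked_items[:k] if item in ground_truth)
    return hits / len(ground_truth)
```

For the new-item evaluation in the chunk, `ground_truth` would contain only items not previously mentioned in the conversation.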
2308.09904 | 26 | [Figure: Learn and Act alignment results across the Movie, Book, and Mixed domains for method variants including LR and LCR; the numeric panel values are not recoverable from the extraction.] | 2308.09904#26 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 26 | finding that smaller distilled models via imitation learning cannot fully inherit larger models' ability on downstream tasks [20].
Finding 3 - LLMs may generate out-of-dataset item titles, but few hallucinated recommendations. We note that language models trained on open-domain data naturally produce items outside the allowed item set during generation. In practice, removing these items improves the models' recommendation performance. Large language models outperform other models (with GPT-4 being the best) consistently regardless of whether these unknown items are removed or not, as shown in Table 2. Meanwhile, Table 3 shows that around 95% of generated recommendations from GPT-based models (around 81% from BAIZE and 87% from Vicuna) can be found in IMDB (footnote 11) by string matching. These lower bounds of the matching rates indicate that there are only a few hallucinated item titles in the LLM recommendations in the movie domain.
5 DETAILED ANALYSIS Observing LLMs' remarkable conversational recommendation performance for zero-shot recommendation, we are interested in what accounts for their effectiveness and what their limitations are. We aim to answer these questions from both a model and data perspective. | 2308.10053#26 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 27 | (Figure 3 residue: F1-score histograms for the L, LC, LR, and LCR task variants across the Movie, Book, Game, and Mixed domains; exact bar values are not recoverable from the extraction.) | 2308.09904#27 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 27 | Finding 2 - GPT-based models achieve superior performance to open-sourced LLMs. As shown in Figure 4, large language models consistently outperform other models across all three datasets, while GPT-4 is generally better than GPT-3.5-t. We hypothesize this is because GPT-4's larger parameter size enables it to retain more of the correlation information between movie names and user preferences that naturally occurs in the language models' pre-training data. Vicuna and BAIZE, while having comparable performance to prior models on most datasets, perform significantly worse than their teacher, GPT-3.5-t. This is consistent with previous works'
# 5.1 Knowledge in LLMs
Experiment Setup. Motivated by the probing work of [53], we posit that two types of knowledge in LLMs can be used in CRS:
⢠Collaborative knowledge, which requires the model to match items with similar ones, according to community in- teractions like âusers who like A typically also like Bâ. In
¹¹Movie titles in https://datasets.imdbws.com/.
CIKM '23, October 21–25, 2023, Birmingham, United Kingdom
He, et al. | 2308.10053#27 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 28 | Figure 3: Performance evaluation of the assistant's ability to align with users across singular, cross, and mixed domains. Histogram values represent the F1-Score against user actions. L for Learn Only, C for using Learn-Act-Critic loop, and R for the reflection mechanism.
5.1 Assistants' Alignment with Users For the first alignment-focused experiment, we task the assistant with assimilating personalities from the Learn Set and then generating proxy actions for items within the Proxy Set in Cross1k. In order to evaluate our assistant's alignment with users, an intuitive measure is whether an assistant can take actions consistent with a user's. Therefore, the evaluation process is: (1) We instruct the assistant to extract users' personalities from their interactions in the Learn Set, such as ratings and comments on items. (2) The assistant is then tasked with predicting actions on items in the Proxy Set. We then examine if these predicted actions align with the actual behaviors of users.
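The evaluation just described reduces to comparing the assistant's proxy like/dislike actions against the user's real actions and scoring with F1. A minimal sketch, assuming binary labels; names are illustrative:

```python
def f1_score(y_true: list[int], y_pred: list[int]) -> float:
    """F1 over binary actions: 1 = liked, 0 = disliked."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

user_actions  = [1, 0, 1, 1, 0]  # observed actions on the Proxy Set
proxy_actions = [1, 0, 0, 1, 1]  # assistant's predicted actions
print(round(f1_score(user_actions, proxy_actions), 3))  # 0.667
```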
Figure 3 presents the F1-score of the personality learning experiment. Overall, compared with Learn Only, either the Learn-Act-Critic loop or the reflection mechanism is helpful in aligning with users. Moreover, their combined application yields even more significant improvements. | 2308.09904#28 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.09904 | 29 | Learning and acting within the same domain yields better results compared to cross-domain operations. Furthermore, the results demonstrate that learning from a mixed domain outperforms learning from any single domain, such as movies, books, or games when considered independently. This suggests that LLM-based assistants possess the capability to reason and extrapolate users' personalities across different domains.
To gain a more comprehensive evaluation, we conduct the experiment to include both cross domains and mixed domains. For comparison, we have four tasks for personality learning:
⢠Learn Only: We directly append learned new likes or dis- likes into usersâ personalities without Critic Agent or Reflect Agent.
⢠Learn+Reflect: After appending new likes or dislikes to usersâ personalities, we employ the reflection mechanism to resolve potential duplication and conflicts.
⢠Learn+Critic: After acquiring new likes or dislikes from a particular user action, we input the new likes or dislikes and assess if the Act Agent can accurately infer the original user action in reverse. If not successful, the assistant should attempt another Learn-Act-Critic loop.
⢠Learn+Critic+Reflect: Both the Learn-Act-Critic loop and reflection mechanism are engaged for optimization. | 2308.09904#29 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 29 | Figure 5: Ablation studies for the research question about the primary knowledge used by LLMs for CRS. Here Φ1 is the post-processor which only considers in-dataset item titles; Φ2 is the post-processor based on Φ1 which further excludes all items already seen in the conversational context from generated recommendation lists. For inputs like Original (S0) and ItemOnly (S1), LLMs show similar performance with Φ1 or Φ2, so we only keep Φ1 here. We consider Φ2 because ItemRemoved (S2) and ItemRandom (S3) have no information about already mentioned items, which may cause under-estimated accuracy using Φ1 compared to Original.
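The two post-processors in the caption above can be sketched as simple list filters. This is a minimal sketch under assumed names (phi1 keeps only in-dataset titles; phi2 additionally drops items already mentioned in the conversation):

```python
def phi1(recs: list[str], item_set: set[str]) -> list[str]:
    """Keep only recommendations whose titles exist in the dataset's item set."""
    return [r for r in recs if r in item_set]

def phi2(recs: list[str], item_set: set[str], mentioned: set[str]) -> list[str]:
    """phi1, plus exclusion of items already seen in the conversation context."""
    return [r for r in phi1(recs, item_set) if r not in mentioned]

item_set = {"Alien", "Arrival", "Dune"}
recs = ["Alien", "Dune", "Solaris 2049"]   # last title is out-of-dataset
print(phi1(recs, item_set))                # ['Alien', 'Dune']
print(phi2(recs, item_set, {"Alien"}))     # ['Dune']
```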
(Figure residue: Recall@5 bars for the INSPIRED, ReDIAL, and Reddit datasets under (S1,Φ1), (S2,Φ2), and (S3,Φ2), grouped by item-occurrence bins {0}, [1,5), [5,10), and [10,+∞); exact bar values are not recoverable from the extraction.) | 2308.10053#29 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 30 | • Learn+Critic+Reflect: Both the Learn-Act-Critic loop and reflection mechanism are engaged for optimization.
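The Learn-Act-Critic loop and reflection step named in the task variants above can be sketched as follows, with the prompted LLM agents stubbed out as plain callables. All names and the trait format are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Action:
    item: str
    feedback: str  # "like" or "dislike"

def learn_act_critic(action, personality, learn, act, reflect, max_rounds=3):
    """Accept a learned trait only if the Act agent can reproduce the action."""
    for _ in range(max_rounds):
        trait = learn(action, personality)  # Learn agent proposes a new like/dislike
        if act(personality + [trait], action.item) == action.feedback:  # Critic check
            return reflect(personality + [trait])  # Reflect: dedupe/resolve conflicts
    return personality  # reject the trait if the loop keeps failing

# Toy stand-ins for the LLM agents:
learn = lambda a, p: f"{a.feedback}s {a.item}"
act = lambda p, item: "like" if f"likes {item}" in p else "dislike"
reflect = lambda p: sorted(set(p))  # crude dedupe, as reflection would

updated = learn_act_critic(Action("sci-fi movies", "like"), [], learn, act, reflect)
print(updated)  # ['likes sci-fi movies']
```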
5.2 Reduce Human Burden In the second experiment, we connect the assistant with traditional recommender systems within the RAH framework. To evaluate whether the assistant can reduce user burden, we measure how effectively the assistant can represent users and provide proxy feedback to calibrate the recommender systems using the RAH framework. We perform comparison experiments for various recommendation algorithms, both with and without assistants.
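The with/without-assistant comparison above is scored with NDCG@10 and Recall@10. A minimal sketch of NDCG@10 under binary relevance (Recall@10 is analogous), with illustrative names:

```python
import math

def ndcg_at_k(ranked: list[str], relevant: set[str], k: int = 10) -> float:
    """Binary-relevance NDCG@k for a single user's ranked recommendations."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

print(round(ndcg_at_k(["a", "b", "c"], {"a", "c"}), 4))  # 0.9197
```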
Without assistants, we train recommendation algorithms on Cross221k and the Learn Set of Cross1k. Lastly, we calculate the recommendation metric on the Unseen Set. With assistants, we initially use assistants to learn each user's personality on Learn Set and let the assistant make proxy feedback on Proxy Set (same as the first experiment). Then we train recommendation models on
Table 1: The performance of proxying user feedback and adjusting recommender systems. | 2308.09904#30 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 30 | Table 4: To understand the content/context knowledge in LLMs and existing CRS models, we re-train the existing CRS models using the same perturbed conversation context ItemRemoved (S2). We include the results of the representative CRS model UniCRS (denoted as CRS*) as well as a representative text-encoder BERT-small [15] (denoted as TextEnc*).
INSPIRED ReDIAL Reddit Model R@1 R@5 R@1 R@5 R@1 R@5 Vicuna GPT-3.5-t GPT-4 .024 .010 .057 .016 .062 .017 .062 .017 .123 .023 .128 .023 .014 .002 .030 .003 .032 .003 .053 .003 .105 .005 .102 .005 .008 .001 .018 .001 .019 .001 .025 .001 .068 .002 .075 .002 CRS* TextEnc* .039 .011 .038 .015 .087 .014 .090 .016 .015 .002 .013 .002 .058 .003 .053 .004 .001 .000 .002 .000 .008 .001 .009 .001 | 2308.10053#30 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 31 | Table 1 body (reconstructed from the flattened extraction; for each domain the two cells are NDCG@10 / Recall@10, shown as without assistant → with assistant (gain)):
LightGCN: Movie 0.5202→0.5524(+0.0322) / 0.5142→0.5339(+0.0197); Book 0.1283→0.1830(+0.0547) / 0.1439→0.1912(+0.0473); Game 0.3459→0.4330(+0.0871) / 0.4309→0.4974(+0.0665); Mixed 0.3403→0.4058(+0.0655) / 0.1696→0.2033(+0.0337)
PLMRec: Movie 0.0993→0.1200(+0.0207) / 0.1316→0.1692(+0.0376); Book 0.0092→0.0162(+0.0070) / 0.0143→0.0197(+0.0054); Game 0.3693→0.3981(+0.0288) / 0.4630→0.4790(+0.0160); Mixed 0.1075→0.1378(+0.0303) / 0.0656→0.0766(+0.0110)
FM: (row truncated at the chunk boundary) | 2308.09904#31 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 31 | Figure 6: GPT-3.5-t Recall@5 results grouped by the occurrences of items in conversation context, and count the conversations per dataset.
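A sketch of the grouping behind the figure caption above: bucket each test case by how often its ground-truth item occurs in conversation contexts, then average Recall@5 per bucket. The bucket edges follow the caption; all other names are illustrative assumptions:

```python
from collections import defaultdict

def recall_at_k(ranked, target, k=5):
    return 1.0 if target in ranked[:k] else 0.0

def bucket(n):
    if n == 0:
        return "{0}"
    if n < 5:
        return "[1,5)"
    if n < 10:
        return "[5,10)"
    return "[10,+inf)"

def grouped_recall(examples, occurrence, k=5):
    """examples: (ranked_recommendations, ground_truth_item) pairs."""
    sums, counts = defaultdict(float), defaultdict(int)
    for ranked, target in examples:
        b = bucket(occurrence.get(target, 0))
        sums[b] += recall_at_k(ranked, target, k)
        counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}

occ = {"Heat": 12, "Tenet": 2}
examples = [(["Heat", "Alien"], "Heat"), (["Dune"], "Tenet")]
print(grouped_recall(examples, occ))  # {'[10,+inf)': 1.0, '[1,5)': 0.0}
```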
our experiments, we define the collaborative knowledge in LLMs as the ability to make accurate recommendations using item mentions in conversational contexts.
⢠Content/context knowledge, which requires the model to match recommended items with their content or context in- formation. In our experiments, we define the content/context knowledge in LLMs as the ability to make accurate recom- mendations based on all other conversation inputs rather than item mentions, such as contextual descriptions, mentioned genres, and director names.
To understand how LLMs use these two types of knowledge, given the original conversation context S (Example in Figure 1), we perturb S with three different strategies as follows and subsequently re-query the LLMs. We denote the original as S0:
• S0 (Original): we use the original conversation context.
• S1 (ItemOnly): we keep mentioned items and remove all natural language descriptions in the conversation context.
• S2 (ItemRemoved): we remove mentioned items and keep
other content in the conversation context.
⢠S3 (ItemRandom): we replace the mentioned items in the conversation context with items that are uniformly sampled from the item set I of this dataset, to eliminate the potential influence of ð2 on the sentence grammar structure. | 2308.10053#31 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 32 | (Table 1, continued; for each domain the two cells are NDCG@10 / Recall@10, shown as without assistant → with assistant (gain). The leading values 0.1075→0.1378(+0.0303) / 0.0656→0.0766(+0.0110) are the Mixed-domain cells of the PLMRec row, overlapping the previous chunk.)
FM: Movie 0.3492→0.3919(+0.0427) / 0.3871→0.4257(+0.0386); Book 0.1216→0.1474(+0.0258) / 0.1299→0.1603(+0.0304); Game 0.2917→0.2937(+0.0020) / 0.3586→0.3624(+0.0038); Mixed 0.2421→0.2549(+0.0128) / 0.1262→0.1340(+0.0078)
MF: Movie 0.3737→0.4300(+0.0563) / 0.4450→0.4781(+0.0331); Book 0.1143→0.1520(+0.0377) / 0.1275→0.1593(+0.0318); Game 0.2074→0.2998(+0.0924) / 0.2622→0.3706(+0.1084); Mixed 0.1933→0.2651(+0.0718) / 0.1054→0.1487(+0.0433)
ENMF: Movie 0.4320→0.5200(+0.0880) (row truncated at the chunk boundary) | 2308.09904#32 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 32 | Finding 4 - LLMs mainly rely on content/context knowledge to make recommendations. Figure 5 shows a drop in performance for most models across various datasets when replacing the original conversation text Original (S0) with other texts, indicating that LLMs leverage both content/context knowledge and collaborative knowledge in recommendation tasks. However, the importance of these knowledge types differs. Our analysis reveals that content/context knowledge is the primary knowledge utilized by LLMs in CRS. When using ItemOnly (S1) as a replacement for Original, there is an average performance drop of more than 60% in terms of Recall@5. On the other hand, GPT-based models experience only a minor performance drop of less than 10% on average when using ItemRemoved (S2) or ItemRandom (S3) instead of Original. Although the smaller-sized model Vicuna shows a higher performance drop, it is still considerably milder compared to using ItemOnly. To accurately reflect the recommendation abilities of LLMs with ItemRemoved and ItemRandom, we introduce a new post-processor
| 2308.10053#32 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 33 | 0.1054 0.1487(+0.0433) ENMF ENMF No Yes 0.4320 0.5200(+0.0880) 0.3953 0.4831(+0.0878) 0.0994 0.1224(+0.0230) 0.0997 0.1217(+0.0220) 0.0652 0.0788(+0.0136) 0.1036 0.1247(+0.0211) 0.2630 0.3224(+0.0594) 0.1227 0.1531(+0.0304) NeuralMF NeuralMF No Yes 0.4720 0.4856(+0.0136) 0.4878 0.4906(+0.0028) 0.1364 0.1631(+0.0267) 0.1385 0.1658(+0.0273) 0.2160 0.3507(+0.1347) 0.2704 0.4086(+0.1382) 0.2891 0.3451(+0.0560) 0.1507 0.1742(+0.0235) ItemKNN ItemKNN No Yes 0.1211 0.2131(+0.0920) 0.1035 | 2308.09904#33 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 33 | Large Language Models as Zero-Shot Conversational Recommenders
Table 5: To understand the collaborative knowledge in LLMs and existing CRS models, we re-train the existing CRS models using the same perturbed conversation context ItemOnly (ð1). We include the results of the representative CRS model UniCRS (denoted as CRS*) as well as a representative item-based collaborative model FISM [31] (denoted as ItemCF*).
INSPIRED ReDIAL Reddit Model R@1 R@5 R@1 R@5 R@1 R@5 Vicuna GPT-3.5-t GPT-4 .005 .005 .024 .010 .014 .008 .024 .010 .052 .015 .052 .015 .011 .002 .021 .002 .025 .002 .039 .003 .063 .004 .069 .004 .005 .000 .007 .001 .007 .001 .015 .001 .026 .001 .028 .001 CRS* ItemCF* .038 .013 .042 .012 .085 .019 .087 .016 .025 .002 .029 .003 .072 .004 .088 .004 .003 .000 .004 .001 .015 .001 .018 .001 | 2308.10053#33 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
denoted as Φ2 (described in the caption of Figure 5). By employing Φ2, the performance gaps between Original and ItemRemoved (or ItemRandom) are further reduced. Furthermore, Figure 6 demonstrates the consistent and close performance gap between Original and ItemRemoved (or ItemRandom) across different testing samples, which vary in size and the number of item mentions in Original.
These results suggest that given a conversation context, LLMs primarily rely on content/context knowledge rather than collaborative knowledge to make recommendations. This behavior interestingly diverges from many traditional recommenders like collaborative filtering [23, 24, 36, 46, 55, 58] or sequential recommenders [25, 33, 59, 73], where user-interacted items are essential.
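The Original / ItemOnly / ItemRemoved / ItemRandom probes discussed above can be sketched as follows. This is a simplified illustration that assumes the conversation is pre-segmented into text spans flagged as item mentions; the paper's exact preprocessing may differ:

```python
import random

def perturb_context(segments, mode, catalog=None, seed=0):
    """Build a perturbed conversation context from (text, is_item_mention) pairs."""
    rng = random.Random(seed)
    if mode == "ItemOnly":      # keep only the mentioned item titles
        return [t for t, is_item in segments if is_item]
    if mode == "ItemRemoved":   # drop all item titles, keep the surrounding text
        return [t for t, is_item in segments if not is_item]
    if mode == "ItemRandom":    # swap each title for a random catalog item
        return [rng.choice(catalog) if is_item else t for t, is_item in segments]
    raise ValueError(f"unknown mode: {mode}")
```

Comparing a model's recall under these variants separates what it learned from conversation text (content/context knowledge) from what it learned about item co-occurrence (collaborative knowledge).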
Finding 5 - GPT-based LLMs possess better content/context knowledge than existing CRS. From Table 4, we observe the superior recommendation performance of GPT-based LLMs against representative conversational recommendation or text-only models on all datasets, showing the remarkable zero-shot abilities in understanding user preference with the textual inputs and generating correct item titles. We conclude that GPT-based LLMs can provide more accurate recommendations than existing trained CRS models in an ItemRemoved (ð2) setting, demonstrating better content/context knowledge. | 2308.10053#34 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 35 | Cross221k, Learn Set and the assistant's proxy feedback, and likewise test on Unseen Set. The involved recommendation algorithms are as follows:
# 5.3 Mitigate Bias
# Table 2: The performance of alleviating bias.
• LightGCN[10]: A model that enhances recommender systems by simplifying neighborhood aggregation, and learns embeddings through linear propagation on the interaction graph.
• PLMRec[36]: A recommendation model that uses PLMs like BERT to embed the content of items for deeper semantic mining.
Method       NDCG@10   Recall@10
MF           0.1835    0.2085
MF+IPS       0.2148    0.2424
MF+RAH       0.5017    0.4326
MF+IPS+RAH   0.5196    0.4554
• FM[23]: Model that combines SVM advantages with factorization models, using factorized parameters to model interactions in sparse data.
• MF[13]: Use matrix factorization techniques for recommendation systems to generate product recommendations by using historical data.
• ENMF[3]: Based on simple neural matrix factorization, it optimizes model parameters from the entire training data without sampling.
• NeuralMF[11]: A framework that uses deep neural networks modeling collaborative filtering based on implicit feedback and user-item feature interactions. | 2308.09904#35 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 35 | Finding 6 - LLMs generally possess weaker collaborative knowledge than existing CRS. In Table 5, the results from INSPIRED and ReDIAL indicate that LLMs underperform existing representative CRS or ItemCF models by 30% when using only the item-based conversation context ItemOnly (ð1). It indicates that LLMs, trained on a general corpus, typically lack the collaborative knowledge exhibited by representative models trained on the target dataset. There are several possible reasons for this weak collaborative knowledge in LLMs. First, the training corpus may not contain sufficient information for LLMs to learn the underlying item similarities. Second, although LLMs may possess some collaborative knowledge, they might not align with the interactions in the target datasets, possibly because the underlying item similarities can be highly dataset- or platform-dependent.
However, in the case of the Reddit dataset, LLMs outperform baselines in both Recall@1 and Recall@5, as shown in Table 5. This outcome could be attributed to the dataset's large number of rarely interacted items, resulting in limited collaborative information. The
CIKM '23, October 21–25, 2023, Birmingham, United Kingdom | 2308.10053#35 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 36 | • NeuralMF[11]: A framework that uses deep neural networks modeling collaborative filtering based on implicit feedback and user-item feature interactions.
• ItemKNN[5]: An item-based Top-N recommendation algorithm that uses item similarities to determine the recommendation set.
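Several of the baselines listed (MF, NeuralMF, ENMF) build on matrix factorization. A single SGD step for the classic regularized MF objective looks roughly like this; it is a generic sketch, not these papers' implementations:

```python
import numpy as np

def mf_sgd_step(P, Q, u, i, r, lr=0.05, reg=0.0):
    """One SGD update on user factors P[u] and item factors Q[i] for rating r."""
    pu = P[u].copy()                      # cache user factors before updating
    err = r - pu @ Q[i]                   # prediction error on this rating
    P[u] += lr * (err * Q[i] - reg * pu)  # gradient step for the user
    Q[i] += lr * (err * pu - reg * Q[i])  # gradient step for the item
    return err
```

Repeating this step over observed (user, item, rating) triples drives the dot product of the factor vectors toward the observed ratings; the `reg` term penalizes large factors.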
Table 1 presents the results of our comparison. The data suggest that, conditioned on an equal number of user interactions, the performance of various recommender systems can be improved when the assistant is integrated. Namely, after learning user personalities, the assistant can effectively calibrate recommender systems using proxy feedback. These outcomes resonate with the non-invasive design of the RAH framework. The assistant preserves the inherent pattern between the recommender system (which recommends items and gathers feedback) and the user (who receives recommendations and provides feedback). As a result, the RAH framework demonstrates remarkable adaptability across various recommender systems. | 2308.09904#36 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 36 | CIKM '23, October 21–25, 2023, Birmingham, United Kingdom
[Figure 7 plot residue. Subfigure (a) legend: MsMarco, INSPIRED, DailyDialog, Reddit, ReDIAL, HotpotQA, RAND; x-axis: Token Counts (×10^6). Subfigure (b): bars FT, PT, PT+FT over FISM; y-axis ticks 0.000–0.040.]
(a) Entropy vs. Token Counts (b) Pre-training Effectiveness
Figure 7: The left subfigure shows the entropy of the frequency distribution of 1,2,3-grams with respect to number of words drawn from each dataset (item names excluded) to measure the content/context information across datasets. The right subfigure shows the results of the processed Reddit collaborative dataset aligned to ML-25M [21]. RAND denotes random baseline, FT denotes fine tuning on Reddit, PT denotes pre-training on ML-25M, PT+FT means FT after PT.
Reddit dataset contains 12,982 items with no more than 3 mentions as responses. This poses a challenge in correctly ranking these items within the Top-5 or even Top-1 positions. LLMs, which possess at least some understanding of the semantics in item titles, have the chance to outperform baselines trained on datasets containing a large number of cold-start items. | 2308.10053#36 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 37 | In the RAH framework, the assistant provides an opportunity to address the bias problem. The above experiments demonstrate the capability of assistants to learn from user actions and make proxy feedback on items. Therefore, the assistant can also represent human users to provide proxy feedback on unpopular items and alleviate the bias in the system. To conduct the experiment, we select unpopular items (associated with fewer than ten reviews) in the Cross1k dataset and randomly sample user assistants to make proxy feedback on unpopular items until these items have no fewer than ten reviews. For comparison, we also compare against a de-biasing method, Inverse Propensity Scoring (IPS) [25]. The IPS method in recommender systems adjusts for selection bias by reweighting observed data based on the likelihood of an item being recommended.
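The IPS reweighting described above can be sketched as follows, with propensities estimated as item popularity shares. That estimator is one common choice for illustration, not necessarily the one used in [25]:

```python
from collections import Counter

def ips_weights(interactions):
    """Map each item to 1/propensity, with propensity = item's share of all interactions."""
    counts = Counter(item for _, item in interactions)
    n = len(interactions)
    return {item: n / c for item, c in counts.items()}
```

A training loss would then scale each example's error by `weights[item]`, so rarely exposed items count more and popularity bias in the observed data is counteracted.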
Subsequently, we evaluate the performance on simulated unbiased test data derived from sampling. Specifically, the probability of sampling a user-item interaction is formulated to be inversely proportional to the frequency of the involved item [31]. Table 2 shows that both IPS and RAH are effective in mitigating bias compared with the baseline. Remarkably, when combined, the IPS and RAH approach emerges as a particularly robust de-biasing technique [4], showing greater efficacy in bias reduction. | 2308.09904#37 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 37 | Recent research on LLMs in traditional recommendation systems [27, 34, 48] also observes the challenge of effectively leveraging collaborative information without knowing the target interaction data distribution. Additionally, another study [3] on traditional recommendation systems suggests that LLMs are beneficial in a setting with many cold-start items. Our experimental results support these findings within the context of conversational recommendations.
# 5.2 Information from CRS Data | 2308.10053#37 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 38 | # 5.4 Increase User Control
5.4.1 Control Recommendation Results. The first case, as illustrated in Figure 4(a), demonstrates how the assistant can enhance user control over recommended results. In this case, since the user often watches movies with a child, the user expresses dissatisfaction
XXX'24, 2024, Singapore
[Human]
User Action: Dislike The Incredibles (Pixar film)
User Comment: I watch films with my kid. It is too dark for children, yet too childish for adults, with too much mindless violence.
[Assistant]
Learn: Prefer: family movies | Disprefer: heavy dark elements, too childish, lots of violence | ......
[Rec System]
Recommend: (1) Coco (2) Ironman (3) Batman: The Dark Knight
[Assistant]
Act: (1) Like, pass to the user (2) Not Sure, pass to the user to learn from human feedback (3) Dislike, proxy feedback to the recommender system
(a) Control Recommendation Results | 2308.09904#38 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 38 | # 5.2 Information from CRS Data
Experimental Setup for Finding 7. To understand LLMs in CRS tasks from the data perspective, we first measure the content/context information in CRS datasets. Content/context information refers to the amount of information contained in conversations, excluding the item titles, which reasonably challenges existing CRS and favors LLMs according to the findings in Section 5.1. Specifically, we conduct an entropy-based evaluation for each CRS dataset and compare the conversational datasets with several popular conversation and question-answering datasets, namely DailyDialog (chit chat) [45], MsMarco (conversational search) [2], and HotpotQA (question answering). We use ItemRemoved (ð2) conversation texts like Section 5.1, and adopt the geometric mean of the entropy distribution of 1,2,3-grams as a surrogate for the amount of information contained in the datasets, following previous work on evaluating information content in text [29]. However, entropy naturally grows with the size of a corpus, and each CRS dataset has a different distribution of words per sentence, sentences per dialog, and corpus size. Thus, it would be unfair to compare entropy between corpora on a
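The entropy surrogate described above (geometric mean of 1-, 2-, and 3-gram entropies) can be sketched as follows; tokenization details and the token-count control from Figure 7 are simplified away in this illustration:

```python
import math
from collections import Counter

def ngram_entropy(tokens, n):
    """Shannon entropy (bits) of the n-gram frequency distribution."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(grams.values())
    return -sum((c / total) * math.log2(c / total) for c in grams.values())

def info_score(tokens, orders=(1, 2, 3)):
    """Geometric mean of n-gram entropies, a surrogate for information content."""
    ents = [ngram_entropy(tokens, n) for n in orders]
    return math.prod(ents) ** (1 / len(ents))
```

Because entropy grows with corpus size, a fair comparison (as in the paper's setup) scores equal-sized token samples drawn from each dataset rather than whole corpora.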
| 2308.10053#38 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 39 | (a) Control Recommendation Results
[Human] # User Action: Like The Depression Cure: The 6-Step Program to Beat Depression without Drugs [Assistant] # This action carries a potential risk of privacy leakage. Suggest two personality confusion strategies. + Strategy I (pretend a psychologist) The assistant will automatically express more Likes on professional psychology textbooks to the recommender system. + Strategy II (pretend a shared account) The assistant will automatically express random Likes and Dislikes. [Human] (select and enable a protection strategy) [Rec System] (recommend several items) [Assistant] # Act + For the user: filter recommended items from the recommender system so recommendations remain accurate; + For the recommender system: selectively express the user's real feedback and create some extra feedback to protect privacy
(b) Control Personal Privacy
# Figure 4: The case study.
with the movie The Incredibles citing reasons such as it being "too childish for adults" and "too dark for children." From this feedback, the assistant discerns that the user favors family movies that strike a balance in content, avoiding extremes in themes. | 2308.09904#39 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 39 | CIKM '23, October 21-25, 2023, Birmingham, United Kingdom
[Figure 8 image: scatter plots for ReDIAL (left) and Reddit (right); x-axis: Ground Truth (%), y-axis: generated frequency (%); annotated items include The Shawshank Redemption, The Hangover, Inception, and Avengers: Infinity War.]
Figure 8: Scatter plots of the frequency of LLM (GPT-4) generated recommendations and ground-truth items.
per-dialog, per-turn, or per-dataset basis. To ensure a fair comparison, we repeatedly draw increasingly large subsets of texts from each of the datasets, compute the entropy of these subsets, and report the trend of entropy growth with respect to the size of the subsampled text for each CRS dataset. | 2308.10053#39 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
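The entropy surrogate and the size-controlled comparison described in the setup above can be sketched as follows; whitespace tokenization, the unigram-only growth curve, and all function names are illustrative assumptions, not the paper's implementation:

```python
import math
import random
from collections import Counter

def ngram_entropy(tokens, n):
    """Shannon entropy (bits) of the empirical n-gram distribution."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(grams.values())
    return -sum((c / total) * math.log2(c / total) for c in grams.values())

def information_score(text):
    """Geometric mean of the 1-, 2-, and 3-gram entropies: the surrogate
    for the amount of information contained in a corpus."""
    tokens = text.split()
    entropies = [ngram_entropy(tokens, n) for n in (1, 2, 3)]
    return math.prod(entropies) ** (1 / 3)

def entropy_growth_curve(sentences, sizes, trials=5, seed=0):
    """Average entropy of increasingly large subsamples, so corpora of
    different sizes are compared by their growth trend, not raw entropy."""
    rng = random.Random(seed)
    curve = []
    for size in sizes:
        vals = []
        for _ in range(trials):
            pool = list(sentences)
            rng.shuffle(pool)
            tokens = []
            for s in pool:
                tokens.extend(s.split())
                if len(tokens) >= size:
                    break
            vals.append(ngram_entropy(tokens[:size], 1))
        curve.append(sum(vals) / trials)
    return curve
```

Under this protocol, two corpora are judged by how their entropy curves grow with subsample size rather than by a single entropy value.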
2308.09904 | 40 | Subsequently, the recommender system suggests three movies: Coco, Ironman, and Batman: The Dark Knight. Leveraging the reasoning capabilities and real-world knowledge of LLMs, the assistant can make informed decisions on items to align with user intentions. For Coco, the assistant identifies it as a likely match for the user due to its family-friendly nature and passes the recommendation to the user. Regarding Ironman, the assistant, uncertain of its suitability, also passes this recommendation to the user, seeking additional feedback. In contrast, Batman: The Dark Knight, known for its dark and potentially violent content, is deemed possibly unsuitable based on the user's preferences. The assistant decides to "Dislike" this recommendation on behalf of the user, supplying proxy feedback to the recommender system for future refinement.
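The assistant's Act step in this case can be illustrated with a minimal rule-based sketch; the trait labels and function names are assumptions for illustration (the RAH assistant itself reasons with an LLM rather than fixed rules):

```python
# Items whose traits clearly conflict with learned preferences receive a
# proxy "Dislike" on the user's behalf; everything else (including
# uncertain items) is passed through so the user can give fresh feedback.
LEARNED_DISLIKES = {"dark", "violent", "extreme"}

def act_on_recommendations(items):
    """items: list of (title, traits) pairs -> (passed, proxy_disliked)."""
    passed, proxy_disliked = [], []
    for title, traits in items:
        if traits & LEARNED_DISLIKES:
            proxy_disliked.append(title)  # filtered; proxy feedback to RecSys
        else:
            passed.append(title)          # forwarded to the user
    return passed, proxy_disliked
```

With the three movies above, ("Coco", {"family"}) and ("Ironman", {"action"}) would be passed through, while ("Batman: The Dark Knight", {"dark", "violent"}) would be filtered with a proxy Dislike.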
5.4.2 Control Privacy. The second case, depicted in Figure 4(b), highlights how the assistant can bolster user control over personal privacy. In this case, a user expresses interest in a specific book titled The Depression Cure: The 6-Step Program to Beat Depression without Drugs. The assistant identifies that such an action might lead to potential privacy leakage: expressing a preference for content on mental health might disclose sensitive information about the user. The assistant therefore offers two personality confusion strategies to help control privacy.
Yubo Shu, et al. | 2308.09904#40 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 40 | Finding 7 - Reddit provides more content/context information than the other two CRS datasets. Based on the results in Figure 7a, we observe that the Reddit dataset has the most content/context information among the three conversational recommendation datasets. These observations also align with the results in Figure 5 and Table 4, where LLMs, which possess better content/context knowledge than the baselines, achieve higher relative improvements compared to the other two datasets. Meanwhile, the content/context information in Reddit is close to that of question answering and conversational search, and higher than that of existing conversational recommendation and chit-chat datasets. | 2308.10053#40 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 41 | Yubo Shu, et al.
Strategy I (Pretend a Psychologist): The assistant, mimicking the behavior of a psychologist, will express more "Like" on professional psychology textbooks within the recommender system. This action serves to dilute the user's preference, making it ambiguous whether the original interest in the depression-related book was due to personal reasons or professional curiosity.
Strategy II (Pretend a Shared Account): The assistant will automatically generate a mix of random likes and dislikes. This strategy gives the impression of multiple users sharing a single account, thereby obfuscating individual preferences and adding a layer of ambiguity to the user's actions.
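A minimal sketch of the two confusion strategies; the item titles, noise counts, and function names are illustrative assumptions, not the paper's implementation:

```python
import random

# Hypothetical titles used to dilute a sensitive preference signal.
PSYCHOLOGY_TEXTBOOKS = ["Cognitive Psychology", "Handbook of Clinical Assessment"]

def pretend_psychologist(real_feedback):
    """Strategy I: mix sensitive Likes with Likes on professional texts."""
    noise = [(title, "Like") for title in PSYCHOLOGY_TEXTBOOKS]
    return real_feedback + noise

def pretend_shared_account(real_feedback, n_noise=3, seed=0):
    """Strategy II: add random Like/Dislike signals, as if the account
    were shared by several people."""
    rng = random.Random(seed)
    noise = [(f"random_item_{i}", rng.choice(["Like", "Dislike"]))
             for i in range(n_noise)]
    return real_feedback + noise
```

Either strategy returns the feedback stream that is actually sent to the recommender system, in which the user's real signal is no longer individually identifying.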
If the user adopts one of the strategies, the assistant selectively provides real user feedback and creates additional feedback, further protecting privacy. The assistant can also filter items from the recommender system to ensure that recommendations remain personalized despite the noise introduced by the selected strategy. | 2308.09904#41 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 41 | Finding 8 - Collaborative information is insufficient for satisfactory recommendations, given the current models. Quantifying the collaborative information in datasets is challenging. Instead of proposing methods to measure collaborative information, we aim to make new observations based on the general performance results presented in Figure 4 and the recommendation results using only collaborative information in Table 5. Comparing the performance of the best models in Table 5 under the ItemOnly (𝑐1) setting with the performance of the best models in Figure 4 under the Original (𝑐0) setting reveals a significant disparity. For instance, on ReDIAL, the Recall@1 performance is 0.029 for ItemCF* compared to 0.046 for GPT-4, representing a 36.96% decrease. Similarly, for Reddit, the Recall@1 performance is 0.007 compared to 0.023 for GPT-4, which is 69.57% lower. We also experimented with other recommender systems, such as transformer-based models [33, 59], to encode the item-only inputs and found similar results. Based on the current performance gap, we find that using the existing models, relying solely on collaborative information, is insufficient to provide | 2308.10053#41 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
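As a quick check, the relative decreases reported in Finding 8 above can be recomputed from the Recall@1 values: the Reddit pair gives 69.57%, and the ReDIAL pair works out to 36.96%.

```python
def relative_drop(item_only, original):
    """Percent decrease of the ItemOnly score relative to the Original score."""
    return 100.0 * (original - item_only) / original

redial_drop = relative_drop(0.029, 0.046)  # ReDIAL: ItemCF* vs. GPT-4
reddit_drop = relative_drop(0.007, 0.023)  # Reddit: best ItemOnly vs. GPT-4
```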
2308.09904 | 42 | 6 RELATED WORK 6.1 Human-Centered Recommendation The human-centered recommender system [12] focuses on understanding the characteristics and complex relationships between the recommender system and users in the recommendation scenario. Unlike the "accuracy-only" approach of traditional recommender systems, the human-centered recommender system pays more attention to user experience, taking user satisfaction and needs, such as privacy protection, as optimization goals. Recent works show that this field has attracted researchers from both the social sciences and computational fields. [39] proposed a new federated recommendation framework called Federated Masked Matrix Factorization (FedMMF), which can protect data privacy in federated recommender systems without sacrificing efficiency and effectiveness. EANA [21] improves the training speed and effectiveness of large-scale recommender systems while protecting user privacy through an embedding-aware noise addition method. [42] proposed a new human-centered dialogue recommendation method, which provides more helpful recommendations to users by understanding and adapting to user needs during the dialogue process. | 2308.09904#42 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 42 | inputs and found similar results. Based on the current performance gap, we find that using the existing models, relying solely on collaborative information, is insufficient to provide satisfactory recommendations. We speculate that either (1) more advanced models or training methods are required to better comprehend the collaborative information in CRS datasets, or (2) the collaborative information in CRS datasets is too limited to support satisfactory recommendations. | 2308.10053#42 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 43 | 6.2 LLM For Recommendation Large Language Models (LLMs) in Natural Language Processing (NLP) are now employed in recommender systems due to their vast knowledge and logical reasoning. LLMs for Recommendation (LLM4Rec) are mainly used in two ways: enhancing features and directly recommending. The first approach leverages LLMs for feature extraction, enhancing traditional systems. Notable works include encoding news [17, 36, 37, 40, 43] and tweets [44] for recommendations. The second approach forms input sequences for LLMs, letting them directly recommend. [16, 33] relied on prompts for recommendations. [1] proposed a two-stage method: fine-tuning LLMs with recommendation data and then using them for recommendations. Works like [6, 7, 34] delved into LLMs' role in conversational recommender systems.
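A minimal sketch of the second, prompt-based approach; the prompt template and the `complete` callable standing in for an LLM API are assumptions for illustration:

```python
def build_recommendation_prompt(dialog, n_items=5, domain="movies"):
    """Serialize a dialog into a zero-shot recommendation prompt."""
    history = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in dialog)
    return (
        f"Pretend you are a {domain} recommender system.\n"
        f"Conversation so far:\n{history}\n"
        f"Recommend {n_items} {domain}, one title per line, with no explanations."
    )

def recommend(dialog, complete, n_items=5):
    """`complete` is any prompt -> text callable (e.g., an LLM API client)."""
    reply = complete(build_recommendation_prompt(dialog, n_items))
    return [line.strip() for line in reply.splitlines() if line.strip()][:n_items]
```

The LLM's free-text reply is parsed into a ranked list, which is how the direct-recommendation approach turns a dialog into top-N items without any fine-tuning.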
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
XXX'24, 2024, Singapore | 2308.09904#43 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 43 | He, et al.
[Figure 9 image: two bar charts over countries (including USA, JPN, RUS, AUS, CAN, ESP, FRA, DEU): "Ground-Truth Freq By Country" and "Recall@1 with GPT-4".]
Figure 9: Ground-truth item counts in Reddit by country (in log scale) and the corresponding Recall@1 by country.
Experimental Setup for Finding 9. To understand whether the collaborative information from CRS datasets is aligned with pure interaction datasets, we conduct an experiment on the Reddit dataset. In this experiment, we first process the dataset to link the items to a popular interaction dataset, ML-25M [21]. We then experiment with two representative encoders for item-based collaborative filtering, based on FISM [31] and Transformer [59] (TRM), respectively. We report the testing results on Reddit, with fine-tuning on Reddit (FT), pre-training on ML-25M (PT), and pre-training on ML-25M then fine-tuning on Reddit (PT+FT). Note that since this is a linked dataset with additional processing, the results are not comparable with the aforementioned results on Reddit. | 2308.10053#43 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 44 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
XXX'24, 2024, Singapore
6.3 LLM-based Agent With the emergence of Large Language Models (LLMs), their autonomy, reactivity, and pro-activeness have brought hope and progress toward realizing intelligent agents [38]: systems that can engage in dialogue, complete tasks, reason, and exhibit a degree of autonomous action. Work [22] demonstrated the feasibility of LLM-based agents by building an intelligent town supported by LLMs, showing that LLM-based agents have strong credibility and adaptability. Work [32] built an LLM-based agent on the Minecraft game platform and proposed an iterative prompting mechanism of environmental feedback → execution error → self-verification, showing that LLM-based agents have lifelong learning ability in such scenarios and strong generalization to new tasks. Similarly, work [28] divides the LLM-based agent into three modules from the perspective of cognitive science: the control end, the perception end, and the action end. Work [18] proposes a training paradigm that allows LLMs to learn social norms and values from simulated social interactions. | 2308.09904#44 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |
2308.10053 | 44 | Finding 9 - Collaborative information can be dataset- or platform-dependent. Figure 7b shows that the models solely pre-trained on ML-25M (PT) outperform a random baseline, indicating that the data in CRS may share item similarities with pure interaction data from another platform to some extent. However, Figure 7b also shows a notable performance gap between PT and fine-tuning on Reddit (FT). Additionally, we do not observe further performance improvement when pre-training on ML-25M and then fine-tuning on Reddit (PT+FT). These observations indicate that the collaborative information and underlying item similarities, even when utilizing the same items, can be largely influenced by the specific dataset or platform. The finding may also partially explain the inferior zero-shot recommendation performance of LLMs in Table 5 and suggests the necessity of further checking the alignment of collaborative knowledge in LLMs with the target datasets.
# 5.3 Limitations of LLMs as Zero-shot CRS | 2308.10053#44 | Large Language Models as Zero-Shot Conversational Recommenders | In this paper, we present empirical studies on conversational recommendation
tasks using representative large language models in a zero-shot setting with
three primary contributions. (1) Data: To gain insights into model behavior in
"in-the-wild" conversational recommendation scenarios, we construct a new
dataset of recommendation-related conversations by scraping a popular
discussion website. This is the largest public real-world conversational
recommendation dataset to date. (2) Evaluation: On the new dataset and two
existing conversational recommendation datasets, we observe that even without
fine-tuning, large language models can outperform existing fine-tuned
conversational recommendation models. (3) Analysis: We propose various probing
tasks to investigate the mechanisms behind the remarkable performance of large
language models in conversational recommendation. We analyze both the large
language models' behaviors and the characteristics of the datasets, providing a
holistic understanding of the models' effectiveness, limitations and suggesting
directions for the design of future conversational recommenders | http://arxiv.org/pdf/2308.10053 | Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley | cs.IR, cs.AI | Accepted as CIKM 2023 long paper. Longer version is coming soon
(e.g., more details about dataset) | null | cs.IR | 20230819 | 20230819 | [
{
"id": "2302.13971"
},
{
"id": "2304.03879"
},
{
"id": "2303.17568"
},
{
"id": "2305.07961"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "2303.08774"
},
{
"id": "2305.15717"
},
{
"id": "1611.09268"
},
{
"id": "2207.12515"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2305.13112"
},
{
"id": "2112.05197"
},
{
"id": "2305.06474"
},
{
"id": "2304.11406"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2304.01196"
}
] |
2308.09904 | 45 | [7] Yunfan Gao, Tao Sheng, Youlin Xiang, Yun Xiong, Haofen Wang, and Jiawei Zhang. 2023. Chat-rec: Towards interactive and explainable llms-augmented recommender system. arXiv preprint arXiv:2303.14524 (2023).
[8] Yingqiang Ge, Shuchang Liu, Zuohui Fu, Juntao Tan, Zelong Li, Shuyuan Xu, Yunqi Li, Yikun Xian, and Yongfeng Zhang. 2022. A survey on trustworthy recommender systems. arXiv preprint arXiv:2207.12515 (2022).
[9] Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). In Proceedings of the 16th ACM Conference on Recommender Systems. 299–315.
[10] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 639–648.
Recommender systems play a crucial role in Human-Computer Interaction (HCI) by
tailoring content based on individual preferences. Despite their importance,
challenges persist in balancing recommendation accuracy with user satisfaction,
addressing biases while preserving user privacy, and solving cold-start
problems in cross-domain situations. This research argues that addressing these
issues is not solely the recommender systems' responsibility, and a
human-centered approach is vital. We introduce the RAH (Recommender system,
Assistant, and Human) framework, an innovative solution with LLM-based agents
such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment
with user personalities. The framework utilizes the Learn-Act-Critic loop and a
reflection mechanism for improving user alignment. Using the real-world data,
our experiments demonstrate the RAH framework's efficacy in various
recommendation domains, from reducing human burden to mitigating biases and
enhancing user control. Notably, our contributions provide a human-centered
recommendation framework that partners effectively with various recommendation
models. | http://arxiv.org/pdf/2308.09904 | Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu | cs.IR, cs.AI | null | null | cs.IR | 20230819 | 20231017 | [
{
"id": "2305.07961"
},
{
"id": "2309.07864"
},
{
"id": "2303.14524"
},
{
"id": "2209.07562"
},
{
"id": "2305.16291"
},
{
"id": "2207.12515"
},
{
"id": "2304.03442"
},
{
"id": "2304.10149"
},
{
"id": "1806.08977"
},
{
"id": "2305.00447"
},
{
"id": "2309.02427"
},
{
"id": "2307.14225"
},
{
"id": "2112.00944"
},
{
"id": "2305.16960"
}
] |