RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
[Figure 2, textual content condensed: (a) the Perceive Agent analyzes the item (e.g., the movie Harry Potter and the Sorcerer's Stone, characterized as Fantasy, Adventure, Family-friendly, Magic, Wizardry, Coming-of-age), the user comment (e.g., a mention of the plot being "very mysterious" suggests the user appreciates suspense and intrigue), and the user action (e.g., liking); (b) the Learn Agent analyzes why the user may like or dislike the item and outputs a learned preference (e.g., fantasy and adventure themes, mysterious and engaging plot) and a learned dispreference (e.g., plot loopholes); (c) the Act Agent predicts the user's action and comment on a candidate item; (d) the Critic Agent judges whether the predicted action is correct and, if wrong, analyzes possible reasons; (e) the Reflect Agent checks for duplications within, and conflicts between, the preference and dispreference, decides whether each needs optimization, and outputs optimized versions (e.g., merging similar preferences to avoid redundancy, splitting a dispreference into finer pieces to avoid conflicts); (f) the overall process by which the assistant learns personalities from user actions, in which the Learn, Act, and Critic Agents cooperate and enrich existing personality features.]
Figure 2: The components of the assistant and their work pattern.

The Critic Agent not only identifies the discrepancy between predictions and labels but also analyzes potential reasons for the failure to facilitate corrective measures. As depicted in Figure 2(d), this process can be compared to a code compiler detecting a bug in code and generating an error log, enabling the programmer to identify and rectify the issue. As a result, the reasons for failure are conveyed to the Learn Agent, prompting a reevaluation of previous attempts and a relearning of the personality [32]. This iterative collaboration between the Learn, Act, and Critic Agents enhances the inference of human personality based on observed actions.

3.2 Enhance Alignment
Given the critical importance of aligning with the user, we further implement a Learn-Act-Critic loop and a reflection mechanism to reinforce this alignment.

Learn-Act-Critic Loop. As shown in Figure 2(f), our Learn Agent collaborates with the Act and Critic Agents in an iterative process to grasp the user's personality.
Upon receiving a user action or feedback, the Learn Agent extracts an initial personality as a candidate. Then, the Act Agent uses this candidate as input to predict the user's actual action in reverse. The Critic Agent then assesses the accuracy of this prediction. If the prediction proves inaccurate, the Critic Agent delves into the underlying reasons and offers suggestions for corrections. The Learn Agent then incorporates these suggestions, refining the candidate personality until it passes the Critic Agent's evaluation.

3.1.5 Reflect Agent. The Reflect Agent's role is to periodically review the learned personality. As illustrated in Figure 2(e), the Reflect Agent's input comprises the combination of the newly acquired personality and existing personalities. The Reflect Agent then evaluates the combined personalities, identifying duplicate likes, duplicate dislikes, and conflicts between likes and dislikes. The rationale behind employing the Reflect Agent is to ensure the rationality of the learned personalities throughout the continuous learning process.

Reflecting on personality. To attain more accurate and comprehensive personalities, the assistant must seamlessly integrate the newly acquired personality with existing ones, rather than merely accumulating them. Inspired by [22], our reflection mechanism addresses issues arising from duplication and conflicts in learned personalities (preferences and aversions).
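The Learn-Act-Critic loop described above reduces to a few lines of control flow. The following is a minimal illustrative sketch, not the authors' implementation: `llm_learn`, `llm_act`, and `llm_critic` are hypothetical callables wrapping prompts to the underlying LLM (GPT-4 in the paper), and the iteration cap is our assumption, since the paper does not state one.

```python
MAX_ROUNDS = 3  # assumed cap; the paper does not specify a round budget

def learn_act_critic(user_action, item, llm_learn, llm_act, llm_critic):
    """Refine a candidate personality until the Act Agent can reproduce
    the observed user action in reverse, or the round budget runs out."""
    candidate = llm_learn(user_action, item, suggestion=None)
    for _ in range(MAX_ROUNDS):
        predicted = llm_act(candidate, item)          # predict the action in reverse
        verdict = llm_critic(predicted, user_action)  # check and explain failures
        if verdict["correct"]:
            return candidate
        # Relearn, incorporating the critic's failure reason and suggestion.
        candidate = llm_learn(user_action, item, suggestion=verdict["reason"])
    return candidate  # best effort after exhausting the budget
```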
Regarding duplication, the assistant can effortlessly merge duplicates without requiring additional information. However, handling conflicts may require a more delicate strategy: the Reflect Agent starts by deconstructing conflicting traits into finer details to minimize overlaps. If conflicts persist after this step, the Reflect Agent formulates queries for users, seeking their input to resolve the conflicts.
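A rough sketch of this reflection step follows. This is our illustration only: `is_duplicate`, `conflicts`, and `split_finer` are hypothetical stand-ins for LLM judgments, and `ask_user` stands in for the clarification query sent to the user as a last resort.

```python
def reflect(prefs, disprefs, is_duplicate, conflicts, split_finer, ask_user):
    # Step 1: merge duplicates; this needs no additional information.
    def dedup(traits):
        kept = []
        for t in traits:
            if not any(is_duplicate(t, k) for k in kept):
                kept.append(t)
        return kept

    prefs, disprefs = dedup(prefs), dedup(disprefs)

    # Step 2: deconstruct conflicting traits into finer details
    # so that a like and a dislike no longer overlap.
    def refine(traits, opposite):
        refined = []
        for t in traits:
            if any(conflicts(t, o) for o in opposite):
                refined.extend(split_finer(t))  # e.g., split by sub-theme
            else:
                refined.append(t)
        return refined

    prefs, disprefs = refine(prefs, disprefs), refine(disprefs, prefs)

    # Step 3: if a conflict survives the split, defer to the user.
    resolved_disprefs = []
    for d in disprefs:
        clash = next((p for p in prefs if conflicts(p, d)), None)
        if clash is not None and ask_user(clash, d) == "keep_preference":
            continue  # the user sided with the preference; drop the dislike
        resolved_disprefs.append(d)
    return prefs, resolved_disprefs
```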
3.3 Human-Centered Approaches
In this section, we discuss key human-centered approaches employed within the RAH framework to reduce user burden, mitigate biases, and enhance user control.

Reduce user burden. The assistant reduces user burden through its learning and acting capabilities. It employs the Learn Agent to learn a unified user personality from diverse domain interactions in the user's history. This unified personality is then extrapolated across domains using the Act Agent, resulting in personalized proxy feedback to instruct recommender systems. This process helps users avoid abundant interactions and thus reduces user burden. Within a single domain, the assistant utilizes powerful LLMs to comprehend user personalities with fewer actions. Across domains, the unified personality alleviates the "cold start" issue and reduces the initial feedback burden. Additionally, the assistant can analyze user behavior across mixed domains, gradually constructing a more comprehensive personality that aligns better with the user.

Mitigate bias. To mitigate bias, the assistant leverages the Act Agent to act on items and generate proxy feedback. Human feedback, limited by time and energy, tends to be biased towards popular or seen items. The Act Agent addresses this limitation by offering expanded feedback on less popular or unseen items, thus reducing selection bias. This broader interaction history leads to less biased recommendations from the recommender systems. The Act Agent, based on LLMs, provides nuanced feedback, such as proxy comments, allowing for a deeper understanding of explicit user preferences. This enables recommender systems to focus on genuine user preferences rather than simply fitting the training data, thus reducing inference bias.

Enhance user control.
Different from the traditional framework consisting of users and a remote recommender system, the assistant is designed to prioritize users' intentions and objectives. With the integration of LLMs, the assistant can operate on personal devices [30], empowering users and providing a more human-centered experience. The Act Agent plays a crucial role in enhancing user control through content filtering and tailored recommendations:

• Control recommendation results: Equipped with an LLM, the Learn Agent comprehends complex human intentions effectively. The Act Agent then filters items and tailors recommender systems to ensure recommended results align with user intentions. For instance, if a user instructs the assistant to exclude horrifying elements, the assistant filters out such movies, books, and games from recommendations and generates proxy actions such as "Dislike" for items containing these elements.

• Control privacy: Beyond operating on personal devices, the assistant employs strategies to enhance privacy and personalized recommendations. The assistant limits data sharing with recommender platforms and employs obfuscation strategies, such as providing obfuscated proxy feedback to mask a user's identity. For example, if a patient expresses interest in a treatment-related book, the assistant could provide extra proxy feedback, such as "Likes Professional Medical Literature", to the recommender system, thereby masking the patient's identity and suggesting they might be a medical professional. In response, the recommender system might suggest a mix of treatment-focused books and advanced medical literature. The assistant then uses the Act Agent to filter out the specialist literature, presenting only the relevant treatment-related options to the user. This strategy ensures privacy while delivering personalized recommendations tailored to the user's needs.

4 EXPERIMENTS SETTING
In this section, we outline the specifics of our experiments and dataset preparation. Our evaluation of the RAH framework involves three experiments to assess: (1) the assistant's alignment with user preferences, (2) the performance of reducing user burden across various domains, and (3) the assistant's capability to mitigate bias. For all experiments, we utilize the GPT-4-0613 version of the LLM from OpenAI in our assistant.
Our datasets are sourced from three domains on Amazon: Movies, Books, and Video Games. Following the guidelines of previous research [19], we initially filter out users and items with fewer than five interactions. We then retain users who have interactions in more than one domain, allowing us to additionally evaluate RAH's performance in cross-domain situations (e.g., Movie & Book). Subsequently, to strike a balance between GPT-4 API calls and the training demands of the recommender system, we split the dataset into two parts:
• Cross1k. We randomly select 1,000 users from the processed data, capturing their interactions to form a concise dataset. For these users, 1,000 personalized LLM-based assistants are created to learn from and act on behalf of each user individually. For the following experiments, we further partition the interactions of Cross1k into three sets (Learn Set, Proxy Set, and Unseen Set) using an equal ratio of 1:1:1 (a sketch of this split follows after this list).

• Cross221k. The rest of the dataset includes 221,861 users and 4,624,903 interactions, and it can be used for training a stable recommender system without the challenges tied to insufficient training data.

The statistics of Cross1k and Cross221k can be found in Appendix 8.1.
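The per-user partition is straightforward; the sketch below is our illustration only — the shuffling policy and seed are assumptions, and the paper does not publish its splitting code.

```python
import random

def split_interactions(interactions, seed=42):
    """Partition one user's interactions into Learn/Proxy/Unseen at 1:1:1.
    Illustrative only; the seed and shuffling policy are assumptions."""
    rng = random.Random(seed)
    items = list(interactions)
    rng.shuffle(items)
    third = len(items) // 3
    return {
        "learn": items[:third],
        "proxy": items[third:2 * third],
        "unseen": items[2 * third:],  # remainder goes to the Unseen Set
    }
```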
To test RAH's role in reducing bias, we follow the protocols of previous de-biasing research [2, 31, 46] and simulate unbiased data for offline evaluation by sampling interactions according to the propensity scores of items.
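As a sketch of that sampling protocol (our illustration; sampling with replacement and the exact normalization are assumptions on our part):

```python
import random
from collections import Counter

def sample_unbiased(interactions, n_samples, seed=0):
    """Sample (user, item) interactions with probability inversely
    proportional to item frequency, simulating unbiased test data.
    Illustrative sketch; samples with replacement for simplicity."""
    rng = random.Random(seed)
    freq = Counter(item for _, item in interactions)
    weights = [1.0 / freq[item] for _, item in interactions]
    # random.choices draws according to the given weights.
    return rng.choices(interactions, weights=weights, k=n_samples)
```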
5 RESULTS AND DISCUSSION
In this section, we first showcase our experimental results focusing on alignment, burden reduction, and bias mitigation. Subsequently, we explore case studies emphasizing enhanced user control over recommended outcomes and personal privacy.

[Figure 3 bar charts omitted.]
Figure 3: Performance evaluation of the assistant's ability to align with users across singular, cross, and mixed domains. Histogram values represent the F1-Score against user actions. L stands for Learn Only, C for using the Learn-Act-Critic loop, and R for the reflection mechanism.
5.1 Assistants' Alignment with Users
For the first, alignment-focused experiment, we task the assistant with assimilating personalities from the Learn Set and then generating proxy actions for items within the Proxy Set of Cross1k. To evaluate our assistant's alignment with users, an intuitive measure is whether the assistant can take actions consistent with the user's. Therefore, the evaluation process is: (1) we instruct the assistant to extract users' personalities from their interactions in the Learn Set, such as ratings and comments on items; (2) the assistant is then tasked with predicting actions on items in the Proxy Set. We then examine whether these predicted actions align with the actual behaviors of users.

Figure 3 presents the F1-score of the personality learning experiment. Overall, compared with Learn Only, either the Learn-Act-Critic loop or the reflection mechanism helps in aligning with users. Moreover, their combined application yields even more significant improvements. Learning and acting within the same domain yields better results compared to cross-domain operations. Furthermore, the results demonstrate that learning from a mixed domain outperforms learning from any single domain, such as movies, books, or games considered independently. This suggests that LLM-based assistants possess the capability to reason about and extrapolate users' personalities across different domains.

To gain a more comprehensive evaluation, we conduct the experiment to include both cross-domain and mixed-domain settings.
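For reference, the consistency check reduces to an F1-score over predicted versus actual actions. The helper below is a standard formulation, not the paper's evaluation code, and the label encoding is our assumption.

```python
def f1_score(predicted, actual, positive="Like"):
    """Binary F1 for one action class; `predicted` and `actual` are
    equal-length lists of labels such as "Like"/"Dislike". Illustrative."""
    tp = sum(p == positive and a == positive for p, a in zip(predicted, actual))
    fp = sum(p == positive and a != positive for p, a in zip(predicted, actual))
    fn = sum(p != positive and a == positive for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```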
For comparison, we have four tasks for personality learning:

• Learn Only: We directly append newly learned likes or dislikes to users' personalities without the Critic Agent or Reflect Agent.

• Learn+Reflect: After appending new likes or dislikes to users' personalities, we employ the reflection mechanism to resolve potential duplication and conflicts.

• Learn+Critic: After acquiring new likes or dislikes from a particular user action, we input them to the Act Agent and assess whether it can accurately infer the original user action in reverse. If not successful, the assistant attempts another Learn-Act-Critic loop.
• Learn+Critic+Reflect: Both the Learn-Act-Critic loop and the reflection mechanism are engaged for optimization.

5.2 Reduce Human Burden
In the second experiment, we connect the assistant with traditional recommender systems within the RAH framework. To evaluate whether the assistant can reduce user burden, we measure how effectively the assistant can represent users and provide proxy feedback to calibrate the recommender systems using the RAH framework. We perform comparison experiments for various recommendation algorithms, both with and without assistants.

Without assistants, we train recommendation algorithms on Cross221k and the Learn Set of Cross1k, and then calculate the recommendation metrics on the Unseen Set. With assistants, we initially use assistants to learn each user's personality on the Learn Set and let the assistants make proxy feedback on the Proxy Set (same as the first experiment). Then we train recommendation models on Cross221k, the Learn Set, and the assistants' proxy feedback, and likewise test on the Unseen Set.
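Both reported metrics, NDCG@10 and Recall@10, follow their standard per-user definitions; the sketch below is for reference and is not tied to the paper's evaluation pipeline.

```python
import math

def ndcg_at_k(ranked_items, relevant, k=10):
    """NDCG@k for one user: `ranked_items` is the model's ordering,
    `relevant` is the set of held-out positives (binary relevance)."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg else 0.0

def recall_at_k(ranked_items, relevant, k=10):
    """Recall@k for one user."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0
```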
Table 1: The performance of proxying user feedback and adjusting recommender systems.

| Method | Assistant | Movie NDCG@10 | Movie Recall@10 | Book NDCG@10 | Book Recall@10 | Game NDCG@10 | Game Recall@10 | Mixed NDCG@10 | Mixed Recall@10 |
|---|---|---|---|---|---|---|---|---|---|
| LightGCN | No | 0.5202 | 0.5142 | 0.1283 | 0.1439 | 0.3459 | 0.4309 | 0.3403 | 0.1696 |
| LightGCN | Yes | 0.5524 (+0.0322) | 0.5339 (+0.0197) | 0.1830 (+0.0547) | 0.1912 (+0.0473) | 0.4330 (+0.0871) | 0.4974 (+0.0665) | 0.4058 (+0.0655) | 0.2033 (+0.0337) |
| PLMRec | No | 0.0993 | 0.1316 | 0.0092 | 0.0143 | 0.3693 | 0.4630 | 0.1075 | 0.0656 |
| PLMRec | Yes | 0.1200 (+0.0207) | 0.1692 (+0.0376) | 0.0162 (+0.0070) | 0.0197 (+0.0054) | 0.3981 (+0.0288) | 0.4790 (+0.0160) | 0.1378 (+0.0303) | 0.0766 (+0.0110) |
| FM | No | 0.3492 | 0.3871 | 0.1216 | 0.1299 | 0.2917 | 0.3586 | 0.2421 | 0.1262 |
| FM | Yes | 0.3919 (+0.0427) | 0.4257 (+0.0386) | 0.1474 (+0.0258) | 0.1603 (+0.0304) | 0.2937 (+0.0020) | 0.3624 (+0.0038) | 0.2549 (+0.0128) | 0.1340 (+0.0078) |
| MF | No | 0.3737 | 0.4450 | 0.1143 | 0.1275 | 0.2074 | 0.2622 | 0.1933 | 0.1054 |
| MF | Yes | 0.4300 (+0.0563) | 0.4781 (+0.0331) | 0.1520 (+0.0377) | 0.1593 (+0.0318) | 0.2998 (+0.0924) | 0.3706 (+0.1084) | 0.2651 (+0.0718) | 0.1487 (+0.0433) |
| ENMF | No | 0.4320 | 0.3953 | 0.0994 | 0.0997 | 0.0652 | 0.1036 | 0.2630 | 0.1227 |
| ENMF | Yes | 0.5200 (+0.0880) | 0.4831 (+0.0878) | 0.1224 (+0.0230) | 0.1217 (+0.0220) | 0.0788 (+0.0136) | 0.1247 (+0.0211) | 0.3224 (+0.0594) | 0.1531 (+0.0304) |
| NeuralMF | No | 0.4720 | 0.4878 | 0.1364 | 0.1385 | 0.2160 | 0.2704 | 0.2891 | 0.1507 |
| NeuralMF | Yes | 0.4856 (+0.0136) | 0.4906 (+0.0028) | 0.1631 (+0.0267) | 0.1658 (+0.0273) | 0.3507 (+0.1347) | 0.4086 (+0.1382) | 0.3451 (+0.0560) | 0.1742 (+0.0235) |
| ItemKNN | No | 0.1211 | 0.1035 | 0.0889 | 0.0694 | 0.2242 | 0.3074 | 0.1657 | 0.0790 |
| ItemKNN | Yes | 0.2131 (+0.0920) | 0.1860 (+0.0825) | 0.1517 (+0.0628) | 0.1171 (+0.0477) | 0.2660 (+0.0418) | 0.3125 (+0.0051) | 0.2567 (+0.0910) | 0.1170 (+0.0380) |
The involved recommendation algorithms are as follows:

• LightGCN [10]: A model that enhances recommender systems by simplifying neighborhood aggregation, and learns embeddings through linear propagation on the interaction graph.

• PLMRec [36]: A recommendation model that uses pre-trained language models like BERT to embed the content of items for deeper semantic mining.
• FM [23]: A model that combines the advantages of SVMs with factorization models, using factorized parameters to model interactions in sparse data.

• MF [13]: Uses matrix factorization techniques to generate product recommendations from historical user-item data.

• ENMF [3]: Based on simple neural matrix factorization, it optimizes model parameters from the entire training data without sampling.

• NeuralMF [11]: A framework that uses deep neural networks to model collaborative filtering based on implicit feedback and user-item feature interactions.
• ItemKNN [5]: An item-based top-N recommendation algorithm that uses item similarities to determine the recommendation set.

Table 1 presents the results of our comparison. The data suggest that, conditioned on an equal number of user interactions, the performance of various recommender systems can be improved when the assistant is integrated. Namely, after learning user personalities, the assistant can effectively calibrate recommender systems using proxy feedback. These outcomes resonate with the non-invasive design of the RAH framework: the assistant preserves the inherent pattern between the recommender system (which recommends items and gathers feedback) and the user (who receives recommendations and provides feedback). As a result, the RAH framework demonstrates remarkable adaptability across various recommender systems.

5.3 Mitigate Bias

Table 2: The performance of alleviating bias.

| Method | NDCG@10 | Recall@10 |
|---|---|---|
| MF | 0.1835 | 0.2085 |
| MF+IPS | 0.2148 | 0.2424 |
| MF+RAH | 0.5017 | 0.4326 |
| MF+IPS+RAH | 0.5196 | 0.4554 |

In the RAH framework, the assistant provides an opportunity to address the bias problem. The above experiments demonstrate the capability of assistants to learn from user actions and make proxy feedback on items. Therefore, the assistant can also represent human users by providing proxy feedback on unpopular items, alleviating bias in the system. To conduct the experiment, we select unpopular items (associated with fewer than ten reviews) in the Cross1k dataset and randomly sample user assistants to make proxy feedback on unpopular items until these items have no fewer than ten reviews. For comparison, we also include a de-biasing method, Inverse Propensity Scoring (IPS) [25]. The IPS method in recommender systems adjusts for selection bias by reweighting observed data based on the likelihood of an item being recommended.

Subsequently, we evaluate the performance on simulated unbiased test data derived from sampling. Specifically, the probability of sampling a user-item interaction is formulated to be inversely proportional to the frequency of the involved item [31]. Table 2 shows that both IPS and RAH are effective in mitigating bias compared with the baseline. Remarkably, when combined, IPS and RAH emerge as a particularly robust de-biasing technique [4], showing greater efficacy in bias reduction.
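For reference, IPS amounts to weighting each observed interaction by the inverse of its estimated propensity; a pointwise training loss then multiplies each example's loss term by this weight. The sketch below uses a popularity-based propensity estimate, which is a common choice but an assumption on our part — the paper does not detail its estimator.

```python
from collections import Counter

def ips_weights(interactions, smoothing=0.5):
    """Inverse-propensity weights for observed (user, item) pairs.
    Propensity is estimated from item frequency; the smoothing
    exponent is an assumed hyperparameter."""
    freq = Counter(item for _, item in interactions)
    total = sum(freq.values())
    weights = {}
    for user, item in interactions:
        propensity = (freq[item] / total) ** smoothing  # popularity-based estimate
        weights[(user, item)] = 1.0 / propensity
    return weights
```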
5.4 Increase User Control
5.4.1 Control Recommendation Results. The first case, as illustrated in Figure 4(a), demonstrates how the assistant can enhance user control over recommended results.

[Figure 4(a) Control Recommendation Results:
[Human] User Action: Dislike The Incredibles (Pixar film). User Comment: "I watch films with my kid. It is too dark for children, yet too childish for adults. Mindless violence."
[Assistant] Learn: Prefer: family movies. Disprefer: heavy dark elements, too childish, lots of violence, ...
[Rec System] Recommend: (1) Coco (2) Ironman (3) Batman: The Dark Knight.
[Assistant] Act: (1) Like, pass to the user. (2) Not sure, pass to the user to learn from human feedback. (3) Dislike, proxy feedback to the recommender system.

Figure 4(b) Control Personal Privacy:
[Human] User Action: Like The Depression Cure: The 6-Step Program to Beat Depression without Drugs.
[Assistant] This action carries a potential risk of privacy leakage; suggest two personality confusion strategies. Strategy I (pretend a psychologist): the assistant will automatically express more "Like"s on professional psychology textbooks to the recommender system. Strategy II (pretend a shared account): the assistant will automatically express random "Like"s and "Dislike"s.
[Human] (select and enable a protection strategy)
[Rec System] (recommend several items)
[Assistant] Act: for the user, filter recommended items from the recommender system to remain accurate; for the recommender system, selectively express the user's real feedback and create some extra feedback to protect privacy.

Figure 4: The case study.]

In this case, since the user often watches movies with a child, the user expresses dissatisfaction
with the movie The Incredibles, citing reasons such as it being "too childish for adults" and "too dark for children." From this feedback, the assistant discerns that the user favors family movies that strike a balance in content, avoiding extremes in themes.

Subsequently, the recommender system suggests three movies: Coco, Ironman, and Batman: The Dark Knight. Leveraging the reasoning capabilities and real-world knowledge of LLMs, the assistant can make informed decisions on items to align with user intentions. For Coco, the assistant identifies it as a likely match for the user due to its family-friendly nature and passes the recommendation to the user. Regarding Ironman, the assistant, uncertain of its suitability, also passes this recommendation to the user, seeking additional feedback. In contrast, Batman: The Dark Knight, known for its dark and potentially violent content, is deemed possibly unsuitable based on the user's preferences. The assistant decides to "Dislike" this recommendation on behalf of the user, supplying proxy feedback to the recommender system for future refinement.
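The routing logic in this case study reduces to a three-way decision. A minimal sketch of ours follows; `llm_judge` is a hypothetical LLM call returning "like", "dislike", or "unsure".

```python
def route_recommendation(item, personality, llm_judge):
    """Decide what to do with one recommended item on the user's behalf."""
    verdict = llm_judge(item, personality)
    if verdict == "like":
        return {"pass_to_user": True, "proxy_feedback": None}
    if verdict == "unsure":
        # Pass through so real human feedback can be collected.
        return {"pass_to_user": True, "proxy_feedback": None}
    # Confident mismatch: filter it out and send proxy feedback instead.
    return {"pass_to_user": False, "proxy_feedback": "Dislike"}
```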
5.4.2 Control Privacy. The second case, depicted in Figure 4(b), highlights how the assistant can bolster user control concerning personal privacy. In this case, a user expresses interest in a specific book titled The Depression Cure: The 6-Step Program to Beat Depression without Drugs. The assistant identifies that such an action might lead to potential privacy leakage, since expressing a preference for content on mental health might disclose sensitive information about the user. The assistant offers two personality confusion strategies to help control privacy.

Strategy I (Pretend a Psychologist): The assistant, mimicking the behavior of a psychologist, will express more "Like"s on professional psychology textbooks within the recommender system. This action serves to dilute the user's preference, making it ambiguous whether the original interest in the depression-related book was due to personal reasons or professional curiosity.

Strategy II (Pretend a Shared Account): The assistant will automatically generate a mix of random likes and dislikes. This strategy gives the impression of multiple users sharing a single account, thereby obfuscating individual preferences and adding a layer of ambiguity to the user's actions.

If the user adopts one strategy, the assistant selectively provides real user feedback and creates additional feedback, further protecting privacy. Besides, the assistant can also filter items from the recommender system to ensure that recommendations remain personalized despite the noise introduced by the selected strategy.
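A minimal sketch of the two confusion strategies follows (our illustration; the choice and volume of decoy items are assumptions):

```python
import random

def obfuscate_feedback(real_feedback, strategy, decoy_items, seed=0):
    """Mix real feedback with confusion feedback before sharing it with the
    recommender platform. `decoy_items` could be, e.g., professional
    psychology textbooks for Strategy I. Illustrative only."""
    rng = random.Random(seed)
    shared = list(real_feedback)  # (item, action) pairs the user allows out
    if strategy == "pretend_professional":
        # Strategy I: extra "Like"s on professional material dilute the signal.
        shared += [(item, "Like") for item in decoy_items]
    elif strategy == "pretend_shared_account":
        # Strategy II: random actions mimic several people on one account.
        shared += [(item, rng.choice(["Like", "Dislike"])) for item in decoy_items]
    rng.shuffle(shared)
    return shared
```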
6 RELATED WORK
6.1 Human-Centered Recommendation
The human-centered recommender system [12] focuses on understanding the characteristics and complex relationships between the recommender system and users in the recommendation scenario. Unlike the "accuracy-only" approach of traditional recommender systems, the human-centered recommender system pays more attention to user experience, taking user satisfaction and needs, such as privacy protection, as optimization goals. Recent works show that this field has attracted researchers from both the social sciences and computational fields. [39] proposed a new federated recommendation framework called Federated Mask Matrix Factorization (FedMMF), which can protect data privacy in federated recommender systems without sacrificing efficiency and effectiveness. EANA [21] improves the training speed and effectiveness of large-scale recommender systems while protecting user privacy through an embedding-aware noise addition method. [42] proposed a new human-centered dialogue recommendation method, which provides more helpful recommendations by understanding and adapting to user needs during the dialogue process.

6.2 LLM For Recommendation
Large Language Models (LLMs) from Natural Language Processing (NLP) are now employed in recommender systems due to their vast knowledge and logical reasoning. LLMs for Recommendation (LLM4Rec) are mainly used in two ways: enhancing features and directly recommending. The first approach leverages LLMs for feature extraction, enhancing traditional systems; notable works include encoding news [17, 36, 37, 40, 43] and tweets [44] for recommendations. The second approach forms input sequences for LLMs, letting them directly recommend. [16, 33] relied on prompts for recommendations. [1] proposed a two-stage method: fine-tuning LLMs with recommendation data and then using them for recommendations. Works like [6, 7, 34] delved into LLMs' role in conversational recommender systems.
6.3 LLM-based Agent
With the emergence of Large Language Models (LLMs), their autonomy, reactivity, and pro-activeness have brought hope for and made some progress towards the realization of intelligent agents [38]: systems that can engage in dialogue, complete tasks, reason, and exhibit a certain degree of autonomous action. Work [22] demonstrated the feasibility of LLM-based agents by building an intelligent town supported by LLMs, showing that LLM-based agents have strong credibility and adaptability. Work [32] built an LLM-based agent on the Minecraft game platform and proposed an iterative prompt mechanism of
environmental feedback → execution error → self-verification, proving that LLM-based agents have lifelong learning ability in such scenarios and strong generalization ability to solve new tasks. Similarly, work [28] divides the LLM-based agent into three modules from the perspective of cognitive science: a control end, a perception end, and an action end. Work [18] proposes a training paradigm that allows LLMs to learn social norms and values from simulated social interactions.
7 CONCLUSION AND FUTURE WORK
From the perspective of humans, we introduce the RAH framework for recommendations, incorporating the design of the assistant using LLM agents. Our experiments highlight the efficacy of the Learn-Act-Critic loop and reflection mechanisms in enabling the assistant to align more closely with user personalities. Besides, we evaluate the RAH framework on different recommender systems in reducing user burden and find the framework generalizes well, which echoes the non-invasive role of the assistant. Additionally, we measure the assistant's capability to provide proxy feedback on unpopular items to mitigate selection bias. Finally, we explore potential solutions to increase user control of recommended results and personal privacy through the assistant.
One constraint of our current approach is its reliance on offline evaluations. In the future, we plan to conduct online assessments of the RAH framework, focusing on the sustained influence of the assistant on users and recommender systems. Moreover, we will explore the collaborative relationship between the assistant and humans, such as whether personalities learned from subjective tasks like recommendations can be translated into content creation scenarios that align with user preferences.

REFERENCES
[1] Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TallRec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447 (2023).
[2] Stephen Bonner and Flavian Vasile. 2018. Causal embeddings for recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems. 104–112.
[3] Chong Chen, Min Zhang, Yongfeng Zhang, Yiqun Liu, and Shaoping Ma. 2020. Efficient neural matrix factorization without sampling for recommendation. ACM Transactions on Information Systems (TOIS) 38, 2 (2020), 1–28.
[4] Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2023. Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems 41, 3 (2023), 1–39.
[5] Mukund Deshpande and George Karypis. 2004. Item-based top-N recommendation algorithms. ACM Transactions on Information Systems (TOIS) 22, 1 (2004), 143–177.
[6] Luke Friedman, Sameer Ahuja, David Allen, Terry Tan, Hakim Sidahmed, Changbo Long, Jun Xie, Gabriel Schubiner, Ajay Patel, Harsh Lara, et al. 2023. Leveraging Large Language Models in Conversational Recommender Systems. arXiv preprint arXiv:2305.07961 (2023).
[7] Yunfan Gao, Tao Sheng, Youlin Xiang, Yun Xiong, Haofen Wang, and Jiawei Zhang. 2023. Chat-REC: Towards interactive and explainable LLMs-augmented recommender system. arXiv preprint arXiv:2303.14524 (2023).
[8] Yingqiang Ge, Shuchang Liu, Zuohui Fu, Juntao Tan, Zelong Li, Shuyuan Xu, Yunqi Li, Yikun Xian, and Yongfeng Zhang. 2022. A survey on trustworthy recommender systems. arXiv preprint arXiv:2207.12515 (2022).
[9] Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems. 299–315.
[10] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 639–648.
[11] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web. 173–182.
[12] Joseph Konstan and Loren Terveen. 2021. Human-centered recommender systems: Origins, advances, challenges, and opportunities. AI Magazine 42, 3 (2021), 31–42.
[13] Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30–37.
[14] Hoyeop Lee, Jinbae Im, Seongwon Jang, Hyunsouk Cho, and Sehee Chung. 2019. MeLU: Meta-learned user preference estimator for cold-start recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1073–1082.
[15] Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, Maarten de Rijke, et al. 2018. Explainable fashion recommendation with joint outfit matching and comment generation. arXiv preprint arXiv:1806.08977 (2018).
[16] Junling Liu, Chao Liu, Renjie Lv, Kang Zhou, and Yan Zhang. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv preprint arXiv:2304.10149 (2023).
[17] Qijiong Liu, Jieming Zhu, Quanyu Dai, and Xiaoming Wu. 2022. Boosting deep CTR prediction with a plug-and-play pre-trainer for news recommendation. In Proceedings of the 29th International Conference on Computational Linguistics. 2823–2833.
[18] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. 2023. Training Socially Aligned Language Models in Simulated Human Society. arXiv preprint arXiv:2305.16960 (2023).
[19] Weiming Liu, Xiaolin Zheng, Mengling Hu, and Chaochao Chen. 2022. Collaborative filtering with attribution alignment for review-based non-overlapped cross domain recommendation. In Proceedings of the ACM Web Conference 2022. 1181–1190.
[20] Sean M McNee, John Riedl, and Joseph A Konstan. 2006. Being accurate is not enough: How accuracy metrics have hurt recommender systems. In CHI '06 Extended Abstracts on Human Factors in Computing Systems. 1097–1101.
[21] Lin Ning, Steve Chien, Shuang Song, Mei Chen, Yunqi Xue, and Devora Berlowitz. 2022. EANA: Reducing privacy risk on large-scale recommendation models. In Proceedings of the 16th ACM Conference on Recommender Systems. 399–407.
[22] Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442 (2023).
[23] Steffen Rendle. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining. IEEE, 995–1000.
[24] Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, and Lucas Dixon. 2023. Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences. arXiv preprint arXiv:2307.14225 (2023).
[25] Tobias Schnabel, Adith Swaminathan, Ashudeep Singh, Navin Chandak, and Thorsten Joachims. 2016. Recommendations as treatments: Debiasing learning and evaluation. In International Conference on Machine Learning. PMLR, 1670–1679.
[26] Donghee Shin. 2020. How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Computers in Human Behavior 109 (2020), 106344.
[27] Piotr Sulikowski, Tomasz Zdziebko, Dominik Turzyński, and Eliasz Kańtoch. 2018. Human-website interaction monitoring in recommender systems. Procedia Computer Science 126 (2018), 1587–1596.
[28] Theodore Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L Griffiths. 2023. Cognitive architectures for language agents. arXiv preprint arXiv:2309.02427 (2023).
[29] Kirsten Swearingen and Rashmi Sinha. 2001. Beyond algorithms: An HCI perspective on recommender systems. In ACM SIGIR 2001 Workshop on Recommender Systems, Vol. 13. 1–11.
[30] MLC team. 2023. MLC-LLM. https://github.com/mlc-ai/mlc-llm
[31] Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, and Ruiming Tang. 2022. Cross pairwise ranking for unbiased item recommendation. In Proceedings of the ACM Web Conference 2022. 2370–2378.
[32] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291 (2023).
[33] Lei Wang and Ee-Peng Lim. 2023. Zero-Shot Next-Item Recommendation using Large Pretrained Language Models. arXiv abs/2304.03153 (2023). https://api.semanticscholar.org/CorpusID:257985012
[34] Xiaolei Wang, Kun Zhou, Ji-Rong Wen, and Wayne Xin Zhao. 2022. Towards unified conversational recommender systems via knowledge-enhanced prompt learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 1929–1937.
[35] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837.
[36] Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2021. Empowering news recommendation with pre-trained language models. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1652–1656.
[37] Chuhan Wu, Fangzhao Wu, Tao Qi, Chao Zhang, Yongfeng Huang, and Tong Xu. 2022. MM-Rec: Visiolinguistic model empowered multimodal news recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2560–2564.
[38] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. 2023. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864 (2023).
[39] Liu Yang, Junxue Zhang, Di Chai, Leye Wang, Kun Guo, Kai Chen, and Qiang Yang. 2022. Practical and Secure Federated Recommendation with Personalized Mask. In International Workshop on Trustworthy Federated Learning. Springer, 33–45.
[40] Yang Yu, Fangzhao Wu, Chuhan Wu, Jingwei Yi, and Qi Liu. 2021. Tiny-NewsRec: Effective and efficient PLM-based news recommendation. arXiv preprint arXiv:2112.00944 (2021).
[41] Tianzi Zang, Yanmin Zhu, Haobing Liu, Ruohan Zhang, and Jiadi Yu. 2022. A survey on cross-domain recommendation: Taxonomies, methods, and future directions. ACM Transactions on Information Systems 41, 2 (2022), 1–39.
[42] Gangyi Zhang. 2023. User-Centric Conversational Recommendation: Adapting the Need of User with Large Language Models. In Proceedings of the 17th ACM Conference on Recommender Systems. 1349–1354.
[43] Qi Zhang, Jingjie Li, Qinglin Jia, Chuyuan Wang, Jieming Zhu, Zhaowei Wang, and Xiuqiang He. 2021. UNBERT: User-News Matching BERT for News Recommendation. In IJCAI. 3356–3362.
[44] Xinyang Zhang, Yury Malkov, Omar Florez, Serim Park, Brian McWilliams, Jiawei Han, and Ahmed El-Kishky. 2022. TwHIN-BERT: A socially-enriched pre-trained language model for multilingual Tweet representations. arXiv preprint arXiv:2209.07562 (2022).
[45] Yongfeng Zhang, Xu Chen, et al. 2020. Explainable recommendation: A survey and new perspectives. Foundations and Trends in Information Retrieval 14, 1 (2020), 1–101.
[46] Yu Zheng, Chen Gao, Xiang Li, Xiangnan He, Yong Li, and Depeng Jin. 2021. Disentangling user interest and conformity for recommendation with causal embedding. In Proceedings of the Web Conference 2021. 2980–2991.
8 APPENDICES
8.1 The statistics of datasets
The number of users, items, and interactions in different domains for both Cross1k and Cross221k.

Table 3: Cross1k.

| Domain | #Users | #Items | #Interactions |
|---|---|---|---|
| Movie | 1,045 | 10,679 | 21,024 |
| Book | 1,046 | 20,159 | 24,035 |
| Game | 1,044 | 8,984 | 17,169 |
Table 4: Cross221k.

| Domain | #Users | #Items | #Interactions |
|---|---|---|---|
| Movie | 221,861 | 49,791 | 2,313,890 |
| Book | 94,407 | 12,898 | 2,240,010 |
| Game | 7,149 | 12,196 | 71,003 |

8.2 Expansion Experiments of Burden Reduction
In Section 5.2, we compared the assistant's generation of feedback on behalf of users in the Proxy Set, which is then passed to the recommender system to help users further optimize it. Our previous results show that, with limited user interaction history and after learning the user's personality, the assistant can effectively act on behalf of the user, optimizing various recommender systems while reducing repetitive user operations. However, there is a potential issue: predicting on the user's Proxy Set could leak the data distribution. Therefore, we conducted additional experiments to investigate whether the assistant truly helps in reducing the user's burden.

In Table 5, we include an additional comparison: a program that randomly decides whether to like or dislike, simulating a non-intelligent assistant. Experimental results show that even randomly guessing likes and dislikes on the proxy dataset can improve the recommender system in most experiments, indicating potential data distribution leakage risks. However, overall, the assistant designed with our method outperformed the random program. This further validates our finding that the assistant is indeed intelligent enough to help users more easily optimize the recommender system through proxy feedback.
Table 5: The performance of proxying user feedback and adjusting recommender systems with the additional comparison.

| Method | Movie NDCG@10 | Movie Recall@10 | Book NDCG@10 | Book Recall@10 | Game NDCG@10 | Game Recall@10 | Mixed NDCG@10 | Mixed Recall@10 |
|---|---|---|---|---|---|---|---|---|
| LightGCN | 0.5202 | 0.5142 | 0.1283 | 0.1439 | 0.3459 | 0.4309 | 0.3403 | 0.1696 |
| LightGCN-Random | 0.5341 (+0.0139) | 0.5240 (+0.0098) | 0.1527 (+0.0244) | 0.1711 (+0.0272) | 0.4163 (+0.0704) | 0.4934 (+0.0625) | 0.3790 (+0.0387) | 0.1900 (+0.0204) |
| LightGCN-Assistant | 0.5524 (+0.0322) | 0.5339 (+0.0197) | 0.1830 (+0.0547) | 0.1912 (+0.0473) | 0.4330 (+0.0871) | 0.4974 (+0.0665) | 0.4058 (+0.0655) | 0.2033 (+0.0337) |
| PLMRec | 0.0993 | 0.1316 | 0.0092 | 0.0143 | 0.3693 | 0.4630 | 0.1075 | 0.0656 |
| PLMRec-Random | 0.1171 (+0.0178) | 0.1610 (+0.0294) | 0.0149 (+0.0057) | 0.0181 (+0.0038) | 0.3964 (+0.0271) | 0.4743 (+0.0113) | 0.1346 (+0.0271) | 0.0739 (+0.0083) |
| PLMRec-Assistant | 0.1200 (+0.0207) | 0.1692 (+0.0376) | 0.0162 (+0.0070) | 0.0197 (+0.0054) | 0.3981 (+0.0288) | 0.4790 (+0.0160) | 0.1378 (+0.0303) | 0.0766 (+0.0110) |
| FM | 0.3492 | 0.3871 | 0.1216 | 0.1299 | 0.2917 | 0.3586 | 0.2421 | 0.1262 |
| FM-Random | 0.3897 (+0.0405) | 0.4200 (+0.0329) | 0.1443 (+0.0227) | 0.1561 (+0.0262) | 0.2903 (-0.0014) | 0.3529 (-0.0057) | 0.2533 (+0.0112) | 0.1336 (+0.0074) |
| FM-Assistant | 0.3919 (+0.0427) | 0.4257 (+0.0386) | 0.1474 (+0.0258) | 0.1603 (+0.0304) | 0.2937 (+0.0020) | 0.3624 (+0.0038) | 0.2549 (+0.0128) | 0.1340 (+0.0078) |
| MF | 0.3737 | 0.4450 | 0.1143 | 0.1275 | 0.2074 | 0.2622 | 0.1933 | 0.1054 |
| MF-Random | 0.4122 (+0.0385) | 0.4714 (+0.0264) | 0.1434 (+0.0291) | 0.1484 (+0.0209) | 0.2618 (+0.0544) | 0.3422 (+0.0800) | 0.2302 (+0.0369) | 0.1279 (+0.0225) |
| MF-Assistant | 0.4300 (+0.0563) | 0.4781 (+0.0331) | 0.1520 (+0.0377) | 0.1593 (+0.0318) | 0.2998 (+0.0924) | 0.3706 (+0.1084) | 0.2651 (+0.0718) | 0.1487 (+0.0433) |
| ENMF | 0.4320 | 0.3953 | 0.0994 | 0.0997 | 0.0652 | 0.1036 | 0.2630 | 0.1227 |
| ENMF-Random | 0.4931 (+0.0611) | 0.4544 (+0.0591) | 0.1195 (+0.0201) | 0.1199 (+0.0202) | 0.0751 (+0.0099) | 0.1156 (+0.0120) | 0.3056 (+0.0426) | 0.1446 (+0.0219) |
| ENMF-Assistant | 0.5200 (+0.0880) | 0.4831 (+0.0878) | 0.1224 (+0.0230) | 0.1217 (+0.0220) | 0.0788 (+0.0136) | 0.1247 (+0.0211) | 0.3224 (+0.0594) | 0.1531 (+0.0304) |
| NeuMF | 0.4720 | 0.4878 | 0.1364 | 0.1385 | 0.2160 | 0.2704 | 0.2891 | 0.1507 |
| NeuMF-Random | 0.4464 (-0.0256) | 0.4517 (-0.0361) | 0.1559 (+0.0195) | 0.1578 (+0.0193) | 0.3301 (+0.1141) | 0.3913 (+0.1209) | 0.3220 (+0.0329) | 0.1603 (+0.0096) |
| NeuMF-Assistant | 0.4856 (+0.0136) | 0.4906 (+0.0028) | 0.1631 (+0.0267) | 0.1658 (+0.0273) | 0.3507 (+0.1347) | 0.4086 (+0.1382) | 0.3451 (+0.0560) | 0.1742 (+0.0235) |
| ItemKNN | 0.1211 | 0.1035 | 0.0889 | 0.0694 | 0.2242 | 0.3074 | 0.1657 | 0.0790 |
| ItemKNN-Random | 0.1900 (+0.0689) | 0.1698 (+0.0663) | 0.1326 (+0.0437) | 0.1051 (+0.0357) | 0.2500 (+0.0258) | 0.3035 (-0.0039) | 0.2338 (+0.0681) | 0.1090 (+0.0300) |
| ItemKNN-Assistant | 0.2131 (+0.0920) | 0.1860 (+0.0825) | 0.1517 (+0.0628) | 0.1171 (+0.0477) | 0.2660 (+0.0418) | 0.3125 (+0.0051) | 0.2567 (+0.0910) | 0.1170 (+0.0380) |
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
arXiv:2308.09830v3 [cs.AI] 28 Sep 2023

Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
Carnegie Mellon University
[email protected], [email protected], [email protected], [email protected]
Abstract
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.

Introduction
Pre-trained Large Language Models (LLMs) like ChatGPT, GPT-4, and PaLM 2 are generative models that excel in a variety of natural language tasks (Brown et al. 2020; Devlin et al. 2019) and even show promise in interactive decision-making (Li et al. 2022), reasoning (Diao et al. 2023; Xie et al. 2023; Yao et al. 2023b), and modeling aspects of artificial general intelligence (AGI) (Kosinski 2023; Bubeck et al. 2023). However, LLMs face interpretability, consistency, and scalability issues (Mialon et al. 2023), partly due to limitations in context window size and sensitivity to prompt structure, as they often rely on precise and carefully engineered instructions (Wei et al. 2022).
They are criticized for being stochastic parrots and lacking detailed reasoning explanations (Bender et al. 2021). Hallucinations (Welleck et al. 2019; Qian et al. 2022; Wei et al. 2022) and biases (Weidinger et al. 2022; Venkit, Srinath, and Wilson 2022) are further concerns, affecting trustworthiness and ethical aspects (Huang et al. 2023).
The dependence on larger models for better performance raises resource challenges (Mialon et al. 2023), and scalable LLMs incorporating continual learning are still an open question (Scialom et al. 2022).

In contrast, Cognitive Architectures (CAs) propose hypotheses about the fixed structures governing the operation of minds, whether in natural or artificial systems, facilitating intelligent behavior in complex environments (Laird, Lebiere, and Rosenbloom 2017). CAs like ACT-R (Anderson and Lebiere 2014), SOAR (Laird 2019), CLARION (Sun 2016), and LIDA (Franklin and Patterson 2006) model various human cognitive aspects: memory, learning, reasoning, perceptual-motor interaction, theory of mind, AGI, and more (Kotseruba and Tsotsos 2020). CAs prioritize bounded rationality, striving for satisfactory decisions under resource constraints, diverging from LLMs' pursuit of optimality. However, CAs face challenges in knowledge representation and scalability. Their encoded information is limited in size and homogeneous in typology, meaning the knowledge processed by a cognitive agent¹ is typically tailored for specific domains and tasks (Lieto, Lebiere, and Oltramari 2018).

Unlike humans, CAs struggle with complex knowledge, and their actions are confined to manually curated procedural knowledge (Park et al. 2023). According to (Marcus 2020), LLMs struggle to derive cognitive models from discourse and lack capabilities to reason over those cognitive models². Hence, CAs could play a pivotal role in either augmenting or leveraging LLMs by contributing to the creation and dynamic updating of cognitive models. Likewise, cognitive models could be leveraged to better interpret LLMs' black-box learning algorithms and decision-making processes (Binz and Schulz 2023).

Both LLMs and CAs have made valuable and sound contributions to the construction of complex autonomous AI agents; however, each approach has its strengths and weaknesses (as summarized in Table 1). Thus, the main contribution of this work lies in characterizing the plausible approaches to integrating CAs and LLMs, viewing them through a hybrid and synergetic lens.
Thus, the main con- tribution of this work lies in characterizing the plausible approaches to integrating CAs and LLMs, viewing them through a hybrid and synergetic lens. 1Hereafter, consider a cognitive agent as an artificial agent con- structed on a particular CA. 2A cognitive model should at least include information about the entities in the external world, their properties, and their relation- ships with other entities, as well as the modeling of the cognitive processes that operate over those entities (Marcus 2020). Feature Language processing World knowledge Reasoning Symbolic processing Connectionist processing Knowledge scalability Planning Learning Memory management Consistency (no hallucinations) LLMs CAs ++ ++ -+ -+ ++ +- -+ â
# Relevant Work

Chain-of-thought prompting (CoT): CoT prompting (Mialon et al. 2023; Diao et al. 2023) enhances LLM reasoning, leading to improved performance in various reasoning and natural language processing tasks. CoT breaks down multi-step problems into intermediate steps, enabling the model to address reasoning problems. ReAct (Yao et al. 2023b) combines both reasoning (CoT prompts) and action (action plan generation). It organizes a workflow that decomposes task goals, injects task-relevant knowledge, extracts important observation components, and refines action plans based on feedback. Auto-CoT (Zhang et al. 2022) proposes a model that samples questions with diversity and automatically generates demonstrations to correct mistakes in reasoning chains. The approaches we propose in this paper assume using CoT for problem decomposition, allowing a CA to inject its output into each reasoning step.

Augmented Language Models: an augmented language model combines the enhanced reasoning skills of an LLM with tools like APIs, databases, and code interpreters for improved knowledge retrieval, reasoning, and action execution (Mialon et al. 2023). Program-Aided Language model (PAL) (Gao et al. 2023) reads natural language problems, generates intermediate programs for reasoning, and delegates the solution step to a Python interpreter. Toolformer (Schick et al. 2023) is a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. Our modular approach extends the idea of augmenting an LLM with cognitive processing and assumes the usage of external APIs.

CAs and LLMs: Generative Agents (Park et al. 2023) is a model that uses a cognitive architecture and an LLM to generate realistic behavior.
It defines three components: a memory stream for recording comprehensive experiences in natural language, a reflection component for deriving higher-level inferences about self and others, and a planning component translating these inferences into action plans. This approach differs from ours in that it does not use symbolic structures but unstructured natural language. OlaGPT (Xie et al. 2023) is an LLM cognition framework aiming to solve reasoning problems with human-like problem-solving abilities by leveraging CoT. OlaGPT proposes to approximate cognitive modules, such as attention, memory, learning, reasoning, action selection, and decision-making. The first case of our modular approach resembles OlaGPT to some extent. Open-source experimental applications like Auto-GPT (https://github.com/Significant-Gravitas/Auto-GPT) and BabyAGI (https://github.com/yoheinakajima/babyagi) aim to advance AGI. Auto-GPT manages long-term and short-term memory, language generation, and summarization. BabyAGI uses LLM chains to perform tasks based on goals. These approaches hold significant potential and are likely to integrate further with human cognition modeling. Although without a strict commitment to modeling a cognitive architecture, Voyager (Wang et al. 2023) facilitates continual learning through an evolving code library for complex behaviors; an iterative prompting mechanism incorporates feedback, errors, and self-verification for program improvement. LeCun (2022) outlines the considerations for crafting a cognitive architecture using energy minimization mechanisms, enabling reasoning, prediction, and multi-scale planning. The author emphasizes that while deterministic generative architectures withstand energy distribution issues, non-deterministic structures like auto-encoders and joint embeddings are susceptible to collapse.

# Integration Approaches

In this section, we propose and discuss the tradeoffs of three different approaches for the integration of CAs and LLMs: the modular approach, the agency approach, and the neuro-symbolic approach. To illustrate the practical implementation of each approach, we base our examples on a scenario involving a cognitive agent designed to assist people with visual impairments in everyday tasks such as navigation and exploration of indoor environments, effective use of public transportation, etc. The agent operates on a smartphone device, utilizing sensor data processing, computer vision for object detection, and speech recognition to perceive its environment. Its actions encompass language generation and invocation of external APIs.
The agent engages in conversation with its user, reasons about their needs and requests, constructs shared mental models to achieve goals effectively, and makes decisions that unfold over the short and long term.

For the remainder of this paper, let us consider that the inputs of an LLM can be multimodal, involving text and images, while the outputs are exclusively text-based. Conversely, for the sake of simplicity, CAs' inputs and outputs are limited to formatted text, although, in practice, various CAs can process diverse modalities. As a reference framework for CAs' structure, our approach adopts the Common Model of Cognition (CMC) (Laird, Lebiere, and Rosenbloom 2017), which captures a consensus regarding the structures and processes that resemble those found in human cognition. The CMC defines five high-level modules, including perception, motor, working memory, declarative long-term memory, and procedural long-term memory, each of which can be further decomposed into multiple sub-modules. Behavior in the CMC is organized around a cognitive cycle driven by procedural memory, with complex behavior (e.g., reasoning, planning, etc.) emerging as sequences of such cycles.
In each cognitive cycle, the system senses the current situation, interprets it with respect to ongoing goals, and then selects an internal or external action in response. Both the agency and the neuro-symbolic approaches use different reference frames, which will be discussed later.

# Modular Approach

A straightforward way to integrate LLMs and CAs is a modular approach where either (1) LLMs partially enhance the performance of certain modules and components of a CA, or (2) a CA augments an LLM by injecting reasoning traces and contents from memories into the prompting process. Figure 1 depicts four different cases of modular integration. This integration allows modules to be easily replaced by LLMs or their CA module counterparts.

Case (a) assumes a recursive prompting scenario (Mialon et al. 2023) where an LLM decomposes a complex problem into subproblems, and the intermediate outputs are aggregated to generate a final output. In this case, a CA could be used to prime every intermediate step at the LLM with reasoning traces from procedural knowledge as well as relevant content from memories. The mechanism would be as follows: given an initial input i0 (e.g., a user's request, external signals, etc.), the LLM generates an intermediate output o0 (e.g., the first step towards the solution of the user's request) and a set of equivalent symbolic structures for both the input, si0 (e.g., intents, entities, and properties recognized from the input), and the output, so0 (e.g., a symbolic representation of the LLM's actions and reasoning steps) [5]. The CA uses those symbolic structures as inputs and executes one or several cognitive cycles, after which the contents of the working memory (w0), including fired productions, relevant information from declarative memories, and actions, are injected as cues into the next intermediate step of the LLM. The process repeats until a final output is generated. Consider this streamlined example:
A bus rider employs the term "downtown" ambiguously as the destination. Depending on the day of the week, the user may refer to two specific places in the downtown area, namely the workplace or the gym. The cognitive agent uses a combination of declarative and procedural knowledge to discern the user's transit patterns and effectively disambiguate the term downtown. The triggered productions and relevant contents of the working memory are subsequently integrated into the next LLM's recursive step, thereby guiding and enhancing its reasoning process with new information. The trace below shows the input, the intermediate outputs, their symbolic counterparts, and the working-memory contents:
    [i0]  User: "when is my next bus to downtown coming?"
    [o0]  search bus schedule with destination downtown
    [si0] (intent, (inform, destination, downtown))
    [so0] (step, (search, orig, X, destination, downtown))
    [w0]  (semantic, (is_a (downtown, place)))
          (semantic, (today (saturday)))
          (episodic, (destination=downtown, place=workplace, days=[monday...friday]))
          (episodic, (destination=downtown, place=gym, days=[saturday, sunday]))
          (procedural, (if destination == x and today == y then destination=place))
          (semantic, (destination (gym)))
    [o1]  search bus schedule with destination gym
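To make case (a)'s control flow concrete, the following Python sketch shows one way the loop could be wired up. Everything here is illustrative: `llm_generate`, `parse_to_symbols`, and the `CognitiveArchitecture` runtime are hypothetical stand-ins for an LLM API and a CA engine, not components specified in this paper.

```python
# Illustrative sketch of case (a): a CA primes each recursive LLM step
# with fired productions and declarative-memory contents.

def solve(task, ca, llm_generate, parse_to_symbols, max_steps=5):
    """Recursive prompting where the CA's working memory cues the LLM."""
    context = task                              # i0: the user's request
    output = ""
    for _ in range(max_steps):
        output = llm_generate(context)          # intermediate output o_k
        s_in = parse_to_symbols(context)        # si_k: intents, entities, ...
        s_out = parse_to_symbols(output)        # so_k: steps, actions, ...
        w = ca.run_cycles(s_in + s_out)         # one or more cognitive cycles
        if ca.goal_reached():                   # stop once the CA is satisfied
            break
        # Inject working-memory contents as cues into the next LLM step.
        context = f"{context}\n{output}\nRelevant knowledge: {w}"
    return output
```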
[5] Empirical work demonstrates that LLMs can be trained and fine-tuned to learn to encode key aspects of traditional symbolic structures (Pavlick 2023; Romero et al. 2021; Tomasic et al. 2021).

Now, case (b) proposes an integration scenario where a CA leverages multimodal LLMs to support both the perception and motor modules of the system. At the perception level, the LLM converts external signals (e.g., text and images) into symbolic representations and their relations, along with relevant metadata, which is then stored in the corresponding working memory buffers. For instance, we used a description of an image generated by GPT-4 [6], then we provided an in-context learning example to ChatGPT to convert the description into symbolic structures:

[prompt] your job is to transform that natural language description into symbolic representations. Example, Input:
"this is an office with a desk, a computer on top of it, and a chair. There is also a printer on the left of the computer and some pencils on the right". Output: has(office, desk). has(office, computer). has(office, chair). onTopOf(computer, desk). toTheRightOf(printer, computer). toTheLeftOf( computer, printer). toTheLeftOf(computer, pencils).. [prompt] Input: "a busy city street with a clock tower in the background. The street is lined with shops and restaurants on both sides. There are several motorcycles parked on the side of the road, and people are walking down the street.
The clock tower has a large clock face with roman numerals and a small spire on top...". Output:

[ChatGPT] has(city_street, clock_tower). has(clock_tower, clock_face). has(clock_face, roman_numerals). has(clock_face, small_spire). has(city_street, shops). has(city_street, restaurants). has(city_street, motorcycles). toTheLeftOf(clock_tower, city_street). toTheRightOf(city_street, clock_tower). toTheRightOf(shops, city_street)... onTopOf(roman_numerals, clock_face). onTopOf(small_spire, clock_face). onTopOf(clock_face, clock_tower)...

As observed in the example, though not entirely accurate, LLMs demonstrate the capability to extract high-level compositional and spatial relationships between entities from a given image/text and then re-express them using symbolic representations. After generating and storing these symbolic structures in the working memory, other modules of the CA can access them and perform diverse kinds of cognitive processes. Considering our initial example, it is expected that this symbolic representation of perceived images will enable both the visually impaired user and the cognitive agent to collaboratively construct shared mental models for navigation, thereby enhancing the user's spatial cognition and situational awareness. Conversely, the LLM-based motor module converts the symbolic structures that have been stored in the working memory buffers into external actions (e.g., natural language generation, motor control, etc.).

[6] At the time of writing this paper, OpenAI was holding back GPT-4's image processing features, so we used a natural language description generated with GPT-4 and reported in (Zhu et al. 2023).
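Predicate strings like the ones above are straightforward to load into a CA's working-memory buffers. A minimal sketch (ours, not the paper's implementation) using a regular expression:

```python
import re

# Turn LLM predicate output, e.g. "has(office, desk). onTopOf(computer, desk).",
# into (relation, subject, object) triples for working-memory buffers.
PREDICATE = re.compile(r"(\w+)\(\s*(\w+)\s*,\s*(\w+)\s*\)")

def to_working_memory(llm_output):
    return [m.groups() for m in PREDICATE.finditer(llm_output)]

print(to_working_memory("has(office, desk). onTopOf(computer, desk)."))
# [('has', 'office', 'desk'), ('onTopOf', 'computer', 'desk')]
```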
[Figure 1 panels: (a) Cognitively-augmented LLM; (b) Perception and Motor powered by LLM; (c) CA powered by LLM; (d) Internal simulation for anticipation and planning.]

Figure 1: Modular approach. (a) Chain-of-Thought or recursive reasoning that augments an LLM with content generated by a CA. (b) Perception and Motor modules of a CA that leverage the power of LLMs. (c) Multiple modules of a CA that use LLMs to process and/or retrieve data. (d) A CA that leverages LLMs to predict/anticipate future states of the environment in order to perform reasoning and planning (some modules are not shown for the sake of legibility). Red-colored boxes denote LLMs and blue-colored ones denote CA modules. Perception (P), motor (M), working memory (WM), long-term procedural memory (PM), long-term declarative memory (DM), and Anticipation (A) correspond to modules of a CA. i and o correspond to the input and output of the system, respectively. si and so are symbolic representations of the input i and the output o, respectively. w corresponds to the contents of the working memory. b are module-specific working memory buffers. Solid arrows denote the flow of information and dotted arrows denote predictions of the next input.

Unlike case (b), which loosely integrates LLMs and CAs, case (c) proposes an integration where not only the perception/motor modules are driven by LLMs, but also the procedural and declarative (semantic and episodic) memories. Prior research (Park et al. 2023) suggested using LLMs to retain episodic knowledge as lists of observations (depicting agents' behaviors in natural language). These can be synthesized into high-level observations using LLMs' summarization abilities, enabling agents to reflect on their experiences across different time spans. From another perspective, we envision the possibility of converting these natural language descriptions into symbolic structures using a proper method for fine-tuning or prompt-engineering an LLM.
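As a rough illustration of this episodic scheme (our sketch; `summarize_llm` is a hypothetical wrapper around any chat-completion API), observations can be kept as time-stamped natural-language entries that the LLM periodically compresses into higher-level reflections:

```python
import datetime

class EpisodicMemory:
    """Observations in natural language, periodically summarized by an LLM."""
    def __init__(self, summarize_llm):
        self.observations = []        # (timestamp, text) pairs
        self.reflections = []         # higher-level inferences
        self.summarize_llm = summarize_llm

    def record(self, text):
        self.observations.append((datetime.datetime.now(), text))

    def reflect(self, window=20):
        recent = "\n".join(text for _, text in self.observations[-window:])
        prompt = f"Summarize these events into one high-level insight:\n{recent}"
        self.reflections.append(self.summarize_llm(prompt))
```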
Similarly, the large amount of factual knowledge directly accessible through LLMs can be harnessed to automatically extract knowledge and populate a semantic memory (e.g., an ontology) of a CA, which would otherwise require laborious manual knowledge curation and acquisition. Preliminary work in this area aims to populate arbitrarily complex nested semantic knowledge schemas (Caufield et al. 2023).

Regarding the procedural module, previous work (Park et al. 2023) has shown that LLMs can be leveraged to create detailed step-by-step plans by translating the agent's observations of the environment, expressed in natural language, into high-level action plans, which are then recursively decomposed into detailed behaviors for action and reaction. Alternatively, we anticipate that, with proper in-context prompting or fine-tuning, an LLM can translate natural language descriptions into symbolic structures such as production rules. For instance, consider the example below where we prompted ChatGPT to generate an ACT-R production rule for filtering transit searches, based on a natural language description and a single example (borrowed from (ACT-R Website 2015)) used to prime the LLM [7]:

[prompt] Instructions: ... your job is to generate ACT-R productions based on a natural language description that I will provide to you... For instance: Description: generate an ACT-R production that counts until a given number. Production: (p example-counting =goal>....

[prompt] Description: generate an ACT-R production such that if I need to take a bus, check first the weather and the traffic patterns. If it is snowing and the traffic is heavy, then search only for bus routes without transfers and minimum walking distance. Production:
[ChatGPT]

    (p check-weather-traffic-snow-heavy
       =goal>
          isa        take-bus
          state      check-weather-traffic
       =retrieval>
          isa        weather-traffic
          weather    snowing
          traffic    heavy
    ==>
       =goal>
          isa        take-bus
          state      search-routes
          transfer   none
          walking    min-distance
       +retrieval>
          isa        bus-routes
          transfer   none
          walking    min-distance
    )

The condition part of the production checks both that the state of the goal is check-weather-traffic and that the retrieved declarative chunk has the values snowing and heavy for the slots weather and traffic, respectively. If so, the action will change the goal's state to search-routes and retrieve a declarative chunk for bus routes with no transfers and minimum walking distance. Although the generated production correctly captures the intention of the natural language description, it contains redundant slots for transfer and walking on the goal buffer of the action part. This type of transformation from natural language descriptions to symbolic productions can allow users to instruct and teach explicit procedural knowledge to their agents via conversation and, more interestingly, transform symbolic productions back into natural language for explainability purposes. However, it is not clear how an LLM could keep consistency between learned productions in a large knowledge base. Additionally, at least in its current state, an LLM by itself cannot compile certain operations over the procedural memory, such as conflict resolution and execution, so an LLM would still require external interaction with a CA's procedural engine.
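Given that limitation, a pragmatic safeguard, sketched below as our own suggestion rather than a mechanism from the paper, is a cheap validation pass that rejects obviously malformed productions before they reach the CA's procedural engine:

```python
def lint_production(text):
    """Surface-level checks on an LLM-generated ACT-R production."""
    problems = []
    if text.count("(") != text.count(")"):
        problems.append("unbalanced parentheses")
    if "==>" not in text:
        problems.append("missing ==> between condition and action")
    if not text.lstrip().startswith("(p "):
        problems.append("missing production header '(p name'")
    return problems

assert lint_production("(p demo =goal> isa take-bus ==> =goal> state done)") == []
```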
[7] The complete log is available here: https://shareg.pt/nO1zssm.

Finally, case (d) presents a streamlined approach to the simulation theory of cognition, which states that cognitive functions like planning and anticipation stem from internally simulated interactions with the environment (Shanahan 2006; Hesslow 2012). By inputting appropriate contextual information (such as working memory contents, sensory input, motor responses, and past experiences), we postulate that LLMs have the potential to forecast likely representations of the world's states resulting from the current state. That is, upon receiving an initial sensory input (i0), the CA progresses through its standard perception-action pathway. Subsequently, rather than executing the resulting action (o0) in the real world, the action o0, along with the working memory contents, is used as input to the LLM. The LLM then generates a prediction for the next world state (i1), which serves as a simulated input. Next, a simulated output o1 is generated, and the process cycles until a certain condition is met. By chaining these sequences of perception-action-prediction, the system could anticipate the outcomes of its actions across multiple temporal scales. These simulated sequences may take on either linear configurations, predicting only one world state per simulated step, or branching tree-like structures, predicting multiple likely world states per step (tangentially related work has been proposed by (Yao et al. 2023a)). In the latter case, a planning mechanism could explore different branches of the tree by assessing their likelihood of occurrence and performing backtracking when necessary. As proposed by (Park et al. 2023), an LLM can be prompted to rank its answers on a certain scale; similarly, we can prompt it to "guesstimate" a probability for each node of the tree.

Below is a simplified example where the inputs to the LLM are the contents of the working memory (green), simulated user actions (red), and simulated system actions (blue). For simplicity, both user and system actions are natural language and do not involve symbols:
[prompt] You are an intelligent agent that assists a person who is blind in retrieving information from public transportation. Today is snowing and therefore the traffic is heavy. Predict 3 things that the user will ask you to do and assign a probability to occur to each one.

[ChatGPT] Check current bus/train delay (0.6), suggest alternative routes (0.3), provide weather-related alerts (0.1)

[prompt] The user requests you to provide weather-related alerts, and you provide those alerts. What do you predict the user will ask next?

[ChatGPT] Ask for specific service disruptions (0.4), request tips for navigating in snowy conditions (0.3), inquire about expected clearing times (0.2)
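A toy version of this branching simulation might look as follows; `predict_next` is a hypothetical LLM wrapper returning (state, probability) pairs, and the simple policy shown just follows the most probable leaf:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    state: str
    prob: float = 1.0
    children: list = field(default_factory=list)

def simulate(node, predict_next, depth, branching=3):
    """Expand a tree of LLM-predicted world states."""
    if depth == 0:
        return
    for state, p in predict_next(node.state)[:branching]:
        child = Node(state, node.prob * p)      # accumulate path likelihood
        node.children.append(child)
        simulate(child, predict_next, depth - 1, branching)

def most_likely_leaf(node):
    if not node.children:
        return node
    return max((most_likely_leaf(c) for c in node.children),
               key=lambda n: n.prob)
```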
# Agency Approach

The Agency approach operates on two levels, micro and macro (see Figure 2). Inspired by the Society of Mind theory (Minsky 1988) and the LIDA cognitive architecture (Franklin and Patterson 2006), micro-level agency occurs within the cognitive architecture itself. Specialized agents process information in parallel, competing for resources like attention and memory. They collaborate by forming coalitions for decision-making and problem-solving. In contrast, macro-level agency involves cognitive agents interacting with other agents and humans to collaboratively achieve goals.

Figure 2: Agency approach. a) Agents at the micro-level compete for resources and cooperate in decision-making. b) Agents at the macro-level interact with other agents and humans to cooperate in task resolution. P (Perception), M (Motor), WM (working memory), and DM (declarative memory) are modules of a CA.

Consider the case of our cognitive agent designed to aid blind users in indoor navigation. At a micro-level, each agent operates through either a fine-tuned LLM or a symbolic processor. Cognitive processing unfolds as follows: sensory inputs are processed by the perception module, yielding abstract entities like objects, categories, actions, events, etc., which are forwarded to the working memory. Then, the working memory cues declarative memories to establish local associations, e.g., user navigation preferences, place familiarity, and more. Specialized agents at the agency observe working-memory contents and form coalitions. For instance, object detection and semantic localization constitute one coalition, while natural language understanding and semantic grounding form another. These coalitions are transferred to the Global Workspace, where a competitive process selects the most relevant coalition.
If a user approaches a staircase lacking a handrail, the coalition involving object detection and semantic localization takes precedence, globally transmitting its contents (e.g., staircase proximity and orientation) to other agents. In subsequent cognitive cycles, the coalition for natural language generation would be chosen to provide timely warnings to the user.

While not a novel architectural approach, its potential lies in the diverse roles agents can assume within coalitions. For instance, an LLM agent engages in pair work, processing text or images to produce symbols, while a symbolic agent infers insights from these symbols. Another scenario involves one LLM agent fine-tuned to convert symbolic structures into natural language text and another serving a supervisory role, pinpointing errors in the first agent's output.
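The competition itself is easy to picture. Below is a minimal sketch (an assumption of ours, not LIDA's actual mechanics) in which coalitions bid for the Global Workspace via an activation value and the winner's contents would then be broadcast to all agents:

```python
from dataclasses import dataclass

@dataclass
class Coalition:
    name: str
    agents: tuple
    activation: float          # e.g., urgency of a detected hazard
    contents: dict

def broadcast_winner(coalitions):
    """Select the most activated coalition for the Global Workspace."""
    return max(coalitions, key=lambda c: c.activation)

hazard = Coalition("hazard", ("object_detection", "semantic_localization"),
                   0.9, {"obstacle": "staircase without handrail"})
chat = Coalition("dialogue", ("nlu", "semantic_grounding"),
                 0.4, {"intent": "small_talk"})
assert broadcast_winner([hazard, chat]).name == "hazard"
```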
Now, to better understand macro-level interactions, let's consider two users (A and B) alongside their cognitive agents (a and b). Agents a and b collaborate to exchange knowledge and intentions (e.g., a shares with b spatial insights from A's previous exploration of a building, thus aiding B's future navigation), negotiate (e.g., a and b helping teammates A and B reach an agreement when they have conflicting goals), and debate (e.g., a and b debating about their reasoning processes to approach a problem while reaching a consensus (Du et al. 2023)), among others. All these kinds of interactions among agents could use natural language in order to foster transparency and interpretability, from the user's point of view, of the reasoning processes and conciliated actions, although the necessity of symbolic counterparts remains unclear.

# Neuro-Symbolic Approach

We present a neuro-symbolic approach inspired by the CLARION cognitive architecture, focusing primarily on the action-centered sub-system (ACS), while acknowledging the existence of three additional sub-systems within the architecture. The ACS operates across two distinct levels: the top level (symbolic), responsible for encoding explicit knowledge, and the bottom level (connectionist), tasked with encoding implicit knowledge. Consequently, the architecture exhibits a degree of redundancy in knowledge representation. These two levels synergistically engage in action selection, reasoning, and learning processes. Our focus is to explore the incorporation of LLMs at the bottom level, enhancing the knowledge extraction and integration process while exhibiting potential scalability towards novel scenarios. Further details on the mathematical model underpinning the cognitive processes can be found in (Sun 2016).
CLARION defines three types of symbolic rules at the top level. Fixed rules (FR) have been hardwired by an expert and cannot be deleted. Independent-Rule-Learning (IRL) rules are independently generated at the top level, with little or no involvement of the bottom level, and can be refined or deleted as needed. Rule-Extraction-Refinement (RER) rules are extracted from the bottom level. Figure 3 illustrates the process wherein a human provides a natural language instruction to create a new rule and the LLM-based perception module extracts symbolic structures that are then stored in the working memory. Through a template-matching mechanism, the contents of the working memory are expressed as an IRL rule whose condition and action parts are both chunks composed of dimension-value pairs [8], e.g., chunk_i((intent, take_bus), (weather, snowing)) → chunk_j((transfers, none), (walk_distance, min)).
2308.09830#21
2308.09830#23
2308.09830
[ "2302.02083" ]
2308.09830#23
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
Figure 3: Neuro-symbolic approach. WM: Working Memory. IRL: Independent Rule Learning. RER: Rule Extraction Refinement. G: Generalization scenario. S: Specialization scenario. Orange-colored boxes illustrate the IRL case while the blue-colored boxes illustrate the RER case. Highlighted text represents entities and keywords present at the bottom level that are further extracted and translated into symbols at the top level.

On the other hand, if an action determined at the bottom level proves successful (according to a certain criterion), an RER rule is formulated and subsequently incorporated into the top level. For example, given the output generated by the LLM at the bottom level [9] in Figure 3, the corresponding RER rule is chunk_i((intent, take_bus), (weather, snowing)) → chunk_j((transfers, fewer)). During subsequent interactions with the environment, the rule is refined based on the outcomes of its application: if the result is deemed successful, the rule's conditions may be generalized to make it more universal by adding new values to dimensions (e.g., chunk_i((intent, take_bus), (weather, [snowing, raining])) → chunk_j((transfers, fewer))). Conversely, if the outcome does not yield success, the rule should be specialized by removing values from dimensions or by adding new dimension-value pairs (e.g., chunk_i((intent, take_bus), (weather, snowing), (traffic, heavy)) → chunk_j((transfers, fewer))).

Rule selection in IRL is determined by an information gain function, while RER uses a Boltzmann distribution based on a rule's utility function and a base-level activation. The integration of both levels can be achieved through various mechanisms. Stochastic selection involves choosing a level (top or bottom) and a group of rules if the top level is chosen (e.g., FR, RER, or IRL). These selections are based on probabilities assigned by a metacognitive module to each level/group.

[8] Each dimension may have one or multiple values associated with it.
[9] See the full output log here: https://sharegpt.com/c/LYIz9in
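To make the generalization and specialization operations concrete, here is an illustrative Python sketch (our own modeling choice, not CLARION's implementation) treating a rule condition as a mapping from dimensions to admissible value sets:

```python
def generalize(condition, dim, value):
    """Admit another value so the rule fires in more situations."""
    out = {d: set(v) for d, v in condition.items()}
    out.setdefault(dim, set()).add(value)
    return out

def specialize_remove_value(condition, dim, value):
    """Narrow the rule by dropping an admissible value."""
    out = {d: set(v) for d, v in condition.items()}
    out[dim].discard(value)
    return out

def specialize_add_dimension(condition, dim, value):
    """Narrow the rule by adding a new dimension-value constraint."""
    out = {d: set(v) for d, v in condition.items()}
    out[dim] = {value}
    return out

rule = {"intent": {"take_bus"}, "weather": {"snowing"}}
rule = generalize(rule, "weather", "raining")              # snowing or raining
rule = specialize_add_dimension(rule, "traffic", "heavy")  # also needs heavy traffic
```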
Integration through bottom-up rectification occurs when the top level rectifies and incorporates outcomes from the bottom level (e.g., the LLM may discover additional dimension-value pairs not specified by the top level, like "prioritize routes with higher frequencies"). Alternatively, top-down guidance involves the bottom level utilizing outcomes from the top level, combined with its own knowledge, to make action decisions. This top-down guidance can be achieved by using prompt engineering techniques to prime the LLM with either FR or IRL rules.
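As a concrete illustration of such prompt priming (the helper names and prompt wording below are our own, hypothetical choices), the symbolic rules can simply be rendered into the prompt:

```python
def rule_to_text(condition, action):
    cond = " and ".join(f"{d} in {sorted(v)}" for d, v in condition.items())
    act = ", ".join(f"{k}={v}" for k, v in action.items())
    return f"If {cond}, then prefer {act}."

def guided_prompt(user_request, rules):
    """Prime the bottom-level LLM with top-level FR/IRL rules."""
    bullet_list = "\n- ".join(rules)
    return ("You assist a blind user with transit search. Apply these rules "
            f"when their conditions hold:\n- {bullet_list}\n"
            f"User request: {user_request}")

rule = rule_to_text({"intent": {"take_bus"}, "weather": {"snowing"}},
                    {"transfers": "none", "walking": "min-distance"})
print(guided_prompt("How do I get to X store?", [rule]))
```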
Bottom-up learning is facilitated by the rule extraction mechanism, whereas top-down learning can be realized by using both FR and IRL rules as examples to fine-tune the LLM at the bottom level. Determining whether an outcome from the bottom level is successful requires feedback, often in the form of rewards or reinforcement, which might not be readily available. To address this challenge, we propose two approaches: the incorporation of human-in-the-loop interactions, where feedback ensures the coherence of extracted rules, and the utilization of an additional LLM for self-play interactions emulating human feedback. Overall, both the bottom-up and the top-down learning mechanisms support the explainability of decision-making and reasoning processes performed by the LLM at the bottom level.

Harnessing LLMs at the bottom level of a CLARION-like architecture can contribute remarkably to enhancing the system's flexibility and scalability. First, unlike the backpropagation neural networks used in CLARION, LLMs are not restricted to a fixed number of features and labels. Also, the LLM-based variation we propose does not require predefining dimension-value pairs as CLARION does. Consequently, the utilization of LLMs at the bottom level can enable enhanced representational flexibility, with cascading benefits reaching the top level. Secondly, the conversion from unstructured natural language to symbols and vice versa can be executed seamlessly by an LLM-based bottom level. Lastly, leveraging an LLM with such broad knowledge of the world, coupled with cross-level learning dynamics and human feedback, can foster continuous learning loops where knowledge is constructed and refined over time.

# Discussion

Among the three approaches discussed so far, there are some commonalities that we highlight next. First, the working memory, along with the perception module, plays an important role in retaining the most pertinent information while filtering out irrelevant stimuli. This contrasts with the idea of a context window in LLMs, where truncation strategies arbitrarily delete the oldest tokens observed when the length of the window reaches a maximum, potentially discarding critical parts of the context.
The contents of the working memory are selectively and intentionally stored in and recalled from long-term memories, allowing the agent to continuously interact with the environment without losing track of events. A second common aspect among all three approaches is the utilization of LLMs to accurately translate unstructured natural language to symbols and vice versa, as well as to extract factual knowledge about the world. This breakthrough opens up a realm of new possibilities, allowing for the seamless scaling of CAs to tackle complex real-world problems. Third, the three approaches can benefit from multi-modal, multi-turn interaction. In cases where cognitive agents collaborate with humans, there is an opportunity to incrementally refine shared mental models of a task through continuous conversational interaction and scene understanding. Fourth, since all the approaches depend, in one way or another, on LLMs, they are susceptible to the stochastic nature of LLMs. This stochastic nature leads to variations (sometimes remarkable) in the outputs, even when the model is prompted with exactly the same input. And fifth, all three approaches contribute, to a greater or lesser extent, to the continuous construction of cognitive models about the entities in the world, their relationships, and the distinct cognitive processes that operate over them.

Regarding the Modular approach, the main difference among the four cases presented is the degree of integration between an LLM and a CA. The first case, the cognitively augmented LLM, aligns with the current trend of augmenting LLMs with external tools and interpreters and represents the most loosely integrated model among the four. In this case, the LLM retains control of execution, and the outputs of the CA are solely utilized for in-context learning purposes. The strength of this approach is that recursive LLMs receive gradual guidance during the chain-of-thought reasoning process. However, a notable disadvantage is that, due to the lack of overall control, the CA components can only contribute to reactive (System 1) responses rather than deliberative, high-order (System 2) ones.

The second case of the modular approach presents a moderately integrated model where only the perception and motor modules of a CA are powered by LLMs.
The main strength of this model is that it aligns with the evident benefits obtained from multi-modal LLMs, which notably enhance text and image understanding, avoiding the need for task-specific and laborious labeling and training of machine learning models. Another advantage of this case is that it assumes a straightforward transformation from sensory inputs to symbolic percepts, which facilitates further processing. However, one main disadvantage is that the other modules of the CA still do not fully leverage the power of LLMs.

The third case presents a tightly integrated model that leverages the synergistic interaction between LLMs and the symbolic components of a CA. LLMs extract factual knowledge from the world, automatically populating ontologies. These semantic representations then facilitate the creation of world models, addressing a limitation of LLMs. Furthermore, proper LLM prompt engineering techniques would produce syntactically and semantically correct CA productions, which can later be compiled by a symbolic engine.
However, a drawback of this integrated system is its heavy reliance on LLM outputs, rendering it susceptible to cascading failures, including hallucinations and biases.

The fourth case represents the most tightly integrated model. It involves a module designed for simulating the outcomes of future events. The primary advantage of this case is its capability to anticipate and plan by traversing and backtracking a tree-like structure of possible events. However, similar to the third case, this system heavily relies on the outputs of the LLM, which might occasionally be inconsistent. This inconsistency could lead to erroneous predictions in the early stages of internal simulation, resulting in cascading errors in the planning process.

Unlike the Modular approach, which can suffer from overall failures and inconsistencies if individual modules are poorly designed, the Agency approach at the micro-level offers greater robustness from two key angles. First, agents may encode redundant knowledge, resulting in multiple agents capable of achieving the same competence. This redundancy enhances system resilience, as individual agents may fail yet the system can still yield satisfactory outcomes. Second, agent role-playing strategies enable the system to self-reflect and promptly rectify potential deviations in reasoning processes. At the macro-level, the Agency approach stands out as the only one among the three approaches that considers inter-agent interactions, with a primary focus on collaborative interactions between agents and humans. However, aspects such as communication, coordination, hierarchies, etc., between agents still remain open questions.

The Neuro-symbolic approach is arguably the most tightly integrated model. It leverages the capabilities of LLMs to seamlessly translate unstructured natural language into structured symbolic representations and vice versa. This approach plays a crucial role in extracting rules from the connectionist level and subsequently generalizing and specializing those extracted rules over time. The interactions between the symbolic and connectionist levels enable the continuous construction of explainable models for decision-making and procedural processing based on black-box LLMs. However, a potential weakness of this approach lies in its heavy reliance on the LLM layer.

# Conclusions
In this paper, we present three different approaches to integrating Cognitive Architectures and Large Language Models from an architectural perspective: a modular approach, an agency approach, and a neuro-symbolic approach. We discuss the trade-offs associated with each approach and provide insights for future research in this area.

# Acknowledgements

The contents of this paper were developed under grants from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant numbers 90DPGE0003 and 90REGE0007).

# References

ACT-R Website. 2015. Unit 1: Understanding Production Systems. http://act-r.psy.cmu.edu/wordpress/wp-content/themes/ACT-R/tutorials/unit1.htm. Accessed: 2023-08-03.
Anderson, J. R.; and Lebiere, C. J. 2014. The atomic components of thought. Psychology Press.

Bender, E. M.; Gebru, T.; McMillan-Major, A.; and Shmitchell, S. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, 610–623. New York, NY, USA: Association for Computing Machinery. ISBN 9781450383097.
Binz, M.; and Schulz, E. 2023. Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6): e2218523120.

Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems, volume 33, 1877–1901. Curran Associates, Inc.
Bubeck, S.; Chandrasekaran, V.; Eldan, R.; Gehrke, J.; Horvitz, E.; Kamar, E.; Lee, P.; Lee, Y. T.; Li, Y.; Lundberg, S.; Nori, H.; Palangi, H.; Ribeiro, M. T.; and Zhang, Y. 2023. Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
Caufield, J. H.; Hegde, H.; Emonet, V.; Harris, N. L.; Joachimiak, M. P.; Matentzoglu, N.; Kim, H.; Moxon, S. A.; Reese, J. T.; Haendel, M. A.; et al. 2023. Structured prompt interrogation and recursive extraction of semantics (SPIRES): A method for populating knowledge bases using zero-shot learning. arXiv preprint arXiv:2304.02711.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186. Minneapolis, Minnesota: Association for Computational Linguistics.
Diao, S.; Wang, P.; Lin, Y.; and Zhang, T. 2023. Active Prompting with Chain-of-Thought for Large Language Models. arXiv:2302.12246.

Du, Y.; Li, S.; Torralba, A.; Tenenbaum, J. B.; and Mordatch, I. 2023. Improving Factuality and Reasoning in Language Models through Multiagent Debate. arXiv preprint arXiv:2305.14325.

Franklin, S.; and Patterson, F. 2006. The LIDA architecture: Adding new modes of learning to an intelligent, autonomous, software agent. pat, 703: 764–1004.

Gao, L.; Madaan, A.; Zhou, S.; Alon, U.; Liu, P.; Yang, Y.; Callan, J.; and Neubig, G. 2023. PAL: Program-aided Language Models. arXiv:2211.10435.

Hesslow, G. 2012. The current status of the simulation theory of cognition. Brain Research, 1428: 71–79.

Huang, X.; Ruan, W.; Huang, W.; Jin, G.; Dong, Y.; Wu, C.; Bensalem, S.; Mu, R.; Qi, Y.; Zhao, X.; Cai, K.; Zhang, Y.; Wu, S.; Xu, P.; Wu, D.; Freitas, A.; and Mustafa, M. A. 2023. A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation. arXiv:2305.11391.
Kosinski, M. 2023. Theory of Mind May Have Spontaneously Emerged in Large Language Models. arXiv:2302.02083.

Kotseruba, I.; and Tsotsos, J. K. 2020. 40 years of cognitive architectures: core cognitive abilities and practical applications. Artificial Intelligence Review, 53(1): 17–94.

Laird, J. E. 2019. The Soar cognitive architecture. MIT Press.
Laird, J. E.; Lebiere, C.; and Rosenbloom, P. S. 2017. A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine, 38(4): 13–26.

LeCun, Y. 2022. A path towards autonomous machine intelligence, version 0.9.2, 2022-06-27. Open Review, 62.

Li, S.; Puig, X.; Paxton, C.; Du, Y.; Wang, C.; Fan, L.; Chen, T.; Huang, D.; Akyürek, E.; Anandkumar, A.; Andreas, J.; Mordatch, I.; Torralba, A.; and Zhu, Y. 2022. Pre-Trained Language Models for Interactive Decision-Making. CoRR, abs/2202.01771.

Lieto, A.; Lebiere, C.; and Oltramari, A. 2018. The knowledge level in cognitive architectures: Current limitations and possible developments. Cognitive Systems Research, 48: 39–55. Cognitive Architectures for Artificial Minds.
Marcus, G. 2020. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence. CoRR, abs/2002.06177.

Mialon, G.; Dessì, R.; Lomeli, M.; Nalmpantis, C.; Pasunuru, R.; Raileanu, R.; Rozière, B.; Schick, T.; Dwivedi-Yu, J.; Celikyilmaz, A.; Grave, E.; LeCun, Y.; and Scialom, T. 2023. Augmented Language Models: a Survey. arXiv:2302.07842.

Minsky, M. 1988. Society of mind. Simon and Schuster.
Park, J. S.; O'Brien, J. C.; Cai, C. J.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2023. Generative Agents: Interactive Simulacra of Human Behavior. arXiv:2304.03442.

Pavlick, E. 2023. Symbols and grounding in large language models. Philosophical Transactions of the Royal Society A, 381(2251): 20220041.

Qian, J.; Wang, H.; Li, Z.; Li, S.; and Yan, X. 2022. Limitations of language models in arithmetic and symbolic induction. arXiv preprint arXiv:2208.05051.
Romero, O. J.; Wang, A.; Zimmerman, J.; Steinfeld, A.; and Tomasic, A. 2021. A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, 438–444. Singapore and Online: Association for Computational Linguistics.

Schick, T.; Dwivedi-Yu, J.; Dessì, R.; Raileanu, R.; Lomeli, M.; Zettlemoyer, L.; Cancedda, N.; and Scialom, T. 2023. Toolformer: Language Models Can Teach Themselves to Use Tools. arXiv:2302.04761.
Scialom, T.; et al. 2022. Fine-tuned language models are continual learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 6107–6122.

Shanahan, M. 2006. A cognitive architecture that combines internal simulation with a global workspace. Consciousness and Cognition, 15(2): 433–449.
Sun, R. 2016. Anatomy of the mind: exploring psychological mechanisms and processes with the Clarion cognitive architecture. Oxford University Press.

Tomasic, A.; Romero, O. J.; Zimmerman, J.; and Steinfeld, A. 2021. Propositional Reasoning via Neural Transformer Language Models. Int. Workshop on Neural-Symbolic Learning and Reasoning (NESY).

Venkit, P. N.; Srinath, M.; and Wilson, S. 2022. A Study of Implicit Bias in Pretrained Language Models against People with Disabilities. In Proceedings of the 29th International Conference on Computational Linguistics, 1324–1332. Gyeongju, Republic of Korea: International Committee on Computational Linguistics.
Wang, G.; Xie, Y.; Jiang, Y.; Mandlekar, A.; Xiao, C.; Zhu, Y.; Fan, L.; and Anandkumar, A. 2023. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291.
Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Chi, E. H.; Le, Q.; and Zhou, D. 2022. Chain of Thought Prompting Elicits Reasoning in Large Language Models. CoRR, abs/2201.11903.

Weidinger, L.; Uesato, J.; Rauh, M.; Griffin, C.; Huang, P.-S.; Mellor, J.; Glaese, A.; Cheng, M.; Balle, B.; Kasirzadeh, A.; Biles, C.; Brown, S.; Kenton, Z.; Hawkins, W.; Stepleton, T.; Birhane, A.; Hendricks, L. A.; Rimell, L.; Isaac, W.; Haas, J.; Legassick, S.; Irving, G.; and Gabriel, I. 2022. Taxonomy of Risks Posed by Language Models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, 214–229. New York, NY, USA: Association for Computing Machinery. ISBN 9781450393522.
Welleck, S.; Kulikov, I.; Roller, S.; Dinan, E.; Cho, K.; and Weston, J. 2019. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319.
Xie, Y.; Xie, T.; Lin, M.; Wei, W.; Li, C.; Kong, B.; Chen, L.; Zhuo, C.; Hu, B.; and Li, Z. 2023. OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities. arXiv:2305.16334.

Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023a. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. arXiv:2305.10601.
Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2023b. ReAct: Synergizing Reasoning and Acting in Language Models. arXiv:2210.03629.

Zhang, Z.; Zhang, A.; Li, M.; and Smola, A. 2022. Automatic Chain of Thought Prompting in Large Language Models. arXiv:2210.03493.

Zhu, D.; Chen, J.; Shen, X.; Li, X.; and Elhoseiny, M. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.
# Graph of Thoughts: Solving Elaborate Problems with Large Language Models

Maciej Besta1*, Nils Blach1*, Ales Kubicek1, Robert Gerstenberger1, Lukas Gianinazzi1, Joanna Gajda2, Tomasz Lehmann2, Michał Podstawski3, Hubert Niewiadomski2, Piotr Nyczyk2, Torsten Hoefler1 (*Equal contribution)

1ETH Zurich, 2Cledar, 3Warsaw University of Technology

[email protected], [email protected], [email protected]

# Abstract

We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices.
This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over the state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.

Website & code: https://github.com/spcl/graph-of-thoughts

# 1 Introduction
2308.09687#0
2308.09687#2
2308.09687
[ "2302.13971" ]
2308.09687#2
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
Large language models (LLMs) are taking over the world of AI. Recent years saw a rapid development of models primarily based on the decoder-only Transformer variant [65], such as GPT [13, 14, 53, 54], PaLM [19], or LLaMA [63]. Prompt engineering is a resource-efficient approach for solving different LLM tasks. In brief, one includes the task description within the input sent to an LLM. If this description is appropriately formulated, the LLM solves the task using its autoregressive token-based mechanism for generating text. Such prompts may contain example tasks with solutions (few-shot prompting, also referred to as in-context learning (ICL)), or even no example tasks at all (zero-shot prompting). In recent years it was shown that this mechanism can be used to solve a broad set of tasks that involve mathematical, commonsense, or symbolic reasoning. Chain-of-Thought (CoT) [71] is an approach for prompting, in which one includes the intermediate steps of reasoning within the prompt (intermediate "thoughts"), besides the task input/output.
2308.09687#1
2308.09687#3
2308.09687
[ "2302.13971" ]
2308.09687#3
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
CoT was shown to significantly improve the capability of LLMs to solve problems without resorting to any model updates. One major improvement over CoT, Self-Consistency with CoT (CoT-SC) [67], is a scheme where multiple CoTs are generated, and then the best one is selected as the outcome. More recently, CoT and CoT-SC were extended with Tree of Thoughts (ToT) [43, 75, 77], which models the LLM reasoning process with a tree. This facilitates using different paths of thoughts, and offers novel capabilities such as backtracking from non-promising outcomes. Unfortunately, the ToT approaches still fundamentally limit the reasoning abilities within a prompt by imposing the rigid tree structure on the thought process. In this work, we argue that fundamentally more powerful prompting can be achieved by enabling LLM thoughts to form an arbitrary graph structure. This is motivated by numerous phenomena such as human reasoning, brain structure, or algorithmic execution. When working on a novel idea, a human would not only follow a chain of thoughts (as in CoT) or try different separate ones (as in ToT), but would actually form a more complex network of thoughts. For example, one could explore a certain chain of reasoning, backtrack and start a new one, then realize that a certain idea from the previous chain could be combined with the currently explored one, and merge them both into a new solution, taking advantage of their strengths and eliminating their weaknesses. Similarly, brains form complex networks, with graph-like patterns such as recurrence [28]. Executing algorithms also exposes networked patterns, often represented by Directed Acyclic Graphs. The corresponding graph-enabled transformations bring a promise of more powerful prompting when applied to LLM thoughts, but they are not naturally expressible with CoT or ToT. We observe that these (and many other) thought transformations can be naturally enabled when modeling a reasoning process of an LLM as a graph. For this, we propose Graph of Thoughts (GoT), an approach that enhances LLMs' capabilities through networked reasoning (contribution #1).
2308.09687#2
2308.09687#4
2308.09687
[ "2302.13971" ]
2308.09687#4
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
In GoT, an LLM thought is modeled as a vertex, while an edge is a dependency between such thoughts. Using GoT, one can aggregate arbitrary thoughts by constructing vertices that have more than one incoming edge. Overall, the graph abstraction harnessed by GoT seamlessly generalizes CoT and ToT to more complex thought patterns, without resorting to any model updates. Yet, putting GoT to practice requires solving several design challenges. For example, what is the best graph structure for different tasks? How to best aggregate thoughts to maximize accuracy and minimize cost?
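To make the vertex/edge model concrete, here is a minimal sketch of a thought graph in Python. The class names and the merge callback are illustrative placeholders, not the API of the released graph-of-thoughts package; aggregation is simply a new vertex with more than one incoming edge.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Thought:
    """One LLM thought: a vertex in the graph of thoughts."""
    text: str
    parents: List["Thought"] = field(default_factory=list)  # incoming edges

class ThoughtGraph:
    """Minimal graph of thoughts: vertices are Thoughts, edges are dependencies."""

    def __init__(self) -> None:
        self.thoughts: List[Thought] = []

    def add(self, text: str, parents: Optional[List[Thought]] = None) -> Thought:
        """Add a thought; each parent contributes one incoming edge."""
        t = Thought(text, list(parents or []))
        self.thoughts.append(t)
        return t

    def aggregate(self, parts: List[Thought],
                  merge: Callable[[List[str]], str]) -> Thought:
        """Aggregation: a new vertex with several incoming edges, whose
        content combines the parent thoughts (e.g., via an LLM merge prompt)."""
        return self.add(merge([p.text for p in parts]), parents=parts)

# Example: two partially sorted sublists merged into one aggregated thought.
g = ThoughtGraph()
a = g.add("[1, 4, 9]")
b = g.add("[2, 3, 7]")
merged = g.aggregate([a, b], merge=lambda texts: f"merged({', '.join(texts)})")
```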
2308.09687#3
2308.09687#5
2308.09687
[ "2302.13971" ]
2308.09687#5
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
To answer these and many other questions, we carefully design a modular architecture for implementing GoT (contribution #2), coming with two design highlights. First, we enable fine-grained control over individual thoughts. This enables us to fully control the ongoing conversation with the LLM, and apply advanced thought transformations, such as combining the most promising thoughts from the ongoing reasoning into a new one. Second, we ensure that our architecture can be seamlessly extended with novel thought transformations, patterns of reasoning (i.e., graphs of thoughts), and LLM models. This enables rapid prototyping of novel prompting ideas using GoT, while experimenting with different models such as GPT-3.5, GPT-4, or Llama-2 [64]. We illustrate several use cases for GoT (sorting, keyword counting for summaries, set operations, document merging) and we detail how to implement them using the graph-based paradigm (contribution #3). We evaluate GoT and show its advantages over the state of the art (contribution #4). Overall, we observe that GoT is particularly well-suited for tasks that can be naturally decomposed into smaller subtasks that are solved individually and then merged for a final solution. Here, GoT outperforms other schemes, for example improving upon CoT and ToT by, respectively, ≈70% and ≈62%, in terms of the quality of sorting, while simultaneously reducing costs by >31% over ToT. We qualitatively compare GoT to other prompting schemes¹ in Table 1. GoT is the only one to enable arbitrary graph-based thought transformations within a prompt, such as aggregation, embracing all previously proposed schemes.
2308.09687#4
2308.09687#6
2308.09687
[ "2302.13971" ]
2308.09687#6
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
| Scheme | Sc? | Mc? | Tr? | Ag? |
| --- | --- | --- | --- | --- |
| Chain-of-Thought (CoT) [71] | ✓ | ✗ | ✗ | ✗ |
| Self-Consistency with CoT [67] | ✓ | ✓ | ✗ | ✗ |
| Thought decomposition [75] | ✓ | ✓ | (✓) | ✗ |
| Tree-of-Thought (ToT) [43] | ✓ | ✓ | ✓ | ✗ |
| Tree of Thoughts (ToT) [77] | ✓ | ✓ | ✓ | ✗ |
| Graph of Thoughts (GoT) | ✓ | ✓ | ✓ | ✓ |

Table 1: Comparison of prompting schemes, with respect to the supported transformations of thoughts. "Sc?": single chain of thoughts? "Mc?": multiple chains of thoughts? "Tr?": tree of thoughts? "Ag?": arbitrary graph of thoughts? "✓": full support, "(✓)": partial support, "✗": no support.
2308.09687#5
2308.09687#7
2308.09687
[ "2302.13971" ]
2308.09687#7
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
Finally, we propose a new metric for evaluating a prompting strategy, the volume of a thought (contribution #5). With this metric, we aim to better understand the differences between prompting schemes. For a given thought v, the volume of v is the number of LLM thoughts from which one can reach v using directed edges. Intuitively, these are all the LLM thoughts that have had the potential to contribute to v.

¹Note that we do not include a recent scheme called Graph-of-Thought [79] because it is not a prompting scheme. While its name suggests close connections to ToT and CoT, as a fine-tuning scheme it resorts to model updates, and is thus outside the focus of this work. Similarly, the graph-of-thoughts repository [52] does not enable general graph-based reasoning and instead harnesses ToT with BFS.
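Under this definition, the volume of v is the size of v's set of transitive predecessors in the graph. A minimal sketch, assuming each thought stores its incoming edges in a parents list as in the earlier illustrative Thought class:

```python
def volume(v) -> int:
    """Count the thoughts from which v can be reached via directed edges,
    i.e., v's transitive predecessors (v itself is not counted here)."""
    seen = set()
    stack = list(v.parents)
    while stack:
        t = stack.pop()
        if id(t) not in seen:
            seen.add(id(t))
            stack.extend(t.parents)
    return len(seen)
```

In a plain chain the last thought's volume grows linearly with the chain length, whereas aggregation vertices inherit the predecessors of every merged branch, which is why GoT can attain much larger volumes.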
2308.09687#6
2308.09687#8
2308.09687
[ "2302.13971" ]
2308.09687#8
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We show that GoT, by incorporating thought transformations such as aggregation, enables thoughts to have fundamentally larger volumes than other schemes.

# 2 Background & Notation

We first outline background concepts and notation.

2.1 Language Models & In-Context Learning

The conversation with the LLM consists of user messages (prompts) and LLM replies (thoughts). We follow the established notation [77] and denote a pre-trained language model (LM) with parameters θ as p_θ. Lowercase letters such as x, y, z, ... indicate LLM thoughts. We purposefully do not prescribe what a single "thought" is, and instead make it use-case specific. Hence, a single thought can be a paragraph (e.g., in an article summary), a document (e.g., in document generation), a block of code (e.g., in code debugging or optimization), and so on. We next describe specific prompting approaches.

Input-Output (IO) The Input-Output (IO) prompting is a straightforward approach, in which we use an LLM to turn an input sequence x into the output y directly, without any intermediate thoughts.

Chain-of-Thought (CoT) Second, in Chain-of-Thought (CoT), one introduces intermediate thoughts a1, a2, ... between x and y. This strategy was shown to significantly enhance various LM tasks over the plain IO baseline, such as mathematical puzzles [71] or general mathematical reasoning [24].

Multiple CoTs Third, one can generalize CoT into multiple CoTs by generating k (independent) CoTs, and returning the one with the best output (according to some prescribed scoring metric). It was introduced by Wang et al. in the scheme called Self-Consistency with CoT (CoT-SC) [67]. This approach enhances CoT because it offers an opportunity to explore different reasoning paths. However, it does not offer "local exploration" within a path, such as backtracking.
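These three baselines differ only in whether intermediate thoughts are requested and in how many chains are sampled. A hedged sketch follows, where llm and score are placeholder callables rather than a concrete API, and the CoT step is elicited with a zero-shot-style instruction for brevity (CoT proper would supply worked exemplars in the prompt):

```python
from typing import Callable

def io(llm: Callable[[str], str], x: str) -> str:
    """IO prompting: input goes straight to output, no intermediate thoughts."""
    return llm(x)

def cot(llm: Callable[[str], str], x: str) -> str:
    """CoT: elicit intermediate reasoning steps a1, a2, ... before the answer."""
    return llm(x + "\nLet's think step by step.")

def cot_sc(llm: Callable[[str], str], x: str,
           score: Callable[[str], float], k: int = 10) -> str:
    """CoT-SC: sample k independent chains, return the best-scoring one."""
    chains = [cot(llm, x) for _ in range(k)]
    return max(chains, key=score)
```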
2308.09687#7
2308.09687#9
2308.09687
[ "2302.13971" ]
2308.09687#9
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
However, it does not offer â local explorationâ within a path, such as backtracking. Tree of Thoughts (ToT) Finally, the Tree of Thoughts (ToT) scheme was introduced independently by Yao [77] and Long [43] (where it is referred to as Tree-of-Thought); it was used implicitly to a certain degree by other schemes such as thought decomposition [75]. It enhances CoT-SC by modeling the process or reasoning as a tree of thoughts. A single tree node represents a partial solution. Based on a given node, the thought generator constructs a given number k of new nodes. Then, the state evaluator generates scores for each such new node. Depending on the use case, the eval- uation could be conducted using an LLM itself, or it can har- ness human scores. Finally, the schedule of extending the tree is dictated by the utilized search algorithm (for example BFS or DFS).
2308.09687#8
2308.09687#10
2308.09687
[ "2302.13971" ]