Schema of the chunk rows (for string fields, min/max are character lengths; for chunk-id, they are value bounds):

field            | type   | min | max
-----------------|--------|-----|------
doi              | string | 10  | 10
chunk-id         | int64  | 0   | 936
chunk            | string | 401 | 2.02k
id               | string | 12  | 14
title            | string | 8   | 162
summary          | string | 228 | 1.92k
source           | string | 31  | 31
authors          | string | 7   | 6.97k
categories       | string | 5   | 107
comment          | string | 4   | 398
journal_ref      | string | 8   | 194
primary_category | string | 5   | 17
published        | string | 8   | 8
updated          | string | 8   | 8
references       | list   |     |
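The rows below follow this schema one field per line. As a rough illustration, here is a minimal sketch of loading and inspecting such rows with the Hugging Face `datasets` library; it assumes the chunks are distributed as a dataset, and the dataset name is a placeholder, not a real identifier:

```python
# Minimal sketch: load rows matching the schema above and inspect one record.
# "example-org/arxiv-paper-chunks" is a hypothetical dataset name.
from datasets import load_dataset

ds = load_dataset("example-org/arxiv-paper-chunks", split="train")

row = ds[0]
print(row["doi"], row["chunk-id"])      # e.g. "2306.08302" 168
print(row["title"])                     # paper title
print(row["chunk"][:120])               # start of the bibliography chunk
print(row["published"], row["updated"]) # YYYYMMDD date strings
print(len(row["references"]))           # list of {"id": ...} arXiv ids
```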
doi: 2306.08302
chunk-id: 168
chunk: [162] Q. Guo, Y. Sun, G. Liu, Z. Wang, Z. Ji, Y. Shen, and X. Wang, “Constructing Chinese historical literature knowledge graph based on BERT,” in Web Information Systems and Applications: 18th International Conference, WISA 2021, Kaifeng, China, September 24–26, 2021, Proceedings 18. Springer, 2021, pp. 323–334. [163] J. Han, N. Collier, W. Buntine, and E. Shareghi, “PiVe: Prompting with iterative verification improving graph-based generative capability of LLMs,” arXiv preprint arXiv:2305.12392, 2023. [164] A. Bosselut, H. Rashkin, M. Sap, C. Malaviya, A. Celikyilmaz, and Y. Choi, “COMET: Commonsense transformers for knowledge graph construction,” in ACL, 2019. [165] S. Hao, B. Tan, K. Tang, H. Zhang, E. P. Xing, and Z. Hu, “BertNet: Harvesting knowledge graphs from pretrained language models,” arXiv preprint arXiv:2206.14268, 2022.
id: 2306.08302#168
title: Unifying Large Language Models and Knowledge Graphs: A Roadmap
summary: Large language models (LLMs), such as ChatGPT and GPT-4, are making new waves in the fields of natural language processing and artificial intelligence due to their emergent abilities and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, knowledge graphs (KGs), such as Wikipedia and Huapu, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges existing KG methods in generating new facts and representing unseen knowledge. Unifying LLMs and KGs is therefore complementary, letting each offset the other's weaknesses. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks: 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or to enhance understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, which leverage LLMs for KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way, enhancing both for bidirectional reasoning driven by data and knowledge alike. We review and summarize existing efforts within these three frameworks and pinpoint future research directions.
source: http://arxiv.org/pdf/2306.08302
authors: Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
categories: cs.CL, cs.AI
comment: A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
journal_ref: IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
primary_category: cs.CL
published: 20230614
updated: 20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
doi: 2306.08302
chunk-id: 169
chunk: [166] P. West, C. Bhagavatula, J. Hessel, J. Hwang, L. Jiang, R. Le Bras, X. Lu, S. Welleck, and Y. Choi, “Symbolic knowledge distillation: from general language models to commonsense models,” in NAACL, 2022, pp. 4602–4625. [167] L. F. R. Ribeiro, M. Schmitt, H. Schütze, and I. Gurevych, “Investigating pretrained language models for graph-to-text generation,” in Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, 2021, pp. 211–227. [168] J. Li, T. Tang, W. X. Zhao, Z. Wei, N. J. Yuan, and J.-R. Wen, “Few-shot knowledge graph-to-text generation with pretrained language models,” in ACL, 2021, pp. 1558–1568.
id: 2306.08302#169
doi: 2306.08302
chunk-id: 170
chunk: [169] A. Colas, M. Alvandipour, and D. Z. Wang, “GAP: A graph-aware language model framework for knowledge graph-to-text generation,” in Proceedings of the 29th International Conference on Computational Linguistics, 2022, pp. 5755–5769. [170] Z. Jin, Q. Guo, X. Qiu, and Z. Zhang, “GenWiki: A dataset of 1.3 million content-sharing text and graphs for unsupervised graph-to-text generation,” in Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 2398–2409. [171] W. Chen, Y. Su, X. Yan, and W. Y. Wang, “KGPT: Knowledge-grounded pre-training for data-to-text generation,” in EMNLP, 2020, pp. 8635–8648. [172] D. Lukovnikov, A. Fischer, and J. Lehmann, “Pretrained transformers for simple question answering over knowledge graphs,” in The Semantic Web–ISWC 2019: 18th International Semantic Web Conference, Auckland, New Zealand, October 26–30, 2019, Proceedings, Part I 18. Springer, 2019, pp. 470–486.
id: 2306.08302#170
doi: 2306.08302
chunk-id: 171
chunk: [173] D. Luo, J. Su, and S. Yu, “A BERT-based approach with relation-aware attention for knowledge base question answering,” in IJCNN. [174] N. Hu, Y. Wu, G. Qi, D. Min, J. Chen, J. Z. Pan, and Z. Ali, “An empirical study of pre-trained language models in simple knowledge graph question answering,” arXiv preprint arXiv:2303.10368, 2023. [175] Y. Xu, C. Zhu, R. Xu, Y. Liu, M. Zeng, and X. Huang, “Fusing context into knowledge graph for commonsense question answering,” in ACL, 2021, pp. 1201–1207. [176] M. Zhang, R. Dai, M. Dong, and T. He, “DRLK: Dynamic hierarchical reasoning with language model and knowledge graph for question answering,” in EMNLP, 2022, pp. 5123–5133.
id: 2306.08302#171
doi: 2306.08302
chunk-id: 172
chunk: [177] Z. Hu, Y. Xu, W. Yu, S. Wang, Z. Yang, C. Zhu, K.-W. Chang, and Y. Sun, “Empowering language models with knowledge graph reasoning for open-domain question answering,” in EMNLP, 2022, pp. 9562–9581. [178] X. Zhang, A. Bosselut, M. Yasunaga, H. Ren, P. Liang, C. D. Manning, and J. Leskovec, “GreaseLM: Graph reasoning enhanced language models,” in ICLR, 2022. [179] X. Cao and Y. Liu, “ReLMKG: Reasoning with pre-trained language models and knowledge graphs for complex question answering,” Applied Intelligence, pp. 1–15, 2022. [180] X. Huang, J. Zhang, D. Li, and P. Li, “Knowledge graph embedding based question answering,” in WSDM, 2019, pp. 105–113. [181] H. Wang, F. Zhang, X. Xie, and M. Guo, “DKN: Deep knowledge-aware network for news recommendation,” in WWW, 2018, pp. 1835–1844.
id: 2306.08302#172
doi: 2306.08302
chunk-id: 173
chunk: [182] B. Yang, S. W.-t. Yih, X. He, J. Gao, and L. Deng, “Embedding entities and relations for learning and inference in knowledge bases,” in ICLR, 2015. [183] W. Xiong, M. Yu, S. Chang, X. Guo, and W. Y. Wang, “One-shot relational learning for knowledge graphs,” in EMNLP, 2018, pp. 1980–1990. [184] P. Wang, J. Han, C. Li, and R. Pan, “Logic attention based neighborhood aggregation for inductive knowledge graph embedding,” in AAAI, vol. 33, no. 01, 2019, pp. 7152–7159. [185] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu, “Learning entity and relation embeddings for knowledge graph completion,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 29, no. 1, 2015.
id: 2306.08302#173
doi: 2306.08302
chunk-id: 174
chunk: [186] C. Chen, Y. Wang, A. Sun, B. Li, and L. Kwok-Yan, “Dipping PLMs sauce: Bridging structure and text for effective knowledge graph completion via conditional soft prompting,” in ACL, 2023. [187] J. Lovelace and C. P. Rosé, “A framework for adapting pre-trained language models to knowledge graph completion,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, 2022, pp. 5937–5955. [188] J. Fu, L. Feng, Q. Zhang, X. Huang, and P. Liu, “Larger-context tagging: When and why does it work?” in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, 2021, pp. 1463–1475.
id: 2306.08302#174
doi: 2306.08302
chunk-id: 175
chunk: [189] X. Liu, K. Ji, Y. Fu, Z. Du, Z. Yang, and J. Tang, “P-Tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks,” CoRR, vol. abs/2110.07602, 2021. [190] J. Yu, B. Bohnet, and M. Poesio, “Named entity recognition as dependency parsing,” in ACL, 2020, pp. 6470–6476. [191] F. Li, Z. Lin, M. Zhang, and D. Ji, “A span-based model for joint overlapped and discontinuous named entity recognition,” in ACL, 2021, pp. 4814–4828.
id: 2306.08302#175
doi: 2306.08302
chunk-id: 176
chunk: [192] C. Tan, W. Qiu, M. Chen, R. Wang, and F. Huang, “Boundary enhanced neural span classification for nested named entity recognition,” in The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, 2020, pp. 9016–9023. [193] Y. Xu, H. Huang, C. Feng, and Y. Hu, “A supervised multi-head self-attention network for nested named entity recognition,” in Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, 2021, pp. 14185–14193. [194] J. Yu, B. Ji, S. Li, J. Ma, H. Liu, and H. Xu, “S-NER: A concise and efficient span-based model for named entity recognition,” Sensors, vol. 22, no. 8, p. 2852, 2022.
id: 2306.08302#176
doi: 2306.08302
chunk-id: 177
chunk: [195] Y. Fu, C. Tan, M. Chen, S. Huang, and F. Huang, “Nested named entity recognition with partially-observed TreeCRFs,” in AAAI, 2021, pp. 12839–12847. [196] C. Lou, S. Yang, and K. Tu, “Nested named entity recognition as latent lexicalized constituency parsing,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, 2022, pp. 6183–6198. [197] S. Yang and K. Tu, “Bottom-up constituency parsing and nested named entity recognition with pointer networks,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, 2022, pp. 2403–2416.
id: 2306.08302#177
doi: 2306.08302
chunk-id: 178
chunk: [198] F. Li, Z. Lin, M. Zhang, and D. Ji, “A span-based model for joint overlapped and discontinuous named entity recognition,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, 2021, pp. 4814–4828. [199] Q. Liu, H. Lin, X. Xiao, X. Han, L. Sun, and H. Wu, “Fine-grained entity typing via label reasoning,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, 2021, pp. 4611–4622. [200] H. Dai, Y. Song, and H. Wang, “Ultra-fine entity typing with weak supervision from a masked language model,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, 2021, pp. 1790–1799.
id: 2306.08302#178
doi: 2306.08302
chunk-id: 179
chunk: [201] N. Ding, Y. Chen, X. Han, G. Xu, X. Wang, P. Xie, H. Zheng, Z. Liu, J. Li, and H. Kim, “Prompt-learning for fine-grained entity typing,” in Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, 2022, pp. 6888–6901. [202] W. Pan, W. Wei, and F. Zhu, “Automatic noisy label correction for fine-grained entity typing,” in Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, 2022, pp. 4317–4323. [203] B. Li, W. Yin, and M. Chen, “Ultra-fine entity typing with indirect supervision from natural language inference,” Trans. Assoc. Comput. Linguistics, vol. 10, pp. 607–622, 2022.
id: 2306.08302#179
doi: 2306.08302
chunk-id: 180
chunk: [204] S. Broscheit, “Investigating entity knowledge in BERT with simple neural end-to-end entity linking,” CoRR, vol. abs/2003.05473, 2020. [205] N. D. Cao, G. Izacard, S. Riedel, and F. Petroni, “Autoregressive entity retrieval,” in 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, 2021. [206] N. D. Cao, L. Wu, K. Popat, M. Artetxe, N. Goyal, M. Plekhanov, L. Zettlemoyer, N. Cancedda, S. Riedel, and F. Petroni, “Multilingual autoregressive entity linking,” Trans. Assoc. Comput. Linguistics, vol. 10, pp. 274–290, 2022. [207] N. D. Cao, W. Aziz, and I. Titov, “Highly parallel autoregressive entity linking with discriminative correction,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, 2021, pp. 7662–7669.
id: 2306.08302#180
2306.08302
181
[208] K. Lee, L. He, and L. Zettlemoyer, “Higher-order coreference resolution with coarse-to-fine inference,” in NAACL, 2018, pp. 687–692. [209] T. M. Lai, T. Bui, and D. S. Kim, “End-to-end neural coreference resolution revisited: A simple yet effective baseline,” in IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022, Virtual and Singapore, 23-27 May 2022, 2022, pp. 8147–8151. [210] W. Wu, F. Wang, A. Yuan, F. Wu, and J. Li, “Corefqa: Coreference resolution as query-based span prediction,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, 2020, pp. 6953–6963.
[211] T. M. Lai, H. Ji, T. Bui, Q. H. Tran, F. Dernoncourt, and W. Chang, “A context-dependent gated module for incorporating symbolic semantics into event coreference resolution,” in NAACL-HLT, 2021, pp. 3491–3499.
[212] Y. Kirstain, O. Ram, and O. Levy, “Coreference resolution without span representations,” in ACL/IJCNLP (Volume 2: Short Papers), 2021, pp. 14–19.
[213] R. Thirukovalluru, N. Monath, K. Shridhar, M. Zaheer, M. Sachan, and A. McCallum, “Scaling within document coreference to long texts,” in Findings of ACL/IJCNLP, 2021, pp. 3921–3931.
[214] I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long-document transformer,” arXiv preprint arXiv:2004.05150, 2020.
[215] C. Alt, M. Hübner, and L. Hennig, “Improving relation extraction by pre-trained language representations,” in AKBC, 2019.
[216] L. B. Soares, N. FitzGerald, J. Ling, and T. Kwiatkowski, “Matching the blanks: Distributional similarity for relation learning,” in ACL, 2019, pp. 2895–2905.
[217] S. Lyu and H. Chen, “Relation classification with entity type restriction,” in Findings of ACL/IJCNLP, 2021, pp. 390–395.
[218] J. Zheng and Z. Chen, “Sentence-level relation extraction via contrastive learning with descriptive relation prompts,” arXiv preprint arXiv:2304.04935, 2023.
[219] H. Wang, C. Focke, R. Sylvester, N. Mishra, and W. Y. Wang, “Fine-tune BERT for DocRED with two-step process,” arXiv preprint arXiv:1909.11898, 2019.
[220] H. Tang, Y. Cao, Z. Zhang, J. Cao, F. Fang, S. Wang, and P. Yin, “HIN: Hierarchical inference network for document-level relation extraction,” in PAKDD, ser. Lecture Notes in Computer Science, vol. 12084, 2020, pp. 197–209.
[221] D. Wang, W. Hu, E. Cao, and W. Sun, “Global-to-local neural networks for document-level relation extraction,” in EMNLP, 2020, pp. 3711–3721.
[222] S. Zeng, Y. Wu, and B. Chang, “SIRE: Separate intra- and inter-sentential reasoning for document-level relation extraction,” in Findings of ACL/IJCNLP, 2021, pp. 524–534.
[223] G. Nan, Z. Guo, I. Sekulic, and W. Lu, “Reasoning with latent structure refinement for document-level relation extraction,” in ACL, 2020, pp. 1546–1557.
[224] S. Zeng, R. Xu, B. Chang, and L. Li, “Double graph based reasoning for document-level relation extraction,” in EMNLP, 2020, pp. 1630–1640.
[225] N. Zhang, X. Chen, X. Xie, S. Deng, C. Tan, M. Chen, F. Huang, L. Si, and H. Chen, “Document-level relation extraction as semantic segmentation,” in IJCAI, 2021, pp. 3999–4006.
[226] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in MICCAI, ser. Lecture Notes in Computer Science, vol. 9351, 2015, pp. 234–241.
[227] W. Zhou, K. Huang, T. Ma, and J. Huang, “Document-level relation extraction with adaptive thresholding and localized context pooling,” in AAAI, 2021, pp. 14612–14620.
[228] C. Gardent, A. Shimorina, S. Narayan, and L. Perez-Beltrachini, “The WebNLG challenge: Generating text from RDF data,” in INLG, 2017, pp. 124–133.
[229] J. Guan, Y. Wang, and M. Huang, “Story ending generation with incremental encoding and commonsense knowledge,” in AAAI, 2019, pp. 6473–6480.
[230] H. Zhou, T. Young, M. Huang, H. Zhao, J. Xu, and X. Zhu, “Commonsense knowledge aware conversation generation with graph attention,” in IJCAI, 2018, pp. 4623–4629.
[231] M. Kale and A. Rastogi, “Text-to-text pre-training for data-to-text tasks,” in INLG, 2020, pp. 97–102.
[232] M. Mintz, S. Bills, R. Snow, and D. Jurafsky, “Distant supervision for relation extraction without labeled data,” in ACL, 2009, pp. 1003–1011.
[233] A. Saxena, A. Tripathi, and P. Talukdar, “Improving multi-hop question answering over knowledge graphs using knowledge base embeddings,” in ACL, 2020, pp. 4498–4507.
[234] Y. Feng, X. Chen, B. Y. Lin, P. Wang, J. Yan, and X. Ren, “Scalable multi-hop relational reasoning for knowledge-aware question answering,” in EMNLP, 2020, pp. 1295–1309.
[235] Y. Yan, R. Li, S. Wang, H. Zhang, D. Zan, F. Zhang, W. Wu, and W. Xu, “Large-scale relation learning for question answering over knowledge bases with pre-trained language models,” in EMNLP, 2021, pp. 3653–3660.
[236] J. Zhang, X. Zhang, J. Yu, J. Tang, J. Tang, C. Li, and H. Chen, “Subgraph retrieval enhanced model for multi-hop knowledge base question answering,” in ACL (Volume 1: Long Papers), 2022, pp. 5773–5784.
[237] J. Jiang, K. Zhou, Z. Dong, K. Ye, W. X. Zhao, and J.-R. Wen, “StructGPT: A general framework for large language model to reason over structured data,” arXiv preprint arXiv:2305.09645, 2023.
[238] H. Zhu, H. Peng, Z. Lyu, L. Hou, J. Li, and J. Xiao, “Pre-training language model incorporating domain-specific heterogeneous knowledge into a unified representation,” Expert Systems with Applications, vol. 215, p. 119369, 2023.
[239] C. Feng, X. Zhang, and Z. Fei, “Knowledge solver: Teaching LLMs to search for domain knowledge from knowledge graphs,” arXiv preprint arXiv:2309.03118, 2023.
[240] J. Sun, C. Xu, L. Tang, S. Wang, C. Lin, Y. Gong, H.-Y. Shum, and J. Guo, “Think-on-Graph: Deep and responsible reasoning of large language model with knowledge graph,” arXiv preprint arXiv:2307.07697, 2023.
[241] B. He, D. Zhou, J. Xiao, X. Jiang, Q. Liu, N. J. Yuan, and T. Xu, “BERT-MK: Integrating graph contextualized knowledge into pre-trained language models,” in EMNLP, 2020, pp. 2281–2290.
[242] Y. Su, X. Han, Z. Zhang, Y. Lin, P. Li, Z. Liu, J. Zhou, and M. Sun, “CokeBERT: Contextual knowledge selection and embedding towards enhanced pre-trained language models,” AI Open, vol. 2, pp. 127–134, 2021.
[243] D. Yu, C. Zhu, Y. Yang, and M. Zeng, “JAKET: Joint pre-training of knowledge graph and language understanding,” in AAAI, 2022, pp. 11630–11638.
[244] X. Wang, P. Kapanipathi, R. Musa, M. Yu, K. Talamadupula, I. Abdelaziz, M. Chang, A. Fokoue, B. Makni, N. Mattei, and M. Witbrock, “Improving natural language inference using external knowledge in the science questions domain,” in AAAI, 2019, pp. 7208–7215.
[245] Y. Sun, Q. Shi, L. Qi, and Y. Zhang, “JointLK: Joint reasoning with language models and knowledge graphs for commonsense question answering,” in NAACL, 2022, pp. 5049–5060.
[246] X. Liu, H. Yu, H. Zhang, Y. Xu, X. Lei, H. Lai, Y. Gu, H. Ding, K. Men, K. Yang et al., “AgentBench: Evaluating LLMs as agents,” arXiv preprint arXiv:2308.03688, 2023.
[247] Y. Wang, N. Lipka, R. A. Rossi, A. Siu, R. Zhang, and T. Derr, “Knowledge graph prompting for multi-document question answering,” arXiv preprint arXiv:2308.11730, 2023.
[248] A. Zeng, M. Liu, R. Lu, B. Wang, X. Liu, Y. Dong, and J. Tang, “AgentTuning: Enabling generalized agent abilities for LLMs,” 2023.
[249] W. Kryściński, B. McCann, C. Xiong, and R. Socher, “Evaluating the factual consistency of abstractive text summarization,” arXiv preprint arXiv:1910.12840, 2019.
[250] Z. Ji, Z. Liu, N. Lee, T. Yu, B. Wilie, M. Zeng, and P. Fung, “RHO (ρ): Reducing hallucination in open-domain dialogues with knowledge grounding,” arXiv preprint arXiv:2212.01588, 2022.
[251] S. Feng, V. Balachandran, Y. Bai, and Y. Tsvetkov, “FactKB: Generalizable factuality evaluation using language models enhanced with factual knowledge,” arXiv preprint arXiv:2305.08281, 2023.
[252] Y. Yao, P. Wang, B. Tian, S. Cheng, Z. Li, S. Deng, H. Chen, and N. Zhang, “Editing large language models: Problems, methods, and opportunities,” arXiv preprint arXiv:2305.13172, 2023.
[253] Z. Li, N. Zhang, Y. Yao, M. Wang, X. Chen, and H. Chen, “Unveiling the pitfalls of knowledge editing for large language models,” arXiv preprint arXiv:2310.02129, 2023.
[254] R. Cohen, E. Biran, O. Yoran, A. Globerson, and M. Geva, “Evaluating the ripple effects of knowledge editing in language models,” arXiv preprint arXiv:2307.12976, 2023.
[255] S. Diao, Z. Huang, R. Xu, X. Li, Y. Lin, X. Zhou, and T. Zhang, “Black-box prompt learning for pre-trained language models,” arXiv preprint arXiv:2201.08531, 2022.
[256] T. Sun, Y. Shao, H. Qian, X. Huang, and X. Qiu, “Black-box tuning for language-model-as-a-service,” in International Conference on Machine Learning. PMLR, 2022, pp. 20841–20855.
[257] X. Chen, A. Shrivastava, and A. Gupta, “NEIL: Extracting visual knowledge from web data,” in ICCV, 2013, pp. 1409–1416.
[258] M. Warren and P. J. Hayes, “Bounding ambiguity: Experiences with an image annotation system,” in Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, ser. CEUR Workshop Proceedings, vol. 2276, 2018, pp. 41–54.
[259] Z. Chen, Y. Huang, J. Chen, Y. Geng, Y. Fang, J. Z. Pan, N. Zhang, and W. Zhang, “LaKo: Knowledge-driven visual question answering via late knowledge-to-text injection,” 2022. [260] R. Girdhar, A. El-Nouby, Z. Liu, M. Singh, K. V. Alwala, A. Joulin, and I. Misra, “Imagebind: One embedding space to bind them all,” in ICCV, 2023, pp. 15180–15190. [261] J. Zhang, Z. Yin, P. Chen, and S. Nichele, “Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review,” Information Fusion, vol. 59, pp. 103–126, 2020. [262] H. Zhang, B. Wu, X. Yuan, S. Pan, H. Tong, and J. Pei, “Trustworthy graph neural networks: Aspects, methods and trends,” arXiv:2205.07424, 2022.
2306.08302#195
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08302
196
[263] T. Wu, M. Caccia, Z. Li, Y.-F. Li, G. Qi, and G. Haffari, “Pretrained language model in continual learning: A comparative study,” in ICLR, 2022. [264] X. L. Li, A. Kuncoro, J. Hoffmann, C. de Masson d’Autume, P. Blunsom, and A. Nematzadeh, “A systematic investigation of commonsense knowledge in large language models,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022, pp. 11838–11855. [265] Y. Zheng, H. Y. Koh, J. Ju, A. T. Nguyen, L. T. May, G. I. Webb, and S. Pan, “Large language models for scientific synthesis, inference and explanation,” arXiv preprint arXiv:2310.07984, 2023.
2306.08302#196
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08302
197
[266] B. Min, H. Ross, E. Sulem, A. P. B. Veyseh, T. H. Nguyen, O. Sainz, E. Agirre, I. Heintz, and D. Roth, “Recent advances in natural language processing via large pre-trained language models: A survey,” ACM Computing Surveys, vol. 56, no. 2, pp. 1–40, 2023. [267] J. Wei, M. Bosma, V. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, “Finetuned language models are zero-shot learners,” in International Conference on Learning Representations, 2021. [268] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu, X. Huang, E. Zhao, Y. Zhang, Y. Chen, L. Wang, A. T. Luu, W. Bi, F. Shi, and S. Shi, “Siren’s song in the ai ocean: A survey on hallucination in large language models,” arXiv preprint arXiv:2309.01219, 2023.
2306.08302#197
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08302
198
APPENDIX A PROS AND CONS FOR LLMS AND KGS In this section, we introduce the pros and cons of LLMs and KGs in detail. We summarize the pros and cons of LLMs and KGs in Fig. 1. # LLM pros. General Knowledge [11]: LLMs are pre-trained on large-scale corpora that contain a large amount of general knowledge, such as commonsense knowledge [264] and factual knowledge [14]. Such knowledge can be distilled from LLMs and used for downstream tasks [265]. Language Processing [12]: LLMs have shown great performance in understanding natural language [266]. Therefore, LLMs can be used in many natural language processing tasks, such as question answering [4], machine translation [5], and text generation [6].
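A cloze-style probe is one common way to surface the factual knowledge stored implicitly in a pre-trained model. Below is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model choice and prompt are illustrative assumptions, not taken from the survey.

```python
from transformers import pipeline

# Probe factual knowledge held implicitly in a masked language model.
# Model and prompt are illustrative assumptions, not from the survey.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("The capital of France is [MASK]."):
    # Each prediction pairs a candidate token with the model's confidence.
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```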
2306.08302#198
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08302
199
• Generalizability [13]: LLMs exhibit strong generalizability and can be applied to various downstream tasks [267]. By providing few-shot examples [59] or fine-tuning on multi-task data [3], LLMs achieve great performance on many tasks. # LLM cons. Implicit Knowledge [14]: LLMs represent knowledge implicitly in their parameters. It is difficult to interpret or validate the knowledge obtained by LLMs. • Hallucination [15]: LLMs often hallucinate, generating content that, while seemingly plausible, is factually incorrect [268]. This problem greatly reduces the trustworthiness of LLMs in real-world scenarios. Indecisiveness [16]: LLMs perform reasoning by generating from a probabilistic model, which is an indecisive process. The generated results are sampled from a probability distribution, which is difficult to control. Black-box [17]: LLMs are criticized for their lack of interpretability. It is unclear what specific patterns and functions LLMs use to arrive at predictions or decisions. Lacking Domain-specific/New Knowledge [18]: LLMs trained on general corpora might not generalize well to specific domains or new knowledge, due to the lack of domain-specific knowledge or new training data.
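To make the indecisiveness point concrete, the sketch below repeatedly samples from a toy next-token distribution; the vocabulary and probabilities are invented for illustration.

```python
import numpy as np

# Toy next-token distribution; tokens and probabilities are invented.
tokens = ["Paris", "Lyon", "Marseille"]
probs = np.array([0.6, 0.3, 0.1])

rng = np.random.default_rng()
# Generation is stochastic: repeated draws over the same distribution can
# yield different "answers", unlike a deterministic lookup in a KG.
for _ in range(5):
    print(rng.choice(tokens, p=probs))
```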
2306.08302#199
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08302
200
# KG pros. Structural Knowledge [19]: KGs store facts in a structural format (i.e., triples), which can be understood by both humans and machines. • Accuracy [20]: Facts in KGs are usually manually curated or validated by experts, making them more accurate and dependable than those in LLMs. Decisiveness [21]: The factual knowledge in KGs is stored in a decisive manner. The reasoning algorithms over KGs are also deterministic and can provide decisive results. Interpretability [22]: KGs are renowned for their symbolic reasoning ability, which provides an interpretable reasoning process that can be understood by humans. • Domain-specific Knowledge [23]: Many domains can construct their own KGs with expert curation to provide precise and dependable domain-specific knowledge. • Evolving Knowledge [24]: The facts in KGs are continuously evolving. KGs can be updated with new facts by inserting new triples and deleting outdated ones, as sketched below. # KG cons.
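A minimal in-memory triple store, showing how structured facts can be queried deterministically and updated by inserting and deleting triples; the facts shown are invented examples.

```python
# Minimal in-memory triple store; the facts below are invented examples.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def insert(self, head, relation, tail):
        self.triples.add((head, relation, tail))

    def delete(self, head, relation, tail):
        self.triples.discard((head, relation, tail))

    def query(self, head, relation):
        # Deterministic lookup: every matching tail is returned, nothing is sampled.
        return {t for h, r, t in self.triples if h == head and r == relation}

kg = TripleStore()
kg.insert("Bonn", "capital_of", "Germany")     # an outdated fact
kg.delete("Bonn", "capital_of", "Germany")     # evolve the KG ...
kg.insert("Berlin", "capital_of", "Germany")   # ... with the current fact
print(kg.query("Berlin", "capital_of"))        # {'Germany'}
```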
2306.08302#200
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08302
201
Incompleteness [25]: KGs are hard to construct and often incomplete, which limits their ability to provide comprehensive knowledge. Lacking Language Understanding [33]: Most studies on KGs model the structure of knowledge but ignore the textual information it carries; as a result, textual information is underused in KG-related tasks such as KG completion [26] and KGQA [43]. • Unseen Facts [27]: KGs are dynamically changing, which makes it difficult to model unseen entities and represent new facts.
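KG completion methods typically score candidate triples with an embedding model. Below is a minimal sketch of the classic TransE score, shown as one standard choice rather than a method proposed in this survey, with random vectors standing in for trained embeddings.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
dim = 8
# Random vectors stand in for trained entity and relation embeddings.
emb = {name: rng.normal(size=dim)
       for name in ["Paris", "Berlin", "France", "capital_of"]}

def transe_score(h, r, t):
    # TransE models a true fact (h, r, t) as h + r ≈ t in embedding space,
    # so a lower distance means a more plausible triple.
    return float(np.linalg.norm(emb[h] + emb[r] - emb[t]))

for tail in ["France", "Berlin"]:
    print(f"(Paris, capital_of, {tail}) -> "
          f"{transe_score('Paris', 'capital_of', tail):.3f}")
```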
2306.08302#201
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.07906
0
arXiv:2306.07906v1 [cs.CL] 13 Jun 2023 WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences Hanyu Lai∗ [email protected] Tsinghua University Beijing, China Yifan Xu [email protected] Tsinghua University Beijing, China Aohan Zeng [email protected] Tsinghua University Beijing, China Zhengxiao Du [email protected] Tsinghua University Beijing, China Peng Zhang [email protected] Zhipu.AI Beijing, China Yuxiao Dong† [email protected] Tsinghua University Beijing, China Jie Tang† [email protected] Tsinghua University Beijing, China
2306.07906#0
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
1
# Abstract Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT’s performance in two controllable generation tasks, with respect to ChatGPT’s ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model’s performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.1 # Introduction
2306.07799#1
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
1
Abstract We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at https://github.com/THUDM/WebGLM.
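The abstract describes a three-stage architecture: an LLM-augmented retriever, a bootstrapped generator, and a human preference-aware scorer. The sketch below shows how such stages might compose; every callable here is a hypothetical placeholder, not WebGLM's actual API, for which see the linked repository.

```python
# Hypothetical retrieve-generate-score pipeline in the spirit of WebGLM's
# description; all callables are placeholders, not the project's real API.
def answer(question, search, retrieve, generate, score, k=5, n_candidates=4):
    pages = search(question)                     # fetch web search results
    refs = retrieve(question, pages, top_k=k)    # LLM-augmented reference selection
    candidates = [generate(question, refs) for _ in range(n_candidates)]
    # The human preference-aware scorer ranks candidate answers.
    best = max(candidates, key=lambda ans: score(question, ans))
    return best, refs
```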
2306.07906#1
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
2
# Introduction Generative Pre-trained Transformer (GPT; e.g., ChatGPT) models, which produce results from given conditional input prompts, have exhibited exceptional performance on various natural language understanding (NLU) and generation (NLG) tasks (Jiao et al., 2023; Wang et al., 2023a; Bang et al., 2023b; Zhou et al., 2023; Dai et al., 2023). For instance, in NLU tasks, Qin et al. (2023) have proved that ChatGPT is comparable to state-of-the-art fine-tuning models in language reasoning. In NLG tasks, Yang et al. (2023a) assessed four widely used benchmark datasets, such as QMSum, and confirmed ChatGPT’s comparability to traditional fine-tuning methods. Peng et al. (2023) further investigated effective strategies for machine translation using ChatGPT and highlighted its strong
2306.07799#2
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
2
Figure 1: A screenshot of WebGLM’s response to an example question with web references. CCS Concepts • Computing methodologies → Natural language generation; • Software and its engineering → Development frameworks and environments. ∗XL, HL, and HY contributed equally and this work was done when HY interned at Tsinghua. †Corresponding Authors: YD and JT.
2306.07906#2
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
3
translation ability. Additionally, ChatGPT can even facilitate multi-modal tasks (Yang et al., 2023b; Shen et al., 2023), as well as the application of data augmentation (Dai et al., 2023). Although the studies mentioned above have demonstrated notable performance of ChatGPT across different domains, there remains a dearth of qualitative and quantitative evaluation of the texts generated by ChatGPT. Such an evaluation is vital to uncover the behavioral differences, potential limitations, and challenges associated with ChatGPT-generated texts, especially when compared with human-authored texts.
2306.07799#3
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
3
# Keywords Large Language Model; Pre-Trained Model; Human Preference Alignment; General Language Model ACM Reference Format: Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023. WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’23), August 6–10, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 42 pages. https://doi.org/10.1145/3580305.3599931
2306.07906#3
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
4
Controllable text generation seems to be a task in which ChatGPT-like models could potentially excel. This task is driven by the desire to tailor text for a diverse array of target users (e.g., experts and laypersons) (Kumar et al., 2022; Cao et al., 2020; Luo et al., 2022), thereby enhancing the accessibility of textual information. In controllable text generation, one delineates a particular set of parameters or provides a prompt that defines the intended target style. This area has recently received growing interest from researchers in the field (Hu and Li, 2021; Li et al., 2022; Zhang et al., 2022; Dathathri et al., 2019a; August et al., 2022; Carlsson et al., 2022; Gu et al., 2022; Li et al., 2022; Keskar et al., 2019; Dathathri et al., 2019b). The traditional natural language generation task (Pu and Sima’an, 2022), which focuses solely on adequately responding with respect to a given input, can be regarded as a special case of controllable natural language generation, wherein the control setting remains unconditioned.
2306.07799#4
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
4
1 Introduction Large language models (LLMs), such as GPT-3 [3], PaLM [5], OPT [37], BLOOM [32], and GLM-130B [36], have significantly pushed the boundary of machines’ ability on language understanding and generation. Question answering [15, 28], one of the most fundamental language applications, has also been substantially advanced by the recent LLM developments. Existing studies suggest that the
2306.07906#4
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
5
Considering ChatGPT as the most recent language generation capability, the assessment of its language generation proficiency, specifically in the realm of controllable language generation, remains largely uncharted. Therefore, our study delves into two distinct applications of ChatGPT, namely controllable summary generation and sentence style transfer. 1The project information of our study can be accessed at https://dongqi.me/projects/ChatGPT_vs_Human.
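Both tasks boil down to style-conditioned prompting. The template sketch below is an illustrative assumption, since the paper's exact prompt wording is not reproduced in this excerpt.

```python
# Illustrative style-conditioned prompt templates; the paper's exact
# prompt wording is an assumption here, not quoted from the source.
TEMPLATES = {
    ("summary", "expert"): "Summarize the following article for domain experts:\n{text}",
    ("summary", "layman"): "Summarize the following article for a general audience:\n{text}",
    ("style", "formal"): "Rewrite the following sentence in a formal style:\n{text}",
    ("style", "informal"): "Rewrite the following sentence in an informal style:\n{text}",
}

def build_prompt(task, target, text):
    return TEMPLATES[(task, target)].format(text=text)

print(build_prompt("style", "formal", "gonna check this out later"))
```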
2306.07799#5
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
5
[Figure 2: The win rates of popular web-enhanced QA systems against human references (y-axis: win rate against human, %; systems: WebGLM (10B), WebGPT (175B), WebGPT (13B), Perplexity.ai; a human-level line is marked). WebGLM (10B) performs comparably to WebGPT (175B), approaching human-level QA ability.]

performance of LLMs' closed-book QA [29] and in-context learning QA [3, 18] is comparable to supervised models, furthering our understanding of LLMs' potential to memorize knowledge.
2306.07906#5
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
6
In the former, we examine ChatGPT's ability to generate summaries that cater to two distinct readerships, namely experts and non-experts, for a given piece of academic literature. Concerning sentence style transfer, we investigate ChatGPT's capability to generate both formal and informal sentences for the task of sentence formality. The objective of this study is to tackle the research question: in relation to human-produced text, to what extent does ChatGPT-generated content demonstrate significant divergence from human behavior and potential susceptibility to inaccuracies? Our primary contributions are enumerated below:

• To the best of our knowledge, we are the first to utilize ChatGPT to evaluate its effectiveness in controllable text generation.

• Our findings indicate that there are substantial performance disparities between the text generated by ChatGPT and that generated by humans.

• Our study exposes and quantifies the existence of numerous hard-to-spot errors in the text generated by ChatGPT, which have a tendency to amplify with successive transformations of the text.
2306.07799#6
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
6
However, even for LLMs, their capacity is not unlimited: when it comes to challenges that require sufficiently rare knowledge, LLMs fail to meet human expectations. Hence, recent efforts have focused on augmenting LLMs with external knowledge, such as retrieval [8, 12, 16] and web search [24]. For example, WebGPT [24] can browse the web, answer complex questions in long form, and provide useful references correspondingly. Despite its success, the original WebGPT method [24] is far from real-world deployment. First, it relies on abundant expert-level annotations of browsing trajectories, well-written answers, and answer preference labeling, requiring considerable expense, time, and training. Second, the behavior cloning method (i.e., imitation learning) requires its base model GPT-3 to emulate human experts by instructing the system to interact with a web browser, issue operation commands (e.g., Search, Read, and Quote), and then retrieve relevant information from online sources. Finally, the multi-turn nature of web browsing demands intensive computation resources and can be too slow for a good user experience, e.g., costing about 31 seconds for WebGPT-13B to respond to a 500-token prompt (an illustrative sketch of such a browsing loop follows below).
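A purely illustrative sketch of the command-style browsing loop described above; the `Command` structure, `model.next_command`, and the `browser` interface are assumptions for illustration, not WebGPT's actual implementation. Only the Search/Read/Quote operation names come from the text.

```python
# Illustrative only: a multi-turn browsing episode in which a policy model
# issues Search/Read/Quote commands until it decides to stop.
from dataclasses import dataclass

@dataclass
class Command:
    op: str   # e.g. "Search", "Read", "Quote" as mentioned in the text
    arg: str

def browse(question: str, model, browser, max_turns: int = 10) -> list[str]:
    """Run a browsing episode and collect quoted evidence spans."""
    quotes: list[str] = []
    observation = question
    for _ in range(max_turns):
        cmd = model.next_command(observation)  # hypothetical policy call
        if cmd.op == "Search":
            observation = browser.search(cmd.arg)
        elif cmd.op == "Read":
            observation = browser.read(cmd.arg)
        elif cmd.op == "Quote":
            quotes.append(cmd.arg)             # keep the quoted span
        else:                                  # e.g. an "End" action
            break
    return quotes
```

The point of the sketch is the cost model: every turn is a full model call, which is why the multi-turn loop is slow and compute-hungry.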
2306.07906#6
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
7
# 2 Related Work

# 2.1 Controllable Text Summarization

Controllable text summarization is a rapidly evolving field that aims to produce summaries with specific characteristics, such as length, style, or content (Shen et al., 2022b; Chan et al., 2021; Sarkhel et al., 2020; Shen et al., 2022a; Goldsack et al., 2022; Keskar et al., 2019; Dathathri et al., 2019b; He et al., 2022; Earle et al., 2021; Liu et al., 2022b). A range of approaches has been proposed for this task, including the use of sequence-to-sequence models such as the Transformer model (Vaswani et al., 2017). These models have demonstrated promising progress in producing high-quality summaries that can be modulated according to specific requirements (Fan et al., 2018; Wu et al., 2021; Amplayo et al., 2021). Additionally, other techniques have also been proposed to enhance the controllability of the summaries, such as conditional generation (He et al., 2022; Luo et al., 2022), prompt-based summarization (Yang et al., 2022; Liu et al., 2022a; Zhang and Song, 2022), and multi-task learning (Cui and Hu, 2021; Gu et al., 2022).
2306.07799#7
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
7
In this work, we present WebGLM—a practical web-enhanced QA system based on the 10-billion-parameter General Language Model (GLM-10B) [6]. An example is illustrated in Figure 1. It is efficient, cost-effective, human preference-aware, and, most importantly, of comparable quality to WebGPT. The system employs multiple new strategies and designs to achieve good performance, including:

An LLM-augmented Retriever: a two-staged retriever that implements coarse-grained web search and fine-grained LLM-distilled retrieval. It is inspired by the fact that LLMs like GPT-3 can naturally learn to adopt correct references, and such ability can be distilled to improve smaller dense retrievers (see the sketch below).

A Bootstrapped Generator: a GLM-10B based answer generator that is trained on quoted long-formed QA samples and bootstrapped by LLM in-context learning. We discover that, instead of relying on expensive human expert writing as in WebGPT, LLMs can be enabled to learn to generate high-quality data with proper citation-based filtering.
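A minimal sketch of what distilling the LLM's reference-adoption ability into a small dense retriever could look like: passages the LLM actually cited are treated as positives in a standard in-batch contrastive objective. The `encoder` interface, loss, and temperature are assumptions for illustration, not WebGLM's released training code.

```python
# Sketch: passages cited by the LLM serve as positives; other passages in
# the batch act as in-batch negatives (standard dense-retrieval training).
import torch
import torch.nn.functional as F

def contrastive_step(encoder, queries, cited_passages, temperature=0.05):
    """queries[i] is paired with cited_passages[i]; encoder is any module
    mapping a batch of texts to (B, d) embeddings (assumed interface)."""
    q = F.normalize(encoder(queries), dim=-1)         # (B, d)
    p = F.normalize(encoder(cited_passages), dim=-1)  # (B, d)
    logits = q @ p.T / temperature                    # (B, B) similarities
    labels = torch.arange(q.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)            # match q[i] to p[i]
```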
2306.07906#7
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
8
# 2.2 Text Style Transfer

Text style transfer is a task that involves transforming an input sentence into a desired style while retaining its style-independent semantics (Jin et al., 2022; Zhu et al., 2021; Dai et al., 2019; Li et al., 2020; Babakov et al., 2022; Mir et al., 2019; Ramesh Kashyap et al., 2022; Tokpo and Calders, 2022). To achieve this, prior research has examined sequence-to-sequence learning strategies that utilize parallel corpora with paired source/target sentences in different styles (Cheng et al., 2020; Hu et al., 2021; Nouri, 2022). Owing to the considerable demand for human resources and material investments in data labeling, parallel data across diverse styles are scarce. This has led to an increased interest in exploring more pragmatic situations where only non-parallel stylized corpora are accessible (Malmi et al., 2020; Reif et al., 2022).

# 2.3 ChatGPT

ChatGPT² is a large language model (LLM) built upon the innovations and improvements of its predecessors, such as GPT-3³. In terms of training strategies, ChatGPT employs instruction learning and reinforcement learning from human feedback (RLHF; Ouyang et al., 2022) to enhance its overall performance and adaptability.
2306.07799#8
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
8
A Human Preference-aware Scorer: a scorer trained on online QA forums' user thumb-up signals, which learns the human majority's preferences over different answers. Compared to WebGPT's expert labeling, we show that a properly constructed dataset can also produce a high-quality scorer (a training sketch follows below).

Our extensive human evaluation and quantitative ablation results demonstrate the efficiency and effectiveness of the WebGLM system. Specifically, WebGLM (10B) surpasses the similarly-scaled WebGPT (13B) and performs comparably to WebGPT (175B) on our Turing test (cf. Figure 2). WebGLM's improvement over the only publicly-available system—Perplexity.ai—also makes it among the best public web-enhanced QA systems as of this submission. To sum up, in this paper, we make the following contributions:

• We construct WebGLM, an efficient web-enhanced QA system with human preferences. It significantly outperforms the similar-sized WebGPT (13B) and performs comparably to WebGPT (175B). It also surpasses Perplexity.ai—a popular system powered by LLMs and search engines.
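A sketch of how thumb-up signals could be turned into preference pairs and scored with the standard pairwise reward-model loss; the pairing heuristic and the `scorer` interface are illustrative assumptions, not WebGLM's exact recipe.

```python
# Sketch: build (preferred, rejected) pairs from vote counts, then apply
# the usual reward-model objective -log sigmoid(r_preferred - r_rejected).
import torch
import torch.nn.functional as F

def build_pairs(answers):
    """answers: list of (text, thumb_ups) for one question; pair each
    higher-voted answer with each strictly lower-voted one."""
    ranked = sorted(answers, key=lambda a: a[1], reverse=True)
    return [(w[0], l[0]) for i, w in enumerate(ranked)
            for l in ranked[i + 1:] if w[1] > l[1]]

def reward_loss(scorer, preferred, rejected):
    """scorer maps a batch of texts to scalar scores (assumed interface)."""
    r_w, r_l = scorer(preferred), scorer(rejected)
    return -F.logsigmoid(r_w - r_l).mean()
```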
2306.07906#8
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
9
Upon its emergence, ChatGPT has garnered considerable attention from researchers, who have undertaken initial studies of the model. Scholars such as Baidoo-Anu and Owusu Ansah (2023); Rudolph et al. (2023); West (2023); Sobania et al. (2023); Gilson et al. (2023); Lai et al. (2023); Wang et al. (2023b) have explored the notable strengths of ChatGPT in the fields of education, science, programming, healthcare, and text generation, respectively. However, Bang et al. (2023a) discovered that ChatGPT suffers from hallucination issues in the context of logical reasoning. Given its immense and inaccessible training corpus and parameters, and its inability to access external knowledge for reliable sources of support, it is imperative to question whether ChatGPT demonstrates the same hallucination issue as other LLMs when performing sentence generation. Based on these clues, we firmly assert that an in-depth analysis of the text generated by ChatGPT and its behavioral patterns is both significant and valuable, and can provide meaningful insights to the readers of this paper.

2https://openai.com/blog/chatgpt
3https://openai.com/research/instruction-following
2306.07799#9
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
9
• We identify WebGPT's limitations for real-world deployment. We propose a set of new designs and strategies that allow WebGLM to achieve high accuracy together with efficiency and cost-effectiveness advantages over baseline systems.

• We formulate human evaluation metrics for evaluating web-enhanced QA systems. Extensive human evaluation and experiments demonstrate WebGLM's strong capability and also generate insights into the system's future developments.

2 Related Work

The construction of web-enhanced QA systems is a systematic project that requires cross-domain collaboration, including large language models, open-domain question answering, retrieval augmentation, and reinforcement learning from human feedback. Here we briefly introduce related literature on them.
2306.07906#9
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
10
# 3 Study on Controllable Summarization

# 3.1 Prompt Formulation

In this section, our main objective is to test the zero-shot performance of ChatGPT on controllable summarization, with the goal of generating summaries for laymen vs. experts. To this end, we constructed several prompts as natural language instructions for ChatGPT. For the layman style, the prompts we tested include: Please give me a layman / simple / simplified and understandable / easy-to-comprehend / straightforward / general audience summary of X, where X was replaced by the source text to be summarized. Similarly, for the expert summary, we experimented with the prompts: Please give me an expert / a technical / comprehensive and detailed / difficult-to-comprehend / in-depth / complicated summary of X. These template variants can be assembled mechanically, as sketched below.
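The following snippet only reuses the prompt wording given above; the surrounding glue code is illustrative.

```python
# Reconstructing the prompt templates listed in the text as plain strings.
LAYMAN_VARIANTS = ["layman", "simple", "simplified and understandable",
                   "easy-to-comprehend", "straightforward",
                   "general audience"]
EXPERT_VARIANTS = ["expert", "technical", "comprehensive and detailed",
                   "difficult-to-comprehend", "in-depth", "complicated"]

def make_prompt(style_word: str, source_text: str) -> str:
    article = "an" if style_word[0] in "aeiou" else "a"
    return f"Please give me {article} {style_word} summary of {source_text}"

print(make_prompt("layman", "<source document>"))
# -> Please give me a layman summary of <source document>
```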
2306.07799#10
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
10
Large Language Models (LLMs). Self-supervised [19] LLMs have attracted plenty of attention in today's natural language processing (NLP). Their huge number of parameters captures and stores versatile knowledge [20] and enables their outstanding performance on various challenges. Typical LLMs include GPT-3 [3], PaLM [5], OPT [37], BLOOM [32], and GLM-130B [36]. One of the fascinating LLM properties is prompt-based in-context learning (ICL), which allows tuning-free task transfer via prepended demonstration samples (illustrated below). Recent works have focused on the optimization [18, 22, 34, 39] and analysis [23, 30, 35] of ICL.

Open-domain Question Answering (Open QA). Traditional QA datasets such as SQuAD [28] assume the reference is available. On the contrary, open-domain QA targets the open world and is more practical but challenging. For example, the Natural Questions [15] dataset consists of queries from the Google search engine and annotations from Wikipedia paragraphs. Web Questions [2] derives open-domain questions from knowledge bases. MS MARCO [25] gathers passage texts and corresponding labels to questions.
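A minimal sketch of prompt-based ICL: demonstration samples are simply prepended to the query, with no parameter updates. The formatting shown is one common convention, not a prescribed one.

```python
# In-context learning: the "training data" lives entirely in the prompt.
def icl_prompt(demonstrations, question):
    """demonstrations: list of (question, answer) pairs shown to the LLM."""
    shots = "\n\n".join(f"Question: {q}\nAnswer: {a}"
                        for q, a in demonstrations)
    return f"{shots}\n\nQuestion: {question}\nAnswer:"

demos = [("What causes tides?", "Mainly the Moon's gravitational pull."),
         ("Why is the sky blue?", "Rayleigh scattering of sunlight.")]
print(icl_prompt(demos, "Why do leaves change color in autumn?"))
```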
2306.07906#10
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
11
# 3.2 Experimental Setup

For all experiments, we used ChatGPT gpt-3.5-turbo, which was, at the time of experimentation, the best-performing publicly accessible version provided by OpenAI. For the hyper-parameter setting, we set temperature = 0, top p = 1, frequency penalty = 0.2, and presence penalty = 0.2. For summary generation, we configured the maximum number of generated tokens to 512. The remaining hyper-parameters were set to their default values as recommended by OpenAI. It is noteworthy that ChatGPT may generate empty responses (i.e., empty strings) as a result of network transmission timeouts or API request overloads. Should this arise, we adhere to the established practice of resubmitting the request until ChatGPT provides a non-empty response. All of our experiments were conducted on the version of ChatGPT available between 15 Feb 2023 and 30 Apr 2023, using OpenAI's ChatGPT API.4 We emphasize that, to prevent any potential interference from prior responses, we cleared the conversation history each time we submitted a new query to ChatGPT. Unless otherwise specified, we refrained from engaging in any further conversation with ChatGPT to modify its responses. A sketch of this configuration follows.
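A sketch of the configuration described above, assuming the pre-1.0 `openai` Python client that was current during the study period; the retry-on-empty loop mirrors the resubmission practice the authors describe.

```python
# Sketch of the reported API settings (pre-1.0 openai client).
import openai

# openai.api_key = "sk-..."  # placeholder; supply your own key

def summarize(prompt: str) -> str:
    while True:  # resubmit on empty responses, as described in the text
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            # A fresh messages list per call = cleared conversation history.
            messages=[{"role": "user", "content": prompt}],
            temperature=0, top_p=1,
            frequency_penalty=0.2, presence_penalty=0.2,
            max_tokens=512,
        )
        text = resp["choices"][0]["message"]["content"].strip()
        if text:
            return text
```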
2306.07799#11
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
11
However, most Open QA datasets and models are limited to short answer phrases, while people usually prefer more informative long-formed answers with references. A possible reason is that constructing and evaluating long-formed QA datasets with open-world references is difficult, requiring expert-level annotations. Recent attempts include ELI5 [7], which collects queries and long-formed answers with scores from Reddit, and WebGPT [24], which hires groups of experts and leverages the up-to-175-billion-parameter GPT-3 as the backbone. WebGLM aims to provide another effective and cost-effective solution for the challenge.
2306.07906#11
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
12
# 3.3 Dataset

We selected the ELIFE (Goldsack et al., 2022) dataset for our experiments. It contains summaries of academic literature that exhibit varying levels of readability, tailored to suit either expert or non-expert audiences. By means of this dataset, we can examine to what extent ChatGPT can regulate the summary generation process in accordance with the intended target users, and compare its summaries to human summaries.

4https://platform.openai.com/overview

# 3.4 Metrics

In order to assess automatically whether ChatGPT summaries substantially differ in terms of their audience design based on the given prompt, we opted for a set of three automatic readability metrics: Flesch Reading Ease (FRE; Kincaid et al., 1975), Coleman-Liau Index (CLI; Coleman and Liau, 1975), and Dale-Chall Readability Score (DCR; Chall and Dale, 1995). All three can be computed automatically, as sketched below.
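A sketch using the third-party `textstat` package; the paper does not state which implementation it used.

```python
# Computing the three readability metrics with textstat (pip install textstat).
import textstat

def readability(text: str) -> dict:
    return {
        "FRE": textstat.flesch_reading_ease(text),           # higher = easier
        "CLI": textstat.coleman_liau_index(text),            # higher = harder
        "DCR": textstat.dale_chall_readability_score(text),  # higher = harder
    }

print(readability("The mitochondrion is the powerhouse of the cell."))
```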
2306.07799#12
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
12
Retrieval-augmentation. Mainstream information retrieval approaches include the sparse-vector-based BM25 and TF-IDF, and the recent dense-vector-based methods such as DPR [14] and Contriever [10] (a retrieval sketch follows below). The idea of retrieval-augmented language models, introduced by REALM [8], argues for the joint optimization of the retriever and language modeling. Representative follow-up works include RAG [16], Fusion-in-Decoder [11], and Atlas [12]. The idea of WebGPT also loosely falls into this field, as it asks the LLM to interact with the browser to seek relevant information for better accuracy. Nevertheless, it can cost intensive computation and is too slow for practical deployment. In this work, WebGLM tackles the problem efficiently by distilling LLMs' knowledge to smaller retrievers.
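For concreteness, a sketch of Contriever-style dense retrieval with mean pooling over contextual embeddings; the checkpoint name and pooling follow common usage and are assumptions here, not WebGLM's exact setup.

```python
# Rank candidate passages by embedding similarity to the question.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/contriever")
enc = AutoModel.from_pretrained("facebook/contriever")

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    out = enc(**batch).last_hidden_state           # (B, T, d)
    mask = batch["attention_mask"].unsqueeze(-1)   # (B, T, 1)
    return (out * mask).sum(1) / mask.sum(1)       # mean pooling over tokens

def rank(question, passages, k=5):
    q, p = embed([question]), embed(passages)
    scores = torch.nn.functional.cosine_similarity(q, p)  # (len(passages),)
    idx = scores.topk(min(k, len(passages))).indices.tolist()
    return [passages[i] for i in idx]
```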
2306.07906#12
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
13
The Flesch Reading Ease (Kincaid et al., 1975) is a metric that gauges the comprehensibility of a given text. This index relies on the average number of syllables per word and the average number of words per sentence. A higher score signifies an easier-to-understand text. Additionally, the Coleman-Liau Index (Coleman and Liau, 1975) is a measure of the text's difficulty level, which considers the average number of characters per sentence and the average number of sentences per 100 words. A higher score indicates a more challenging text. The Dale-Chall Readability Score (Chall and Dale, 1995) is computed by comparing the number of complex words in the text with a list of common words. A higher score denotes a more challenging text. The standard formulations are given below.
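The standard formulations of the three indices, which the paper itself does not spell out, are:

```latex
% W = words, S = sentences, Sy = syllables, L = letters per 100 words,
% S_{100} = sentences per 100 words, D = percentage of words outside the
% Dale-Chall common-word list.
\mathrm{FRE} = 206.835 - 1.015\,\frac{W}{S} - 84.6\,\frac{Sy}{W}
\qquad
\mathrm{CLI} = 0.0588\,L - 0.296\,S_{100} - 15.8
\qquad
\mathrm{DCR} = 0.1579\,D + 0.0496\,\frac{W}{S}
\;\;(+\,3.6365 \text{ if } D > 5\%)
```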
2306.07799#13
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
13
Reinforcement Learning from Human Feedback (RLHF). Automated scoring of text generation is a well-established area of research. BLEU [27] and ROUGE [17] take into account the overlap ratio between the target and the reference. METEOR [1] considers the accuracy and recall rate over the whole corpus. Other methods, such as BERTScore [38], evaluate using the cosine similarity of contextual embeddings from deep language models. In recent years, some works advocate learning scorers from human feedback [26, 33] by asking models to predict human preference. The scorers, or reward models, can be used to optimize the text generator via reinforcement learning. Such methods, with which WebGPT is also affiliated, have achieved great success in real-world applications.

3 The WebGLM System

Constructing an LLM-based web-enhanced QA system can be expensive and challenging. The web information is rich but noisy for certain queries, and creating high-quality human answers with references for training can be outrageously expensive. This type of system usually involves three critical components: retriever, generator, and scorer.
2306.07906#13
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
14
We also employed Rouge scores (Lin, 2004) to evaluate the performance of ChatGPT on the task of text summarization, with the aim of comparing its efficacy against the state-of-the-art model. In order to assess the extent to which the summaries re-use word sequences from the original text, we furthermore evaluated N-gram novelty (See et al., 2017; Gehrmann et al., 2019; Pu et al., 2022). Finally, we quantified inconsistency based on the factual consistency checking metric SummaC (Laban et al., 2022), as well as a hallucination checking metric (Cao et al., 2022; Fischer et al., 2021). SummaC (Laban et al., 2022) uses sentence compression and summarization techniques to extract important information and improve the detection of inconsistencies in NLI models by segmenting documents and aggregating scores. Named entity hallucination (Fischer et al., 2021) flags potential hallucinations in named entities if they do not match the original sources. Here, we used BERT semantic similarity, rather than exact matching, when matching the named entities (a sketch follows below).
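A sketch of such an entity-level hallucination check under simple assumptions: extract named entities from source and summary, embed them, and flag summary entities whose best similarity to any source entity falls below a threshold. The spaCy and Sentence-Transformers models and the 0.8 threshold are illustrative choices, not the paper's exact configuration.

```python
# Flag summary entities with no semantically similar counterpart in the
# source (requires: python -m spacy download en_core_web_sm).
import spacy
from sentence_transformers import SentenceTransformer, util

nlp = spacy.load("en_core_web_sm")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def hallucinated_entities(source: str, summary: str, threshold=0.8):
    src_ents = list({e.text for e in nlp(source).ents})
    sum_ents = list({e.text for e in nlp(summary).ents})
    if not src_ents or not sum_ents:
        return sum_ents  # nothing to match against
    sims = util.cos_sim(embedder.encode(sum_ents),
                        embedder.encode(src_ents))  # (n_sum, n_src)
    return [e for e, row in zip(sum_ents, sims) if row.max() < threshold]
```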
2306.07799#14
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
14
Take WebGPT [24] as an example, which employs experts for dataset annotation. Its retriever leverages GPT-3 to “behavior-clone” human experts' web-browsing trajectories to search, read, and quote. In addition, the generator is trained on expert-written long answers with references. Finally, the scorer learns to predict experts' preferences over different answers, and its scores serve as rewards for the generator's reinforcement learning. Despite WebGPT's primary success, its retrieval can be slow, and the data annotations required for training the generator and scorer are too costly, significantly hindering its wide public adoption. In this work, we aim to build an efficient web-enhanced QA system that understands human preferences for actual use. To combine the advantages of LLMs and well-established open QA studies, we present a series of new designs and strategies for our web-enhanced QA system WebGLM, based on GLM [6]:

• An LLM-augmented Retriever: we design two stages, coarse-grained web search and fine-grained LLM-augmented dense retrieval [10], for finding relevant references given queries.
2306.07906#14
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
15
# 3.5 Results on Controllable Summarization

# 3.5.1 Effect of Prompt Formulation

Table 1 illustrates that the different prompt versions are somewhat consistent regarding whether the instructions asking for layman summaries actually lead to more readable texts than those asking for expert summaries, with FRE ranging between scores of 31 and 38 for automatically generated layman summaries, and between 28 and 37 for automatically generated expert summaries. Conversely, human-written summaries exhibit very large differences according to the automatic metrics, with an FRE of 53.1 for layman summaries and 22.5 for expert summaries. Similar effects are observed for the CLI and DCR measures. This preliminary test was conducted on a subset of the ELIFE dataset containing merely 500 random samples; for the rest of the tests, we proceeded to the entire dataset, selecting the prompts asking for "layman" and "expert" summaries, as the responses for these prompts seemed to align in the right direction w.r.t. the readability measures.
2306.07799#15
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
15
• A Bootstrapped Generator: we derive WebGLM-QA, an LLM-bootstrapped quoted and long-formed QA dataset, via in-context learning and corresponding strategies to clean and refine it. It includes 45k high-quality samples after filtering and 83k noisy but diverse samples before filtering (a filtering sketch follows below). The backbone of the WebGLM system is a GLM model trained on this dataset.

• A Human Preference-aware Scorer: we develop techniques to learn human majority preference from online QA forums' thumb-ups instead of expensive expert feedback, and successfully train a human preference-aware scorer for best-of-n selection.

The LLM API used for research purposes in this work is text-davinci-003 unless specified. In the following sections, we will introduce the algorithm and implementation details of each component, which finally form the WebGLM pipeline sequentially.
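A sketch of what citation-based filtering can look like under simple assumptions: split the bootstrapped answer on its [k] citation markers and keep the sample only if every segment is sufficiently supported by the reference it cites. Unigram precision stands in here for whatever overlap measure the actual pipeline uses.

```python
# Keep a bootstrapped (answer, references) sample only if every cited
# segment overlaps enough with its cited reference.
import re

def unigram_precision(segment: str, reference: str) -> float:
    seg, ref = segment.lower().split(), set(reference.lower().split())
    return sum(tok in ref for tok in seg) / max(len(seg), 1)

def keep_sample(answer: str, references: list[str], thresh=0.5) -> bool:
    # "... claim [1]. another claim [2][3]." -> (segment, markers) pairs.
    # Samples without any citation would need separate handling.
    for segment, ids in re.findall(r"([^\[\]]+)((?:\[\d+\])+)", answer):
        for i in re.findall(r"\[(\d+)\]", ids):
            idx = int(i) - 1            # markers assumed 1-indexed
            if idx >= len(references):
                return False            # cites a non-existent reference
            if unigram_precision(segment, references[idx]) < thresh:
                return False
    return True
```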
2306.07906#15
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
16
| Prompt version | FRE | CLI | DCR |
|---|---|---|---|
| layman | 37.26† | 14.82† | 11.21† |
| simple | 31.92† | 15.70† | 11.54† |
| simplified and understand. | 35.48† | 15.17† | 11.21† |
| easy-to-comprehend | 36.59† | 14.93† | 11.32† |
| straightforward | 31.74† | 15.58† | 11.42† |
| general audience | 35.86† | 14.98† | 10.96† |
| human answer (for layman) | 53.06 | 12.36 | 8.90 |
| expert | 29.89† | 15.91† | 11.88† |
| technical | 36.65† | 13.76† | 12.20† |
| comprehensive and detailed | 31.62† | 15.47† | 11.15† |
| difficult-to-comprehend | 28.95† | 16.14† | 11.71† |
| in-depth | 34.37† | 14.93† | 10.82† |
| complicated | 29.05† | 15.76† | 11.40† |
| human answer (for expert) | 22.54 | 17.65 | 11.79 |

Table 1: Reading difficulty on different prompts, tested on a set of 500 randomly selected items from the dataset. † indicates statistical significance (p<0.05) against corresponding human answers via paired t-test.

# 3.5.2 Reading Difficulty Control

Table 2 corroborates that the results on the whole dataset are consistent with the findings from the smaller sample. We conclude that ChatGPT can
2306.07799#16
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
16
3.1 LLM-augmented Retriever
In conventional open QA, systems usually retrieve only from reliable sources (e.g., Wikipedia) and fail to benefit from whole web-scale knowledge. The flip side of the coin, however, is that wild web pages can be hard to acquire and purify. In WebGLM, we attempt to solve the problem via two-stage retrieval: coarse-grained web search and fine-grained LLM-augmented retrieval.
3.1.1 Coarse-grained Web Search. We leverage third-party web search engines (i.e., the Google API) to acquire primary candidate web page URLs. In most cases, from our observation, these pages cover the contexts and knowledge necessary to answer questions, alongside considerable irrelevant information. The procedure, shown in Figure 3, can be roughly divided into three steps:
(1) Search: At this stage, we enter the question into the search API and obtain a list of URLs for potentially relevant pages (usually fewer than 10).
2306.07906#16
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
17
# 3.5.2 Reading Difficulty Control
Table 2 corroborates that the results of the whole dataset are consistent with the findings from the smaller sample. We conclude that ChatGPT can produce summaries with different levels of reading difficulty to a certain extent based on the provided prompts. Notably, ChatGPT-generated sentences for expert-style summaries show greater complexity than those for layman-style summaries. However, the magnitude of the difference in the reading difficulty scores between the two types of summaries is considerably smaller than that observed in human-written summaries.

Candidate          FRE      DCR      CLI
Human Layman       52.42    11.78    8.93
Human Expert       23.20    –        –
ChatGPT Layman     37.38†‡  14.78†‡  11.17†‡
ChatGPT Expert     30.38†‡  15.82†‡  11.85†‡

Table 2: Reading difficulty scores by automatic metrics; † and ‡ indicate statistical significance (p<0.05) against same-style human answers, and opposite-style ChatGPT answers via paired t-test, respectively.

# 3.5.3 Comparison to Previous SOTA Model
2306.07799#17
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
17
(2) Fetch: Then we crawl the corresponding HTML contents according to the URLs obtained. Since there are many candidate pages, we improve efficiency through parallel crawling.
(3) Extract: Next, based on HTML2TEXT, we extract the text contents from the HTML pages and divide them into a list of paragraphs according to line breaks.
Since the web crawl usually takes substantial time, we have put great effort into optimizing the speed of this component to allow a user-acceptable response time (Cf. Figure 4). For example, in the "Fetch" step, synchronous page loading would take 2-3 minutes, while parallel asynchronous fetching enables quick loading of most pages (about 98%) within 5s (see the sketch after this passage).
3.1.2 Through the first three stages, we have retrieved a number of potential contexts for questions. However, many of them are still irrelevant even under the filtering of widely-used dense retrievers (in our trial, up to 30% of top-ranked contexts are unrelated). As a solution, WebGPT [24] uses behavior cloning (i.e., imitation learning) to leverage LLMs' strong language comprehension for reference selection. Notwithstanding its effectiveness, the strategy is slow in deployment and expensive in labeling.
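A minimal sketch of the parallel fetch-and-extract steps described above, assuming aiohttp for asynchronous crawling and the html2text package as the HTML-to-text converter (the paper references an HTML2TEXT tool; the exact library and the timeout value here are illustrative).

```python
# Parallel asynchronous fetch (step 2) and paragraph extraction (step 3).
import asyncio
import aiohttp
import html2text

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=5)) as resp:
            return await resp.text()
    except Exception:
        return ""  # slow or broken pages are skipped rather than blocking the batch

async def fetch_and_extract(urls):
    async with aiohttp.ClientSession() as session:
        pages = await asyncio.gather(*(fetch(session, u) for u in urls))
    # Convert HTML to text and split into paragraphs on line breaks.
    return [[p for p in html2text.html2text(html).splitlines() if p.strip()]
            for html in pages if html]

paragraphs = asyncio.run(fetch_and_extract(["https://example.com"]))
```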
2306.07906#17
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
18
# 3.5.3 Comparison to Previous SOTA Model
We also compared summaries generated by ChatGPT to a previous state-of-the-art (SOTA) fine-tuned neural summarization model (Pu et al., 2023). On the same test split, the summaries produced by ChatGPT reached Rouge-1=25.53, Rouge-2=5.48, Rouge-L=13.30 under unsupervised learning, and Rouge-1=47.88, Rouge-2=13.75, Rouge-L=42.44 in few-shot learning using the training samples from the same subset as in Section 3.5.1, while the model by Pu et al. (2023) reached Rouge-1=48.70, Rouge-2=14.84, and Rouge-L=46.13 (a scoring sketch follows below).

# 3.5.4 Disparities in Summarization Behavior
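The Rouge comparison above can be reproduced with Google's rouge_score package; the call below is a sketch under that assumption, since the paper does not specify its scoring implementation.

```python
# Rouge-1/2/L F-measures for a (reference, prediction) pair.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score(target="the human reference summary",
                      prediction="the model generated summary")
print({name: round(s.fmeasure * 100, 2) for name, s in scores.items()})
```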
2306.07799#18
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07799
19
# 3.5.4 Disparities in Summarization Behavior
We next examined whether ChatGPT and humans are consistent with each other regarding the readability of summarization with respect to different items – it could be possible that some texts simply lead to less readable summaries than others. However, we discovered that the Pearson correlations of FRE scores for summaries by humans and ChatGPT were only 0.31 for expert summaries and 0.20 for layman summaries. (Scores were similarly low for the CLI and DCR metrics.) In addition, a statistical significance test elucidates the noteworthy divergence between the distinctive response styles produced by ChatGPT and the analogous styles of human-generated answers.
Following this, we contrasted the n-gram novelty of human vs. ChatGPT summaries w.r.t. the original texts. Figure 1 reveals that a significantly higher number of novel 4-grams are present in human-written summaries, particularly those aimed at laymen. This suggests that ChatGPT summaries are slightly more extractive than human summaries (a sketch of the 4-gram novelty computation follows below).

[Figure 1 plot: 4-gram novelty per candidate (Human Layman, Human Expert, ChatGPT Layman, ChatGPT Expert).]
Figure 1: Comparison of abstractiveness between ChatGPT and human-generated summaries

# Inconsistencies and Hallucinations
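A minimal sketch of the 4-gram novelty measure used above (novel n-grams are those in the summary that never occur in the source document):

```python
# n-gram novelty: fraction of summary n-grams absent from the source.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_novelty(source: str, summary: str, n: int = 4) -> float:
    src = ngrams(source.split(), n)
    summ = ngrams(summary.split(), n)
    return len(summ - src) / max(len(summ), 1)

# 4 of the 5 summary 4-grams are novel here -> 0.8
print(ngram_novelty("a b c d e f", "a b c d x y z w", n=4))
```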
2306.07799#19
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
19
[Figure 3 diagram: the WebGLM pipeline illustrated on the question "Why is it sometimes hard to eat after not eating for a while?". A fine-tuned dense retriever turns web pages into fine-grained references (e.g., [1] "Beyond that, when you wait till you're absolutely ...", [2] "... our body learns that and so it learns to accept ...", [3] "... after long periods of going without food your ..."). The generator, bootstrapped from the WebGLM-QA dataset (built with LLM in-context learning plus correction and filtering), produces candidate answers such as "There are several reasons why not eating ... burning ... called gluconeogenesis [2]. ... Also, leptin levels can rapidly decline in ...". The human preference-aware scorer, trained on comparison pairs from online QA forums, rates the candidates (e.g., -0.2 vs. 0.6), and the best-scored answer is returned. The panel also annotates the retriever, generator, and scorer of the compared system as "expensive, slow, and intensive computation", "expensive", and "expensive & slow", respectively.]
2306.07906#19
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
20
Figure 1: Comparison of abstractiveness between ChatGPT and human-generated summaries

# Inconsistencies and Hallucinations
Given that ChatGPT has previously been reported to generate misinformation, we sought to evaluate its risk of hallucinating on our specific task. Figure 2 demonstrates that the SummaC consistency scores are lower for ChatGPT-generated summaries than for human-written summaries. A corresponding phenomenon is verified in the hallucination assessment. The precision scores provided in Table 3 demonstrate the extent to which ChatGPT-generated text contains named entities that are absent from the source text. A lower precision score suggests that the generated text has more named entities that lack support in the source text. The recall scores reflect the ability of ChatGPT to capture named entities from the source text. A lower recall score implies that ChatGPT has missed a considerable number of named entities from the source text. The F1 score represents the harmonic mean of the precision and recall scores. Examining Table 3, our findings demonstrate that ChatGPT generates a greater number of named entities that are not present in the source text after undergoing multiple iterations of text conversion and modification. For example, in an expert summary, ChatGPT misinterpreted the meaning of "Geocode" as "regional regulations". (A sketch of the named-entity precision/recall computation follows below.)

# Intermediary Discussion
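A sketch of the named-entity precision/recall check from Table 3, using spaCy NER as the entity extractor (an assumption for illustration; the paper does not name its checker):

```python
# Named-entity hallucination check: precision/recall of generated entities
# against entities found in the source text.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def entity_pr(source: str, generated: str) -> dict:
    src = {ent.text.lower() for ent in nlp(source).ents}
    gen = {ent.text.lower() for ent in nlp(generated).ents}
    tp = len(gen & src)
    precision = tp / max(len(gen), 1)  # low -> many unsupported (hallucinated) entities
    recall = tp / max(len(src), 1)     # low -> many source entities were dropped
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return {"precision": precision, "recall": recall, "f1": f1}
```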
2306.07799#20
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
20
Figure 3: WebGLM system pipeline. Our system includes three sub-modules: the LLM-augmented retriever recalls the top-5 most relevant paragraphs as the reference sources; the bootstrapped generator yields answers according to the question and reference sources; the human preference-aware scorer assesses all answers and picks the highest-scored one as the final result. Compared to WebGPT, WebGLM is a more efficient and cost-effective web-enhanced QA system with comparable answer quality.

[Figure 4 plot: bar chart of retriever time for Search / Fetch / Extract at Avg., 50%, 75%, and 90% quantiles.]
Figure 4: WebGLM retriever time analysis. 50% of queries can be done within 4.0s, and 90% of them can be loaded within 10.0s. Most of the time is spent on fetching web pages after searching.

correction method based on Rouge-1 precision to match quotations and references (see the details in Section 3.2). Therefore, the labels we use for training are the Rouge-1 precision scores of a query-reference pair.
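A sketch of how such a Rouge-1 precision label could be computed for a (query, reference) pair, using Google's rouge_score package; both the library choice and the target/prediction orientation are assumptions for illustration.

```python
# Rouge-1 precision as a soft relevance label for retriever distillation.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"])

def retrieval_label(query: str, reference: str) -> float:
    # Rouge precision is taken over the prediction's unigrams; here the
    # reference plays the prediction role (an assumed orientation).
    return scorer.score(target=query, prediction=reference)["rouge1"].precision
```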
2306.07906#20
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
21
# Intermediary Discussion
Our experiments show that ChatGPT-generated summaries do not adapt as strongly to the target audience as human-authored summaries. One pos-

[Figure 2 plot: SummaCConv consistency scores per candidate.]
Figure 2: Summary consistency detection. L stands for layman, E for expert.

Candidate          Precision  Recall  F1
Human Layman       0.63       0.78    0.70
Human Expert       0.61       0.92    0.73
ChatGPT Layman     0.47†      0.75‡   0.58†
ChatGPT Expert     0.49†      0.90‡   0.63†
ChatGPT L2E2L      0.39†‡     0.74‡   0.51†‡
ChatGPT E2L2E      0.47†‡     0.88‡   0.62†‡

Table 3: Named entity hallucination on the Elife dataset. † and ‡ indicate statistical significance (p<0.05) against same-style human answers, and opposite-style ChatGPT answers via paired t-test, respectively. L stands for layman, E for expert.
2306.07799#21
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
21
In the fine-tuning, we use two Contrievers to encode questions and references individually and compute their inner products as the predictions. We use Mean Square Error (MSE) between the predictions and the Rouge-1 precision scores as the loss function to train the Contrievers. Our further quantitative experiment demonstrates that the augmentation significantly improves Contriever web-enhanced QA retrieval accuracy (see Table 7 for details). Contriever encodes questions and references into embeddings and retrieves by finding the maximum inner-product pair among them. We transfer LLMs' natural property of reference adoption to small retrievers to improve them.
Specifically, we find LLMs can naturally distinguish and adopt only useful references in in-context learning (ICL). We create a 200-query dataset, where each query is accompanied by 5 top-ranked candidate references from Contriever. We manually annotate the relevance of each piece of reference (Cf. Table 1) and find that only 68.6% of them are related. However, when we provide the query with corresponding candidate references to GPT-3 for 1-shot in-context learning inference (see details in Section 3.2), we discover that the LLM adopts only part of the references, and the corresponding accuracy is 90.2%, far better than Contriever's (a sketch of the fine-tuning objective follows after the table below).

Method              Acc.
Contriever          68.6%
LLM ICL adoption    90.2%
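A minimal sketch of this distillation objective, assuming the Hugging Face facebook/contriever checkpoint, mean pooling (Contriever's convention), and precomputed Rouge-1 precision labels; details such as scaling of the inner product are simplified away.

```python
# Dual-encoder fine-tuning: regress question-reference inner products
# onto Rouge-1 precision labels with MSE.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/contriever")
q_enc = AutoModel.from_pretrained("facebook/contriever")
r_enc = AutoModel.from_pretrained("facebook/contriever")

def embed(encoder, texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over tokens

def train_step(questions, references, labels, optimizer):
    pred = (embed(q_enc, questions) * embed(r_enc, references)).sum(-1)
    loss = torch.nn.functional.mse_loss(pred, torch.tensor(labels, dtype=torch.float))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```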
2306.07906#21
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
22
sible reason could be that ChatGPT, given the zero-shot setting, had no way to "know" how strongly the texts should be adapted to the target style. Furthermore, we identified evidence for potential hallucinations generated during summarization. We therefore carried out two post-hoc experiments: (1) we modified the prompt to include an example from the dataset, so ChatGPT would have a chance to know the expected level of text adaptation; (2) we subjected the resulting summaries to several re-writing steps and tested whether this further intensifies the occurrence of hallucinations.

# 3.6.1 Follow-up Experiment: Example Inclusion in Prompt
We experimented with prompts that also include a human summary example. Unlike the previous few-shot learning experiment, we do not adjust the parameters of ChatGPT, but just let the model perform unsupervised reasoning through the contents of the prompt. We observe (see Appendix Table 7) that when guided by a human example from the dataset, the summaries generated by ChatGPT indeed tend to be more aligned with human performance, particularly on the Flesch Reading Ease metric (49.23 for layman, 28.88 for expert summaries). However, no significant changes are detected in the other metrics. The degree of control over the summarization style has increased, yet it remains inferior to human capabilities.
2306.07799#22
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
22
Method              Acc.
Contriever          68.6%
LLM ICL adoption    90.2%

3.1.3 Retrieval is without doubt the most time-consuming part of any web-scale QA system. A slow QA system, however high its accuracy, spoils the user experience. We report the speed of each step in our LLM-augmented retriever.
We sample a subset from the ELI5 [7] test set to retrieve and calculate the average, median, 75% quantile, 90% quantile, and 99% quantile time spent in each step. From Figure 4, we see that the average total time spent is about 5.3s, the median is about 4.07s, and 90% of searches can be loaded within 10s. The main bottleneck of our retrieval is the second step of fetching each page, when we have to request multiple web pages from different sources. Because the contents of pages across the network vary, some pages take a very long time to load or simply cannot be returned correctly. In Appendix B, we conduct a more detailed analysis of retrieval efficiency and point out that the retrieval efficiency of WebGLM is far better than that of WebGPT.
2306.07906#22
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
23
# 3.6.2 Follow-up Experiment: Repeated Re-writing
Summaries are further re-written based on the prompt Please give me a layman/expert style version of X, where X was the previously generated summary. Figure 2 and Table 3 display the performance of ChatGPT after re-writing in the entries "ChatGPT L2E2L" and "ChatGPT E2L2E", which stand for the order in which instructions were given (L stands for layman, and E for expert). The examinations point out that misinformation and hallucinations may increase further during subsequent rewriting (lower SummaC scores, lower values in the named entity hallucination metric).

# 4 Study on Text Formality Transfer
# 4.1 Prompt Formulation and Experimental Setup
Our subsequent set of experiments investigates ChatGPT's capacity for style transfer concerning language formality. Our prompt for this task was formulated as Please give me a formal / an informal version of X. We utilized the same experimental setup as for the summarization task; however, we restricted the maximum number of generated tokens to 32. We again experimented with various prompts, as shown in Table 4 below. Unless otherwise specified, all experiments used the same configuration.

# 4.2 Dataset
2306.07799#23
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
23
In Appendix B, we conduct a more detailed analysis of retrieval efficiency and point out that the retrieval efficiency of WebGLM is far better than that of WebGPT.
Augmentation Implementation. To transfer the reference adoption knowledge from GPT-3 to Contriever, we leverage GPT-3's reference adoption on our bootstrapped dataset WebGLM-QA to additionally fine-tune Contrievers. As the reference marks generated by GPT-3 can sometimes be wrong, we use the citation
3.2 Bootstrapped Generator
A major obstacle in building a web-enhanced QA system is the high cost of curating expert-level QA datasets that are long-formed and properly cited. Compared to traditional or free-formed QA,
2306.07906#23
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
24
# 4.2 Dataset
We investigated whether ChatGPT can proficiently execute style transfer on sentences using data from the GYAFC (Rao and Tetreault, 2018) dataset. The dataset has two branches: Entertainment & Music (EM) and Family & Relationships (FR). With the aid of this dataset, we aim to evaluate ChatGPT's ability for sentence style transfer, examine the differences in vocabulary selection and syntactic structures between ChatGPT and human performance, and identify the limitations of ChatGPT.

# 4.3 Metrics
To evaluate the level of formality in the generated text, we utilized the Text Formality Score (Heylighen
2306.07799#24
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
24
[Figure 5 illustration (bootstrapping a WebGLM-QA sample): the target question "Why is it sometimes hard to eat after not eating for a while?" is paired with references ([1] "Beyond that, when you wait till you're absolutely ravenous to eat, it's easy to eat past the point of fullness ...", [2] "... our body learns that and so it learns to accept a smaller amount.", [3] "Sometimes after long periods of going without food your immune system ..."). An instruction is induced via the meta-prompt "I gave a friend an instruction and a question with references. The friend read the instruction and wrote an answer ...", yielding "Read the references provided and answer the corresponding question". A demonstration QA pair (e.g., "Why did we decide that certain words were 'bad' and shouldn't be used in social settings?", answered with citations such as "[2]") is prepended before the LLM answers the target question.]
2306.07906#24
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
25
To evaluate the level of formality in the generated text, we utilized the Text Formality Score (Heylighen and Dewaele, 1999) and the MTLD Lexical Diversity metric (McCarthy and Jarvis, 2010). The Text Formality Score (Heylighen and Dewaele, 1999) quantifies the degree of formality in language usage within a text, based on adherence to formal linguistic norms. Another measure that evaluates language formality is the MTLD Lexical Diversity metric (McCarthy and Jarvis, 2010), which measures the diversity and richness of the vocabulary used in the text, based on the frequency and number of unique words. A higher MTLD score indicates a greater variety of vocabulary, which typically corresponds to a more formal language style. We also utilized the BLEU (Papineni et al., 2002) score to draw a comparison between ChatGPT and the SOTA approach. We additionally assessed the distribution of POS tags in the generated styles, as well as the distribution of dependency labels (obtained with spaCy: https://spacy.io/). For quantifying misinformation and hallucinations, we used DAE and named entity hallucination checking. The DAE algorithm (Goyal and Durrett, 2020) utilizes dependency arcs to identify entailment relationships between propositions and identify inconsistencies in factual information based on syntactic and semantic structures.
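The Heylighen and Dewaele formality F-score is defined over part-of-speech frequencies; the sketch below implements it with spaCy POS tags (the POS mapping and article handling are illustrative assumptions, not the paper's exact setup).

```python
# Formality F-score: (noun + adjective + preposition + article frequencies)
# minus (pronoun + verb + adverb + interjection frequencies), plus 100, over 2.
import spacy

nlp = spacy.load("en_core_web_sm")

def formality_score(text: str) -> float:
    tokens = [t for t in nlp(text) if not (t.is_space or t.is_punct)]
    n = max(len(tokens), 1)
    freq = lambda pred: 100.0 * sum(pred(t) for t in tokens) / n
    formal = (freq(lambda t: t.pos_ in ("NOUN", "PROPN"))
              + freq(lambda t: t.pos_ == "ADJ")
              + freq(lambda t: t.pos_ == "ADP")                  # prepositions
              + freq(lambda t: t.lower_ in ("a", "an", "the")))  # articles
    deictic = (freq(lambda t: t.pos_ == "PRON")
               + freq(lambda t: t.pos_ in ("VERB", "AUX"))
               + freq(lambda t: t.pos_ == "ADV")
               + freq(lambda t: t.pos_ == "INTJ"))
    return (formal - deictic + 100) / 2
```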
2306.07799#25
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
25
[Figure 5 illustration, continued: given the induced instruction "Read the references provided and answer the corresponding question", the references, and the target question "Why is it sometimes hard to eat after not eating for a while?", the LLM produces the quoted answer "There are several reasons why not eating ... burning through your muscle [1][3]. Another reason is ... called gluconeogenesis [2]. Also, leptin levels can rapidly decline in ...". Panels: (a) Prompt Formulation, (b) Instruction Inducting, (c) Few-shot In-context Learning.]
2306.07906#25
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
26
# 4.4 Results on Formality Control
# 4.4.1 Effect of Prompt Formulation
Table 4 presents the results for a set of 500 random samples from the GYAFC dataset. We observe that the Formality scores are very similar for ChatGPT formal vs. informal texts. We note, however, that the difference in ratings for human-written texts is also small for this metric. The MTLD metric, on the other hand, shows higher values for ChatGPT-generated formal texts; in fact, these scores are substantially larger than those of human-written texts, but do not differ much from each other. We therefore proceed with the prompts using the formulation formal/informal for the rest of the experiments on the whole dataset.

# 4.4.2 Sentence Formality Control
Table 5 offers supplementary evidence from the full dataset supporting ChatGPT's capacity to modify the formality level of sentences. By employing the Formality indicator (Heylighen and Dewaele, 1999), it is apparent that the generated text tends to manifest a higher level of formality overall. A primary factor contributing to this result is the pre-
2306.07799#26
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
26
Figure 5: We construct WebGLM-QA for generator training via LLM in-context bootstrapping. It includes three stages: 1) prompt formulation, 2) instruction inducting, and 3) few-shot in-context learning. In this way, we avoid the outrageous cost in time and money of hiring experts, yet still create a high-quality quoted, long-formed QA dataset.
we expect the system to yield fact-grounded answers with correct references (see the example in Figure 5). WebGPT reports hiring a group of full-time experts to write answers for training, which is far beyond ordinary budgets. Fortunately, LLMs' in-context learning [3, 5], which refers to their capability to transfer to new tasks conditioned on a few in-context samples, has been demonstrated and well explored recently. Thus we propose to bootstrap large amounts of quoted long answers by leveraging a few high-quality answers, LLMs, questions from ELI5 [7], and the references collected by our retriever. Additionally, since bootstrapped samples are not always satisfying, we design corresponding correction and selection strategies to filter out a high-quality subset for real training. All these efforts jointly create WebGLM-QA, a quoted and long-formed QA dataset with 45k high-quality filtered and 83k unfiltered samples.
2306.07906#26
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
27
Prompt version                Formality  MTLD
informal                      51.09      13.22†
unprofessional                51.20      16.23†
spoken version                51.30†     14.47†
easygoing                     51.43†     14.11†
casual                        51.00      16.30†
laid-back                     51.27      13.94†
human answer (for informal)   50.76      11.42
formal                        52.22†     31.23†
professional                  51.96†     31.98†
written                       51.62†     29.69†
stately                       51.30†     34.43†
grandiose                     52.85†     30.71†
majestic                      52.23†     33.49†
human answer (for formal)     53.92      14.99

Table 4: Text formality on different prompts, tested on a set of 500 randomly selected items from the dataset. † indicates statistical significance (p<0.05) against corresponding human answers via paired t-test.

disposition of ChatGPT's training corpus towards written sources, encompassing materials such as books and news articles, as opposed to spoken language corpora (OpenAI, 2023). This perspective is further corroborated by an examination of the generated sentence samples. The MTLD metric underscores that ChatGPT's lexical diversity is considerably lower when generating informal sentences, but shows a marked increase when generating formal sentences.
2306.07799#27
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
27
The dataset can be formulated as a set D = (Q, A, R, C), where Q, A, and R represent the question set, the answer set, and the reference set respectively, and C ⊆ Q × A × 2^R denotes the set of (question, answer, valid references) triples. Prompt Formulation. Since we feed many contents to the API, including a few demonstrations (i.e., high-quality samples (q_d, α_d, R_d)), the question, and the corresponding references, their formulation can impact performance significantly. We compare several types of prompts, including the order between the question and its references (i.e., before or after, cf. Figure 5 (a)), the symbols used to mark the indices of references, and the prompt words for references and questions. We conduct experiments with every type of prompt mentioned, and find that a natural formulation, as shown in Figure 5 (a), performs best.
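As an illustration of the kind of prompt assembly being compared here, the following Python sketch places the references before the question, as in the winning layout of Figure 5 (a); the exact wording, markers, and helper names are our assumptions, not the paper's final template.

```python
# Illustrative prompt assembly with references placed before the question;
# wording and index markers are assumed for the sketch, not taken verbatim.
def format_sample(question, references):
    ref_block = "\n".join(f"[{i + 1}] {r}" for i, r in enumerate(references))
    return f"References:\n{ref_block}\n\nQuestion: {question}\n\nAnswer: "

def build_prompt(question, references, demonstrations=()):
    parts = []
    for demo_q, demo_answer, demo_refs in demonstrations:  # (q_d, α_d, R_d)
        parts.append(format_sample(demo_q, demo_refs) + demo_answer)
    parts.append(format_sample(question, references))      # the actual query
    return "\n\n".join(parts)
```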
2306.07906#27
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
28
Dataset     Candidate          Formality   MTLD
GYAFC-FR    Human Informal     49.87       15.20
GYAFC-FR    Human Formal       53.57       18.70
GYAFC-FR    ChatGPT Informal   50.77†‡     14.60‡
GYAFC-FR    ChatGPT Formal     52.06†‡     31.68†‡
GYAFC-EM    Human Informal     50.11       12.11
GYAFC-EM    Human Formal       53.76       15.82
GYAFC-EM    ChatGPT Informal   51.02†‡     12.01‡
GYAFC-EM    ChatGPT Formal     51.98†‡     29.80†‡

Table 5: Text formality scores by automatic metrics; † and ‡ indicate statistical significance (p<0.05) against same-style human answers and opposite-style ChatGPT answers via paired t-test, respectively.

# 4.4.3 Comparison to Previous SOTA Model
We also find that ChatGPT outperforms the previous supervised SOTA model (Nouri, 2022) when given the same subset as in Section 4.4.1 for few-shot learning, as evidenced by the higher BLEU score. Specifically, ChatGPT yields superior scores of
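The Formality column plausibly follows the F-score of Heylighen and Dewaele (1999), which rises with nouns, adjectives, prepositions, and articles and falls with pronouns, verbs, adverbs, and interjections. A rough Python sketch using spaCy is below; mapping articles onto spaCy's DET tag is our approximation, not the paper's stated setup.

```python
# Rough sketch of the Heylighen & Dewaele (1999) formality F-score; treating
# spaCy's DET tag as a stand-in for articles is an approximation of ours.
import spacy  # assumes: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def formality_f_score(text: str) -> float:
    tokens = [t for t in nlp(text) if not t.is_space]
    freq = lambda *tags: 100.0 * sum(t.pos_ in tags for t in tokens) / len(tokens)
    return (freq("NOUN", "PROPN") + freq("ADJ") + freq("ADP") + freq("DET")
            - freq("PRON") - freq("VERB") - freq("ADV") - freq("INTJ")
            + 100.0) / 2.0

print(formality_f_score("The committee approved the proposal yesterday."))
```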
2306.07799#28
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
28
Instruction Inducting. Next, we need a proper instruction (e.g., "Please write an answer based on the question and references.") to guide the LLM to generate a qualified answer. Recent work [9] suggests that we can take advantage of the LLM itself to design instructions for in-context learning instead of handcrafting them. We use several high-quality examples to induce a few candidate instructions (cf. Figure 5 (b)), and select the best-performing one based on empirical evaluation over several queries. Different from free text generation, in web-enhanced QA each answer α ∈ A contains quotations and thus takes the form α = (⟨s1, ∇1⟩, ⟨s2, ∇2⟩, · · · , ⟨sn, ∇n⟩), where ⟨sk, ∇k⟩ is the k-th segment of answer α, sk is a piece of quoted text, and ∇k ⊂ R is the set of references that sk cites.
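The answer structure defined above transcribes directly into code; a minimal sketch (the class and field names are ours) follows.

```python
# Direct transcription of the answer form α = (⟨s1, ∇1⟩, ..., ⟨sn, ∇n⟩):
# each segment pairs a piece of quoted text with the reference indices it cites.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str            # s_k: a piece of quoted text
    citations: set[int]  # ∇_k: indices into the reference list R

@dataclass
class Answer:
    segments: list[Segment]

    def render(self) -> str:
        # e.g. "GLM is pre-trained on blank infilling[1][3]. ..."
        return "".join(
            seg.text + "".join(f"[{i}]" for i in sorted(seg.citations))
            for seg in self.segments
        )
```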
2306.07906#28
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
29
0.711 and 0.697 in the EM and FR branches, as compared to the SOTA model's scores of 0.671 and 0.652. However, ChatGPT achieved only 0.07 and 0.06 BLEU scores on the EM and FR branches, respectively, in the unsupervised setting. # 4.4.4 Effect of Example Inclusion in Prompt We again examine the impact of including an example from the dataset in the prompt and find that this again helps ChatGPT slightly with matching the dataset style (with details provided in Table 8). Specifically, the formality score for the informal style is 50.67, while it climbs to 52.13 for the formal style, with the MTLD score also displaying an increase from 14.81 for informal texts to 19.22 for formal texts. # 4.4.5 Disparities in Style Transfer Behavior
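For reproducibility, the BLEU comparison above can be approximated with NLTK's corpus BLEU, as in the hedged sketch below; the whitespace tokenization and smoothing choice are our assumptions rather than the paper's exact evaluation script.

```python
# Hedged BLEU sketch with NLTK; GYAFC provides several reference rewrites per
# item, hence the list-of-lists reference structure. Tokenization is assumed.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def style_transfer_bleu(hypotheses, references):
    hyp_tok = [h.split() for h in hypotheses]
    ref_tok = [[r.split() for r in refs] for refs in references]
    return corpus_bleu(ref_tok, hyp_tok,
                       smoothing_function=SmoothingFunction().method1)

print(style_transfer_bleu(["she sings poorly"], [["she is a poor vocalist"]]))
```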
2306.07799#29
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
29
3.2.1 We adopt a subset of questions from the ELI5 train set as our Q and leverage a vanilla Contriever [10] (without LLM augmentation yet) in fine-grained retrieval to produce the references R. In this work we first use the OpenAI text-davinci-003 API to conduct one-shot in-context learning inference to generate quoted long-formed answers (other LLMs such as GLM-130B [36] could be good options too). Since in-context learning can be sensitive to input forms and prompts, we run many trials to determine the best bootstrapping strategies, as follows: (1) Few-shot In-Context Learning. We study the number of shots needed to generate good quoted long-formed answers. Because the reference parts often occupy much of the sequence length, we observe that one-shot learning usually surpasses few-shot learning in answer quality. Hence we finally choose to run inference with a one-shot demonstration sample, as shown in Figure 5 (c), and ultimately collect 83k diverse queries and their answers. We record the details of choosing prompts and instructions in Appendix C.
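A hedged sketch of this bootstrapping call, using the legacy OpenAI completions endpoint (openai<1.0) through which text-davinci-003 was served; the sampling parameters and the build_prompt helper (from the earlier sketch) are illustrative assumptions.

```python
# One-shot bootstrapping sketch against the legacy completions endpoint;
# build_prompt is the illustrative helper sketched earlier, and the sampling
# parameters below are assumptions, not the paper's reported settings.
import openai  # pip install "openai<1.0"

def bootstrap_answer(question, references, demo):
    # demo is one high-quality (question, answer, references) triple
    prompt = build_prompt(question, references, demonstrations=[demo])
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        temperature=0.7,
    )
    return resp["choices"][0]["text"].strip()
```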
2306.07906#29
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
30
# 4.4.5 Disparities in Style Transfer Behavior In terms of controlling the formality of sentence style, ChatGPT's performance still exhibits significant differences compared to human behavior. While the by-item correlation is slightly higher for this dataset than for the summarization task (Pearson correlation of around 0.4 for formal style and 0.5 for informal style on the Formality metric; 0.3 for the MTLD measure), there are interesting disparities between the distributions of POS tags produced by ChatGPT and by humans. The examination of statistical significance further substantiates our earlier observation, indicating a substantial disparity between the different response styles generated by the model, as well as between the model's answers and same-style answers produced by humans. Figure 3 illustrates the absolute differences in the distribution of part-of-speech (POS) tags. Based on this figure, it is evident that ChatGPT employs a higher frequency of adjectives, adpositions, determiners, and nouns when generating formal sentences compared to human writers. Conversely, when generating informal sentences, ChatGPT tends to utilize more auxiliary words and punctuation marks. These variances in word choice between formal and informal styles, as exemplified by ChatGPT, are indicative of differences in its selected vocabulary for distinct stylistic modes compared with humans.
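The analysis behind Figure 3 amounts to differencing two POS-tag distributions; a self-contained sketch with spaCy (our tooling choice, since the paper does not specify its tagger here):

```python
# Sketch of the Figure 3 analysis: absolute differences between the POS-tag
# distributions of two text collections, using spaCy's universal POS tags.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def pos_distribution(texts):
    counts = Counter(tok.pos_ for text in texts for tok in nlp(text))
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

def pos_abs_diff(texts_a, texts_b):
    dist_a, dist_b = pos_distribution(texts_a), pos_distribution(texts_b)
    return {tag: abs(dist_a.get(tag, 0.0) - dist_b.get(tag, 0.0))
            for tag in sorted(set(dist_a) | set(dist_b))}
```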
2306.07799#30
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
30
3.2.2 Citation Correction We have produced a large number of well-written quoted long-formed answers using GPT-3 in-context learning. However, in our examination, we observe that the answers sometimes cite wrong or invalid (i.e., nonexistent) references in their citation numbers. As a result, correcting the citation relationships is crucial for the quality of the WebGLM-QA dataset.
2306.07906#30
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
31
By analyzing the distribution of dependency labels (Appendix Figures 5, 6, 7, 8), it is also clear that, in comparison to human-authored sentences, ChatGPT utilizes a greater frequency of adjectival modifiers, auxiliaries, determiners, objects of the preposition, and prepositional modifiers for formal sentences. Contrarily, compounds and dependents are infrequently employed in the generation of informal sentences by ChatGPT.

[Figure 3 (two panels, Informal Style and Formal Style; x-axis: POS tags): Absolute differences in POS tags distribution of ChatGPT and human-generated sentences: GYAFC - EM]
2306.07799#31
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
31
Despite the fact that the citation numbers can be wrong, the contents quoted in the answer are often correct. Thus we propose to amend the citation numbers according to the quotations' similarity to the references, by splitting an answer into segments at its generated citation numbers and matching them to the references. For a question q, the retrieved references are denoted R, and the answer is denoted α. We define the text segments S = {s1, s2, · · · , sn}; for each pair (s, r) ∈ S × R, we compute a citation match score f(s, r). We pick a threshold T, and the final citation set ∇i for each segment si in α is: ∇i = {r ∈ R | f(si, r) ≥ T}. For our application, we adopt the Rouge-1 score as f; the selection of the threshold T is introduced in Section 3.2.3.
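A minimal sketch of this correction rule, using the rouge-score package for f (the text later fixes f to Rouge-1 F1 with threshold T = 0.57); the function names are ours.

```python
# Citation correction as defined above: ∇_i = {r ∈ R | f(s_i, r) ≥ T},
# with f instantiated as Rouge-1 F1 (the paper's final choice).
from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def citation_match(segment: str, reference: str) -> float:
    # f(s, r): Rouge-1 F1 between an answer segment and a candidate reference
    return scorer.score(reference, segment)["rouge1"].fmeasure

def correct_citations(segments, references, T=0.57):
    return [
        {j for j, ref in enumerate(references) if citation_match(seg, ref) >= T}
        for seg in segments
    ]
```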
2306.07906#31
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
32
Figure 3: Absolute differences in POS tags distribution of ChatGPT and human-generated sentences: GYAFC - EM

# 4.4.6 Inconsistencies and Hallucinations
In order to assess the risk of introducing erroneous information when ChatGPT performs sentence style transformation, we employed DAE (Goyal and Durrett, 2020) at the sentence level to examine the factuality after text style transformation, and compare again the effect of multiple re-writes. Similar to before, F denotes formal style, I signifies informal style, and X2X2X (X ∈ {F, I}) represents multiple rewriting transformations of the text. The outcomes of our inquiry are depicted in Figure 4 and Appendix Figure 14. We also again scrutinized the potential incorporation of hallucinatory information regarding named entities in the ChatGPT-generated text, and the findings are presented in Appendix Table 9.

[Figure 4 (y-axis: DAE score, 0.00–1.00; x-axis: candidate): Dependency arc entailment: GYAFC - EM. Data points > 0.95 ≈ accurate. To clarify discrepancies, cutoff point = 0.95.]
2306.07799#32
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
32
For our application, we adopt the Rouge-1 score as f; the selection of the threshold T is introduced in Section 3.2.3. 3.2.3 After correction, we further investigate more issues that could potentially influence the dataset quality. In short, we discover that most of them are related to, or can be detected by, checking citation quality. We discard a generated sample if it presents any of the following problems:
• Hallucination [13]: the answer leverages the internal knowledge of the LLM instead of the references, and is thus not factually grounded and sometimes severely wrong. It can be identified via a low overlap ratio between all references and the answer.
• Few citations: when an answer cites too few of the provided references, it usually presents poor reference relevance and is thus often neither informative nor factually grounded enough.
• Low citation accuracy: if an answer has too many wrong
2306.07906#32
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]
2306.07799
33
Figure 4: Dependency arc entailment: GYAFC - EM. Data points > 0.95 ≈ accurate. To clarify discrepancies, cutoff point = 0.95. Upon conducting factuality checking (see Figure 4 and Appendix Figure 14), it is discovered that ChatGPT's performance is inferior to that of humans in sentence-style rewriting. Interestingly, as the number of text conversions and rewritings increases, ChatGPT's tendency to commit factual errors escalates while the output increasingly deviates from the original text, compromising the fidelity of the final result. In a particular instance, the human-generated formal expression states "She is a poor vocalist", whereas the formal rendition provided by ChatGPT articulates "She does not possess the ability to sing". This discrepancy represents a significant semantic alteration, and the degree of dependency arc entailment is low in this case. Similarly, Appendix Table 9 reveals that recall scores on the named entity hallucination metric are lower in ChatGPT sentences than in human sentences. # 4.4.7 Qualitative Examples
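A rough approximation of the named-entity hallucination check mentioned above: compute the recall of source-text entities in the rewrite, so that low recall flags dropped or altered entities. This is our reading using spaCy's NER, not the paper's exact protocol.

```python
# Approximate named-entity recall of a rewrite against its source; low recall
# suggests entities were dropped or altered. Our reading, not the exact setup.
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_recall(source: str, rewrite: str) -> float:
    src_ents = {ent.text.lower() for ent in nlp(source).ents}
    out_ents = {ent.text.lower() for ent in nlp(rewrite).ents}
    if not src_ents:
        return 1.0  # nothing to preserve
    return len(src_ents & out_ents) / len(src_ents)
```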
2306.07799#33
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
http://arxiv.org/pdf/2306.07799
Dongqi Pu, Vera Demberg
cs.CL, cs.AI, cs.LG
ACL-SRW 2023
null
cs.CL
20230613
20230613
[ { "id": "2302.14229" }, { "id": "2302.04023" }, { "id": "2302.06476" }, { "id": "2303.17580" }, { "id": "2201.05337" }, { "id": "2302.13007" }, { "id": "2303.11381" }, { "id": "2304.05613" }, { "id": "2302.09419" }, { "id": "2301.08745" }, { "id": "2204.13362" }, { "id": "2302.08081" }, { "id": "2301.08653" }, { "id": "2305.16784" }, { "id": "1912.02164" }, { "id": "2303.01067" } ]
2306.07906
33
• Low citation accuracy: if an answer has too many wrong citation numbers, we regard it as a low-quality one. We calculate the F1 variant for the similarity and overlap computation. We test Rouge-L (whose best threshold is 0.4) and Rouge-1 (whose best threshold is 0.57) on a set of manually checked samples, and find that Rouge-1 is better. This is because LLMs often rewrite and paraphrase the reference contents, including exchanging phrase orders; in that case, a high-quality answer may hold a high Rouge-1 score but a low Rouge-L score, which is computed from the longest common subsequence. After all the filtering conditions mentioned above, the number of samples drops from 83k to 45k, yielding a high-quality quoted long-formed QA dataset for web-enhanced QA system training. We train GLM [6], a type of bidirectional LM pre-trained on autoregressive blank infilling (including a 10-billion-parameter and a 2-billion-parameter version), on WebGLM-QA as our backbone generator.
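Putting the three filters together, a sketch of the sample-level decision follows; the 0.57 citation threshold is the paper's, while MIN_OVERLAP, MIN_CITED, and MIN_ACCURACY are illustrative assumptions (reusing citation_match and correct_citations from the earlier sketch).

```python
# Sample filtering sketch; 0.57 is the reported Rouge-1 threshold, but the
# three cutoffs below are assumed values for illustration only.
MIN_OVERLAP = 0.3    # assumed hallucination cutoff (answer vs. all references)
MIN_CITED = 2        # assumed minimum number of distinct references cited
MIN_ACCURACY = 0.5   # assumed share of generated citations that must survive

def keep_sample(answer_text, segments, references, generated_citations):
    corrected = correct_citations(segments, references, T=0.57)
    cited = set().union(*corrected) if corrected else set()
    # Hallucination: low overlap between the whole answer and all references
    if citation_match(answer_text, " ".join(references)) < MIN_OVERLAP:
        return False
    # Few citations: the answer cites too few of the provided references
    if len(cited) < MIN_CITED:
        return False
    # Low citation accuracy: too many generated citation numbers were wrong
    total = sum(len(g) for g in generated_citations)
    kept = sum(len(g & c) for g, c in zip(generated_citations, corrected))
    return total == 0 or kept / total >= MIN_ACCURACY
```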
2306.07906#33
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the LLM-augmented retriever, bootstrapped generator, and human preference-aware scorer. Specifically, we identify and address the limitations of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency, and cost-effectiveness advantages. In addition, we propose systematic criteria for evaluating web-enhanced QA systems. We conduct multi-dimensional human evaluation and quantitative ablation studies, which suggest the outperformance of the proposed WebGLM designs over existing systems. WebGLM with the 10-billion-parameter GLM (10B) is shown to perform better than the similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human evaluation. The code, demo, and data are at \url{https://github.com/THUDM/WebGLM}.
http://arxiv.org/pdf/2306.07906
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
cs.CL, cs.AI
Accepted to KDD 2023
null
cs.CL
20230613
20230613
[ { "id": "2208.03299" }, { "id": "2204.02311" }, { "id": "2006.14799" }, { "id": "2112.09332" }, { "id": "2210.02414" }, { "id": "2209.01975" }, { "id": "2205.10782" }, { "id": "2211.05100" }, { "id": "2202.12837" }, { "id": "2103.10385" }, { "id": "2205.01068" } ]