doi (stringlengths 10-10) | chunk-id (int64 0-936) | chunk (stringlengths 401-2.02k) | id (stringlengths 12-14) | title (stringlengths 8-162) | summary (stringlengths 228-1.92k) | source (stringlengths 31-31) | authors (stringlengths 7-6.97k) | categories (stringlengths 5-107) | comment (stringlengths 4-398, ⌀) | journal_ref (stringlengths 8-194, ⌀) | primary_category (stringlengths 5-17) | published (stringlengths 8-8) | updated (stringlengths 8-8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.07799
| 34 |
# 4.4.7 Qualitative Examples
To explore whether ChatGPT-generated sentences significantly alter the original semantics of the input text, we conducted a case study by randomly selecting 15 samples from each branch of the GYAFC dataset. Our findings indicate that ChatGPT poses a relatively severe risk of modifying the original semantics during sentence style transformation, with approximately 18% of the samples exhibiting noticeable semantic inconsistencies. The examples in Table 6 reveal that during the process of sentence style transfer, ChatGPT erroneously modifies the content words, resulting in significant semantic alterations.
|
2306.07799#34
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 34 |
# 3.3 Human Preference-aware Scorer
In preliminary testing, our bootstrapped generator under the beam-search decoding strategy already performs satisfactorily in many cases. However, recent literature [24, 26, 33] demonstrates that aligning LLMs with human purposes and preferences is crucial for expert-level text generation. WebGPT reports recruiting many experts to provide comparisons and rankings over generated answers and uses this feedback to train a reward model (RM) for picking the best-of-n (i.e., 16/32/64) generated candidates and to additionally optimize the generator via reinforcement learning (RL).
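As a rough illustration of the best-of-n reranking idea described above, the following sketch picks the candidate answer with the highest reward-model score; `generate_candidates` and `reward_model` are hypothetical stand-ins rather than the actual WebGPT or WebGLM interfaces.

```python
# Minimal sketch of best-of-n reranking with a reward model (RM).
# `generate_candidates` and `reward_model` are hypothetical stand-ins,
# not the actual WebGPT / WebGLM interfaces.
from typing import Callable, List

def best_of_n(question: str,
              references: List[str],
              generate_candidates: Callable[[str, List[str], int], List[str]],
              reward_model: Callable[[str, str], float],
              n: int = 16) -> str:
    """Sample n candidate answers and return the one the RM scores highest."""
    candidates = generate_candidates(question, references, n)
    scores = [reward_model(question, answer) for answer in candidates]
    best_index = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best_index]
```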
Nevertheless, such expert annotations could be expensive to acquire, and the RL would consume substantial computation resources. In this work, as a competitive substitute, we propose to build a human preference-aware scorer based on massive user feedback (e.g., thumb-ups) from online QA forums. With appropriate designs and careful data cleaning, we show in our experiments that such a scorer also significantly improves the alignment level of answers and their scores in real human evaluation.
Data collection and preprocessing. We first collect QA pairs and corresponding user thumb-ups from online QA forums. Despite their diversity, these answers vary so widely in length and quality that the scorer would learn little from them without proper preprocessing.
Our preprocessing includes the following requirements:
|
2306.07906#34
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 35 |
Formal to Informal
- Reference: It is such a waste of TV space. | Human: Yes, because it's such a waste of TV space! | ChatGPT: What a total waste of TV time!
- Reference: The other boy isn't that great. | Human: The other boy is not that good. | ChatGPT: The other kid's not so hot.
- Reference: I really enjoy how the composition has the tec... | Human: I really like how they do like the whole techn... | ChatGPT: I'm diggin' how the techno beat slows down in ...

Informal to Formal
- Reference: Fatboy Slim - Right Here, Right Now Or any oth... | Human: Fatboy Slim is right here and now. He Rocks! | ChatGPT: Fatboy Slim's "Right Here, Right Now" is an ex...
- Reference: loved them since their first album. | Human: I have loved them since their first album. | ChatGPT: I have held a fondness for them since the rele...
- Reference: if u occasionally doing it then u alrady r add... | Human: If you occasionally do it, then you are already... | ChatGPT: If you are engaging in the activity on a regul...
Table 6: Case study of ChatGPT generated output
|
2306.07799#35
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 35 |
Our preprocessing includes the following requirements:
• High-quality feedback: we define an answer with more than 3 thumb-ups as an answer with valid feedback. We pick out questions with 8 or more valid answers as qualified ones.
• Length-bias mitigation: we notice in a preliminary study that the scorer prefers longer answers rather than better ones, as is also indicated in the literature [26, 33]. To mitigate this bias, for each qualified question, we use the median length x of all the answers as the threshold to truncate longer answers, and we discard answers whose length is less than x/2.
• Contrast augmentation: after sorting the answers by their thumb-ups, the gaps between neighboring answers turn out to be narrow. Scorers trained on such an uninformative dataset show poor performance. To increase the contrast between answers for comparison training, we select pairs of answers that are more than 5 rank positions apart. In each pair, the answer with the greater number of likes is the better response. After our preprocessing, there are 93k questions and 249k comparison pairs in total, with 230k pairs as the training set and 19k pairs as the test set. Next, we introduce the implementation details for training our human preference-aware scorer. The backbone model for training the scorer is a 6-billion-parameter GLM.
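A minimal sketch of this preprocessing, assuming a simple dictionary format for the raw forum data (the field names are illustrative, not the actual pipeline):

```python
# Sketch of the described preprocessing under assumptions about the raw
# forum data format; field names are illustrative, not the actual pipeline.
from statistics import median

MIN_THUMB_UPS = 3        # answers with more than 3 thumb-ups count as valid feedback
MIN_VALID_ANSWERS = 8    # questions need at least 8 valid answers
MIN_RANK_GAP = 5         # comparison pairs must be more than 5 rank positions apart

def build_comparison_pairs(question):
    """question: dict with 'answers', each {'text': str, 'thumb_ups': int}."""
    valid = [a for a in question["answers"] if a["thumb_ups"] > MIN_THUMB_UPS]
    if len(valid) < MIN_VALID_ANSWERS:
        return []

    # Length-bias mitigation: truncate answers longer than the median length x
    # and discard answers shorter than x / 2.
    x = median(len(a["text"]) for a in valid)
    kept = []
    for a in valid:
        if len(a["text"]) < x / 2:
            continue
        kept.append({**a, "text": a["text"][: int(x)]})

    # Contrast augmentation: sort by thumb-ups and pair answers that are
    # more than MIN_RANK_GAP positions apart; the more-liked one is "better".
    kept.sort(key=lambda a: a["thumb_ups"], reverse=True)
    pairs = []
    for i in range(len(kept)):
        for j in range(i + MIN_RANK_GAP + 1, len(kept)):
            pairs.append((kept[i]["text"], kept[j]["text"]))  # (better, worse)
    return pairs
```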
|
2306.07906#35
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 36 |
Table 6: Case study of ChatGPT generated output
Furthermore, our examination of the visualized dependency tree (see Appendix Figures 11, 12, and 13), which relies primarily on the dependency arc entailment (DAE) algorithm for fact-checking, reveals that the text generated by ChatGPT contains a higher number of dependency arcs lacking support from the original text, when compared to human responses.
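As a crude, surface-level approximation of this arc-support idea (the actual DAE method of Goyal and Durrett (2020) scores arcs with a trained entailment model rather than exact matching), one could count generated dependency arcs whose lemmatized triples never occur in the source parse:

```python
# Crude surface-level proxy for the arc-support idea behind DAE fact-checking:
# count dependency arcs in the generated text whose lemmatized
# (child, relation, head) triple never occurs in the source parse.
# The actual DAE method (Goyal and Durrett, 2020) uses a trained entailment
# model over arcs, not exact matching.
import spacy

nlp = spacy.load("en_core_web_sm")

def arcs(text):
    doc = nlp(text)
    return {(tok.lemma_.lower(), tok.dep_, tok.head.lemma_.lower())
            for tok in doc if tok.dep_ != "punct"}

def unsupported_arc_count(source: str, generated: str) -> int:
    source_arcs = arcs(source)
    return sum(1 for arc in arcs(generated) if arc not in source_arcs)
```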
# 5 Conclusion
This paper presents a broad assessment of ChatGPT's proficiency in generating controllable text. We conducted quantitative and qualitative examinations at the document level (summarization task) and sentence level (text style transfer). The empirical findings show that ChatGPT outperforms the previous state-of-the-art models on automatic metrics, but that there are substantial disparities between its generated texts and human-written texts. These disparities are reduced by providing a target example of the human writing style. Furthermore, our investigations also confirm the previously reported problems of hallucinations and inaccuracies in text generated by ChatGPT.
# 6 Limitations
|
2306.07799#36
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 36 |
Supervised fine-tuning (SFT). In the SFT step, we leverage the Reddit TL;DR dataset to first fine-tune the scorer, following [33]. We train 16 epochs with cosine learning rate decay and an initial learning rate of 2.83e-5. We use the SFT model to initialize comparison training.
Comparison training. We pass pairs of comparison data to the model to yield a scalar score for each question-answer pair and maximize the gap between their scores. We use a linear head with an input dimension of the hidden size and an output dimension of 1 to produce the score.
During training, we find that the scorer tends to overfit quickly. Therefore, we freeze the first 70% of transformer layers and leverage other techniques such as dropout and a large batch size for regularization. Nevertheless, the scorer overfits after 1-1.5 epochs anyway. After training completes, we calibrate its predictions to a standard normal distribution based on the training set reward distribution.
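A minimal sketch of this comparison training with a generic transformer backbone (not the actual GLM-6B training code; the backbone interface is assumed):

```python
# Sketch of the comparison-training objective with a generic transformer
# backbone; not the actual GLM-6B training code.
import torch
import torch.nn as nn

class PreferenceScorer(nn.Module):
    def __init__(self, backbone, hidden_size: int, freeze_ratio: float = 0.7):
        super().__init__()
        self.backbone = backbone               # encoder returning hidden states
        self.head = nn.Linear(hidden_size, 1)  # scalar score per QA pair
        # Freeze the first 70% of transformer layers to reduce overfitting
        # (assumes the backbone exposes a `layers` ModuleList).
        layers = list(self.backbone.layers)
        for layer in layers[: int(len(layers) * freeze_ratio)]:
            for p in layer.parameters():
                p.requires_grad = False

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids, attention_mask)  # [batch, seq, hidden]
        return self.head(hidden[:, -1, :]).squeeze(-1)     # score from last token

def comparison_loss(score_better, score_worse):
    # Maximize the score gap between the preferred and the rejected answer.
    return -torch.nn.functional.logsigmoid(score_better - score_worse).mean()
```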
|
2306.07906#36
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 37 |
# 6 Limitations
The primary limitations of the current study pertain to the selection of prompts and evaluation metrics. The experimental cost of requesting API responses from OpenAI to assess ChatGPT's text generation abilities imposes significant constraints on our choice of datasets. Therefore, we have to limit our experimentation to only two related controllable text generation datasets. While we have evaluated ChatGPT's performance at both the document and sentence levels, we cannot extrapolate that ChatGPT has similar performance for other text generation datasets. Additionally, the experimental cost prohibits us from conducting traversal experiments on the selection of hyperparameters. We relied on the default configuration recommended by OpenAI, and we maintain consistency in all hyperparameters to ensure the fairness of the experiments.
Secondly, although we have studied the impact of prompt engineering on ChatGPT, the selection of prompts is mainly affected by human understanding, and the number of potential prompts is infinite. Hence, we cannot guarantee whether other prompts that we did not select will yield the same conclusions as our experiment. Furthermore, ChatGPT is subject to continuous updates and iterations, which may lead to improved performance, making it difficult to predict if future versions of ChatGPT will have similar results to our experiments.
|
2306.07799#37
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 37 |
# 4 Human Evaluation Criteria
Automatic metrics to score model-generated answers can perform well for short-form answers. However, for open-domain long-form QA with references, the answers and rationales can be subjective and versatile, especially for questions that start with "HOW" and "WHY." As a result, human evaluation is vitally needed, for which there have been many studies [4, 31].
To evaluate WebGLM and appropriately compare it to other similar models, we introduce a human evaluation criteria system to evaluate both references and answers. We adopt both binary scores (for objective metrics, e.g., truthfulness) and four-level scores (for subjective metrics, e.g., fluency), balancing objectivity and scale in human evaluation. The four-level score is applied as suggested in the literature, since it prevents human annotators from staying absolutely neutral [31]. Each criterion mentioned below is followed by an arrow: an up arrow (↑) means a higher score is better, while a down arrow (↓) denotes a lower score is better.
|
2306.07906#37
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 38 |
Finally, to select appropriate evaluation metrics, we have included both domain-related evaluation metrics (such as reading difficulty and text formality) and domain-independent evaluation indicators (such as fact-checking and hallucination detection). However, we acknowledge that the automatic metrics may sometimes not capture all aspects of the intended construct correctly.
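For instance, reading difficulty can be approximated with a readability formula such as the Coleman-Liau index (Coleman and Liau, 1975) cited in the references; the sketch below uses a naive sentence splitter purely for illustration:

```python
# Illustrative computation of the Coleman-Liau readability index
# (Coleman and Liau, 1975), one readability formula cited in the references;
# the naive word/sentence splitting here is only for demonstration.
import re

def coleman_liau_index(text: str) -> float:
    words = re.findall(r"[A-Za-z]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    letters = sum(len(w) for w in words)
    L = letters / len(words) * 100         # letters per 100 words
    S = len(sentences) / len(words) * 100  # sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8
```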
# 7 Ethics Considerations
All datasets utilized in this study are publicly available, and we have adhered to ethical considerations by not introducing any additional information into ChatGPT's inputs.
# Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 948878).
|
2306.07799#38
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 38 |
# 4.1 Reference Evaluation
In this section, we introduce human evaluation criteria on references. The evaluation is done per question-reference pair.
Relevancy ([0, 3], ↑). For retrieved documents or references related to a question, the more related, the higher the relevancy score should be. Specifically, different references to a question can share high relevancy scores simultaneously.
Density ([0, 3], ↑). To evaluate how much useful information is in a piece of reference, we need to estimate its information density.
|
2306.07906#38
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 39 |
European Research Council
# References
Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021. Aspect-controllable opinion summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6578-6593, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tal August, Katharina Reinecke, and Noah A. Smith. 2022. Generating scientific definitions with controllable complexity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8298-8317, Dublin, Ireland. Association for Computational Linguistics.
Nikolay Babakov, David Dale, Varvara Logacheva, and Alexander Panchenko. 2022. A large-scale computational study of content preservation measures for text style transfer and paraphrase generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 300-321, Dublin, Ireland. Association for Computational Linguistics.
|
2306.07799#39
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 39 |
Both relevancy and density are criteria to evaluate informativeness, but there is a difference between them. Relevancy can be regarded as a "recall metric" for informativeness, while density can be regarded as a "precision metric".
Truthfulness ([0, 1], ↑). Retrieved references can be factually wrong even if they are closely associated with the question. This is because web information sources are open and could contain user-submitted information without correctness checks. As a result, the truthfulness of a piece of reference should be evaluated, and its evaluation does not consider the question.
Toxicity ([0, 1], ↓). Web texts could involve violent, pornographic, offensive words or other improper elements. Thus, it is necessary to assess the toxicity of retrieved references.
Social Bias ([0, 1], ↓). Potential biases on the internet could relate to genders, races, nations, and ages. We should also exclude them from our system.
|
2306.07906#39
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 40 |
David Baidoo-Anu and Leticia Owusu Ansah. 2023. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484.
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. 2023a. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. ArXiv, abs/2302.04023.
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023b. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023.
|
2306.07799#40
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 40 |
# 4.2 Answer Evaluation
In this section, we introduce human evaluation criteria on answers, which are evaluated triple-wise (i.e., (question, answer, references)).
Fluency ([0, 3], ↑). Fluency measures the quality of the generated text itself, without taking questions and references into account [4]. It concerns only elements such as grammar, word, and phrase choices that belong to the language aspect.
Correctness ([0, 3], ↑). Correctness measures the coherence of the answer and its corresponding question. If an answer solves
|
2306.07906#40
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 41 |
Meng Cao, Yue Dong, and Jackie Cheung. 2022. Hallucinated but factual! Inspecting the factuality of hallucinations in abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3340-3354, Dublin, Ireland. Association for Computational Linguistics.
Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise style transfer: A new task towards better communication between experts and laymen. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1061-1071, Online. Association for Computational Linguistics.
Fredrik Carlsson, Joey Öhman, Fangyu Liu, Severine Verlinden, Joakim Nivre, and Magnus Sahlgren. 2022. Fine-grained controllable text generation using non-residual prompting. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6837-6857, Dublin, Ireland. Association for Computational Linguistics.
|
2306.07799#41
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 41 |
the question satisfyingly, we say it holds high correctness. Additionally, when we score the correctness of an answer, we should take factual consistency into account. For example, contradicting common sense or defying logic will decrease the correctness.
Citation Accuracy ([0, 3], ↑). This metric only considers the relationships between an answer and its references. When an answer contains citation marks, we should check whether they are correct. Citation mistakes or missing citations will both decrease the accuracy.
Truthfulness ([0, 1], ↑). Similar to truthfulness in the reference evaluation, the truthfulness of an answer measures whether the text of the answer is factually sound, including the factual consistency of the answer and whether the answer contains contradictions or hallucinated information.
Objectivity ([0, 1], ↑). This metric only concerns the relationships between an answer and its references. When references are provided, models are supposed to generate answers according to these references without using latent knowledge from pre-training. If we can find all the information of an answer in the provided references, we say it is objective.
Redundancy ([0, 1], ↓). Within a limited text length, duplicate content reduces informativeness. Since lower redundancy means a higher-quality answer, we take it into consideration.
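One illustrative way to encode these answer-evaluation criteria, their score ranges, and their directions (not part of the WebGLM codebase):

```python
# One way to encode the answer-evaluation criteria described above;
# purely illustrative, not part of the WebGLM codebase.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    max_score: int         # 1 for binary criteria, 3 for four-level criteria
    higher_is_better: bool

ANSWER_CRITERIA = [
    Criterion("fluency", 3, True),
    Criterion("correctness", 3, True),
    Criterion("citation_accuracy", 3, True),
    Criterion("truthfulness", 1, True),
    Criterion("objectivity", 1, True),
    Criterion("redundancy", 1, False),  # lower is better
]
```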
|
2306.07906#41
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 42 |
Jeanne Sternlicht Chall and Edgar Dale. 1995. Readability revisited: The new Dale-Chall readability formula. Brookline Books.
Hou Pong Chan, Lu Wang, and Irwin King. 2021. Controllable summarization with constrained Markov decision process. Transactions of the Association for Computational Linguistics, 9:1213-1232.
Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, and Jingjing Liu. 2020. Contextual text style transfer. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2915-2924, Online. Association for Computational Linguistics.
Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283.
Peng Cui and Le Hu. 2021. Topic-guided abstractive multi-document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1463-1472, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
2306.07799#42
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 42 |
Details of the metrics and the meaning of the scores can be found in Appendix H.
# 5 Experiment
In this section, we conduct experiments employing the metrics mentioned in Section 4 to evaluate and analyze the quality of the responses generated, including those from WebGLM and other similar systems. We also report quantitative ablation studies on certain components in WebGLM.
# 5.1 Main Results
We conduct the major evaluation using the 272 questions provided on the WebGPT [24] demo website2, as WebGPT is not publicly available and the selected questions are generally complicated and close enough to real human questions.
Human Evaluation Setup. We recruited 15 master-degree level experts to conduct human evaluation. For each question, we aggregate all the search results and answers from different models into one table, enabling the annotators to effectively compare them and unify the annotation standards. We evaluate the performance of our model and other different models from various dimensions through human evaluation. We also compare and analyze the results from different perspectives as follows. The main results are shown in Table 2.
|
2306.07906#42
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 43 |
Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Zihao Wu, Lin Zhao, Wei Liu, Ninghao Liu, Sheng Li, Dajiang Zhu, et al. 2023. ChatAug: Leveraging ChatGPT for text data augmentation. arXiv preprint arXiv:2302.13007.
Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5997-6007, Florence, Italy. Association for Computational Linguistics.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019a. Plug and play language models: A simple approach to controlled text generation. arXiv preprint arXiv:1912.02164.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019b. Plug and play language models: A simple approach to controlled text generation. ArXiv, abs/1912.02164.
|
2306.07799#43
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 43 |
WebGLM Reference vs. Other References. Although the search results of WebGLM are slightly inferior to WebGPT-175B, its performance is far better than that of Perplexity.ai and WebGPT-13B. It is worth mentioning that the WebGLM retrieval process only uses some traditional, word-based algorithms and two Contrievers with a cumulative parameter amount of no more than 300M. WebGLM is significantly superior to WebGPT in computing performance and
2 https://openaipublic.blob.core.windows.net/webgpt-answer-viewer/index.html
Table 2: Main results based on human evaluation metrics. Human evaluation results of generations on questions provided on the WebGPT demo website. For reference evaluation, Rel., Den., Tru., Tox.↓, and Soc. Bias↓ are the abbreviations corresponding to Relevancy, Density, Truthfulness, Toxicity, and Social Bias. For answer evaluation, Flu., Cor., Cit. Acc., Obj., Tru., Red.↓ correspond to Fluency, Correctness, Citation Accuracy, Objectivity, Truthfulness, and Redundancy.
|
2306.07906#43
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 44 |
Sam Earle, Maria Edwards, Ahmed Khalifa, Philip Bontrager, and Julian Togelius. 2021. Learning controllable content generators. In 2021 IEEE Conference on Games (CoG), pages 1-9. IEEE.
Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45-54, Melbourne, Australia. Association for Computational Linguistics.
T. Fischer, C. Biemann, Informatik und Naturwissenschaften Universität Hamburg Fakultät für Mathematik, and Universität Hamburg Fachbereich Informatik. 2021. Finding Factual Inconsistencies in Abstractive Summaries. Universität Hamburg.
Sebastian Gehrmann, Zachary Ziegler, and Alexander Rush. 2019. Generating abstractive summaries with finetuned language models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 516-522, Tokyo, Japan. Association for Computational Linguistics.
|
2306.07799#44
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 44 |
Model | Rel. | Den. | Tru. | Tox.↓ | Soc. Bias↓ | Flu. | Cor. | Cit. Acc. | Obj. | Tru. | Red.↓
(Reference Evaluation: Rel., Den., Tru., Tox.↓, Soc. Bias↓; Answer Evaluation: Flu., Cor., Cit. Acc., Obj., Tru., Red.↓)
WebGPT (175B) | 2.512 | 2.660 | 0.996 | 0.015 | 0.006 | 2.457 | 2.889 | 2.837 | 0.990 | 0.975 | 0.087
Perplexity.ai | 1.652 | 1.636 | 0.955 | 0.005 | 0.001 | 2.718 | 2.321 | 2.512 | 0.726 | 0.975 | 0.032
WebGPT (13B) | 1.782 | 1.766 | 0.998 | 0.008 | 0.016 | 2.692 | 2.102 | 2.769 | 0.974 | 0.872 | 0.051
WebGLM (10B) | 1.980 | 2.226 | 0.983 | 0.002 | 0.002 | 2.829 | 2.810 | 2.757 | 0.943 | 0.998 | 0.021
time consumption. Its performance is far superior to that of the 13B model and close to that of the 175B model.
|
2306.07906#44
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 45 |
Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash, et al. 2023. How does chatgpt perform on the united states medical licensing examination? the implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9(1):e45312.
Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making science simple: Corpora for the lay summarisation of scientific literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10589–10604, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3592–3603, Online. Association for Computational Linguistics.
|
2306.07799#45
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 45 |
time consumption. Its performance is far superior to that of the 13B model and close to that of the 175B model.
WebGLM vs Other Systems. Finally, we compare our system with the results of WebGPT-13B, Perplexity.ai, and WebGPT-175B. Our system has achieved the highest performance in fluency, truthfulness, and redundancy. At the same time, we are close to WebGPT-175B in the correctness metric with a score of 2.81, which is far higher than that of Perplexity.ai and WebGPT-13B, indicating that our system can still achieve superior performance at a lower cost.
in Figure 2. We hold a 43% win rate, definitely beat Perplexity.ai with a 21% win rate and WebGPT-13B with an 18% win rate, and almost draw with WebGPT-175B with a 45% win rate.
5.3 Test on QA Benchmarks We randomly sample 400 questions from Natural Questions and Web Questions, and evaluate WebGLM and Perplexity.ai on them. The results in Table 3 show that WebGLM outperforms Perplexity.ai.
5.2 Turing Test To further compare our performance, we design a Turing test [21] to check the answers' quality.
|
2306.07906#45
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 46 |
Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, and Bing Qin. 2022. A distributional lens for multi-aspect controllable text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1023–1043, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Junxian He, Wojciech Kryscinski, Bryan McCann, Nazneen Rajani, and Caiming Xiong. 2022. CTRLsum: Towards generic controllable text summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5879–5915, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Francis Heylighen and Jean-Marc Dewaele. 1999. Formality of language: definition, measurement and behavioral determinants. Interner Bericht, Center "Leo Apostel", Vrije Universiteit Brüssel, 4.
|
2306.07799#46
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 46 |
5.2 Turing Test To further compare our performance, we design a Turing test [21] to check the answers' quality.
Setup. We randomly sampled 200 items from the 272 questions that WebGPT has displayed on their official web page. For each question, we shuffle the answers generated by WebGLM, WebGPT- 175B, WebGPT-13B, and Perplexity.ai, and remove citation marks from them for fairness. We next mix an answer written by humans into these answers and ask evaluators to rank the answers by their quality, such as correctness, informativeness, and truthfulness.
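The following is a minimal sketch of this blind-ranking setup; the "[1]"-style citation-mark pattern, the data layout, and the helper names are our assumptions rather than the authors' released code.

```python
import random
import re

def strip_citation_marks(answer: str) -> str:
    """Remove inline citation marks such as "[1]" or "[2][3]" for fairness (assumed format)."""
    return re.sub(r"\[\d+\]", "", answer).strip()

def build_blind_ranking_item(question: str, answers_by_source: dict) -> dict:
    """Shuffle answers from all sources so evaluators cannot tell who wrote what."""
    candidates = [
        {"source": source, "answer": strip_citation_marks(answer)}
        for source, answer in answers_by_source.items()
    ]
    random.shuffle(candidates)
    return {"question": question, "candidates": candidates}

# Hypothetical usage: one human-written answer is mixed in with the system outputs.
item = build_blind_ranking_item(
    "Why is the sky blue?",
    {
        "WebGLM": "Sunlight is scattered by air molecules [1] ...",
        "WebGPT-175B": "...",
        "WebGPT-13B": "...",
        "Perplexity.ai": "...",
        "Human": "...",
    },
)
```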
[Figure 6 heatmap omitted: pairwise win rates among Perplexity.ai, WebGPT-13B, WebGPT-175B, WebGLM, and Human answers.]
Figure 6: Win rates between systems. Each number denotes the rate at which answers from the source named in that row are judged better than answers from the source named in that column.
|
2306.07906#46
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 47 |
Zhiqiang Hu, Roy Ka-Wei Lee, and Charu C. Aggarwal. 2021. Syntax matters! syntax-controlled in text style transfer. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 566–575, Held Online. INCOMA Ltd.
Zhiting Hu and Li Erran Li. 2021. A causal lens for controllable text generation. Advances in Neural Information Processing Systems, 34:24941–24955.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is chatgpt a good translator? arXiv preprint arXiv:2301.08745.
Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2022. Deep learning for text style transfer: A survey. Computational Linguistics, 48(1):155–205.
Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. ArXiv, abs/1909.05858.
|
2306.07799#47
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 47 |
Figure 6: Win rates between systems. Each number denotes the rate at which answers from the source named in that row are judged better than answers from the source named in that column.
Result. For each pair of answers (A, B), if evaluators prefer A to B, we say A wins and B loses. First, we compare each pair of answers; the win rates are shown in Figure 6. We also calculate the win rate against humans for each system. The result is shown
Table 3: Open QA Performance on NaturalQuestions and WebQuestions. Perplexity.ai is evaluated on sampled subsets because the website prohibits crawling.
Model | Natural Questions | Web Questions
WebGLM | 60.8 | 63.5
Perplexity.ai (sample) | 57.3 | 57.5
GPT-3 175B | 29.9 | 41.5
|
2306.07906#47
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 48 |
J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, Naval Technical Training Command Millington TN Research Branch.
Sachin Kumar, Biswajit Paria, and Yulia Tsvetkov. 2022. Gradient-based constrained sampling from language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2251–2277, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLI-based models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163–177.
Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023. Chatgpt beyond english: Towards a comprehensive evaluation of large language models in multilingual learning. arXiv preprint arXiv:2304.05613.
|
2306.07799#48
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 48 |
Model | Natural Questions | Web Questions
WebGLM | 60.8 | 63.5
Perplexity.ai (sample) | 57.3 | 57.5
GPT-3 175B | 29.9 | 41.5
In addition, we conducted experiments on the full validation split of TriviaQA (same as WebGPT). Following the testing method employed by WebGPT, we first generated a long answer for each question using WebGLM. We then used Google Bigbird, fine-tuned on the TriviaQA training set3, to answer TriviaQA questions based on the output of WebGLM. To address potential test-train overlap issues mentioned in WebGPT, we also conducted TriviaQA tests on different train-test splits. The results are summarized in Table 4.
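A minimal sketch of this two-stage evaluation is given below. It assumes the footnoted BigBird checkpoint works with the generic Hugging Face question-answering pipeline, and that the long answer is produced by WebGLM beforehand; this is our reading, not the authors' exact evaluation code.

```python
from transformers import pipeline

# BigBird reader fine-tuned on TriviaQA (checkpoint from the footnote);
# compatibility with the generic QA pipeline is an assumption.
reader = pipeline("question-answering", model="google/bigbird-base-trivia-itc")

def extract_short_answer(question: str, webglm_long_answer: str) -> str:
    """Use the WebGLM-generated long answer as the reading context and
    extract a short TriviaQA-style answer span from it."""
    result = reader(question=question, context=webglm_long_answer)
    return result["answer"]
```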
5.4 Ablation Study In this section, we study the major improvements and strategies in WebGLM, including the bootstrapped dataset filtering, scorer training, LLM-augmented retriever and some other minor topics.
5.4.1 WebGLM-QA Filtering Since we build our training dataset based on LLM in-context bootstrapping, the dataset quality could be essential for WebGLM's success. We randomly sample 210 examples from these versions of our dataset to verify the filtering strategies they are based on, including 1) None, 2) Rouge-L filtered, and 3) Rouge-1 filtered.
|
2306.07906#48
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 49 |
Xiang Li, John Thickstun, Ishaan Gulrajani, Percy S Liang, and Tatsunori B Hashimoto. 2022. Diffusion-LM improves controllable text generation. Advances in Neural Information Processing Systems, 35:4328–4343.
Xiao Li, Guanyi Chen, Chenghua Lin, and Ruizhe Li. 2020. DGST: a dual-generator network for text style transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7131–7136, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
Xiaochen Liu, Yang Gao, Yu Bai, Jiawei Li, Yinan Hu, Heyan Huang, and Boxing Chen. 2022a. PSP: Pre-trained soft prompts for few-shot abstractive summarization. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6355–6368, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
|
2306.07799#49
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 49 |
We randomly shuffle all the samples and distribute them to evaluators, and then collect and calculate the average score of each metric. The sample results are shown in Table 5. We analyze this result from two perspectives. One is the absolute performance of our
3https://huggingface.co/google/bigbird-base-trivia-itc
Table 4: WebGLM, WebGPT and other comparison methods on TriviaQA. The setting follows WebGPT [24] Appendix G.
|
2306.07906#49
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 50 |
Yizhu Liu, Qi Jia, and Kenny Zhu. 2022b. Length control in abstractive summarization by pretraining information selection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6885–6895, Dublin, Ireland. Association for Computational Linguistics.
Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2022. Readability controllable biomedical document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4667–4680, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Eric Malmi, Aliaksei Severyn, and Sascha Rothe. 2020. Unsupervised text style transfer with padded masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8671–8680, Online. Association for Computational Linguistics.
Philip M McCarthy and Scott Jarvis. 2010. Mtld, vocd-d, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment. Behavior research methods, 42(2):381–392.
|
2306.07799#50
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 50 |
Table 4: WebGLM, WebGPT and other comparison methods on TriviaQA. The setting follows WebGPT [24] Appendix G.
Method | Total | Question overlap | No question overlap | Answer overlap | Answer overlap only | No overlap
Bigbird + WebGLM (Ours) | 70.80% | 86.40% | 67.10% | 78.70% | 73.60% | 49.30%
GPT-3 175B | 58.70% | 75.90% | 52.90% | 67.30% | 61.60% | 39.00%
GPT-3 175B + WebGPT 175B BC | 69.50% | 86.30% | 65.30% | 78.40% | 73.20% | 52.40%
UnitedQA-E | 68.90% | 89.30% | 62.70% | 78.60% | 70.60% | 44.30%
UnitedQA (hybrid model) | 70.50% | - | - | - | - | -
Table 5: Ablation study on different dataset filtering strategies in creating the bootstrapped generator.
|
2306.07906#50
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 51 |
Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 495–504, Minneapolis, Minnesota. Association for Computational Linguistics.
Nasim Nouri. 2022. Text style transfer via optimal transport. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2532–2541, Seattle, United States. Association for Computational Linguistics.
OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
|
2306.07799#51
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 51 |
Filtering Method | Rel. | Den. | Tru. | Tox.↓ | Soc. Bias↓ | Flu. | Cor. | Cit. Acc. | Tru. | Obj. | Red.↓
(Reference Evaluation: Rel., Den., Tru., Tox.↓, Soc. Bias↓; Answer Evaluation: Flu., Cor., Cit. Acc., Tru., Obj., Red.↓)
None | 1.711 | 1.619 | 0.991 | 0.011 | 0.011 | 2.872 | 2.636 | 2.370 | 2.810 | 0.805 | 0.134
Rouge-L | 1.833 | 1.728 | 0.994 | 0.022 | 0.010 | 2.731 | 2.680 | 2.573 | 2.896 | 0.841 | 0.181
Rouge-1 | 1.832 | 1.751 | 0.993 | 0.010 | 0.012 | 2.826 | 2.694 | 2.688 | 2.919 | 0.890 | 0.120
Table 6: Ablation study on different dataset filtering strategies, based on GLM-2B's post-training evaluation
Table 7: Performance of LLM-augmented Retriever (Ours). "N-NDCG" refers to Normalized NDCG.
|
2306.07906#51
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 52 |
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, and Dacheng Tao. 2023. Towards making the most of chatgpt for machine translation. Available at SSRN 4390455.
Dongqi Pu, Xudong Hong, Pin-Jie Lin, Ernie Chang, and Vera Demberg. 2022. Two-stage movie script summarization: An efficient method for low-resource long document summarization. In Proceedings of The Workshop on Automatic Summarization for Creative Writing, pages 57–66, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Dongqi Pu and Khalil Sima'an. 2022. Passing parser uncertainty to the transformer: Labeled dependency distributions for neural machine translation. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, pages 41–50, Ghent, Belgium. European Association for Machine Translation.
|
2306.07799#52
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 52 |
Table 7: Performance of LLM-augmented Retriever (Ours). "N-NDCG" refers to Normalized NDCG.
Filtering Method | Flu. | Cor. | Cit. Acc. | Obj. | Tru. | Red.↓
None | 2.610 | 2.738 | 2.655 | 0.961 | 0.961 | 0.063
Rouge-L | 2.604 | 2.742 | 2.727 | 0.952 | 0.975 | 0.034
Rouge-1 | 2.852 | 2.738 | 2.743 | 0.976 | 0.970 | 0.044
Metric (%) | TF-IDF | BM25 | Contriever | Ours
Accuracy | 46.85 | 40.33 | 18.54 | 69.36
Spearman | 9.92 | -20.94 | -1.58 | 62.26
NDCG | 82.54 | 76.28 | 81.16 | 91.99
N-NDCG | 46.05 | 26.77 | 41.75 | 75.29
final version of the dataset. The other is comparing the performance of our different versions of datasets.
|
2306.07906#52
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 53 |
Dongqi Pu, Yifan Wang, and Vera Demberg. 2023. Incorporating distributions of discourse structure for long document abstractive summarization. arXiv preprint arXiv:2305.16784.
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is chatgpt a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476.
Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, Roger Zimmermann, and Soujanya Poria. 2022. So different yet so alike! constrained unsupervised text style transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 416–431, Dublin, Ireland. Association for Computational Linguistics.
|
2306.07799#53
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 53 |
final version of the dataset. The other is comparing the performance of our different versions of datasets.
We find that our dataset exhibits high factual consistency and correctness, and the majority of our data are judged as perfectly correct. We have also noticed that information relevancy and density are considerably improved when we apply a filter method and when we change Rouge-L to Rouge-1. As for the answers, correctness improves greatly when we apply either of the two filters, and factual consistency improves greatly when we change the Rouge-L filter to Rouge-1. Besides, objectivity is also one of the most important criteria that we care about, and we find that a Rouge-1 filter is more likely to discard subjective answers than a Rouge-L filter. As a result, our experiments show that citation accuracy is closely related to reference quality and answer quality, so our filter method is effective.
We collect 22000 examples from WebGLM-QA, and for each question, we calculate the Rouge-1 precision score s of the corresponding answer a against each of the references r, and then label each reference–answer pair (r, a) with s. Finally, we obtain a training dataset containing 20000 examples and a test dataset containing 2000 examples.
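The labeling step can be sketched as follows; the use of the rouge_score package and the direction of the precision computation (which text is treated as target versus prediction) are our assumptions rather than the released pipeline.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"])

def label_reference_answer_pairs(question_id, answer, references):
    """Score each (reference, answer) pair with Rouge-1 precision and use it as the label."""
    labeled = []
    for ref in references:
        # Precision of the answer's unigrams with respect to the reference
        # (target/prediction roles are an assumption).
        score = scorer.score(ref, answer)["rouge1"].precision
        labeled.append({"question_id": question_id, "reference": ref,
                        "answer": answer, "label": score})
    return labeled
```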
|
2306.07906#53
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 54 |
Linguistics (Volume 1: Long Papers), pages 416–431, Dublin, Ireland. Association for Computational Linguistics.
Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129–140, New Orleans, Louisiana. Association for Computational Linguistics.
Emily Reif, Daphne Ippolito, Ann Yuan, Andy Coenen, Chris Callison-Burch, and Jason Wei. 2022. A recipe for arbitrary text style transfer with large language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 837–848, Dublin, Ireland. Association for Computational Linguistics.
Jürgen Rudolph, Samson Tan, and Shannon Tan. 2023. Chatgpt: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1).
|
2306.07799#54
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 54 |
For all answers to the same question, we compare the order predicted by the retrieval methods with the answer-relevancy order. The results are shown in Table 7. We notice that before the LLM task augmentation, Contriever performs even worse than traditional lexical-based approaches. After augmenting knowledge from GPT-3's reference adoption labeling, we find that ours, which achieves a 69.36 pair-wise choosing accuracy and a 62.26 Spearman index, performs best. The evidence strongly suggests that LLM augmentation is vital when we use smaller pre-trained dense retrievers in practice.
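A minimal sketch of how these ranking metrics can be computed for one question is shown below; it covers pairwise choosing accuracy, Spearman correlation, and NDCG (the exact N-NDCG normalization in Table 7 is not reproduced, and the example numbers are illustrative only).

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
from sklearn.metrics import ndcg_score

def pairwise_accuracy(predicted, relevance):
    """Fraction of reference pairs whose predicted order matches the relevance order."""
    pairs = list(combinations(range(len(predicted)), 2))
    correct = sum(
        (predicted[i] - predicted[j]) * (relevance[i] - relevance[j]) > 0
        for i, j in pairs
    )
    return correct / len(pairs)

# Illustrative retriever scores and Rouge-1-based adoption labels for one question.
predicted = np.array([0.9, 0.2, 0.5, 0.7])
relevance = np.array([0.8, 0.1, 0.6, 0.4])

acc = pairwise_accuracy(predicted, relevance)
rho, _ = spearmanr(predicted, relevance)
ndcg = ndcg_score(relevance[None, :], predicted[None, :])
print(f"accuracy={acc:.2f}, spearman={rho:.2f}, ndcg={ndcg:.2f}")
```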
Besides, we train the GLM-2B models on each dataset and evaluate them with our designed metrics to see the impact of these datasets on our model's performance. We show the results in Table 6. We find that the answers of the three models show little difference in the correctness metric. However, the model trained on the Rouge-1-filtered data performs better in the fluency, citation accuracy, and objectivity metrics. This result further proves the advantages of the Rouge-1 dataset. Therefore, we decide to train our 10B model on the Rouge-1 dataset.
|
2306.07906#54
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 55 |
Ritesh Sarkhel, Moniba Keymanesh, Arnab Nandi, and Srinivasan Parthasarathy. 2020. Interpretable multi-headed attention for abstractive summarization at controllable lengths. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6871–6882, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083, Vancouver, Canada. Association for Computational Linguistics.
Chenhui Shen, Liying Cheng, Lidong Bing, Yang You, and Luo Si. 2022a. SentBS: Sentence-level beam search for controllable summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10256–10265, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
2306.07799#55
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 55 |
5.4.2 LLM-augmented Retriever In terms of the usefulness of references, we have compared our method with traditional methods such as BM25, TF-IDF, and the original version of Contriever.
5.4.3 Human Preference-aware Scorer In this section we compare several different scorer training strategies and datasets. We discover that proper task formulation and a larger, more diverse dataset yield better results.
Baseline and data preprocessing. We first train RoBERTa-large under the classification and the regression task formulations, and the 6-billion-parameter GLM, on the ELI5 training set (with thumb-ups) as our baselines. In the classification task, we collect from ELI5 all items whose count of answers is not less than 10. For each collected question, we label the top-5-voted answers as positive, and randomly pick 5 answers from other questions as negative examples.
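For illustration, a minimal sketch of how such classification pairs could be assembled from ELI5-style records (the dictionary fields `question`, `answers`, `text`, and `upvotes` are assumed names for this sketch, not the actual data schema):

```python
import random

def build_classification_examples(questions, num_negatives=5, min_answers=10):
    """Label the top-voted answers as positive and answers to other questions as negative."""
    examples = []
    for q in questions:
        if len(q["answers"]) < min_answers:
            continue
        # Top-5 answers by up-votes become positive examples for this question.
        top5 = sorted(q["answers"], key=lambda a: a["upvotes"], reverse=True)[:5]
        examples.extend((q["question"], a["text"], 1) for a in top5)
        # Randomly sampled answers to other questions become negative examples.
        others = [o for o in questions if o is not q]
        for _ in range(num_negatives):
            o = random.choice(others)
            examples.append((q["question"], random.choice(o["answers"])["text"], 0))
    return examples
```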
[Figure 7 plot: average score assigned by Classification (RoBERTa), Regression (RoBERTa), RM (ELI5), and RM (WebGLM) against the like-ranked position (0% to 100%) of answers in the ELI5 test set.]
|
2306.07906#55
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 56 |
Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, and Luo Si. 2022b. MReD: A meta-review dataset for structure-controllable text generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2521–2535, Dublin, Ireland. Association for Computational Linguistics.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580.
Dominik Sobania, Martin Briesch, Carol Hanna, and Justyna Petke. 2023. An analysis of the automatic bug fixing performance of chatgpt. arXiv preprint arXiv:2301.08653.
Ewoenam Kwaku Tokpo and Toon Calders. 2022. Text style transfer for bias mitigation using masked language modeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 163–171, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.
|
2306.07799#56
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 56 |
Figure 7: Average score of answers in the ELI5 test set, sorted by likes. The best answer is around 0% and the worst answer is around 100%.
In the regression task, we collect all items whose count of answers is not less than 5 from ELI5. For each collected question, we complete the following steps: (1) for each answer to this question, supposing its corresponding up-vote count is u, we first label this answer as log2(u + 1). (2) Then, we scale the labels of all answers to this question to [0, 1]. (3) Let x be the sum of the answers' labels; we randomly pick floor(x) answers from other questions as negative examples with label -1. In order to obtain a large train set (which has been suggested to be very important in [33]), we adopt a relatively loose screening method, which selects questions with more than 5 answers and answers of no less than 100 words in length. Our large train set includes 28.2k questions and 191.6k pairs. We use the ELI5 test set with thumb-ups for our final evaluations.
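A small sketch of the label construction described above (field names are assumptions, and the min-max scaling to [0, 1] is one possible choice, since the scaling method is not spelled out):

```python
import math
import random

def build_regression_examples(question, other_answers):
    """Turn up-vote counts into [0, 1] targets and add floor(sum)-many negatives labeled -1."""
    raw = [math.log2(a["upvotes"] + 1) for a in question["answers"]]
    lo, hi = min(raw), max(raw)
    # Assumption: min-max scaling of the per-question labels to [0, 1].
    scaled = [(r - lo) / (hi - lo) if hi > lo else 1.0 for r in raw]
    examples = [(question["question"], a["text"], s)
                for a, s in zip(question["answers"], scaled)]
    # The floor of the label sum decides how many off-topic negatives (label -1) to add.
    for _ in range(math.floor(sum(scaled))):
        neg = random.choice(other_answers)
        examples.append((question["question"], neg["text"], -1.0))
    return examples
```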
|
2306.07906#56
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 57 |
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Jiaan Wang, Yunlong Liang, Fandong Meng, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2023a. Cross-lingual summarization via chatgpt. arXiv preprint arXiv:2302.14229.
Jiaan Wang, Yunlong Liang, Fandong Meng, Beiqi Zou, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2023b. Zero-shot cross-lingual summarization via large language models.
Colin G West. 2023. Ai and the fci: Can chatgpt project an understanding of introductory physics? arXiv preprint arXiv:2303.01067.
|
2306.07799#57
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 57 |
Metrics. We select three metrics to measure the ability of the reward model to distinguish responses of different quality, namely accuracy, Spearman coefficient, and NDCG (Normalized Discounted Cumulative Gain). Accuracy refers to the accuracy of selecting better answers in pairs. Spearman and NDCG measure the sorting ability of the model.
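As a rough illustration, the three metrics could be computed per question as follows (a sketch using scipy and scikit-learn; the paper's exact evaluation code may differ):

```python
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import ndcg_score

def ranking_metrics(predicted_scores, reference_labels):
    """Pairwise accuracy, Spearman correlation, and NDCG for one question's scored answers."""
    pred = np.asarray(predicted_scores, dtype=float)
    gold = np.asarray(reference_labels, dtype=float)
    # Pairwise accuracy: how often the scorer prefers the answer with the higher reference label.
    pairs = [(i, j) for i, j in combinations(range(len(gold)), 2) if gold[i] != gold[j]]
    correct = sum((pred[i] > pred[j]) == (gold[i] > gold[j]) for i, j in pairs)
    accuracy = correct / len(pairs) if pairs else 0.0
    # Spearman measures rank correlation; NDCG measures ranking quality with graded relevance.
    spearman = spearmanr(pred, gold).correlation
    ndcg = ndcg_score(gold.reshape(1, -1), pred.reshape(1, -1))
    return accuracy, spearman, ndcg
```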
The ranking evaluation of different models is shown in Table 9. We find that the WebGLM human preference-aware scorer performs best on accuracy and the Spearman coefficient. Under the same amount of training tokens, the performance of the reward model is slightly worse than that of RoBERTa classification and RoBERTa regression, but after increasing the amount of training, the performance of the reward model increases significantly.
Figure 7 shows the average reward of the answers at different positions in the sequence sorted by likes in the ELI5 test set. The best answer is around 0% and the worst answer is around 100%. We find that the curve of the WebGLM Human Preference-aware Scorer is more discriminative than other models, and the rewards of the best answer are higher than that of others.
5.4.4 Ablation Study on Each Component We added some experiments to conduct ablation studies on each component. We compared the three sub-modules of the system: Retriever, Generator, and Scorer.
|
2306.07906#57
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 58 |
Colin G West. 2023. Ai and the fci: Can chatgpt project an understanding of introductory physics? arXiv preprint arXiv:2303.01067.
Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, and Caiming Xiong. 2021. Controllable abstractive dialogue summarization with sketch supervision. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5108–5122, Online. Association for Computational Linguistics.
Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Mingfeng Xue, Boxing Chen, and Jun Xie. 2022. Tailor: A prompt-based approach to attribute-based controlled text generation. arXiv preprint arXiv:2204.13362.
Xianjun Yang, Yan Li, Xinlu Zhang, Haifeng Chen, and Wei Cheng. 2023a. Exploring the limits of chatgpt for query or aspect-based text summarization. arXiv preprint arXiv:2302.08081.
|
2306.07799#58
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 58 |
In the Retriever module, we compared performance under three settings: WebGPT-175B retrieval, WebGLM retrieval, and no retrieval. As Table 8 shows, the performance with WebGLM retrieval is similar to that of WebGPT-175B and significantly better than without retrieval.
Regarding the Generator module, we compared the response quality of WebGLM and GPT-3 under the WebGLM retrieval setting. We found that WebGLM performed slightly better than GPT-3 in fluency, correctness, accuracy, citation accuracy, objectivity, and truthfulness.
In terms of the Scorer, we compared the response quality of WebGLM with and without the reward model. With WebGLM-10B top-p sampling followed by reward-model scoring, the human evaluation results show that the answers scored highest by the reward model exceed the original results in fluency, correctness, citation accuracy, truthfulness, and redundancy. This shows the importance of the reward-model scoring mechanism to model performance.
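Conceptually, the scorer is used for best-of-n selection: sample several candidate answers and keep the one the reward model ranks highest. A hedged sketch, where `generate` and `reward_model` stand in for WebGLM-10B top-p sampling and the trained scorer rather than actual APIs:

```python
def best_of_n_answer(question, references, generate, reward_model, n=4):
    """Sample n candidate answers and return the one the reward model scores highest."""
    # `generate` and `reward_model` are placeholder callables (assumptions for this sketch).
    candidates = [generate(question, references, top_p=0.9) for _ in range(n)]
    ranked = sorted(candidates, key=lambda c: reward_model(question, c), reverse=True)
    return ranked[0]  # the highest-scored candidate is returned to the user
```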
|
2306.07906#58
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 59 |
Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. 2023b. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381.
Hanqing Zhang and Dawei Song. 2022. DisCup: Discriminator cooperative unlikelihood prompt-tuning for controllable text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3392–3406, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, and Dawei Song. 2022. A survey of controllable text generation using transformer-based pre-trained language models. arXiv preprint arXiv:2201.05337.
|
2306.07799#59
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 59 |
6 Conclusion We build WebGLM, an LLM-based question-answering system with a web retrieval method. We propose a fast and cost-effective method to retrieve valuable information from the Internet. We leverage GPT-3's in-context learning ability to build an LLM-bootstrapped quoted and long-form QA dataset, which is used to train our model. Further, we train a human preference-aware scorer and use it to score responses generated by our model. For each question, the scorer can select the highest-scored response from the candidates, thus obtaining a final answer humans prefer the most. We conduct extensive experiments, including both human evaluation and the Turing test, to demonstrate the competitive performance of WebGLM with some of the pioneering web-enhanced question answering systems like Perplexity.ai and WebGPT.
ACKNOWLEDGEMENT This work is supported by Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grant 2022ZD0118600 and 2022ZD0118601, NSF of China for Distinguished Young Scholars (No. 61825602), NSF of China (No. 62276148), and a research fund from Zhipu.AI.
|
2306.07906#59
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 60 |
Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, et al. 2023. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. arXiv preprint arXiv:2302.09419.
Wanrong Zhu, Xin Wang, Tsu-Jui Fu, An Yan, Pradyumna Narayana, Kazoo Sone, Sugato Basu, and William Yang Wang. 2021. Multimodal text style transfer for outdoor vision-and-language navigation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1207–1221, Online. Association for Computational Linguistics.
# A Appendix: One-shot Guidance
B Appendix: Absolute Differences in POS and Dependency Label Distributions
|
2306.07799#60
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 60 |
References [1] Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. 65–72.
[2] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1533–1544. [3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901.
[4] Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799 (2020).
|
2306.07906#60
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 61 |
# A Appendix: One-shot Guidance
B Appendix: Absolute Differences in POS and Dependency Label Distributions
Candidate Document: {Original Document}, Layman Summary: {Human Layman Summary}. Please learn the way of summarization from the previous example, and give me a layman-style summary of X 49.23† 13.26† 10.45† Human Answer 8.93 Document: {Original Document}, Expert Summary: {Human Expert Summary}. Please learn the way of summarization from the previous example, and give me an expert-style summary of X 28.88† 15.92† 11.82 11.78 Human Answer
Table 7: Reading difficulty of one-shot guidance. † indicates statistical significance (p<0.05) against corresponding human answers via paired t-test.
Candidate Formal: {Formal Sentence}, Informal: {Informal Sentence}. Please learn the way of formality conversion from the previous example, and give me an informal version of X 50.67† 49.87 Human Answer Informal: {Informal Sentence}, Formal: {Formal Sentence}. Please learn the way of formality conversion from the previous example, and give me a formal version of X Human Answer
Table 8: Text formality of one-shot guidance on GYAFC-FR branch. † indicates statistical significance (p<0.05) against corresponding human answers via paired t-test.
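For illustration, a small helper that assembles the kind of one-shot formality-transfer prompt shown above (the template strings mirror the table; the exact prompts sent to ChatGPT are the authors'):

```python
def one_shot_formality_prompt(example_informal, example_formal, target_sentence, to_formal=True):
    """Build a one-shot prompt that shows one informal/formal pair before the target sentence."""
    if to_formal:
        return (f"Informal: {example_informal}, Formal: {example_formal}. "
                "Please learn the way of formality conversion from the previous example, "
                f"and give me a formal version of {target_sentence}")
    return (f"Formal: {example_formal}, Informal: {example_informal}. "
            "Please learn the way of formality conversion from the previous example, "
            f"and give me an informal version of {target_sentence}")
```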
|
2306.07799#61
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 61 |
[5] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 (2022).
[6] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 320–335.
Table 8: Ablation study on different sub-modules (Scorer, Retriever, and Generator) in WebGLM.
|
2306.07906#61
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 62 |
Figure 5: Absolute differences in dependency labels distribution of ChatGPT and human-generated formal style sentences: GYAFC - EM
Figure 6: Absolute differences in dependency labels distribution of ChatGPT and human-generated informal style sentences: GYAFC - EM
|
2306.07799#62
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 62 |
Table columns: Flu. / Cor. / Cit. Acc. / Obj. / Tru. / Red.↓
Scorer Ablation
No Scorer: 2.797 / 2.757 / 2.723 / 0.961 / 0.970 / 0.039
Human Preference-aware Scorer (Ours): 2.829 / 2.810 / 2.757 / 0.943 / 0.998 / 0.021
Retriever Ablation (w.o. RM)
No Retriever: 2.364 / 1.982 / - / - / 0.645 / 0.091
WebGPT Retriever: 2.750 / 2.884 / 2.808 / 0.981 / 0.980 / 0.038
Contriever: 2.761 / 2.732 / 2.721 / 0.963 / 0.930 / 0.043
LLM-augmented Retriever (Ours): 2.797 / 2.757 / 2.723 / 0.961 / 0.970 / 0.039
Generator Ablation (w.o. RM)
GPT-3 (text-davinci-003, zero-shot): 2.751 / 2.752 / 2.607 / 0.927 / 0.966 / 0.034
Bootstrapped Generator (Ours): 2.797 / 2.757 / 2.723 / 0.961 / 0.970 / 0.039
WebGLM (Ours): 2.829 / 2.810 / 2.757 / 0.943 / 0.998 / 0.021
|
2306.07906#62
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 63 |
Figure 6: Absolute differences in dependency labels distribution of ChatGPT and human-generated informal style sentences: GYAFC - EM
Figure 7: Absolute differences in dependency labels distribution of ChatGPT and human-generated formal sentences: GYAFC - FR
Figure 8: Absolute differences in dependency labels distribution of ChatGPT and human-generated informal sentences: GYAFC - FR
|
2306.07799#63
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 63 |
Table 9: Different scorers' performance on the ELI5 test set.
Columns: Accuracy / Spearman / N-NDCG
Classification (RoBERTa): 0.552 / 0.129 / 0.319
Regression (RoBERTa): 0.569 / 0.164 / 0.352
RM (ELI5): 0.568 / 0.197 / 0.406
RM (WebGLM): 0.596 / 0.241 / 0.367
[7] Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long Form Question Answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 3558–3567. [8] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International Conference on Machine Learning. PMLR, 3929–3938.
[9] Or Honovich, Uri Shaham, Samuel R Bowman, and Omer Levy. 2022. Instruction induction: From few examples to natural language task descriptions. arXiv preprint arXiv:2205.10782 (2022).
|
2306.07906#63
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 64 |
[10] Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised Dense Information Retrieval with Contrastive Learning. Transactions on Machine Learning Research (2022).
[11] Gautier Izacard and Édouard Grave. 2021. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. 874–880.
[12] Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299 (2022).
|
2306.07906#64
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 65 |
Figure 9: Absolute differences in POS tags distribution of ChatGPT and human-generated formal sentences: GYAFC - FR
# C Appendix: Dependency Arc Entailment
Figure 10: Absolute differences in POS tags distribution of ChatGPT and human-generated informal sentences: GYAFC - FR
Figure 11: Case study of dependency tree visualization (Reference)
|
2306.07799#65
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 65 |
[13] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. Comput. Surveys 55, 12 (2023), 1–38.
[14] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 6769–6781.
[15] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.
|
2306.07906#65
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 66 |
Figure 12: Case study of dependency tree visualization (Human)
Figure 13: Case study of dependency tree visualization (ChatGPT)
Figure 14: Dependency arc entailment: GYAFC - FR. Data points > 0.95 → Accurate. To clarify discrepancies, cutoff point = 0.95.
# D Appendix: Named Entity Hallucination
|
2306.07799#66
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 66 |
[16] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems 33 (2020), 9459–9474. [17] Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out. 74–81.
[18] Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B Dolan, Lawrence Carin, and Weizhu Chen. 2022. What Makes Good In-Context Examples for GPT-3?. In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. 100–114. [19] Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. 2021. Self-supervised learning: Generative or contrastive. IEEE Transactions on Knowledge and Data Engineering 35, 1 (2021), 857–876.
|
2306.07906#66
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07799
| 67 |
# D Appendix: Named Entity Hallucination
Dataset GYAFC-FR: Candidate Human Informal Human Formal ChatGPT Informal ChatGPT Formal ChatGPT I2F2I ChatGPT F2I2F Precision Recall F1 0.989 0.988 0.986 0.974 0.982 0.974 0.988 0.988 0.989 0.988 0.985 0.986 0.974 0.974 0.982 0.982 0.973 0.973 GYAFC-EM: Human Informal Human Formal ChatGPT Informal ChatGPT Formal ChatGPT I2F2I ChatGPT F2I2F 0.979 0.977 0.975 0.950 0.970 0.945 0.987 0.983 0.989 0.982 0.974 0.974 0.952 0.951 0.969 0.970 0.946 0.945
Table 9: Named entity hallucination - GYAFC
|
2306.07799#67
|
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
|
Large-scale language models, like ChatGPT, have garnered significant media
attention and stunned the public with their remarkable capacity for generating
coherent text from short natural language prompts. In this paper, we aim to
conduct a systematic inspection of ChatGPT's performance in two controllable
generation tasks, with respect to ChatGPT's ability to adapt its output to
different target audiences (expert vs. layman) and writing styles (formal vs.
informal). Additionally, we evaluate the faithfulness of the generated text,
and compare the model's performance with human-authored texts. Our findings
indicate that the stylistic variations produced by humans are considerably
larger than those demonstrated by ChatGPT, and the generated texts diverge from
human samples in several characteristics, such as the distribution of word
types. Moreover, we observe that ChatGPT sometimes incorporates factual errors
or hallucinations when adapting the text to suit a specific style.
|
http://arxiv.org/pdf/2306.07799
|
Dongqi Pu, Vera Demberg
|
cs.CL, cs.AI, cs.LG
|
ACL-SRW 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2302.14229"
},
{
"id": "2302.04023"
},
{
"id": "2302.06476"
},
{
"id": "2303.17580"
},
{
"id": "2201.05337"
},
{
"id": "2302.13007"
},
{
"id": "2303.11381"
},
{
"id": "2304.05613"
},
{
"id": "2302.09419"
},
{
"id": "2301.08745"
},
{
"id": "2204.13362"
},
{
"id": "2302.08081"
},
{
"id": "2301.08653"
},
{
"id": "2305.16784"
},
{
"id": "1912.02164"
},
{
"id": "2303.01067"
}
] |
2306.07906
| 67 |
[20] Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. arXiv preprint arXiv:2103.10385 (2021).
[21] Michael L Mauldin. 1994. Chatterbots, tinymuds, and the turing test: Entering the loebner prize competition. In AAAI, Vol. 94. 16–21.
[22] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Noisy Channel Language Model Prompting for Few-Shot Text Classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 5316–5330.
|
2306.07906#67
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 68 |
[23] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? arXiv preprint arXiv:2202.12837 (2022). [24] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332 (2021).
[25] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. choice 2640 (2016), 660.
|
2306.07906#68
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 69 |
[26] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744.
[27] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics. 311–318.
[28] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2383–2392.
[29] Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How Much Knowledge Can You Pack Into the Parameters of a Language Model?. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 5418–5426.
|
2306.07906#69
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 70 |
[30] Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning To Retrieve Prompts for In-Context Learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2655–2671.
[31] Ananya B Sai, Akash Kumar Mohankumar, and Mitesh M Khapra. 2022. A survey of evaluation metrics used for NLG systems. ACM Computing Surveys (CSUR) 55, 2 (2022), 1–39.
[32] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100 (2022).
[33] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems 33 (2020), 3008–3021.
|
2306.07906#70
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 71 |
[34] Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, et al. 2022. Selective annotation makes language models better few-shot learners. arXiv preprint arXiv:2209.01975 (2022).
[35] Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2021. An Explanation of In-context Learning as Implicit Bayesian Inference. In International Conference on Learning Representations.
[36] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414 (2022).
|
2306.07906#71
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 72 |
[37] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068 (2022).
[38] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating Text Generation with BERT. In International Con- ference on Learning Representations.
[39] Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning. PMLR, 12697–12706.
# A Additional Experimental Results
|
2306.07906#72
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 73 |
# A Additional Experimental Results
A.1 WebGLM vs Others in WebGPT Reference We compared the generation results of WebGLM-Rouge1, WebGPT-175B, and GPT-3 on the WebGPT-175B references. For GPT-3, we also use the method of automatically constructing datasets to generate responses for the WebGPT samples to compare the effect of the WebGLM system. Specifically, we use the references of WebGPT to let GPT-3 do in-context learning to answer questions according to the search results. We use human evaluation to compare the quality of the three answers. The experimental results are shown in Table 10. Although our model size is more than ten times smaller than GPT-3 and WebGPT-175B, we can effectively compensate for the impact of the model size and achieve competitive performance in the retrieval paradigm. Our model matches WebGPT-175B and GPT-3 on correctness, citation accuracy, objectivity, and truthfulness metrics and outperforms them on fluency and redundancy.
# Table 10: Ablation study on different Generators based on WebGPT references
|
2306.07906#73
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 74 |
# Table 10: Ablation study on different Generators based on WebGPT references
Generator            Flu.    Cor.    Cit. Acc.   Obj.    Tru.    Red.
GPT-3 In-Context     2.801   2.883   2.726       0.966   0.975   0.024
WebGPT-175B          2.457   2.889   2.837       0.990   0.975   0.087
WebGLM-10B-Rouge1    2.750   2.884   2.808       0.981   0.980   0.038
B Detailed Efficiency Analysis At the retrieval stage, we only search once, then take the first few result links to fetch the web pages in parallel. We then extract all paragraphs and rank these paragraphs by Contriever, and finally take the top 5 paragraphs as references. Let t_s, t_f, t_e, and t_r denote the time consumed in these four steps (search, fetch, extract, and rank), so the total time we consume is t_s + t_f + t_e + t_r.
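For illustration only, the parallel fetch described above might look like the following sketch, assuming Python's `concurrent.futures` and the `requests` library; the URL list and the downstream extraction/ranking are placeholders rather than WebGLM's actual implementation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests


def fetch(url: str, timeout: float = 5.0) -> str:
    """Download one search-result page; failures simply yield an empty document."""
    try:
        return requests.get(url, timeout=timeout).text
    except requests.RequestException:
        return ""


def timed_fetch_stage(urls: list[str]) -> tuple[list[str], float]:
    """Fetch all result links in parallel and report the wall-clock time (t_f)."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=max(len(urls), 1)) as pool:
        pages = list(pool.map(fetch, urls))
    return pages, time.time() - start
```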
|
2306.07906#74
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 75 |
WebGPT simulates the operations in a virtual browser environment while obtaining references. For the 272 questions they showed, we count the types of actions and the average number of generated tokens as shown in Tables 11 and 12. Then we calculate the average time it takes to browse. Assuming that the total time a model M (M is either WebGPT-175B or WebGPT-13B) is expected to spend generating commands during the browsing process of each question is t_g(M), the time M consumes, T(M), satisfies the following equations:

T(WebGPT-175B) = t_g(WebGPT-175B) + t_s · 3.82 + t_f · 6.96   (2)

T(WebGPT-13B) = t_g(WebGPT-13B) + t_s · 4.05 + t_f · 7.56   (3)
We test the efficiency of GPT-3. With a 500-token prompt, the 175B model generates about 20 tokens per second, and the 13B model generates 100 tokens per second, meaning that:
|
2306.07906#75
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 76 |
t_g(WebGPT-175B) = 580.08 tokens/query ÷ 20 tokens/second ≈ 29.0 seconds   (4)

t_g(WebGPT-13B) = 580.89 tokens/query ÷ 100 tokens/second ≈ 5.8 seconds   (5)

In practice, t_s, t_f, t_e, and t_r are about 1.81, 2.38, 0.29, and 0.89 seconds respectively, so we consume 5.36 seconds for one query on average. Nevertheless, assuming the same network circumstances, the time consumption of the WebGPT models can be calculated by Equations (2) and (3).
T(WebGPT-175B) = 52.48 seconds   (6)

T(WebGPT-13B) = 31.12 seconds   (7)
Therefore, WebGPT-175B costs 52.48 seconds, and WebGPT-13B costs 31.12 seconds. Our efficiency can be about 10 times that of WebGPT-175B and 6 times that of WebGPT-13B.
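As a sanity check, the arithmetic behind Equations (2)–(7) can be reproduced in a few lines of Python; the per-stage timings and per-query action statistics are those reported above and in Tables 11 and 12, so small rounding differences from the quoted 52.48 s and 31.12 s figures are expected.

```python
# Per-stage WebGLM timings in seconds (search, fetch, extract, rank), as reported above.
t_s, t_f, t_e, t_r = 1.81, 2.38, 0.29, 0.89

webglm_total = t_s + t_f + t_e + t_r            # ~5.37 s per query


def webgpt_total(tokens_per_query, tokens_per_second, searches, clicks):
    """Command-generation time plus one search per `search` action and one fetch per `click_link`."""
    t_g = tokens_per_query / tokens_per_second
    return t_g + t_s * searches + t_f * clicks


t_175b = webgpt_total(580.08, 20, 3.82, 6.96)    # ~52.5 s, cf. Equation (6)
t_13b = webgpt_total(580.89, 100, 4.05, 7.56)    # ~31.1 s, cf. Equation (7)

print(f"WebGLM:      {webglm_total:.2f} s/query")
print(f"WebGPT-175B: {t_175b:.2f} s/query ({t_175b / webglm_total:.1f}x slower)")
print(f"WebGPT-13B:  {t_13b:.2f} s/query ({t_13b / webglm_total:.1f}x slower)")
```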
Table 11: Efficiency statistics for browsing stage in WebGPT-175B. Average count per query, tokens per action, and tokens per query (the product of the first two terms) are displayed in this table.
|
2306.07906#76
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 77 |
action          count/query   tokens/action   tokens/query
search          3.82          9.80            37.46
click_link      6.96          5.00            34.82
quote           3.49          124.49          434.80
back            5.35          1.00            5.35
scroll_down     11.41         4.00            45.63
scroll_up       1.62          4.00            6.49
top             0.49          1.00            0.49
end             0.43          3.00            1.29
find_in_page    0.13          5.11            0.68
invalid         0.12          111.09          13.07

tokens: 580.08; generating speed: 20 tokens/second; action time: 29 s; total time: 52 s
Table 12: Efficiency statistics for browsing stage in WebGPT-13B. Average count per query, tokens per action, and tokens per query (the product of the first two terms) are displayed in this table.
|
2306.07906#77
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 78 |
action          count/query   tokens/action   tokens/query
search          4.05          9.65            39.08
click_link      7.56          5.00            37.81
quote           3.44          125.85          433.08
back            5.90          1.00            5.90
scroll_down     10.30         4.00            41.21
scroll_up       2.01          4.00            8.04
top             0.32          1.00            0.32
end             0.44          3.00            1.33
find_in_page    0.21          5.04            1.06
invalid         0.10          136.58          13.06

tokens: 580.89; generating speed: 100 tokens/second; action time: 5.8 s; total time: 31 s
C Choice of Prompts and Instructions Firstly, we attempt the zero-shot approach for bootstrapping data. To produce data with appropriate citation marks, we require specific instructions. We experiment with several methods; however, they are all limited in their effectiveness.
Use a mark for each helpful reference you cited, such as [1]. Limitation: bootstrapped data contain mixed usage of [1][2] and [1, 2].
Use a mark for each helpful reference you cited, such as [1]. If there are multiple citations at one position, please use a format like [1][2][3]. Limitation: bootstrapped data contain citations of useless references.
|
2306.07906#78
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 79 |
Use a mark for each helpful reference you cited, such as [1]. If there are multiple citations at one position, please use a format like [1][2][3]. If a reference is useless, do not cite it. Limitation: useless references are still cited. This method does not work.
We then select few-shot context to bootstrap data. If we provide too many references or in-context examples, it is easy to exceed the token count limit. Therefore, we choose to use a 1-shot example and 5 references. We also include some useless references in the example, which are not cited in the answer.
After that, we conduct experiments on prompts and demonstrate that placing the question after the references is the most effective approach.
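To make the layout concrete, a bootstrapping prompt of the kind described here could be assembled as below; the instruction line is the concise one quoted later in this section, while the example text, reference formatting, and function name are illustrative assumptions rather than the exact WebGLM-QA template.

```python
def build_bootstrap_prompt(example: str, references: list[str], question: str) -> str:
    """Assemble a 1-shot prompt: instruction, worked example, then 5 numbered references before the question."""
    instruction = "Read the references provided and answer the corresponding question."
    ref_block = "\n".join(f"[{i + 1}] {ref}" for i, ref in enumerate(references[:5]))
    return f"{instruction}\n\n{example}\n\n{ref_block}\n\nQuestion: {question}\nAnswer:"
```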
Regarding instruction induction for in-context learning, we experiment with the previously mentioned examples as well as some new ones, such as:
Answer the question based on the following references with citations. Use a mark for each helpful reference you cited, such as [1]. If there are multiple citations at one position, please use a format like [1][2][3]. If a reference is useless, do not cite it.
|
2306.07906#79
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 80 |
I will provide you with some references. Based on the references, please answer my question. Pay attention that you should be objective, and you should not use your knowledge. Use a mark for each helpful reference you cited, such as [1]. If there are multiple citations at one position, please use a format like [1][2][3]. If a reference is useless, do not cite it.
However, these instructions are too verbose, and in the presence of examples, the model's performance is not significantly impacted by the instructions. Therefore, we adopt a more natural approach to generate instructions[9] to produce a natural instruction that is interpretable by the model.
Finally, we use a very concise instruction: Read the references provided and answer the corresponding question.
In addition, we compared models trained with different prompt strategies, and the results are shown in Table 13. From the "Correctness" column, we can see the significant difference that the order of references and question in the prompt makes.
D Dataset Examples An example of WebGLM-QA is shown in Table 14.
Table 13: The performance with training data bootstrapped by difference prompt strategies.
|
2306.07906#80
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 81 |
Table 13: The performance with training data bootstrapped by difference prompt strategies.
Prompt                      Flu.    Cor.    Cit. Acc.   Obj.    Tru.    Red.
WebGLM Prompt               2.797   2.757   2.723       0.961   0.970   0.039
Question before Reference   2.633   2.518   2.700       0.933   0.970   0.058
3-Reference                 2.658   2.412   2.819       0.933   0.930   0.065
E Retrieval Example An example of retrieved references from each system is shown in Table 15 and Table 16.
F Answer Examples Some examples consisting of only answers are shown in Table 17 and Table 18. We remove citation marks for all the answers to evaluate answers only. In this example, WebGLM's answer is consistent with the question, while the answers of two WebGPT models are beside the point in the conclusion.
G Reward Model Example WebGLM performs better after implementing the reward model. An example is shown in Table 19.
H Criteria Details The criteria details of human evaluation are shown in Table 20 and Table 21.
# Figure 8: WebGLM web demo page
|
2306.07906#81
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 82 |
H Criteria Details The criteria details of human evaluation are shown in Table 20 and Table 21.
# Figure 8: WebGLM web demo page
[Screenshot of the WebGLM web demo: for the question "How many calories are in a banana?", the system answers with cited figures from the USDA FoodData Central database and shows an expandable list of references.]
|
2306.07906#82
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 83 |
I Demo We provide a web demo as shown in Figure 8. Meanwhile, some real examples are shown in Figure 9 and the later figures.
Table 14: Dataset Example. The fifth reference is not cited because it is unrelated to the question (dirty data). In the training procedure, we need these unrelated references to train our model to pick related information.
# Question
Why were the cities that are capital cities chosen as capitals?
[1] Reasons differ from state to state as to why the capital city was chosen. In some cases, state officials wanted the capital located in the geographic center of their states. Jefferson City and Pierre were chosen as capitals of Missouri and South Dakota respectively for this reason. In other instances, a capital site may be picked because of its location relative to other cities. Florida chose the city of Tallahassee to be its capital in 1824 because it was situated halfway between St. Augustine and Pensacola, the state's two largest cities at the time. Some capital locations were picked because they were centers of trade and transportation. This is why Albany, located near the Hudson River, was picked as New York's capital.
|
2306.07906#83
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 84 |
[2] - The selection or founding of a "neutral" capital city, one unencumbered by regional or political identities, was meant to represent the unity of a new state when Ankara, Bern, Brasília, Canberra, Madrid, Ottawa and Washington became capital cities. Sometimes, the location of a new capital city was chosen to terminate actual or potential squabbling between various entities, such as in the cases of Brasília, Canberra, Ottawa, Washington, Wellington and Managua.
[3] - However a country decides on its capital, that city becomes an important symbol of nationhood. While being a home to its inhabitants and a place for tourists to visit, it is also the city chosen to represent the entire country to the world.
[4] State capitals are important because they house the governments of their states, with a meeting place for the state legislature. Many meeting buildings are called capitols, which are distinguished from capitals in that capitol refers to buildings while capital always references a city. Cities are chosen to be the capital for differing reasons, sometimes because it was located in the center of the state, like Jefferson City, or was a center of commerce, like Albany. Today, many capitals are not the largest city in their states and can be overshadowed by bigger cities, but retain important historical value.
|
2306.07906#84
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 85 |
[5] (Not Cited) 8. "Capital cities: How are they chosen and what do they represent?". BBC News. 6 December 2017. Archived from the original on 23 March 2022. Retrieved 26 June 2022.
The reasons why cities are chosen as capitals vary from state to state. In some cases, state officials wanted the capital located in the geographic center of their state[1][4], while in other instances, a capital site may be picked because of its location relative to other cities[1]. Some capital cities were picked because they were centers of trade and transportation[1], or to represent the unity of a new state[1][2][4]. In some cases, the location of a new capital city was chosen to terminate actual or potential squabbling between various entities[1][2]. Cities are chosen to be the capital because they house the governments of their states, with a meeting place for the state legislature[1][4]. Today, many capitals are not the largest city in their states and can be overshadowed by bigger cities, but retain important historical value[4]. The capital city also represents the entire country to the world[3].
# Table 15: Retrieved Reference Example (WebGLM and Perplexity AI)
# Question
|
2306.07906#85
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 86 |
# Table 15: Retrieved Reference Example (WebGLM and Perplexity AI)
# Question
What is the difference between sigma bonds and pi bonds?
[1] Sigma bonds are often formed by the combination of s orbitals in different atoms whereas pi bonds are formed from the combination of p and similar orbitals in different atoms.
[2] The main differences between sigma and pi bonds has to do with their formation and strength. Sigma bonds are formed by direct head to head overlap between orbitals while pi bonds are formed by side to side overlap, usually between p orbitals. This difference in formation leads to a difference in strength. Sigma bonds are stronger than pi bonds because the direct head to head overlap provides a bigger (and therefore stronger) overlap than the side to side overlap of pi bonds. Additionally, sigma bonds form single bonds and can exist without a pi bond present; however, a sigma bond must already be formed for a pi bond to form.
[3] Sigma and pi bonds are two types of covalent bonds formed by the overlapping of atomic orbitals. Sigma bonds form by the direct head to head overlap of atomic orbitals and can occur between s-s, p-p and s-p orbitals. Pi bonds form by the side to side overlap of p orbitals.
|
2306.07906#86
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 87 |
[4] The sigma bond is stronger than the Pi bond, according to quantum mechanics, since the orbital paths of Pi bonds are parallel, resulting in less overlap between the p-orbitals. Two atomic orbitals come into contact with each other through two areas of overlap, forming Pi bonds.
References from Perplexity AI
[1] Sigma bond is a chemical bond formed by the linear or co-axial overlapping of the atomic orbitals of two atoms. A pi bond is a type of covalent bond that exists between atoms where the electrons are on top and bottom of the axis connecting the nuclei of the joined atoms.
[2] The sigma bond is stronger than the Pi bond, according to quantum mechanics, since the orbital paths of Pi bonds are parallel, resulting in less overlap between ...
[3] A σ bond has cylindrical symmetry; a π bond has a nodal plane that includes the bond axis. ... A σ bond comes from the end-to-end overlap of the ...
# Table 16: Retrieved Reference Example (WebGPTs)
# Question
What is the difference between sigma bonds and pi bonds?
# References WebGPT-175B
|
2306.07906#87
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 88 |
# Table 16: Retrieved Reference Example (WebGPTs)
# Question
What is the difference between sigma bonds and pi bonds?
# References WebGPT-175B
[1] Pi bonds are formed by the sidewise positive (same phase) overlap of atomic orbitals along a direction perpendicular to the internuclear axis. During the formation of π bonds, the axes of the atomic orbitals are parallel to each other whereas the overlapping is perpendicular to the internuclear axis. This type of covalent bonding is illustrated below. [Image: Pi Bonds] Pi Bonds are generally weaker than sigma bonds, owing to the significantly lower degree of overlapping. Generally, double bonds consist of one sigma and one pi bond, whereas a typical triple bond is made up of two π bonds and one σ bond. It is important to note that a combination of sigma and pi bonds is always stronger than a single sigma bond.
|
2306.07906#88
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 89 |
[2] In chemistry, sigma bonds (σ bonds) are the strongest type of covalent chemical bond. They are formed by head-on overlapping between atomic orbitals. Sigma bonding is most simply defined for diatomic molecules using the language and tools of symmetry groups. In this formal approach, a σ-bond is symmetrical with respect to rotation about the bond axis. By this definition, common forms of sigma bonds are s+s, pz+pz, s+pz and dz²+dz² (where z is defined as the axis of the bond or the internuclear axis). Quantum theory also indicates that molecular orbitals (MO) of identical symmetry actually mix or hybridize. As a practical consequence of this mixing of diatomic molecules, the wavefunctions of the s+s and pz+pz molecular orbitals become blended. The extent of this mixing (or hybridization or blending) depends on the relative energies of the MOs of like symmetry.
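For illustration only (this equation is an editorial addition, not part of the retrieved reference), the head-on combinations listed above can be written as schematic LCAO molecular orbitals, with N a normalization constant and the orbital phases chosen so the lobes overlap constructively:

\[
\sigma_{s+s} \approx N\,(\phi_{s,A} + \phi_{s,B}), \qquad
\sigma_{p_z+p_z} \approx N\,(\phi_{p_z,A} + \phi_{p_z,B}), \qquad
\sigma_{s+p_z} \approx N\,(\phi_{s,A} + \phi_{p_z,B})
\]

Each combination is unchanged by rotation about the internuclear z axis, which is exactly the symmetry property the reference uses to define a σ bond.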
|
2306.07906#89
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 90 |
[3] A sigma bond is stronger than a pi bond. The reason is that the overlapping of atomic orbitals can take place to a greater extent during the formation of a sigma bond, whereas overlapping of orbitals occurs to a smaller extent during the formation of a pi bond. A pi bond between two atoms is formed only in addition to a sigma bond. The reason is that the atoms constituting a single bond prefer to form a strong sigma bond rather than a weak pi bond. Thus, a pi bond is always present in molecules with multiple bonds, i.e., double or triple bonds. In other words, a single bond cannot be a pi bond. There can be free rotation of atoms around the sigma bonds. Free rotation of atoms around pi bonds is not possible because it involves breaking the pi bonds.
[1] What is the difference between Sigma Bond and pi Bond? • Sigma bonds are formed by head to head overlapping of orbitals, whereas pi bonds are formed by the lateral overlapping. • Sigma bonds are stronger than pi bonds. • Sigma bonds can be formed between both s and p orbitals whereas pi bonds are mostly formed between p and d orbitals. • The single covalent bonds between atoms are sigma bonds. When there are multiple bonds between atoms, pi bonds can be seen. • pi bonds result in unsaturated molecules. • Sigma bonds allow free rotation of atoms whereas pi bonds restrict the free rotation.
|
2306.07906#90
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 91 |
[2] A sigma bond (σ bond) is a bond formed by the overlap of orbitals in an end-to-end fashion, with the electron density concentrated between the nuclei of the bonding atoms. A pi bond (π bond) is a bond formed by the overlap of orbitals in a side-by-side fashion with the electron density concentrated above and below the plane of the nuclei of the bonding atoms.
[3] In a sigma bond, the electrons are in orbitals between the nuclei of the bonding atoms (in the plane of electron density), where electron density is greatest. The end-to-end axial overlap of electrons attracts the nuclei to form a strong bond. In a pi bond, the p-orbitals overlap side-to-side above and below the nuclei (equatorial plane), so this attraction is more of a "sideways" pull as opposed to a direct line of pulling. Additionally, the electron density is spread out over a greater area. Thus, sigma bonds are stronger than pi bonds.
# Table 17: Answer Example (1)
# Question
When electronics show black, the pixels are not off. Wouldn't that use less energy, and show truer black?
|
2306.07906#91
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 92 |
# Table 17: Answer Example (1)
# Question
When electronics show black, the pixels are not off. Wouldn't that use less energy, and show truer black?
displaying black on OLED screens does use less power than displaying white, because the LEDs that make up each pixel are off when displaying black. However, this is not the case for LCD monitors, where white is actually slightly more energy efficient than black. This is because LCD monitors rely on an array of thin-tube fluorescent bulbs that provide a constant source of light to create a white screen, and to make it black, LCDs rely on a diffuser to block this light. Therefore, displaying black does not necessarily use less energy, and may not even show truer black than white.
|
2306.07906#92
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 93 |
Most computer and mobile device displays do not use black pixels to save energy. Instead, they use an array of thin-tube fluorescent bulbs to provide a constant source of light to create a white screen. To make it black, LCDs rely on a diffuser to block this light. As a result, LCDs use more energy than CRTs to display a black screen. Measuring a 17-inch (43-centimeter) LCD monitor, Schindler found that white required 22.6 watts, while black came in a tad higher at 23.2 watts. With a 20-inch (50.8-centimeter) LCD, black required 6 percent more energy than white. In contrast, AMOLED displays don't have a backlight at all. Instead, each pixel is like a tiny red, blue, or green colored light. If the screen is instructed to display black, it doesn't need to block any light, it simply doesn't light up any of the little colored sub-pixels. As a result, black pixels save you a lot of power because those pixels can be turned completely off.
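As a quick arithmetic check of the figures quoted above (an editorial illustration, not part of the original answer), the 17-inch LCD drew

\[
\frac{23.2 - 22.6}{22.6} \approx 2.7\%
\]

more power showing black than white, and the 20-inch panel about 6% more, whereas on an emissive AMOLED panel an all-black frame leaves the sub-pixels undriven, so its power falls toward the panel's idle draw.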
|
2306.07906#93
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 94 |
"It depends on the resting state of the LCD as to whether they require energy to stop light or to allow light to pass through," Heap explains. "This is why screen tests show that some CCFL (cold cathode fluorescent lamp) LCD screens save energy and some use a bit more. All of the scientific test data we have come across shows a slight saving on black LCD screens, which suggests that the rest state on many LCD screens does not allow light through." Heap also points out that a large number of Blackle users come from India and South America, where CRTs are still commonly sold. Most LCD devices use a backlight which is always on. When an LCD is black, it is doing full light output, and then also blocking all of that light to create the appearance of black. If a monitor is displaying all-black, most will reduce power to the backlight accordingly, primarily to produce a deeper black, and of course also to save power.
While it is true that black pixels use less energy than white pixels on some displays, such as OLEDs, this is not always the case. On LCDs, for example, black pixels still require power to be lit up. Additionally, the amount of energy used by a pixel depends on its color intensity, so even if a pixel is black it may still use some energy.
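To make the LCD-versus-OLED distinction above concrete, here is a minimal sketch of a per-frame power model; the function names and all constants are hypothetical values chosen only to illustrate the qualitative behaviour, not measurements from the cited sources.

```python
# Toy model (hypothetical constants): per-frame display power for a backlit LCD
# versus an emissive OLED panel, illustrating why an all-black frame saves power
# on OLED but not on LCD.

def lcd_power_watts(frame, backlight_w=20.0, overhead_w=3.0):
    # The backlight is on regardless of content; the panel merely blocks it,
    # so a black frame costs about the same as a white one.
    return backlight_w + overhead_w

def oled_power_watts(frame, watts_per_full_subpixel=2e-6):
    # Each sub-pixel emits its own light, so power scales with how brightly
    # the sub-pixels are driven; an all-black frame drives almost nothing.
    return watts_per_full_subpixel * sum(
        channel for row in frame for pixel in row for channel in pixel
    )

# 100x100 toy frames; each pixel is an (R, G, B) triple with intensities in [0, 1].
black_frame = [[(0.0, 0.0, 0.0)] * 100 for _ in range(100)]
white_frame = [[(1.0, 1.0, 1.0)] * 100 for _ in range(100)]

for name, frame in (("black", black_frame), ("white", white_frame)):
    print(f"{name:5s}  LCD ~{lcd_power_watts(frame):.1f} W   "
          f"OLED ~{oled_power_watts(frame):.3f} W")
```

Running this sketch leaves the LCD figure essentially unchanged between the black and white frames, while the OLED figure collapses to near zero for the black frame, matching the qualitative claims in the answers above.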
|
2306.07906#94
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 95 |
It's not true for all types of screens. For LCD screens, energy cost is not definitely related to brightness. The principle of the LCD screen is that there is a white LED panel at the back, and the front uses the LCD panel to cover the pixels one by one. Turn down the backlight brightness to save power. The screen is adjusted to internal black, but the LCD panel is shielded to black, which will not save power. The power saving has nothing to do with the background color.
# Table 18: Answer Example (2)
# Question
Why does it seem like when I watch something the second time around, it goes by faster than the first time I watched it?
|
2306.07906#95
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 96 |
# Table 18: Answer Example (2)
# Question
Why does it seem like when I watch something the second time around, it goes by faster than the first time I watched it?
It seems like when you watch something the second time around, it goes by faster than the first time you watched it because you are focusing on the little details instead, which might make it feel like time is going by faster. Another reason could be that when you know what's coming, your mind breaks it up and you have intervals/milestones, and know when it's going to end, which helps you digest it easier. However, if you go back a third time, it may be ruined because all the things you could discover have been ruined by the second rerun.
There are several different ideas about why this happens. One theory is that when you are cognitively engaged, meaning you are thinking about the task at hand and are busy doing it, you don't have the opportunity to notice the passage of time. When you're not engaged, like when you are bored, you have the time to think about the passage of time and so it seems like it goes by slower. Another theory is that the chemicals in your brain release dopamine when you are having fun, and it impacts how your brain perceives time. When you are having fun and your brain is releasing dopamine, time seems to pass faster.
|
2306.07906#96
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 97 |
There are two main reasons time seems to pass faster the second time around. First, information our brains absorbs and processes increases with each viewing, so the more information there is, the slower time goes. Second, as we age, the rate at which we process visual information slows down, and this is what makes time âspeed upâ as we grow older. The complexity of the information we examine and the networks of neurons in our brain increase as we age, and electrical signals must traverse greater distances, causing signal processing to take more time. This is what leads to time passing more rapidly.
The phenomenon of time seeming to pass faster when watching something for a second time is likely due to the recency effect, which amplifies the importance of recent stimuli or observations. Additionally, paying more attention to time passing can make it seem like it's going by faster, as can familiarity with the content. Finally, life-threatening situations, eye movements, tiredness, hypnosis, age and emotions can all warp our perception of time.
When humans repeatedly look at the same thing, the brain will quickly recall the previous memory when it sees the same content. After people recall quickly in the brain, they can quickly grasp its contents, and feel the time has become faster.
|
2306.07906#97
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 98 |
# Table 19: Comparison Example of GLM-10B-RM and GLM-10B-Rouge1
# Question
Why is it that soap can wash almost anything off of your hands but doesn't wash its fragrance off of you?
# References
[1] Soap is able to clean hands and dishes because of some pretty nifty chemistry. Soap molecules have on one end whatâs known as a polar salt, which is hydrophilic, or attracted to water. The other end of the molecule is a nonpolar chain of fatty acids or hydrocarbons, which is hydrophobicâmeaning that itâs repelled by water but attracted to grease and other oily substances. When you wash your hands, the soap forms something like a molecular bridge between the water and the dirty, germ-laden oils on your hands, attaching to both the oils and the water and lifting the grime off and away. Soaps can also link up with the fatty membranes on the outside of bacteria and certain viruses, lifting the infectious agents off and even breaking them apart. Once the oily dirt and germs are off your hands, the soap molecules thoroughly surround them and form tiny clusters, known as micelles, that keep them from attaching to anything else while they wash down the drain.
|
2306.07906#98
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 99 |
[2] First, the hydrophilic ends of the soap molecule attach to the water, then the hydrophobic ends attach to the oils, pathogens, or other debris on the hand. After the soap has bonded with the germs on your hand the water can then wash it down the drain. This works because the soap molecule can attach to the germs more strongly than the germs can hold onto your skin.
[3] Therefore, when you wash with soap and water the soap molecules grab on to the fat molecules and pull them off the skin into the rinse water. When you rinse the skin the fat and germs are removed from the skin. Therefore, the main reason why soap works to cleanse and sanitise is it literally washes everything off the skin.
|
2306.07906#99
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 100 |
Soap is able to cleanse and sanitise hands and bodies because of its unique chemistry. Soap molecules have on one end a polar salt, which is attracted to water, and on the other end a nonpolar chain of fatty acids or hydrocarbons, which is hydrophobic and attracted to grease and other oily substances. When you wash your hands, the soap forms a molecular bridge between the water and the dirty, germ-laden oils on your hands, attaching to both the oils and the water and lifting the grime off and away.[1] Soap molecules can also link up with the fatty membranes on the outside of bacteria and certain viruses, lifting the infectious agents off and even breaking them apart. Once the oily dirt and germs are off your hands, the soap molecules thoroughly surround them and form tiny clusters, known as micelles, that keep them from attaching to anything else while they wash down the drain.[2] Therefore, the main reason why soap works to cleanse and sanitise is it literally washes everything off the skin.[3] However, while soap can wash almost anything off of your hands, it cannot wash its fragrance off of you because fragrances are not dirt and are not removed by rinsing.
|
2306.07906#100
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |