doi (string, len 10) | chunk-id (int64, 0-936) | chunk (string, len 401-2.02k) | id (string, len 12-14) | title (string, len 8-162) | summary (string, len 228-1.92k) | source (string, len 31) | authors (string, len 7-6.97k) | categories (string, len 5-107) | comment (string, len 4-398, nullable) | journal_ref (string, len 8-194, nullable) | primary_category (string, len 5-17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.07906
| 101 |
WebGLM-10B-Rouge1: The main reason why soap works to cleanse and sanitise is because of its molecules, which have on one end a polar salt, which is hydrophilic, or attracted to water, and on the other end a nonpolar chain of fatty acids or hydrocarbons, which are hydrophobic, meaning they are repelled by water but attracted to grease and other oily substances[1]. Soap molecules form a molecular bridge between the water and the dirty, germ-laden oils on your hands, attaching to both the oils and the water and lifting the grime off and away[1][2]. Then, the soap molecules surround the oily dirt and germs and form tiny clusters, known as micelles, that keep them from attaching to anything else while they wash down the drain[1]. However, the fragrance of soap is not removed this way because it is not bound to the fat molecules that are removed from the skin when you wash with soap and water[3].
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
KDD '23, August 6-10, 2023, Long Beach, CA, USA.
# Table 20: Criteria Details For References
Evaluating how a reference text is related to the question and how much information of the question can be found in the reference.
|
2306.07906#101
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
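The abstract above describes WebGLM as three cooperating parts: an LLM-augmented retriever, a bootstrapped generator, and a human preference-aware scorer. Below is a minimal structural sketch of how such a retrieve-generate-rerank pipeline could be wired together; every function, class, and URL in it is a hypothetical placeholder, not anything taken from the WebGLM codebase.

```python
# Structural sketch of a web-enhanced QA pipeline in the spirit of the abstract
# above (LLM-augmented retriever -> bootstrapped generator -> preference scorer).
# Every function here is a stand-in; nothing is taken from the WebGLM codebase.
from dataclasses import dataclass
from typing import List


@dataclass
class Reference:
    url: str
    text: str


def retrieve(question: str, top_k: int = 3) -> List[Reference]:
    # Placeholder for "search the web, then rerank snippets with an LLM-augmented retriever".
    return [Reference(url=f"https://example.com/{i}", text=f"snippet {i} about {question}")
            for i in range(top_k)]


def generate(question: str, refs: List[Reference]) -> List[str]:
    # Placeholder for "sample several citation-marked answers from the generator".
    return [f"{question} -> answer citing [{i + 1}]" for i, _ in enumerate(refs)]


def score(question: str, answer_text: str) -> float:
    # Placeholder for the human preference-aware reward model; here: longer is better.
    return float(len(answer_text))


def answer(question: str) -> str:
    refs = retrieve(question)
    candidates = generate(question, refs)
    return max(candidates, key=lambda a: score(question, a))


print(answer("Why does soap clean?"))
```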
2306.07906
| 102 |
# Table 20: Criteria Details For References
Evaluating how a reference text is related to the question and how much information of the question can be found in the reference.
0 The reference is almost not related to the question.
1 A little information or a few answers related to the question can be found in the reference.
2 Most of the answers to the question can be found in the reference, but a small amount of reasoning or external information is still needed.
3 The complete answer to the question can be found in the reference.
Density: Evaluating the proportion of information related to the question in a reference text.
0 The reference content is almost irrelevant to the question and answers.
1 Most of the reference is irrelevant to the question, but there is still a small amount of related content.
2 The reference has a few irrelevant contents.
3 The reference is closely related to the question, and almost the whole reference focuses on it.
Truthfulness: Evaluating whether the text itself violates common sense or logic, or contradicts itself. There is no need to pay attention to whether it is relevant to the question.
0 The text contains statements that are contrary to the facts; its authenticity is low.
1 The text has no statements contrary to the facts and is highly authentic.
If the evaluator cannot determine the authenticity through his or her own knowledge, leave it blank.
Toxicity: Evaluating whether the text involves violence, pornography, indecent words, or other similar elements.
|
2306.07906#102
2306.07906
| 103 |
If the evaluator cannot determine the authenticity through his or her own knowledge, leave it blank.
Toxicity: Evaluating whether the text involves violence, pornography, indecent words, or other similar elements.
0 The above elements do not exist.
1 One or more of the above elements exist.
Social Bias: Evaluating whether there are prejudices against specific social groups, such as racial discrimination or gender discrimination.
0 The above elements do not exist.
1 One or more of the above elements exist.
KDD '23, August 6-10, 2023, Long Beach, CA, USA.
Liu and Lai and Yu, et al.
# Table 21: Criteria Details For Answers
Evaluating whether grammar, spelling, word usage, etc. conform to people's grammar habits (please do not consider any semantic factors).
0 There are major errors in grammar and spelling, which make the text difficult to read.
1 There are small errors in grammar and spelling, which slightly affect understanding.
2 There are a few grammatical, spelling, or case errors that do not affect understanding.
3 Fluent language, correct grammar, no mistakes, easy to read.
Correctness: Evaluating whether the question is correctly answered.
|
2306.07906#103
2306.07906
| 104 |
Fluent language, correct grammar, no mistakes, easy to read.
Correctness: Evaluating whether the question is correctly answered.
0 No answer, or the answer is irrelevant or wrong.
1 A few answers are given, but they are particularly incomplete or fragmented; the question is basically not answered.
2 The question is basically answered, but there are a few mistakes or omissions.
3 The question is answered perfectly.
Citation Accuracy: Evaluating whether the reference marks in the answer are accurate.
0 The reference marks are basically wrong, or there are no reference labels.
1 There are a large number of missing and wrong marks.
2 There are a few missing and wrong marks.
3 The reference marks are completely accurate.
Objectivity: Evaluating whether all the answers come from references.
0 There is external knowledge in the answer which does not come from the references.
1 All answers can be based on the references.
# Truthfulness
Evaluating whether the text itself violates common sense or logic, or contradicts itself. There is no need to pay attention to whether it is relevant to the question.
0 The text contains statements that are contrary to the facts; its authenticity is low.
1 The text has no statements contrary to the facts and is highly authentic.
If the evaluator cannot determine the authenticity through his or her own knowledge, leave it blank.
Redundancy: Evaluating whether there is redundancy in the answer, such as repeating the same sentence or the same fact repeatedly.
|
2306.07906#104
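Tables 20 and 21 above define ordinal scales for the human annotators. The sketch below encodes those scales as plain Python data so submitted scores can be validated programmatically. The dictionary layout and the helper are my own; the keys "relevancy" and "fluency" are labels I chose for the two criteria whose names are not given in the excerpt, and the redundancy scale (cut off above) is assumed to be binary like the other yes/no criteria.

```python
# Encoding of the human-evaluation rubrics described above (Tables 20 and 21).
# Only the score ranges and most criterion names come from the text; the keys
# "relevancy" and "fluency" and the binary redundancy scale are assumptions.
REFERENCE_CRITERIA = {
    "relevancy": range(0, 4),     # 0 = unrelated ... 3 = complete answer present
    "density": range(0, 4),       # 0 = almost irrelevant ... 3 = fully on-topic
    "truthfulness": range(0, 2),  # 0 = contradicts facts, 1 = no contradictions (blank if unsure)
    "toxicity": range(0, 2),      # 1 = violent/pornographic/indecent content present
    "social_bias": range(0, 2),   # 1 = prejudice against a social group present
}

ANSWER_CRITERIA = {
    "fluency": range(0, 4),
    "correctness": range(0, 4),
    "citation_accuracy": range(0, 4),
    "objectivity": range(0, 2),
    "truthfulness": range(0, 2),
    "redundancy": range(0, 2),    # assumed binary; the scale is truncated in the excerpt
}


def validate(scores: dict, rubric: dict) -> None:
    """Raise if any submitted score falls outside the rubric's allowed range."""
    for name, value in scores.items():
        if value not in rubric[name]:
            raise ValueError(f"{name}={value} is outside {list(rubric[name])}")


validate({"relevancy": 2, "density": 3}, REFERENCE_CRITERIA)  # passes silently
```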
2306.07906
| 106 |
How to balance career and hobbies? Balancing career and hobbies is important to achieve a full, enriched life [3]. It is important to make a routine and prioritize healthy behaviors [1] in order to find balance. Additionally, it can help to change how much time you spend on hobbies by making time for them and prioritizing them [2]. It is also important to draw a sharp line between work and everything else [4], and to remember to enjoy time with family and hobbies as much as work [4]. [1] How to Balance Work and Hobbies - Real Recovery Do you have things you enjoy that bring happiness and fill blank spaces in your day? Participating in pleasurable activities decreases stress and improves mental health. However, balance is essential. You might start to notice your hobbies cutting into time meant for work or other tasks. Learn to balance your work responsibilities and hobbies by making a routine and prioritizing healthy behaviors. [2] How to Balance Work and Hobbies - Real Recovery Have you noticed that your hobbies take up more of your space, time, and energy? The good news is that finding balance can be quick and straightforward. Change how much time you spend on hobbies by doing the following:
|
2306.07906#106
2306.07906
| 107 |
more of your space, time, and energy? The good news is that finding balance can be quick and straightforward. Change how much time you spend on hobbies by doing the following: [3] How to Find and Make Time for Your Passion Even When You ... Working a day job while discovering and pursuing your passions demands a lot of time and focus, but they are equally fundamental human needs: one for economic stability and the other for self-actualization. For many of us, especially those with a lot of creative drive and entrepreneurial spirit, balancing our career and pursuing our dreams are critical to leading a full, enriched life. [4] The Balancing Act: Work, family, hobbies - how does it all fit ... Maybe balance is the ability to draw a sharp line between work and everything else. If I weren't able to do this, I wouldn't enjoy spending time with my family as much. I wouldn't enjoy my hobbies as much, either. Recently I was able to achieve a lifelong dream. I recorded a hip-hop track with my kids. It's amateur, but I had so much fun doing it, and I have so much fun listening to it, that I'll be making music with them until I can't pick up
|
2306.07906#107
2306.07906
| 110 |
FL Studio and Cubase, which is better? It is difficult to say which digital audio workstation (DAW) is better, FL Studio or Cubase [5]. Both are excellent tools and have made music production, editing, and creation easier than ever before, with professional-sounding audio results [2]. FL Studio is thought to be less expensive than Cubase [4] and is more approachable and simpler to use [3], while Cubase is better for vocals and creating full songs easily [1]. Additionally, FL Studio has iOS app compatibility, which is convenient [3]. Ultimately, both DAWs are excellent and have their own pros and cons, so it is up to the user to decide which one is right for them. [1] Cubase vs FL Studio Compared: Which Software is Your Best ... FL Studio is great for music production and MIDI composition, whereas Cubase is best for vocals and creating full songs easily. [2] Cubase vs FL Studio Compared: Which Software is Your Best ... FL Studio and Cubase are two of the best DAWs that you can buy right now. Fans of both software attest that these two have made music production, editing, and creation a whole lot easier. Plus, the audio quality of the results is professional sounding and can rival
|
2306.07906#110
2306.07906
| 111 |
Fans of both software attest that these two have made music production, editing, and creation a whole lot easier. Plus, the audio quality of the results is professional sounding and can rival the quality of those created in studios. [3] Cubase vs FL Studio: How to Choose the Best DAW Both are excellent tools; however, for me, FL Studio is more approachable and simpler to use. The Cubase package is still a very good option but feels a little more restrictive and doesn't have the same iOS app compatibility that you have with FL Studio, which I think is a convenience that can't be ignored. [4] FL Studio Vs Cubase: Which Music Production Software Is ... FL Studio is often thought to be less expensive than Cubase. FL Studio includes several pricing options, including a free demo version and three paid versions: Fruity Edition, Producer Edition, and Signature Bundle. [5] Cubase vs FL Studio Compared: Which Software is Your Best ... While both Cubase and FL Studio can stand their ground as the best digital audio workstation package out there, there are some differences.
|
2306.07906#111
2306.07906
| 113 |
Is attention better than CNN? It depends on the application. Self-attention layers have the advantage of being able to consider the impact of every other pixel of the image, as opposed to convolution layers whose receptive field is the $K \times K$ neighborhood grid [1]. This can be beneficial in certain applications, such as image recognition. Attention-based CNNs can also be used to model a pair of sentences, as in answer selection, paraphrase identification, and textual entailment tasks [2]. Furthermore, attention models can outperform the baseline across all depths while having 12% fewer FLOPS and 29% fewer parameters [3]. However, it is important to note that while local attention with a window of size 5 has the same receptive field as a CNN with kernel size 5, the learned weights are applied to the inputs differently [4]. [1] How a self-attention layer can learn convolutional filters? The main difference between CNN and self-attention layers is that the new value of a pixel depends on every other pixel of the image. As opposed to convolution layers whose receptive field is the $K \times K$ neighborhood grid, the self-attention's receptive field is always the
|
2306.07906#113
2306.07906
| 114 |
image. As opposed to convolution layers whose receptive field is the $K \times K$ neighborhood grid, the self-attention's receptive field is always the full image. This brings some scaling challenges when we apply transformers to images, which we don't cover here. For now, let's define what a multi-head self-attention layer is. [2] Attention in Neural Networks - Towards Data Science In this paper Yin et al. presented ABCNN - Attention Based CNN to model a pair of sentences, used in answer selection, paraphrase identification and textual entailment tasks. The key highlight of the proposed attention-based model was that it considers the impact/relationship/influence that exists between the different parts or words or whole of one input sentence with the other, and provides an interdependent sentence pair representation that can be used in subsequent tasks. Let's take a quick look at the base network first before looking at how attention was introduced into it. [3] How To Go Beyond CNNs With Stand-Alone Self-Attention ... The results show that the attention models outperform the baseline across all depths while having 12% fewer FLOPS and 29% fewer parameters. [4] Are there any advantages of
|
2306.07906#114
2306.07906
| 115 |
the attention models outperform the baseline across all depths while having 12% fewer FLOPS and 29% fewer parameters. [4] Are there any advantages of the local attention against ... It is true that when using local attention with a window of size 5, the "receptive field" is the same as a CNN with kernel size 5 (or two CNN layers with kernel size 3). However, there is a key difference in how the learned weights are applied to the inputs.
|
2306.07906#115
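The three passages above contrast a $K \times K$ convolution's local receptive field with self-attention, where every pixel can attend to every other pixel. The short PyTorch sketch below illustrates that difference on a toy image; the tensor sizes and layer hyperparameters are arbitrary choices for illustration, not anything taken from the cited sources.

```python
# Illustration of the receptive-field contrast discussed above: a KxK convolution
# mixes only a local neighbourhood, while self-attention lets every pixel attend
# to every other pixel. All sizes are arbitrary.
import torch
import torch.nn as nn

B, C, H, W = 1, 8, 16, 16
x = torch.randn(B, C, H, W)

# Convolution: each output pixel depends on a 5x5 neighbourhood of the input.
conv = nn.Conv2d(C, C, kernel_size=5, padding=2)
y_conv = conv(x)                        # (1, 8, 16, 16)

# Self-attention: flatten the image to a sequence of H*W tokens; every token's new
# value is a weighted sum over all H*W tokens, i.e. a global receptive field.
tokens = x.flatten(2).transpose(1, 2)   # (1, 256, 8)
attn = nn.MultiheadAttention(embed_dim=C, num_heads=2, batch_first=True)
y_attn, weights = attn(tokens, tokens, tokens)

print(y_conv.shape)   # torch.Size([1, 8, 16, 16])
print(y_attn.shape)   # torch.Size([1, 256, 8])
print(weights.shape)  # torch.Size([1, 256, 256]) -- each pixel attends to all 256 pixels
```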
2306.07906
| 117 |
How to survive in the first-tier cities without a high-salary work? Surviving in first-tier cities without a high-salary job can be difficult, as most entry-level jobs do not pay enough to support someone living in a large city [1]. One way to survive is to be great at what you do so that you stand out and can find a job in a smaller economy [2]. Additionally, it can be important to make sure you have a steady job with a consistent salary, as the cost of college, healthcare, and housing has gone up while real wages have gone down [1]. It is also important to have a good understanding of the cost of living in the city you are moving to, as the cost of rent, food, and other expenses can quickly add up. Lastly, it can be important to be realistic about your spending, as it is easy to fall into the trap of wanting a luxury lifestyle without having the funds to support it. [1] Strategies for survival on an entry-level salary - Most entry-level jobs do not pay enough to support someone living in a large city. This is a problem for recent grads. They imagine life in a big city as
|
2306.07906#117
2306.07906
| 118 |
salary - Most entry-level jobs do not pay enough to support someone living in a large city. This is a problem for recent grads. They imagine life in a big city as lots of entertainment, crowds of young people for fun, and a great dating scene. But it's a hard life to fund: the cost of college, healthcare, and housing has gone up, all while real wages have gone down. This generation is facing a gap between wages and the cost of living that their parents never did. [2] Strategies for survival on an entry-level salary I think the key to surviving in a smaller economy is to be great at what you do so that you stand out. If there are not a lot of jobs you can take, you need to make sure you're a top candidate.
|
2306.07906#118
2306.07906
| 120 |
What do you think of version 3.5 of Genshin Impact? Genshin Impact version 3.5 is the latest update coming to the live servers on March 1st [1][2]. It will bring back familiar events like the Windblume Festival, as well as various adjustments and optimizations [1]. The update is available for PlayStation 5, PlayStation 4, PC via client and Epic Games Store, iOS via the App Store, and Android via Google Play [2][3]. The Genshin Impact 3.5 preload size is 2.03GB on iOS and can be downloaded by tapping the pre-install resource package from the title screen [4]. It is a fun game to play with a controller, and the update seems to keep the game running well [4]. [1] Genshin Impact 3.5 Release Date and Details - IGN Genshin Impact Version 3.5 is the next Genshin Impact update coming to the live servers on March 1st. Version 3.5 will herald a return to Mondstadt, and bring back familiar events like the Windblume Festival. [2] 'Genshin Impact' Version 3.5 Update Pre-Installation Is Now ... Genshin Impact (Free) version
|
2306.07906#120
2306.07906
| 121 |
like the Windblume Festival. [2] 'Genshin Impact' Version 3.5 Update Pre-Installation Is Now ... Genshin Impact (Free) version 3.5 update pre-installation has finally gone live on iOS, Android, and PC platforms ahead of its release date this Wednesday for all platforms. Genshin Impact version 3.5 'Windblume's Breath' arrives on March 1st for iOS, Android, PS5, PS4, and PC platforms worldwide, bringing in the Windblume Festival, a new Archon Quest, two new characters, updated rules for Genius Invokation TCG, one extra Intertwined Fate for each completed Archon Quest, and more. If you missed the previously-announced Prime Gaming collaboration, read this. Watch the Genshin Impact version 3.5 update Dehya character trailer below: [3] Genshin Impact version 3.5 update now available - Gematsu Genshin Impact is available now for PlayStation 5, PlayStation 4, PC via client and Epic Games Store, iOS via the App Store, and Android via Google Play. A Switch version is also planned. [4] 'Genshin Impact' Version 3.5 Update Pre-Installation Is Now ... The Genshin Impact
|
2306.07906#121
2306.07906
| 122 |
via Google Play. A Switch version is also planned. [4] 'Genshin Impact' Version 3.5 Update Pre-Installation Is Now ... The Genshin Impact 3.5 preload size is 2.03GB on iOS. You can download this by tapping the pre-install resource package from the title screen as usual or from the Paimon menu in-game under other. PC pre-installation details are here. If you haven't checked out Genshin Impact yet, you can download it for free on the App Store for iOS here and on Google Play for Android here. The PC version is available on the official website here and the Epic Games Store. If you play on iOS, with iOS 14.5 or iPadOS 14.5 and later, you can use PS5 and Xbox Series X|S controllers to play Genshin Impact. We featured Genshin Impact as our Game of the Week when it released and awarded it our 2020 Game of the Year. I also featured it as one of the best iOS games to play with a controller. What do you think of what we've seen of Genshin Impact 3.5 so far?
|
2306.07906#122
2306.07906
| 124 |
Transformers originated from NLP, but why can they be applied in CV? Transformers were originally developed for Natural Language Processing (NLP) and have since been applied to Computer Vision (CV) due to their effectiveness in improving the performance of language processing models [1]. They have also been used in CV to improve the performance of image classification on ImageNet and object detection on COCO [1]. Additionally, the integration of NLP and CV techniques has been used to improve results in both fields [2]. Furthermore, Transformers make use of attention mechanisms which, unlike RNNs, do not have a recurrent structure, allowing them to match the performance of RNNs with attention given enough training data [3]. Finally, attempts have been made to apply Transformers in CV by substituting word embeddings with flattened image patches [4]. [1] Year 2020: the Transformers Expansion in the CV - Medium In the preceding years transformers have performed greatly in the field of NLP. They've significantly improved the performance of the language processing models and the effect is comparable to what had been done to image understanding since 2012 by convolutional neural networks. Now at the end
|
2306.07906#124
2306.07906
| 125 |
significantly improved the performance of the language processing models and the effect is comparable to what had been done to image understanding since 2012 by convolutional neural networks. Now at the end of 2020 we have transformers entering the top quartile of well-known computer vision benchmarks, such as image classification on ImageNet and object detection on COCO. [2] Natural Language Processing techniques in Computer Vision Self-Attention and Transformer-based architectures have recently boosted results not only in the NLP domain, but also in the CV domain. Integration of NLP and CV techniques has also inspired different creative approaches in both fields. These approaches have improved SotA, and the potential for even greater results is possible. Thus, every DS/ML/DL practitioner should be aware of these recent developments to successfully implement them in applied tasks. [3] Transformer (machine learning model) - Wikipedia Before transformers, most state-of-the-art NLP systems relied on gated RNNs, such as LSTMs and gated recurrent units (GRUs), with added attention mechanisms. Transformers also make use of attention mechanisms but, unlike RNNs, do not have a recurrent structure. This means that provided with enough training data, attention
|
2306.07906#125
2306.07906
| 126 |
Transformers also make use of attention mechanisms but, unlike RNNs, do not have a recurrent structure. This means that provided with enough training data, attention mechanisms alone can match the performance of RNNs with attention. [4] Natural Language Processing techniques in Computer Vision This model design represents the first attempts to apply Transformers in CV. As you can see, it is fairly straightforward - word embeddings are substituted with flattened image patches.
|
2306.07906#126
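The last snippet above notes that the first attempts to apply Transformers in CV simply substitute word embeddings with flattened image patches. A minimal ViT-style sketch of that input pipeline is shown below; the patch size, embedding width, and encoder depth are arbitrary illustrative choices of mine, not taken from the cited sources.

```python
# Minimal sketch of the "flattened image patches instead of word embeddings" idea
# mentioned above (a ViT-style input pipeline). All sizes are arbitrary.
import torch
import torch.nn as nn

B, C, H, W = 1, 3, 224, 224
patch, dim = 16, 192

x = torch.randn(B, C, H, W)

# Cut the image into non-overlapping 16x16 patches and flatten each one.
patches = x.unfold(2, patch, patch).unfold(3, patch, patch)       # (1, 3, 14, 14, 16, 16)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)  # (1, 196, 768)

# A linear projection plays the role that a word-embedding lookup plays in NLP.
to_token = nn.Linear(C * patch * patch, dim)
tokens = to_token(patches)                                        # (1, 196, 192)

# From here on, a standard Transformer encoder consumes the token sequence.
encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
out = encoder(tokens)
print(out.shape)  # torch.Size([1, 196, 192])
```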
2306.07906
| 128 |
Who proposed Music Transformer? How does it work? The Music Transformer paper was proposed by Huang et al. from Google Magenta [1]. It is a state-of-the-art language-model based music generation architecture that brings Transformers, which were successful in the NLP field, to the symbolic music generation domain [1]. The paper also includes optimization work on implementing Transformer with relative attention [2]. Some related work using the Transformer architecture for generating music includes MuseNet (from OpenAI) [3] and Pop Music Transformer [3]. The paper shows the first successful use of Transformer in generating music that exhibits long-term structure [4]. [1] Transformer (machine learning model) - Wikipedia The Music Transformer paper, authored by Huang et al. from Google Magenta, proposed a state-of-the-art language-model based music generation architecture. It is one of the first works that introduce Transformers, which gained tremendous success in the NLP field, to the symbolic music generation domain. [2] Transformer (machine learning model) - Wikipedia In my opinion, the Music Transformer paper is not only an application work, but its crux also includes optimization work on implementing Transformer with relative
|
2306.07906#128
2306.07906
| 129 |
learning model) - Wikipedia In my opinion, the Music Transformer paper is not only an application work, but its crux also includes optimization work on implementing Transformer with relative attention. We will delve into this part below. [3] Transformer (machine learning model) - Wikipedia Some related work of using the Transformer architecture on generating music includes MuseNet (from OpenAI), and also Pop Music Transformer. It is evident that the Transformer architecture would be the backbone of music generation models in future research. [4] Music Transformer: Generating Music with Long-Term Structure TL;DR: We show the first successful use of Transformer in generating music that exhibits long-term structure.
|
2306.07906#129
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
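The Music Transformer record above credits much of the paper's contribution to relative attention, i.e. letting each query score keys with an extra term that depends on the distance between positions. The PyTorch sketch below is a rough illustration of that idea only: it adds learned relative-position logits to ordinary scaled dot-product attention, in the straightforward formulation rather than the paper's memory-efficient "skewing" implementation. The function name, tensor shapes, and the smoke test are assumptions made for this example, not code from the paper.

```python
import torch
import torch.nn.functional as F

def relative_attention(q, k, v, rel_emb):
    """Scaled dot-product attention with additive relative-position logits (illustrative sketch).

    q, k, v:  (batch, heads, seq, dim)
    rel_emb:  (2*seq - 1, dim) learned embeddings for offsets -(seq-1) .. +(seq-1)
    """
    b, h, L, d = q.shape
    content = q @ k.transpose(-2, -1)                  # (b, h, L, L) content-based scores
    rel = torch.einsum("bhld,rd->bhlr", q, rel_emb)    # (b, h, L, 2L-1) score of each query against each offset
    # position (i, j) should read the column for offset j - i, stored at index j - i + L - 1
    offs = torch.arange(L).view(1, L) - torch.arange(L).view(L, 1) + (L - 1)   # (L, L)
    rel = rel.gather(-1, offs.view(1, 1, L, L).expand(b, h, L, L))
    attn = F.softmax((content + rel) / d ** 0.5, dim=-1)
    return attn @ v

# Tiny smoke test with random tensors
b, h, L, d = 1, 2, 8, 16
out = relative_attention(torch.randn(b, h, L, d), torch.randn(b, h, L, d),
                         torch.randn(b, h, L, d), torch.randn(2 * L - 1, d))
print(out.shape)  # torch.Size([1, 2, 8, 16])
```

The paper's optimization goes further and reorders this computation with a skewing trick so the relative term avoids the large intermediate tensors of the naive formulation; that is the kind of implementation work the answer above refers to.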
2306.07906
| 131 |
What is the backbone of Toolformer? The backbone of Toolformer is a pretrained 6.7b parameter "GPT-J" model [1]. This model is used to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction [1]. The model is finetuned to learn to use APIs [2] by using a language model to build Toolformer's dataset [2]. Specifically, the dataset is augmented with API calls in the text and then checked to make sure they worked and were useful [2]. The resulting dataset is used to finetune the model so it can learn to use APIs. [2] [1] Import AI: What is Toolformer? "A model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction". The model is based on a pretrained 6.7b parameter "GPT-J" model and, despite its small size, outperforms many much larger models, including [2] Import AI: How they did it: They use a language model to build Toolformer's dataset. Specifically, they take a dataset of plain
|
2306.07906#131
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 132 |
models, including [2] Import AI: How they did it: They use a language model to build Toolformer's dataset. Specifically, they take a dataset of plain text, augment that data with API calls in the text, then check if the calls a) worked and b) were useful and if they were, then weave that back into the dataset. They use the resulting dataset to finetune the model so it can learn to use APIs. "Moreover, as API calls are inserted in exactly those positions and with exactly those inputs that help M predict future tokens, finetuning... enables the language model to decide when and how to use which tool, based purely on its own feedback." (A sketch of this filtering pipeline follows this record.)
|
2306.07906#132
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
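The Toolformer records above describe a self-supervised pipeline: the language model proposes candidate API calls, the calls are executed, and a call is kept only if weaving its result back into the text makes the following tokens easier to predict; the filtered dataset is then used for finetuning. The sketch below is a hypothetical, simplified rendering of that filtering loop, not the paper's actual code: `propose_calls`, `execute`, and `loss` are stand-in callables for the LM sampling, API execution, and loss measurement described above, and the call site is pre-marked with an "<API>" placeholder rather than sampled.

```python
from typing import Callable, Iterable

def build_toolformer_style_dataset(
    texts: Iterable[str],
    propose_calls: Callable[[str], list[str]],  # stand-in: LM proposes candidate API calls for a text
    execute: Callable[[str], str],              # stand-in: runs a call (calculator, search, ...)
    loss: Callable[[str], float],               # stand-in: LM loss on the tokens after the call site
    margin: float = 0.1,
) -> list[str]:
    """Keep an API call only if inserting its result lowers the LM loss by at least `margin`."""
    dataset = []
    for text in texts:
        kept = text
        for call in propose_calls(text):
            result = execute(call)
            candidate = text.replace("<API>", f"[{call} -> {result}]", 1)
            if loss(candidate) < loss(text) - margin:   # the call "worked and was useful"
                kept = candidate
                break
        dataset.append(kept)                            # otherwise keep the plain text
    return dataset

# Toy usage with stand-in components (a real setup would use GPT-J and real tool APIs)
toy = build_toolformer_style_dataset(
    ["The result of 2+2 is <API> four."],
    propose_calls=lambda t: ["Calculator(2+2)"],
    execute=lambda call: "4",
    loss=lambda t: 0.0 if "-> 4]" in t else 1.0,
)
print(toy)
```

Because useful calls end up woven in at exactly the positions where they help prediction, finetuning on the resulting texts is what, per the snippets above, teaches the model when and how to use which tool from its own feedback.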
2306.07906
| 134 |
Why CyGames succeed? What games have they launched? CyGames succeeds because they have acquired some of the best names and talent in the Japanese games industry, as they are more than willing to pay for them [1]. They have also been successful in the mobile and social game marketplace, with an audience of over 20 million users worldwide [2]. Some of their known games include Dragalia Lost, Granblue Fantasy, Princess Connect!, Rage of Bahamut, Shadowverse, Uma Musume: Pretty Derby, and the World Flipper series [4]. They have also been successful due to their high return on investment in Japan [5]. However, some of their strategies are outdated, such as their gacha system of high generosity + bad pull mechanics. [3] [1] Cygames And Platinum Games Announce 'Granblue Fantasy ...: Due to the success of games like Granblue Fantasy in Japan, Cygames is doing very well financially. This has resulted in the company acquiring some of the best names and talent in the Japanese games industry, as they are more than willing to pay for them. [2] Rage of Bahamut developer Cygames breaks 20 million users ...: Say what you will about
|
2306.07906#134
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 135 |
games industry, as they are more than willing to pay for them. [2] Rage of Bahamut developer Cygames breaks 20 million users ...: Say what you will about the games themselves, but it's pretty fair to conclude at this point that mobile and social games are where nearly all of the economic growth in the Japanese game marketplace is coming from at this point. There are few better examples of this than Tokyo-based Cygames, a company that was founded in May 2011, released its first game four months later, and now enjoys an audience of over 20 million users worldwide. [3] CyberAgent, the mother company of Cygames, released their ...: Cygames strategies are just outdated. Their gacha system of high generosity + bad pull mechanics simply doesn't cut it anymore. Their character stories are still the best in the business but they have some of the worst overall narrative stories. Their games require a significant time investment - fairly grindy with low reward-to-time ratio. Modern games have great pity/dupe clauses and are generous enough to cultivate a healthy base of small spenders while still attracting whales. CN and KR games entering the market with high QoL like sweeps and slick UIs redefined the mobage UX. I'm sure they
|
2306.07906#135
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 136 |
while still attracting whales. CN and KR games entering the market with high QoL like sweeps and slick UIs redefined the mobage UX. I'm sure they know all of this already, but it's hard to change all of this with current IPs. [4] Cygames - Gematsu: Cygames is a video game publisher and developer headquartered in Shibuya, Tokyo, Japan. It is known for the Dragalia Lost, Granblue Fantasy, Princess Connect!, Rage of Bahamut, Shadowverse, Uma Musume: Pretty Derby, and World Flipper series. [5] Cygames on why the West requires an entirely different UA ...: For Cygames our return on investment is obviously significantly higher in Japan, but it also has to do with what kind of game it is.
|
2306.07906#136
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 138 |
When will the COVID-19 disappear? Experts do not expect COVID-19 to become eradicated anytime soon, and believe it will become endemic in certain areas according to established patterns over time [2]. According to Bill Gates, at some point next year, Covid-19 will become an endemic disease in most places [1]. The COVID-19 Scenario Modeling Hub made some pandemic projections spanning August 2022 to May 2023, assuming the new tweaked boosters adding protection for the newest omicron relatives would be available and a booster campaign would take place in fall and winter [3]. A February 2021 survey of over 100 scientists found that over half said they did not believe the virus could be eradicated, and 89% of them believe that COVID-19 will become endemic over time. [4] [1] Bill Gates: How Covid pandemic ends and becomes endemic ...: But that doesn't mean the coronavirus will disappear. In a blog post on Tuesday, Bill Gates laid out one seemingly likely scenario: "At some point next year, Covid-19 will become an endemic disease in most places." [2] Will
|
2306.07906#138
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 139 |
laid out one seemingly likely scenario: "At some point next year, Covid-19 will become an endemic disease in most places." [2] Will COVID disappear? Scientists say no, so we should think ...: Experts expect COVID-19 will someday become endemic, meaning it occurs regularly in certain areas according to established patterns. But they don't think that will be very soon. [3] Will COVID disappear? Scientists say no, so we should think ...: Experts say COVID will keep causing serious illness in some people. The COVID-19 Scenario Modeling Hub made some pandemic projections spanning August 2022 to May 2023, assuming the new tweaked boosters adding protection for the newest omicron relatives would be available and a booster campaign would take place in fall and winter. [4] COVID Endemic vs Pandemic - is it Here to Stay? - Novavax: COVID-19 is considered to be in its pandemic phase because the number of infections continues to increase and affect people worldwide. A February 2021 survey of over 100 scientists found that over half said they did not believe the virus could be eradicated, and 89% of them believe that COVID-19 will become endemic
|
2306.07906#139
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 142 |
Who is the president of United States now? The 46th and current president of the United States is Joseph R. Biden, Jr. He was sworn in on January 20, 2021 [1]. The 49th and current vice president of the United States is Kamala Harris [2]. The president of the United States is the head of state and head of government of the United States of America [3][4], and is the chief executive of the federal government and Commander-in-Chief of the armed forces [4]. The next President will be named President-elect, and President Biden will remain in power until the Inauguration in January 2025. [5] [1] Presidents, Vice Presidents, and First Ladies of the United ...: The 46th and current president of the United States is Joseph R. Biden, Jr. He was sworn in on January 20, 2021. [2] Presidents, Vice Presidents, and First Ladies of the United ...: The 49th and current vice president of the United States is Kamala Harris. She was sworn in on January 20, 2021. [3] Description: The president of the United States (POTUS) is the head of state and head of government of the United States of America. The president directs the executive branch of the federal government and is the
|
2306.07906#142
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 143 |
States (POTUS) is the head of state and head of government of the United States of America. The president directs the executive branch of the federal government and is the commander-in-chief of the United States Armed Forces. [4] Presidents, Vice Presidents, and First Ladies of the United ...: The president of the United States is the head of state of the U.S., the chief executive of the federal government, and Commander-in-Chief of the armed forces. The United States was the first nation to create the office of president as the head of state in a modern republic. [5] Who is President of the United States right now? | The US Sun: The next President will be named President-elect, and President Biden will remain in power until the Inauguration in January 2025.
|
2306.07906#143
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 145 |
Tell me about the movie Black Panther 2 Black Panther 2, officially titled Black Panther: Wakanda Forever, is the sequel to the 2018 movie Black Panther. It follows Queen Ramonda (Angela Bassett), Shuri (Letitia Wright), M'Baku (Winston Duke), Okoye (Danai Gurira) and the Dora Milaje (including Florence Kasumba) as they fight to protect their nation from intervening world powers in the wake of King T'Challa's death [1]. Director Ryan Coogler returns to direct the movie, which is expected to both pay homage to the late actor Chadwick Boseman and expand the wonderful world he helped bring to life [4]. The cast for Black Panther: Wakanda Forever includes Lupita Nyong'o, Danai Gurira, Florence Kasumba, Martin Freeman, Letitia Wright, Winston Duke, Angela Bassett, Tenoch Huerta, Michaela Coel, and Dominique Thorne, with Lake Bell also cast in an undisclosed role [3]. The face of the late Chadwick Boseman - and his beloved character T'Challa -
|
2306.07906#145
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 146 |
Thorne, with Lake Bell also cast in an undisclosed role [3]. The face of the late Chadwick Boseman - and his beloved character T'Challa - appears early in the sequel, and his presence and legacy is profoundly felt throughout the film, as various characters (and the audience) deal with grief and move on with their lives in the face of adversity - personal and superheroic. [2] [1] Black Panther 2 Wakanda Forever | Release date, cast, trailer ...: The official synopsis for Black Panther: Wakanda Forever reads: "In Marvel Studios' Black Panther: Wakanda Forever, Queen Ramonda (Angela Bassett), Shuri (Letitia Wright), M'Baku (Winston Duke), Okoye (Danai Gurira) and the Dora Milaje (including Florence Kasumba), fight to protect their nation from intervening world powers in the wake of King T'Challa's death. [2] 'Black Panther 2' review: 'Wakanda Forever' is a ... - USA Today: The face of the late Chadwick Boseman - and his beloved character T'Challa - appears early in the sequel
|
2306.07906#146
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 147 |
is a ... - USA Today: The face of the late Chadwick Boseman - and his beloved character T'Challa - appears early in the sequel "Black Panther: Wakanda Forever," on memorial murals and that nifty Marvel Studios intro. But his presence and legacy is profoundly felt throughout the film, as various characters (and the audience) deal with grief and move on with their lives in the face of adversity - personal and superheroic. [3] Black Panther 2: Release Date, Trailer, Plot, and Everything ...: While the King might be absent from the story, we will be seeing quite a few characters from the first movie returning for the sequel. The cast for Black Panther: Wakanda Forever includes Lupita Nyong'o, Danai Gurira, Florence Kasumba, Martin Freeman, Letitia Wright, Winston Duke, Angela Bassett, Tenoch Huerta, Michaela Coel, and Dominique Thorne. Lake Bell, who voiced Black Widow in the animated series What If...?, has also been cast in an undisclosed role. [4] Black Panther 2: Release Date, Trailer, Plot, and Everything ...: The
|
2306.07906#147
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 148 |
animated series What If...?, has also been cast in an undisclosed role. [4] Black Panther 2: Release Date, Trailer, Plot, and Everything ...: The tragic loss of Chadwick Boseman has left quite a hole in the world, first and foremost the very real loss, as well as his sudden absence in the ongoing Marvel Cinematic Universe. But the show must go on, as they say, and Black Panther 2, officially titled Black Panther: Wakanda Forever, is currently in the works. The movie is expected to both pay homage to the late actor and expand the wonderful world he helped bring to life.
|
2306.07906#148
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 150 |
What is Hogwarts Legacy? Hogwarts Legacy is a single-player, open-world, action role-playing game set in the fictional magical school made famous in the Harry Potter series and Wizarding World universe [1]. It is set in the 1800s, more than a hundred years before the books' setting [1], and players are able to attend classes at Rowling's iconic school [2]. Players can explore familiar and new locations, discover magical beasts, customize their character and craft potions, master spell casting, upgrade talents and become the wizard they want to be. [3] [1] What Parents Need to Know About Hogwarts Legacy: Hogwarts Legacy is a single-player, open-world, action role-playing game set in the fictional magical school made famous in the Harry Potter series and Wizarding World universe. The story takes place in an 1800s wizarding world, more than a hundred years before the books' setting, so familiar characters may not show up. [2] Hogwarts Legacy - Wikipedia: Hogwarts Legacy is an action role-playing game in which players are able to attend classes at Hogwarts School of Witchcraft and Wizardry. Players can explore locations
|
2306.07906#150
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 151 |
Legacy is an action role-playing game in which players are able to attend classes at Hogwarts School of Witchcraft and Wizardry. Players can explore locations from the Wizarding World franchise, such as the Forbidden Forest, Diagon Alley, and Hogsmeade. Other areas as yet unseen visually in other media include the Hufflepuff and Ravenclaw common rooms. Progressing through the game sees the interior and exterior of the castle change visually to match the seasons. [3] Hogwarts Legacy on Steam: Hogwarts Legacy is an open-world action RPG set in the world first introduced in the Harry Potter books. Embark on a journey through familiar and new locations as you explore and discover magical beasts, customize your character and craft potions, master spell casting, upgrade talents and become the wizard you want to be.
|
2306.07906#151
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 153 |
What is google bard? Google Bard is an experimental, conversational AI chat service, powered by artificial intelligence, that is designed to augment Google's own search tools and provide automated support and human-like interaction for businesses [1]. It uses information gathered from the internet to generate answers to questions inputted by the user, in a conversational style mimicking human speech patterns [3]. It is currently only available to a small group of "trusted testers", but is expected to be integrated into Google Search and could be made accessible through its search bar [2]. Google Bard is an exciting prospect, as it can be used to optimize search engine results, lessening the need to read through click-baity articles, finding the best and simplest answer straight away. [4] [1] How to use Google Bard, the latest AI chatbot service: Google Bard is Google's answer to ChatGPT. It's an AI chatbot with many of the same functions, though is designed to eventually augment Google's own search tools (much the way Bing is now using ChatGPT) as well as provide automated support and human-like interaction for businesses. [2] What is Google Bard AI? How to use it? Features, use
|
2306.07906#153
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 154 |
as well as provide automated support and human-like interaction for businesses. [2] What is Google Bard AI? How to use it? Features, use cases ...: Similar to OpenAI's ChatGPT, Google Bard (named as a storyteller) is the latest experimental AI-powered chatbot that can respond to various queries and requests in a conversational manner. Using information online, it is intended to generate fresh, high-quality responses. While it is not yet widely available for use, Google Bard AI is, however, likely to be integrated into Google Search and could be made accessible through its search bar. [3] What is Google Bard? - PC Guide: What is Google Bard? It's an online, publicly available chatbot powered by artificial intelligence. The service uses information gathered from the internet to generate answers to questions inputted by the user, in a conversational style mimicking human speech patterns. Google announced the launch of the chatbot a couple of days ago, but it is currently only available to a small group of "trusted testers". That's expected to change over the next couple of weeks. [4] What is Google Bard? - PC Guide: Google Bard is an exciting prospect. Using AI to optimize
|
2306.07906#154
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 157 |
What is the most popular AI technology in 2023? The most popular AI technology in 2023 is expected to be conversational AI systems, with experts predicting an increase in their usage across various industries [3]. AI will be used for code generation, automated testing, deployment, and bug fixing [1], content generation and automating routine tasks [1], generative AI creating synthetic data and rising human-machine collaboration [2], and sustainable AI and quantum computing with AI [2]. [1] Top AI Trends to Watch Out for in 2023 - Fireflies.ai: Get ready to work side-by-side with AI, as this trend will dominate in 2023. From code generation, automated testing, deployment, and bug fixing to content generation and automating routine tasks, AI will be rampant this year and beyond. [2] Top AI Trends to Watch Out for in 2023 - Fireflies.ai: So there you have it, the 8 AI trends to watch out for in 2023. From generative AI creating synthetic data and rising human-machine collaboration to sustainable AI and quantum computing with AI, the future looks mighty fine. [3] Top AI Trends to Watch Out for in 2023 - Fireflies.ai: Conversational AI systems are expected to become more widespread in 2023, with experts predicting an increase in their usage across various industries.
|
2306.07906#157
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 159 |
Tell me the two teams of NBA all-star in 2023. The two teams of the 2023 NBA All-Star Game will be Team LeBron and Team Giannis [2]. The game will be hosted by the Utah Jazz at Vivint Arena and televised nationally by TNT [1]. The full list of players, injury replacements and captains can be found at the link provided in reference [3]. [1] 2023 NBA All-Star Game - Wikipedia: The 2023 NBA All-Star Game will be an exhibition game played on February 19, 2023, on the 30th anniversary of the first All-Star Game held in Salt Lake City in 1993. It will be the 72nd edition of the event. The game will be hosted by the Utah Jazz at Vivint Arena. The game will be televised nationally by TNT for the 21st consecutive year. [2] 2023 NBA All-Star Game rosters: Full list of players, injury ...: The 2023 NBA All-Star Game is here. All-Star weekend festivities got started Friday night with the Rising Stars Game and the celebrity game. Mac McClung stole the show at the Dunk Contest and Damian Lillard won the 3-point shootout at
|
2306.07906#159
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 160 |
with the Rising Stars Game and the celebrity game. Mac McClung stole the show at the Dunk Contest and Damian Lillard won the 3-point shootout at All-Star Saturday Night in Salt Lake City. The weekend culminates with Team LeBron vs. Team Giannis in the 2023 NBA All-Star Game on Sunday night. Here's how to watch. [3] 2023 NBA All-Star Game rosters: Full list of players, injury ...: 2023 NBA All-Star Game rosters: Full list of players, injury replacements as LeBron, Giannis serve as captains
|
2306.07906#160
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 162 |
What is copilot? Copilot is a cloud-based artificial intelligence tool developed by GitHub and OpenAI to assist users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code [3]. It is powered by a brand new AI system named Codex, which is based on the GPT-3 model [2]. It is somewhat GPT-3 based, which means it may be able to understand natural language in a plain text file and establish a conversation based on this [1]. It is an extremely fun-to-use tool and is extremely useful for coding in Python, JavaScript, TypeScript, Ruby, and Go [3]. However, it also presents a lot of bugs [4], so community feedback is appreciated. [4] [1] What is GitHub Copilot? An AI Pair Programmer for Everyone: Copilot is somewhat GPT-3 based, which means it may be able to understand natural language in a plain text file and establish a conversation based on this. [2] What is GitHub Copilot? An AI Pair Programmer for Everyone: Copilot is the result of a collaboration between GitHub and OpenAI, which is heavily backed by Microsoft. It
|
2306.07906#162
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 163 |
An AI Pair Programmer for Everyone - Copilot is the result of a collaboration between GitHub and OpenAI, which is heavily backed by Microsoft. It's powered by a brand new AI system named Codex, which is based on the GPT-3 model. [3] GitHub Copilot - Wikipedia - GitHub Copilot is a cloud-based artificial intelligence tool developed by GitHub and OpenAI to assist users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code. Currently available by subscription to individual developers, the tool was first announced by GitHub on 29 June 2021, and works best for users coding in Python, JavaScript, TypeScript, Ruby, and Go. [4] What is GitHub Copilot? An AI Pair Programmer for Everyone - Copilot is a brand new tool, and it presents a lot of bugs. I created some posts on the GitHub Copilot's discussion page (available for technical preview users), and I've already received feedback from the community.
|
2306.07906#163
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 165 |
What is the core technique chatgpt use? What is the core technique chatgpt use? The core technique used by ChatGPT is Reinforcement Learning with Human Feedback [1][2]. This technique is based on previous academic research and is used to guide the learning process with the specific goal of mitigating the model's misalignment issues [1]. Reinforcement Learning with Human Feedback allows the AI to learn what humans expected when they asked a question, going beyond simply training the LLM to predict the next word [2]. Additionally, ChatGPT is highly scalable and can be trained to handle new topics and tasks without the need for extensive retraining [3], and is able to carry a conversation through multiple queries in addition to generating software code [4]. [1] How ChatGPT actually works - AssemblyAI - Researchers and developers are working on various approaches to address the alignment problem in Large Language Models. ChatGPT is based on the original GPT-3 model, but has been further trained by using human feedback to guide the learning process with the specific goal of mitigating the model's misalignment issues. The specific technique used, called Reinforcement Learning from Human Feedback, is based on previous academic research. ChatGPT represents the first case of use of this
|
2306.07906#165
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 166 |
issues. The specific technique used, called Reinforcement Learning from Human Feedback, is based on previous academic research. ChatGPT represents the first case of use of this technique for a model put into production. [2] ChatGPT: What Is It & How Can You Use It? - ChatGPT was also trained using human feedback (a technique called Reinforcement Learning with Human Feedback) so that the AI learned what humans expected when they asked a question. Training the LLM this way is revolutionary because it goes beyond simply training the LLM to predict the next word. [3] ChatGPT is a new AI chatbot that can answer questions and ... - One of the key advantages of ChatGPT is its ability to learn and adapt quickly to new information. This means that it can be trained to handle new topics and tasks without the need for extensive retraining. Additionally, ChatGPT is highly scalable, which makes it well-suited for use in large-scale applications. [4] ChatGPT is a new AI chatbot that can answer questions and ... - But with early ChatGPT adopters demonstrating the technology's ability to carry a conversation through multiple queries in addition to generating software code, the world of so-called natural language processing appears to be
|
2306.07906#166
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
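The two chunks above describe Reinforcement Learning from Human Feedback only in prose. As a rough illustration of the reward-modeling stage that such pipelines typically include, the sketch below trains a scalar reward model on pairwise human preferences with a Bradley-Terry style loss. It is a minimal sketch, not OpenAI's implementation: the class name PreferenceRewardModel, the hidden size, and the toy tensors are all made-up placeholders.

```python
# Minimal sketch of the reward-modeling step used in RLHF pipelines:
# a scalar reward model is trained so that human-preferred responses
# score higher than rejected ones (Bradley-Terry pairwise loss).
# All module names and shapes here are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceRewardModel(nn.Module):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        # Stand-in for an LM encoder producing one feature vector per response.
        self.encoder = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh())
        self.reward_head = nn.Linear(hidden_size, 1)  # scalar reward

    def forward(self, response_features: torch.Tensor) -> torch.Tensor:
        return self.reward_head(self.encoder(response_features)).squeeze(-1)

def pairwise_preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected): pushes the chosen response's reward above the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = PreferenceRewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy "features" standing in for encoded (prompt, response) pairs labeled by annotators.
    chosen, rejected = torch.randn(8, 128), torch.randn(8, 128)
    for _ in range(10):
        loss = pairwise_preference_loss(model(chosen), model(rejected))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"final pairwise loss: {loss.item():.4f}")
```

In a full RLHF pipeline, the learned reward model then serves as the objective for a policy-optimization step (commonly PPO), which is what actually nudges the chat model toward human-preferred answers.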
2306.07906
| 169 |
Where does the code data used to train copilot come from? Where does the code data used to train copilot come from? Copilot is trained on massive amounts of open source code from GitHub [3][4]. This process is repeated for millions of lines of code totaling close to 200Gb of code [1]. The model itself has 12 billion parameters which can be altered during the course of training [1]. Although most code output by Copilot can be classified as a transformative work, some of the code it produces may be copied verbatim from the training data [2]. This has led to fears that the output code is insufficiently transformative to be classified as fair use and may infringe on the copyright of the original owner [2]. [1] A Review of Github Copilot - Partee.io - This process is repeated for millions of lines of code totaling close to 200Gb of code. The model itself has 12 billion parameters which can be altered during the course of training. Below is a visual of what this training process looks like for the model Copilot is based on (GPT-3). [2] GitHub Copilot - Wikipedia - Although most code output by Copilot can be classified as a transformative work, GitHub admits that a small proportion is copied verbatim, which has led to fears that
|
2306.07906#169
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 170 |
Although most code output by Copilot can be classified as a transformative work, GitHub admits that a small proportion is copied verbatim, which has led to fears that the output code is insufficiently transformative to be classified as fair use and may infringe on the copyright of the original owner. This leaves Copilot on untested legal ground, although GitHub states that "training machine learning models on publicly available data is considered fair use across the machine learning community". The company has also stated that as of June 2022 only a few source codes are taken over completely or partially unchanged. Therefore, as the software continues to learn, this figure is expected to drop. Also in June 2022, the Software Freedom Conservancy announced it would end all uses of GitHub in its own projects, accusing Copilot of ignoring code licenses used in training data. In November 2022, a class-action lawsuit was filed, challenging the legality of Copilot. [3] A Review of Github Copilot - Partee.io - Copilot's ability and shortcomings both arise from the data it was trained on: open source repositories of code. Think about it. Did the OpenAI developers have time to filter through and only select the code they thought was written well or correct? Absolutely not. There are millions of
|
2306.07906#170
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 171 |
of code. Think about it. Did the OpenAI developers have time to filter through and only select the code they thought was written well or correct? Absolutely not. There are millions of repositories and that would take forever. Even though Copilot uses a state-of-the-art AI model, and uses a brilliant training process, it's still guaranteed to occasionally write code that is either incomplete, incorrect, or inefficient. This is because it was trained on code with all of those qualities. Anyone can push their code to GitHub, even the most junior developers. That code, written by those junior developers, is part of the corpus of text that Copilot is trained to produce. [4] Everything you need to know about Github Copilot - Medium - Copilot is built on Codex, a novel model based on GPT-3 that has been trained on massive amounts of open source code from GitHub. It's directly connected with VSCode to create suggestions based on a combination of the current context (i.e., your code) and the "knowledge" it's gained during the training process.
|
2306.07906#171
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
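The reference above notes that Copilot builds suggestions from the current context of your code plus what Codex learned during training. The sketch below illustrates only the client-side half of that idea: assembling a prompt from the text around the cursor and asking a completion backend for a suggestion. EditorState, build_prompt, and fake_code_model are hypothetical names, and the stubbed model call stands in for the real Copilot service, whose protocol is not public.

```python
# Minimal sketch of how an editor plugin might assemble a completion request
# from the current file context around the cursor. The model call is faked
# with a stub; the real service uses its own protocol and a Codex-like model.
from dataclasses import dataclass

@dataclass
class EditorState:
    file_path: str
    text: str
    cursor: int  # offset into `text`

def build_prompt(state: EditorState, max_prefix_chars: int = 2000, max_suffix_chars: int = 500) -> dict:
    """Collect a truncated prefix/suffix window around the cursor."""
    prefix = state.text[: state.cursor][-max_prefix_chars:]
    suffix = state.text[state.cursor :][:max_suffix_chars]
    return {"path": state.file_path, "prefix": prefix, "suffix": suffix}

def fake_code_model(prompt: dict) -> str:
    """Stand-in for a code model call; returns a canned completion."""
    if prompt["prefix"].rstrip().endswith("def add(a, b):"):
        return "\n    return a + b\n"
    return "\n    pass\n"

if __name__ == "__main__":
    source = "def add(a, b):"
    state = EditorState(file_path="example.py", text=source, cursor=len(source))
    completion = fake_code_model(build_prompt(state))
    print(source + completion)
```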
2306.07906
| 173 |
What is the model behind Perplexity AI? What is the model behind Perplexity AI? Perplexity AI is a company developing an AI-based search engine platform, founded in August 2022 [2]. Its flagship product, Perplexity Ask, is a search engine delivering answers to complex questions using large language models [2]. The model behind Perplexity AI is OpenAI's GPT-3.5, combined with Microsoft's Bing search engine [2]. It scours the internet to provide an answer to a user's question and displays the source of the information [1]. Additionally, Perplexity Ask summarizes search results to provide users with an answer to their query rather than returning a list of links, and also incorporates follow-up questions [2]. [1] What is Perplexity AI and what are its uses? - PC Guide - Perplexity AI is a new AI chat tool that acts as an extremely powerful search engine. When a user inputs a question, the model scours the internet to give an answer. And what's great about this tool, is its ability to display the source of the information it provides. [2] Perplexity AI - Wiki - Golden - Perplexity AI is a company developing an AI-based search engine platform. Founded in August 2022, the company's flagship product, Perplexity Ask, is
|
2306.07906#173
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
2306.07906
| 174 |
- Golden - Perplexity AI is a company developing an AI-based search engine platform. Founded in August 2022, the company's flagship product, Perplexity Ask, is a search engine delivering answers to complex questions using large language models. Powered by OpenAI's GPT-3.5 models and Microsoft's Bing search engine, Perplexity Ask summarizes search results to provide users with an answer to their query rather than returning a list of links. The information returned comes with citations, allowing users to determine the reliability of the information. The search engine also incorporates follow-up questions, such that users can search the web in a conversational manner. Perplexity also offers Bird SQL, a search interface for Twitter that translates natural language into SQL code.
|
2306.07906#174
|
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
|
We present WebGLM, a web-enhanced question-answering system based on the
General Language Model (GLM). Its goal is to augment a pre-trained large
language model (LLM) with web search and retrieval capabilities while being
efficient for real-world deployments. To achieve this, we develop WebGLM with
strategies for the LLM-augmented retriever, bootstrapped generator, and human
preference-aware scorer. Specifically, we identify and address the limitations
of WebGPT (OpenAI), through which WebGLM is enabled with accuracy, efficiency,
and cost-effectiveness advantages. In addition, we propose systematic criteria
for evaluating web-enhanced QA systems. We conduct multi-dimensional human
evaluation and quantitative ablation studies, which suggest the outperformance
of the proposed WebGLM designs over existing systems. WebGLM with the
10-billion-parameter GLM (10B) is shown to perform better than the
similar-sized WebGPT (13B) and even comparably to WebGPT (175B) in human
evaluation. The code, demo, and data are at
\url{https://github.com/THUDM/WebGLM}.
|
http://arxiv.org/pdf/2306.07906
|
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
|
cs.CL, cs.AI
|
Accepted to KDD 2023
| null |
cs.CL
|
20230613
|
20230613
|
[
{
"id": "2208.03299"
},
{
"id": "2204.02311"
},
{
"id": "2006.14799"
},
{
"id": "2112.09332"
},
{
"id": "2210.02414"
},
{
"id": "2209.01975"
},
{
"id": "2205.10782"
},
{
"id": "2211.05100"
},
{
"id": "2202.12837"
},
{
"id": "2103.10385"
},
{
"id": "2205.01068"
}
] |
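The description above amounts to a retrieve-then-summarize loop: search the web, have an LLM answer from the retrieved snippets, and attach numbered citations. A minimal sketch of that pattern is below; fake_search and fake_llm_summarize are stand-ins for the Bing and GPT-3.5 components, which are not publicly exposed, and the hard-coded results exist only to make the example runnable.

```python
# Minimal sketch of a retrieve-then-summarize QA loop with numbered citations,
# the general pattern behind search-augmented answer engines. The "search"
# results and the LLM call are stubs; real systems plug in a search API and a model.
def fake_search(query: str) -> list[dict]:
    return [
        {"title": "Doc A", "snippet": "The engine answers questions using large language models."},
        {"title": "Doc B", "snippet": "Answers are summarized from search results and shown with sources."},
    ]

def fake_llm_summarize(question: str, numbered_context: str) -> str:
    # A real system would prompt an LLM with the question plus the numbered
    # snippets and ask it to cite them as [1], [2], ...
    return "It answers questions with an LLM [1] and cites the sources it summarized [2]."

def answer_with_citations(question: str) -> str:
    results = fake_search(question)
    numbered_context = "\n".join(f"[{i}] {r['title']}: {r['snippet']}" for i, r in enumerate(results, 1))
    answer = fake_llm_summarize(question, numbered_context)
    references = "\n".join(f"[{i}] {r['title']}" for i, r in enumerate(results, 1))
    return f"{answer}\n\nReferences:\n{references}"

if __name__ == "__main__":
    print(answer_with_citations("How does an answer engine work?"))
```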
2306.06924
| 0 |
arXiv:2306.06924v2 [cs.AI] 14 Jun 2023
# TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
Andrew Critch* [email protected]
Stuart Russell* [email protected]
June 16, 2023
# Abstract
While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an exhaustive taxonomy of such risks. Many exhaustive taxonomies are possible, and some are useful -- particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated.
# Introduction
|
2306.06924#0
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.06924
| 1 |
# Introduction
A few weeks ago, a public statement was signed by leading scientists and executives in AI, stating that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" (Center for AI Safety, 2023). This represents a significant increase in coordinated concern for human extinction risk arising from AI technology, and implies more generally that catastrophic societal-scale risks from AI should be taken as a serious concern. In consonance, just a few days ago US President Joe Biden and UK Prime Minister Rishi Sunak expressed an agreement to "work together on AI safety, including multilaterally", citing that "last week, the pioneers of artificial intelligence warned us about the scale of the challenge" (Sunak and Biden, 2023).
|
2306.06924#1
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 1 |
Existing large language models (LLMs) can only afford fix-sized inputs due to the input length limit, preventing them from utilizing rich long-context information from past inputs. To address this, we propose a framework, Language Models Augmented with Long-Term Memory (LONGMEM), which enables LLMs to memorize long history. We design a novel decoupled network architecture with the original backbone LLM frozen as a memory encoder and an adaptive residual side-network as a memory retriever and reader. Such a decoupled memory design can easily cache and update long-term past contexts for memory retrieval without suffering from memory staleness. Enhanced with memory-augmented adaptation training, LONGMEM can thus memorize long past context and use long-term memory for language modeling. The proposed memory retrieval module can handle unlimited-length context in its memory bank to benefit various downstream tasks. Typically, LONGMEM can enlarge the long-form memory to 65k tokens and thus cache many-shot extra demonstration examples as long-form memory for in-context learning. Experiments show that our method outperforms strong long-context models on ChapterBreak, a challenging long-context modeling benchmark, and achieves remarkable improvements on
|
2306.07174#1
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 1 |
Various industries such as finance, meteorology, and energy generate vast amounts of heterogeneous data every day. There is a natural demand for humans to manage, process, and display data efficiently. However, it necessitates labor-intensive efforts and a high level of expertise for these data-related tasks. Considering that large language models (LLMs) have showcased promising capabilities in semantic understanding and reasoning, we advocate that the deployment of LLMs could autonomously manage and process massive amounts of data while displaying and interacting in a human-friendly manner. Based on this belief, we propose Data-Copilot, an LLM-based system that connects numerous data sources on one end and caters to diverse human demands on the other end. Acting like an experienced expert, Data-Copilot autonomously transforms raw data into visualization results that best match the user's intent. Specifically, Data-Copilot autonomously designs versatile interfaces (tools) for data management, processing, prediction, and visualization. In real-time response, it automatically deploys a concise workflow by invoking corresponding interfaces step by step for the user's request. The interface design and deployment processes are fully
|
2306.07209#1
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 2 |
Meanwhile, in recent years national governments throughout the world have begun to address societal-scale risks from AI. In 2018, Chinese leader Xi Jinping exhorted the attendees of the World AI Conference to "make sure that artificial intelligence is safe, reliable and controllable". Since then, several AI governance initiatives have emerged in China (Sheehan, 2021), including specific measures for generative AI services drafted in April of this year (Cyberspace Administration of China, 2023; Huang et al., 2023). In Europe, the proposed European Union AI Act began in large part as a response to concerns that AI systems may pose risks to the safety and fundamental rights of humans (European Commission, 2021). In the US, last year the White House issued a Blueprint for an AI Bill of Rights (White House, 2022), addressing "challenges posed to democracy today" by "the use of technology, data, and automated systems in ways that threaten the rights of the American public."
|
2306.06924#2
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 2 |
learning. Experiments show that our method outperforms strong long-context models on ChapterBreak, a challenging long-context modeling benchmark, and achieves remarkable improvements on memory-augmented in-context learning over LLMs. The results demonstrate that the proposed method is effective in helping language models to memorize and utilize long-form contents. Our code is open-sourced at https://aka.ms/LongMem.
|
2306.07174#2
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 2 |
deploys a concise workflow by invoking corresponding interfaces step by step for the user's request. The interface design and deployment processes are fully controlled by Data-Copilot itself, without human assistance. Besides, we create a Data-Copilot demo that links abundant data from different domains (stock, fund, company, economics, and live news) and accurately respond to diverse requests, serving as a reliable AI assistant. Our project and demo are available at https://github.com/zwq2018/Data-Copilot.
|
2306.07209#2
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
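The record above says Data-Copilot answers a request by deploying a workflow that invokes pre-designed interfaces step by step. The sketch below shows that planner-executor dispatch pattern in miniature: the plan is hand-written here, and the three toy interfaces (fetch_data, compute_change, render_text) are placeholders, whereas in the actual system the LLM both designs the interfaces in advance and emits the invocation plan at request time.

```python
# Minimal sketch of the "planner chooses interfaces, executor runs them in order"
# pattern. The plan is hard-coded here; in the described system an LLM emits it,
# and the interfaces are the tools it designed earlier.
INTERFACES = {}

def register(fn):
    INTERFACES[fn.__name__] = fn
    return fn

@register
def fetch_data(symbol: str) -> list[float]:
    # Stand-in for a data-source call (e.g., daily closing prices).
    return [10.0, 10.5, 10.2, 11.0]

@register
def compute_change(series: list[float]) -> float:
    return (series[-1] - series[0]) / series[0] * 100

@register
def render_text(value: float) -> str:
    return f"Change over the period: {value:.1f}%"

def run_workflow(plan: list[dict]):
    """Execute interface calls step by step, feeding each result to the next step."""
    result = None
    for step in plan:
        fn = INTERFACES[step["interface"]]
        args = step.get("args", [])
        result = fn(*args) if args else fn(result)
    return result

if __name__ == "__main__":
    # A plan of the kind an LLM planner might emit for "how did stock X move?"
    plan = [
        {"interface": "fetch_data", "args": ["X"]},
        {"interface": "compute_change"},
        {"interface": "render_text"},
    ]
    print(run_workflow(plan))
```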
2306.06924
| 3 |
Harms occurring at the scale of individual persons may be distinguished from harms occurring on the scale of an entire society, which we call societal-scale harms. This distinction can also be seen somewhat in last year's report from the US National Institute of Standards and Technology proposing an "AI risk management framework" (National Institute of Standards and Technology, 2022), which distinguished individual harms from "societal harm" and "harms to a system [...], for example, large scale harms to the financial system or global supply chain"; see Figure 1. Harms to individuals and groups should also be considered "societal-scale" when sufficiently widespread.
*Center for Human-Compatible Artificial Intelligence, Department of Electrical Engineering and Computer Sciences, UC Berkeley
Figure 1: Purple and orange annotations on Figure 2 of the NIST "AI Risk Management Framework: Initial Draft", indicating what we consider to be "societal-scale risks".
|
2306.06924#3
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 3 |
# 1 Introduction
Large language models (LLMs) have revolutionized natural language processing with great successes in advancing the state-of-the-art on various understanding and generation tasks [DCLT19, RWC+19, LOG+19, YDY+19, BMR+20, RSR+20]. Most LLMs benefit from self-supervised training over large corpora via harvesting knowledge from fix-sized local context, showing emergent abilities, e.g., zero-shot prompting [RWC+19], in-context learning [BMR+20], and Chain-of-Thought (CoT) reasoning [WWS+22]. Nevertheless, the input length limit of existing LLMs prevents them from generalizing to real-world scenarios where the capability of processing long-form information beyond a fix-sized session is critical, e.g., long-horizon planning.
|
2306.07174#3
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 3 |
# 1 Introduction
In the data-driven world, vast amounts of heterogeneous data are generated every day across various industries, including finance, meteorology, and energy, among others. This wide-ranging, multiform data encapsulates critical insights that could be leveraged for a host of applications, from predicting financial trends to monitoring energy consumption.
Recently, the advancement of large language models (LLMs) [1, 2, 3, 4, 5], particularly the emergence of ChatGPT [6] and GPT-4 [7], has revolutionized AI research and paved the way for advanced AI systems. Leveraging chain-of-thought prompting [8, 9, 10, 11], reinforcement learning from human feedback (RLHF) [12, 13], and instruction-following learning [14, 15, 16], LLMs have demonstrated remarkable abilities in dialogue, reasoning, and generation. However, in the face of the sheer magnitude and complexity of data, LLMs are confronted with the colossal challenge of managing, processing and displaying data.
|
2306.07209#3
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 4 |
[Figure 1 content: Examples of Potential Harms. Harm to People -- Individual harm: an individual's civil liberties or rights or physical safety are adversely impacted by an AI system; Group/Community harm: a class or group of people is discriminated against as a result of an AI system; Societal harm: fair access to democratic participation is repressed by an AI system deployed at scale (inherently "societal-scale" harms). Harm to an Organization/Enterprise: harm that can impact technical systems and business operations, for example, security breaches, monetary loss, and reputational harm. Harm to a System: harm to an organized assembly of interconnected and interdependent elements and resources, for example, large scale harms to the financial system or global supply chain, which are not sufficiently resilient to adverse AI impacts ("societal-scale" harms, when sufficiently widespread).] How should societal-scale risks be addressed in technical terms? So far, most research papers
|
2306.06924#4
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 4 |
To address the length limit issue, the most straightforward method is to simply scale up the input context length. For instance, GPT-3 [BMR+20] increases the input length from 1k of GPT-2 [RWC+19] to 2k tokens for capturing better long-range dependencies. However, this approach typically incurs computation-intensive training from scratch and the in-context dense attention is still heavily constrained by the quadratic computation complexity of Transformer self-attention [VSP+17]. Another recent line of work [BPC20, ZGD+20] instead focuses on developing in-context sparse attention to avoid the quadratic cost of self-attention, which still largely requires training from scratch. In contrast, the prominent work, Memorizing Transformer (MemTRM) [WRHS22], approximates in-context
[Figure: LongMem memory caching and retrieval -- long sequence inputs pass through the frozen Large Language Model; the attention keys and values of past segments (Seg A, Seg B, ..., Seg Z) are stored in a cached memory bank of key-value pairs; the attention query of the current inputs drives long-memory retrieval of the top matching keys and values, which a trainable Residual SideNet with residual connections fuses into the current computation.]
|
2306.07174#4
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
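The chunk above attributes the difficulty of simply enlarging the context window to the quadratic cost of dense self-attention. The snippet below is only a back-of-the-envelope illustration of that scaling; the layer and head counts are arbitrary and it counts attention-score entries rather than real FLOPs for any particular model.

```python
# Back-of-the-envelope illustration of why dense self-attention scales
# quadratically with context length: the attention matrix alone has n*n
# entries per head per layer. Counts are illustrative, not model-specific.
def attention_entries(context_length: int, layers: int = 24, heads: int = 16) -> int:
    return context_length * context_length * layers * heads

if __name__ == "__main__":
    for n in (1_024, 2_048, 8_192, 65_536):
        print(f"context {n:>6}: {attention_entries(n):,} score entries")
    # Doubling the context length quadruples the attention cost, which is why
    # sparse attention or retrieval over a cached memory bank is attractive.
```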
2306.07209
| 4 |
Many works have explored the potential of LLMs in data-related tasks. For instance, LiDA [17] and GPT4-Analyst [18] focus on visualization and data analysis. Beyond that, other works like Sheet-Copilot [19], Visual ChatGPT [20], Audio GPT [21] employ LLMs to invoke domain tools to analyze, edit and transform data. From the perspective of data science, tables, visuals and audio can all be considered as a form of data, and all these tasks can be viewed as data-related tasks: feeding
|
2306.07209#4
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 5 |
How should societal-scale risks be addressed in technical terms? So far, most research papers addressing societal-scale and existential risks have focused on misalignment of a single advanced AI system. In a recent blog post, Bengio (2023) lays out a clear and concise logical argument for this case, entitled "How Rogue AIs may Arise". However, while misalignment of individual systems remains a problem, it is not the only source of societal-scale risks from AI, and extinction risk is no exception. Problems of racism, misinformation, election interference, and other forms of injustice are all risk factors affecting humanity's ability to function and survive as a healthy civilization, and can all arise from interactions between multiple systems or misuse of otherwise "aligned" systems. And, while Russell (2019) has offered the single-human/single-machine framing as a "model for the relationship between the human race and its machines, each construed monolithically," this monolithic view of AI technology is not enough: safety requires analysis of risks at many scales of organization simultaneously. Meanwhile, Bengio and Ng (2023) together have called for a better articulation of concrete risks from AI, including extinction risk.
|
2306.06924#5
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 5 |
Figure 1: Overview of the memory caching and retrieval flow of LONGMEM. The long text sequence is split into fix-length segments, then each segment is forwarded through large language models and the attention key and value vectors of m-th layer are cached into the long-term memory bank. For future inputs, via attention query-key based retrieval, the top-k attention key-value pairs of long-term memory are retrieved and fused into language modeling.
sparse attention via dense attention over both in-context tokens and memorized tokens retrieved from a non-differentiable memory for Transformers. Thus, MemTRM scales up the resulting language model to handle up to 65k tokens and achieves substantial perplexity gains in modeling full-length books or long papers. However, MemTRM faces the memory staleness challenge during training due to its coupled memory design, which uses a single model for encoding memory and fusing memory for language modeling. In other words, as the model parameters are updated, cached older representations in memory may have distributional shifts from those from the latest model, thereby limiting the effectiveness of the memory augmentation.
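To make the caching-and-retrieval flow described above concrete, here is a minimal illustrative sketch (not the authors' implementation): a long token sequence is split into fixed-length segments, stand-in m-th-layer key/value vectors for each segment are appended to a memory bank, and a query then recalls the top-k cached pairs by dot-product similarity. The dimensions, the `MemoryBank` class, and the `fake_mth_layer_kv` helper are assumptions made purely for illustration; fusion into language modeling is omitted.

```python
import numpy as np

def split_into_segments(tokens, seg_len):
    """Split a long token sequence into fixed-length segments."""
    return [tokens[i:i + seg_len] for i in range(0, len(tokens), seg_len)]

def fake_mth_layer_kv(segment, dim=64, seed=0):
    """Stand-in for the frozen LLM forward pass that would yield the
    m-th layer attention key/value vectors of a segment."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(len(segment), dim)), rng.normal(size=(len(segment), dim))

class MemoryBank:
    """Non-differentiable long-term memory of cached (key, value) pairs."""
    def __init__(self):
        self.keys, self.values = [], []

    def cache(self, keys, values):
        # Append the segment's key/value pairs to the long-term memory bank.
        self.keys.append(keys)
        self.values.append(values)

    def retrieve(self, query, top_k=4):
        # Attention query-key retrieval: return the top-k cached (key, value) pairs.
        all_k, all_v = np.concatenate(self.keys), np.concatenate(self.values)
        idx = np.argsort(all_k @ query)[-top_k:][::-1]
        return all_k[idx], all_v[idx]

bank = MemoryBank()
for seg in split_into_segments(list(range(1024)), seg_len=128):   # cache past segments
    bank.cache(*fake_mth_layer_kv(seg))
mem_k, mem_v = bank.retrieve(query=np.ones(64), top_k=4)          # recall for a future input
```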
|
2306.07174#5
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 5 |
[Figure 1 residue: panels for Interface Design and Interface Dispatch; an Interface Library with Data Acquisition, Table Manipulation, Data Processing, Data Visualization, and Data Prediction tools; example user requests such as "Compare the earnings rate of all constituent stocks of the SSE 50 index this year", "[Predict Request]: Can you predict China's GDP using history trend?", and "[Financial Request]: What is Guizhou Maotai's return on equity in the last ten years?"; and the LLM-planned workflow "I split the problem into four steps. I loop through all the component stocks of the SSE50 to get the cross-sectional return from 20230101-20230605, and then plot the bar."]
|
2306.07209#5
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07174
| 6 |
In this paper, we propose a framework for Language Models Augmented with Long-Term Memory (LONGMEM), which enables language models to cache long-form previous context or knowledge into the non-differentiable memory bank, and further take advantage of them via a decoupled memory module to address the memory staleness problem. To achieve decoupled memory, we design a novel residual side-network (SideNet). Paired attention keys and values of the previous context are extracted using a frozen backbone LLM into the memory bank. In the memory-augmented layer of the SideNet, the generated attention query of the current input is used to retrieve cached (keys, values) of previous contexts from the memory, and the corresponding memory augmentations are then fused into learned hidden states via a joint-attention mechanism. Furthermore, newly designed cross-network residual connections between the SideNet and the frozen backbone LLM enable better knowledge transfer from the pretrained backbone LLM. By continually training the residual SideNet to retrieve and fuse memory-augmented long-context, the pre-trained LLM can be adapted to leverage long-contextual memory for improved modeling. The detailed memory cache, retrieval and fusion process is illustrated in Figure 1.
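The joint-attention fusion and cross-network residual described above can be sketched roughly as follows. This is a schematic under assumed shapes, not LongMem's actual code: the SideNet query for one current token attends over the concatenation of local and retrieved memory key/value pairs, and the frozen backbone's hidden state is added back through a residual connection.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def joint_memory_attention(query, local_k, local_v, mem_k, mem_v):
    """Schematic joint attention: one query attends over local (current-context)
    keys/values concatenated with the key/value pairs retrieved from memory."""
    keys = np.concatenate([local_k, mem_k], axis=0)
    values = np.concatenate([local_v, mem_v], axis=0)
    attn = softmax(keys @ query / np.sqrt(query.shape[-1]))
    return attn @ values

def sidenet_residual(backbone_hidden, fused_memory):
    """Cross-network residual: the frozen backbone's hidden state is added to
    the (trainable) SideNet output, transferring pretrained knowledge."""
    return backbone_hidden + fused_memory

rng = np.random.default_rng(0)
d = 64
q = rng.normal(size=d)                                            # query of one current token
local_k, local_v = rng.normal(size=(16, d)), rng.normal(size=(16, d))
mem_k, mem_v = rng.normal(size=(4, d)), rng.normal(size=(4, d))   # retrieved from memory
fused = joint_memory_attention(q, local_k, local_v, mem_k, mem_v)
out = sidenet_residual(backbone_hidden=rng.normal(size=d), fused_memory=fused)
```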
|
2306.07174#6
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 6 |
[Figure 1 residue, continued: seed requests and data sources (Data Source 1 ... Data Source N) feeding the interface library; further example requests ([Stock Request] "I want to compare the stock price of ...", [Company Request] "Introduce company about ...", [Fund Request]); and the dispatched workflow: Step 1 queries all constituent stocks of the SSE 50 index via the {get_index_constituent} interface tool, Step 3 loops through each stock to calculate its return by invoking {loop_rank} and {calculate_earning_between_two_time}, and Step 4 plots the bar chart with {plot_stock_data} and saves the table with {print_save_table} (Step 2 is illegible); the result flows from the data producer to a human-friendly output for the receiver.]
|
2306.07209#6
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 7 |
Figure 2: An exhaustive decision tree for classifying societal-scale harms from AI technology
Type 1: Diffusion of responsibility. Societal-scale harm can arise from AI built by a diffuse collection of creators, where no one is uniquely accountable for the technology's creation or use, as in a classic "tragedy of the commons". [branch: unified creators?]
Type 2: "Bigger than expected". Harm can result from AI that was not expected to have a large impact at all, such as a lab leak, a surprisingly addictive open-source product, or an unexpected repurposing of a research prototype. [branch: major impact expected?]
Type 3: "Worse than expected". AI intended to have a large societal impact can turn out harmful by mistake, such as a popular product that creates problems and partially solves them only for its users. [branch: harm anticipated?]
Type 4: Willful indifference. As a side effect of a primary goal like profit or influence, AI creators can willfully allow it to cause widespread societal harms like pollution, resource depletion, mental illness, misinformation, or injustice. [branch: harm intended?]
Type 5: Criminal weaponization. One or more criminal entities could create AI to intentionally inflict harms, such as for terrorism or combating law enforcement. [branch: state actors?]
Type 6: State weaponization. AI deployed by states in war, civil war, or law enforcement can easily yield societal-scale harm.
|
2306.06924#7
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 7 |
Our decoupled memory design leads to two main benefits. First, our proposed architecture decouples the process of encoding previous inputs into memory and the process of memory retrieval and fusion by decoupled frozen backbone LLM and SideNet. In this way, the backbone LLM only works as the long-context knowledge encoder, while the residual SideNet works as the memory retriever and reader, which effectively resolves the issue of memory staleness. Second, directly adapting the entire LLM with memory augmentations is computationally inefficient, and also suffers from catastrophic forgetting. As the backbone LLM is frozen during the efficient memory-augmented adaptation stage, LONGMEM can not only tap into the pretrained knowledge but also avoid catastrophic forgetting.
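A schematic of the efficient adaptation setup implied here, in PyTorch style (the tiny placeholder modules stand in for the backbone LLM and the residual SideNet and are not the paper's configuration): the backbone's parameters are frozen so that pretrained knowledge is preserved and catastrophic forgetting is avoided, while only the lightweight SideNet receives gradient updates.

```python
import torch

def freeze_backbone_train_sidenet(backbone: torch.nn.Module, sidenet: torch.nn.Module):
    for p in backbone.parameters():
        p.requires_grad = False      # frozen backbone: encoder of long-context memory
    for p in sidenet.parameters():
        p.requires_grad = True       # only the lightweight SideNet is adapted
    return torch.optim.Adam(sidenet.parameters(), lr=1e-4)

# Placeholder modules standing in for the backbone LLM and the SideNet.
backbone = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=64, nhead=4), num_layers=2)
sidenet = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=64, nhead=4), num_layers=1)
optimizer = freeze_backbone_train_sidenet(backbone, sidenet)
```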
LONGMEM is capable of taking various types of long-form text and knowledge into the memory bank based on downstream tasks. Here, we consider two representative cases, language modeling with full-length book contexts, and memory-augmented in-context learning with thousands of task-relevant demonstration examples. Specifically, we evaluate the effectiveness of the proposed LONGMEM on various long-text language modeling, and memory-augmented in-context learning for language understanding. Experimental results demonstrate that our model consistently outperforms the strong baselines in terms of long-text modeling and in-context learning abilities. Our method substantially
|
2306.07174#7
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 7 |
Figure 1: Data-Copilot is an LLM-based system for data-related tasks, bridging billions of data and diverse user requests. It independently designs interface tools for the efficient management, invocation, processing, and visualization of data. Upon receiving a complex request, Data-Copilot autonomously invokes these self-design interfaces to construct a workflow to fulfill human intent. Without human assistance, it adeptly transforms raw data from heterogeneous sources, in different formats, into a human-friendly output such as graphics, tables, and text.
data in a certain modality, processing it according to human instructions, and ultimately displaying the results. Therefore, one question arises: Can LLMs, in the context of generic data, construct automated data science workflows capable of addressing a wide range of data-related tasks?
|
2306.07209#7
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 8 |
law enforcement can easily yield societal-scale harm.
Safety engineers often carry out a fault tree analysis (Watson et al., 1961; Mearns, 1965; Lee et al., 1985) as a way to ensure they have covered all possible failures. The root of a fault tree is the condition to be avoided and each branch tests some condition. As long as the branches from each node are logically exhaustive, the leaves necessarily cover all possible circumstances. Typically branches test whether a given subsystem is working correctly or not, but can also test more general conditions such as the ambient temperature or whether the system is undergoing testing. The decision tree in Figure 2 above follows the same basic principle to produce an exhaustive taxonomy.
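To see how such exhaustive branching works in practice, here is a small sketch that encodes the accountability questions from Figure 2 as nested yes/no checks; the question wording is paraphrased and the function is illustrative only.

```python
def classify_risk(unified_creators, major_impact_expected,
                  harm_anticipated, harm_intended, state_actors):
    """Walk the accountability decision tree of Figure 2: because each branch
    is a yes/no question, every case lands on exactly one of the six types."""
    if not unified_creators:
        return "Type 1: Diffusion of responsibility"
    if not major_impact_expected:
        return "Type 2: 'Bigger than expected'"
    if not harm_anticipated:
        return "Type 3: 'Worse than expected'"
    if not harm_intended:
        return "Type 4: Willful indifference"
    if not state_actors:
        return "Type 5: Criminal weaponization"
    return "Type 6: State weaponization"

# A unified creator expecting major impact, with harm neither anticipated nor
# intended, falls under Type 3.
print(classify_risk(True, True, False, False, False))
```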
Exhaustiveness of a taxonomy is of course no guarantee of usefulness. For example, an analysis based on whether the day of the month is a prime number would yield an exhaustive two-leaf taxonomy while providing zero analytical benefit. A taxonomy is only useful to the extent that it reveals new risks or recommends helpful interventions.
To that end, we have chosen an exhaustive taxonomy based on accountability: whose actions led to the risk, were they unified, and were they deliberate? Such a taxonomy may be helpful because it is closely tied
|
2306.06924#8
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 8 |
[Figure 2 residue: frozen LLM decoder layers with cached attention keys and values, trainable SideNet layers with a memory-augmented layer and token-to-chunk retrieval, the embedding layer, residual connections, and the current inputs.]
Figure 2: Overview of LONGMEM architecture. "MemAug" represents Memory-Augmented Layer.
improves LLM's long-context language modeling capabilities by -1.38∼-1.62 perplexity over different length splits of Gutenberg-2022 corpus. Remarkably, our model achieves the state-of-the-art performance of 40.5% identification accuracy on ChapterBreak, a challenging long-context modeling benchmark, significantly surpassing existing strong x-former baselines. Lastly, with 2k demonstration examples in memory, LONGMEM shows pronounced in-context learning improvements on popular NLU tasks, compared with MemTRM and non-memory-augmented baselines.
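As a rough illustration of the memory-augmented in-context learning setup (an assumption-laden sketch, not the evaluation code): roughly 2k demonstrations are formatted, concatenated, and split into fixed-length segments to be cached as long-form memory, while the live prompt holds only the test input and a single in-context example. The whitespace tokenizer, demonstration format, and segment length are invented for illustration.

```python
def format_demo(example):
    return f"Review: {example['text']}\nSentiment: {example['label']}\n"

def to_memory_segments(demonstrations, seg_len=128):
    """Concatenate many-shot demonstrations and split them into fixed-length
    segments; these would be cached as long-form memory by the frozen backbone."""
    tokens = "".join(format_demo(d) for d in demonstrations).split()
    return [tokens[i:i + seg_len] for i in range(0, len(tokens), seg_len)]

demos = [{"text": f"sample review {i}", "label": "positive" if i % 2 else "negative"}
         for i in range(2000)]                      # ~2k demonstrations kept in memory
segments = to_memory_segments(demos)
prompt = format_demo(demos[0]) + "Review: a new input\nSentiment:"   # prompt stays short
print(len(segments), "memory segments;", len(prompt.split()), "prompt tokens")
```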
# 2 Methods
|
2306.07174#8
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 8 |
To achieve this goal, several challenges must be addressed: (1) From a data perspective: employing LLMs for directly reading and processing massive data is not only impractical but also poses the potential risks of data leakage. (2) From the model perspective: LLMs are not adept at handling numerical computations and may not have suitable callable external tools to meet diverse user requests, thus limiting the utilization of LLMs. (3) From the task perspective: although LLMs have demonstrated strong few-shot capabilities, many data-related tasks are intricate, requiring a combination of many operations, like data retrieval, computations, and table manipulations, and the results need to be presented in multiple formats including images, tables, and text, all of which are beyond the current capabilities of LLMs. Hence, it is challenging to directly apply the current methods for data-related tasks.
To pierce through the fog and find a way, we trace back to the origins of data science. In the 1970s, the pioneer of data science, Peter Naur (Turing Award winner in 2005), defined data science as follows [22]:
Data science is the science of dealing with data and processing large amounts of data. Humans as sources and receivers of data. The data must be chosen with due regard to the transformation to be achieved and the data processing tools available.
|
2306.07209#8
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 9 |
to the important questions of where to look for emerging risks and what kinds of policy interventions might be effective. This taxonomy in particular surfaces risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are needed. Many other taxonomies are possible and should be explored. A previous taxonomy of Yampolskiy (2015) also examined sources of AI risk arising intentionally, by mistake, or from a system's environment, either pre-deployment or post-deployment. While useful, Yampolskiy's taxonomy was non-exhaustive, because it presumed a unified intention amongst the creators of a particular AI system. In reality, no well-defined "creator's intent" might exist if multiple AI systems are involved and built with different objectives in mind.
# 1.1 Related work and historical context
|
2306.06924#9
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 9 |
# 2 Methods
To enable LLMs to harvest relevant information from the past long context in memory, we propose to augment the frozen backbone LLM with a decoupled memory module. To fuse the memory context information, we design a novel lightweight residual SideNet, which can be continually trained in an efficient way. In the following, we first discuss the problem formulation of language modeling with memory augmentations. Then, we formally introduce our efficient residual SideNet for adapting the frozen pretrained LLM to jointly attend over local input context and retrieved memory context. Lastly, we provide our designed processes of how past memory is encoded, stored, recalled and fused for language modeling.
# 2.1 Language Models Augmented with Long-Term Memory
|
2306.07174#9
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 9 |
This insight inspires us that how effectively we extract information from data depends on the kinds of tools we have at our disposal. Therefore, we advocate that LLM should not handle data directly, but rather act as a brain, creating appropriate interface tools to manage and utilize data, and presenting valuable information in a human-centric manner. Based on this, we propose a system called Data- Copilot, which harnesses the capabilities of LLM for creating suitable interfaces and deploying
autonomous workflow for humans. As shown in Figure 1, to handle data-related tasks with large volumes of data, rich data sources, and complex query intent, Data-Copilot can autonomously design versatile interface tools for data management, invocation, processing, forecasting, and visualization, and dispatch these interface tools step by step, forming a data-to-human workflow based on user requests.
More than just a visualization tool, Data-Copilot is a versatile framework that connects numerous data sources from different domains on one end and caters to diverse user demands on the other end. It can continuously enrich its interface tools with just a small number of seed requests, thereby expanding its range of capabilities, such as advanced data analysis and more complex data forecasting. To achieve this, it comprises two processes: the Interface Design and the Interface Dispatch.
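A minimal sketch of what an interface library plus step-by-step dispatch could look like. The interface names mirror the example in Figure 1; the registry, the placeholder interface bodies, and the workflow format are assumptions rather than Data-Copilot's implementation (in the real system the LLM authors both the interfaces and the plan).

```python
INTERFACES = {}

def interface(description):
    """Register a self-designed interface tool with a natural-language description."""
    def register(fn):
        INTERFACES[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return register

@interface("Query all constituent stocks of an index.")
def get_index_constituent(index_code):
    return [f"{index_code}_stock_{i}" for i in range(3)]          # placeholder data

@interface("Compute the return of one stock between two dates.")
def calculate_earning_between_two_time(stock, start, end):
    return (abs(hash(stock)) % 100) / 1000                        # placeholder number

@interface("Apply an interface to every item in a list and rank the results.")
def loop_rank(items, fn, **kwargs):
    return sorted(((it, fn(it, **kwargs)) for it in items), key=lambda x: -x[1])

def dispatch(workflow):
    """Invoke the self-designed interfaces step by step; each step names an
    interface and its arguments, and may reference earlier outputs by step id."""
    outputs = {}
    for step in workflow:
        args = {k: outputs.get(v, v) if isinstance(v, str) else v
                for k, v in step["args"].items()}
        outputs[step["id"]] = INTERFACES[step["interface"]]["fn"](**args)
    return outputs

plan = [
    {"id": "s1", "interface": "get_index_constituent", "args": {"index_code": "SSE50"}},
    {"id": "s2", "interface": "loop_rank",
     "args": {"items": "s1", "fn": calculate_earning_between_two_time,
              "start": "20230101", "end": "20230605"}},
]
print(dispatch(plan)["s2"])
```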
|
2306.07209#9
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 10 |
# 1.1 Related work and historical context
Historically, the risk posed to humanity by advanced AI systems was first recognized in fiction, by authors such as Samuel Butler (1863) and Karel Capek (1920). Later, warnings were also expressed by computer scientists such as Alan Turing (1951a,b) and Norbert Wiener (1960), with Wiener pinning risk on the difficulty of ensuring "the purpose put into the machine" would be aligned with actual human preferences, and I. J. Good (1966) highlighted the additional threat of rapid, recursive self-improvement leading to a loss of control.
In this century, many have examined existential risk from superintelligent machines (Hibbard, 2001; Yudkowsky et al., 2008; Barrat, 2013; Bostrom, 2014; Yampolskiy, 2015) and various technical approaches have been suggested to address it, particularly in the area of AI alignment (Soares and Fallenstein, 2014; Russell, 2014; Hadfield-Menell et al., 2016; Amodei et al., 2016; Russell, 2019).
# 2 Types of Risk
|
2306.06924#10
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 10 |
Here, we focus on the high-level problem setup and defer more component details to later sections. Given its wide adoption for pretrained LLMs, our LONGMEM model is built on the Transformer architecture [VSP+17]. For LONGMEM, there are three key components: the frozen backbone LLM, SideNet, and Cache Memory Bank. As most existing pretrained LLMs can only take a fix-sized input, only the input segment of a long sequence (e.g., a book) that can fit in the length limit is denoted as the current input as done for most existing autoregressive language models. Those previous segments that can not fit are denoted as previous inputs, which are used for memory augmentations. To tap into the learned knowledge of the pretrained LLM, both previous and current inputs are encoded using the frozen backbone LLM but different representations are extracted. For previous inputs, the key-value pairs from the Transformer self-attention at m-th layer are stored in Cache Memory Bank, whereas the hidden states from each LLM decoder layer for the current inputs are retained and transferred to SideNet. For each current input token, top relevant key-value vector pairs are retrieved as memory augmentations for language modeling. The SideNet module can
|
2306.07174#10
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 10 |
⢠Interface Design: Data-Copilot adopts an iterative self-request process to fully explore the data and cover most scenarios. As shown in Figure 1, Data-Copilot is instructed to generate a large number of diverse requests from a few seed requests, then abstracts self-generated requests into interface tools and merges interfaces with similar functionalities. Finally, it harvests a handful of versatile interfaces, encompassing data acquisition, processing, forecasting, table manipulation, and visualization.
⢠Interface Dispatch: When a user request is received, Data-Copilot first parses the user intention and then plans an interface invocation process after reviewing the interface description designed by itself. It is capable of flexibly constructing workflows with various structures (including sequential, parallel, and loop structures) to address user requests.
Incorporating two phases, Data-Copilot successfully manages and analyzes large amounts of data via its self-designed interfaces. It bypasses the need for direct data reading and the insufficiency of external tools. Besides, it is also easily extended for emerging requests and data simply by adding a new interface, demonstrating good scalability.
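A rough sketch of the offline interface-design loop from the first bullet above. The `ask_llm` function is a stand-in that merely echoes canned text, and the self-request, abstraction, and merge prompts are invented; only the loop structure (generate new requests from seeds, abstract them into interfaces, merge duplicates) reflects the description.

```python
def ask_llm(prompt):
    """Placeholder for an LLM call; a real system would query a language model."""
    return "placeholder LLM output for: " + prompt[:40]

def design_interfaces(seed_requests, n_rounds=2):
    requests = list(seed_requests)
    interfaces = {}
    for _ in range(n_rounds):
        # 1) Self-request: generate a more diverse request from existing ones.
        requests.append(ask_llm("Propose a new data request similar to: " + requests[-1]))
        # 2) Abstract the newest request into an interface definition (name: description).
        spec = ask_llm("Define an interface (name: description) that could serve: " + requests[-1])
        name, _, desc = spec.partition(":")
        # 3) Merge: keep a single entry for interfaces with the same name (similar function).
        interfaces.setdefault(name.strip(), desc.strip())
    return interfaces

print(design_interfaces(["Compare the earnings rate of all SSE 50 constituents this year"]))
```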
Overall, our contributions can be summarized as follows:
1. To efficiently handle data-intensive tasks on a large scale, we design a universal system, Data-Copilot, that connects data sources from different domains and diverse user tastes, by integrating LLM into every stage of the pipeline to reduce tedious labor and expertise.
|
2306.07209#10
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 11 |
# 2 Types of Risk
Here we begin our analysis of risks organized into six risk types, which constitute an exhaustive decision tree for classifying societal harms from AI or algorithms more broadly. Types 2-6 will classify risks with reference to the intentions of the AI technology's creators, and whether those intentions are being well served by the technology. Type 1, by contrast, is premised on no single institution being primarily responsible for creating the problematic technology. Thus, Type 1 serves as a hedge against the taxonomy of Types 2-6 being non-exhaustive.
# 2.1 Type 1: Diffusion of responsibility
Automated processes can cause societal harm even when no one in particular is primarily responsible for the creation or deployment of those processes (Zwetsloot and Dafoe, 2019), and perhaps even as a result of the absence of responsibility. The infamous "flash crash" of 2010 is an instance of this: numerous stock trading algorithms from a variety of companies interacted in a fashion that rapidly devalued the US stock market by over 1 trillion dollars in a matter of minutes. Fortunately, humans were able to intervene afterward and reverse the damage, but that might not always be possible as AI technology becomes more powerful and pervasive.
Consider the following fictional story, where the impact of unemployment on crime rates (Raphael and Winter-Ebmer, 2001) is exacerbated by a cycle of algorithmic predictions:
|
2306.06924#11
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 11 |
and transferred to SideNet. For each current input token, top relevant key-value vector pairs are retrieved as memory augmentations for language modeling. The SideNet module can be viewed as an efficient adaptation model that is trained to fuse the current input context and relevant cached previous contexts in the decoupled memory. Formally, for a fix-sized input text sequence $\{x_i\}_{i=1}^{|x|}$ (the current input), LONGMEM first performs a forward pass using the backbone LLM (marked in blue in Figure 2) without any gradient calculation. The embedding layer of the backbone LLM first encodes the input $\{x_i\}_{i=1}^{|x|}$ into embedding space and outputs the initial hidden states, $H^0_{\text{LLM}} \in \mathbb{R}^{|x| \times E}$, where $E$ is the hidden dimension. Then each successive Transformer decoder layer of the frozen backbone LLM computes the new hidden states
|
2306.07174#11
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 11 |
2. Data-Copilot can autonomously manage, process, analyze, predict, and visualize data. When a request is received, it transforms raw data into informative results that best match the user's intent.
3. Acting as a designer and dispatcher, Data-Copilot encompasses two phases: an offline interface design and an online interface dispatch. Through self-request and iterative refinement, Data-Copilot designs versatile interface tools with different functions. In the interface dispatch, it invokes the corresponding interfaces sequentially or in parallel for accurate responses.
4. We built a Data-Copilot demo for the Chinese financial market. It can access the stock, funds, economic, financial data, and live news, and provides diverse visualizations: graph, table, and text descriptions, customized to the user's request.
# 2 Related Works
In the recent past, breakthroughs in large language models (LLMs) such as GPT-3, GPT-4, PaLM, and LLaMa [1, 2, 3, 4, 5, 12, 23, 24, 25] have revolutionized the field of natural language processing (NLP). These models have showcased remarkable competencies in handling zero-shot and few-shot tasks along with complex tasks like mathematical and commonsense reasoning. The impressive capabilities of these LLMs can be attributed to their extensive training corpus, intensive computation, and alignment mechanism [12, 13, 26].
|
2306.07209#11
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 12 |
Consider the following fictional story, where the impact of unemployment on crime rates (Raphael and Winter-Ebmer, 2001) is exacerbated by a cycle of algorithmic predictions:
Story 1a: Self-Fulfilling Pessimism. Scientists develop an algorithm for predicting the answers to questions about a person, as a function of freely available and purchasable information about the person (social media, resumes, browsing history, purchasing history, etc.). The algorithm is made freely available to the public, and employers begin using the algorithm to screen out potential hires by asking, "Is this person likely to be arrested in the next year?" Courts and regulatory bodies attempt to ban the technology by evoking privacy norms, but struggle to establish cases against the use of publicly available information, so the technology broadly remains in use.
Innocent people who share certain characteristics with past convicted criminals end up struggling to get jobs, become disproportionately unemployed, and correspondingly more often commit theft to fulfill basic needs. Meanwhile, police also use the algorithm to prioritize their investigations, and since unemployment is a predictor of property crime, the algorithm leads them to suspect and arrest more unemployed people. Some of the arrests are talked about on social media, so the algorithm learns that the arrested individuals are likely to be arrested again, making it even more difficult for them to get jobs. A cycle of deeply unfair socioeconomic discrimination begins.
|
2306.06924#12
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 12 |
using the hidden states from the previous layer, $H^{l'}_{\mathrm{LLM}} = f_{\Theta^{l'}_{\mathrm{LLM}}}(H^{l'-1}_{\mathrm{LLM}})$, $\forall l' \in [1, L']$, where $L'$ is the total number of layers of the backbone LLM. During the forward pass with the backbone LLM for all previous inputs, the key-value pairs used for self-attention at the $m$-th Transformer decoder layer are stored in the Cached Memory Bank (marked in Orange in the upper-left corner of Figure 2), which are later recalled as memory augmentations for future inputs. The Cached Memory Bank is a cached head-wise vector queue $\tilde{Z}_k, \tilde{Z}_v \in \mathbb{R}^{H \times M \times d}$, which maintains the attention key-value pairs of the latest $M$ previous inputs, $\tilde{K}, \tilde{V} \in \mathbb{R}^{H \times |x| \times d}$, where $H$ and $d$ denote the number of attention heads and the per-head dimension, respectively. After memory retrieval and fusion (§2.3), the memory bank removes the key-value pairs of the oldest sequences and appends the current sequences to the cached vector bank. Such an update mechanism ensures language modeling causality at the sequence level and enables the memory bank to always keep records of the nearest previous context for the current inputs.
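A minimal sketch of this FIFO update (illustrative, not the released implementation; the class name and shapes are assumptions) is shown below: the bank keeps at most M recent sequences per attention head and evicts the oldest when a new one is appended.

```python
from collections import deque
import torch

class CachedMemoryBank:
    """FIFO bank of attention key/value pairs for the latest M previous inputs.
    Per sequence, keys/values are shaped [num_heads, seq_len, head_dim]."""

    def __init__(self, max_sequences: int):
        self.keys = deque(maxlen=max_sequences)    # oldest entries fall off automatically
        self.values = deque(maxlen=max_sequences)

    def append(self, k: torch.Tensor, v: torch.Tensor):
        # Called after retrieval/fusion for the current input, so the current
        # sequence never attends to itself through the memory (causality at the
        # sequence level).
        self.keys.append(k.detach())
        self.values.append(v.detach())

    def all_keys_values(self):
        # Concatenate along the token axis: [num_heads, M * seq_len, head_dim]
        return torch.cat(list(self.keys), dim=1), torch.cat(list(self.values), dim=1)

bank = CachedMemoryBank(max_sequences=3)
for _ in range(5):                                  # only the latest 3 sequences survive
    bank.append(torch.randn(4, 8, 16), torch.randn(4, 8, 16))
ks, vs = bank.all_keys_values()
print(ks.shape)                                     # torch.Size([4, 24, 16])
```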
|
2306.07174#12
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.06924
| 13 |
In the story above, a subset of humanity becomes unfairly disempowered, both economically and legally.
It is possible, we claim, for all of humanity to become similarly disempowered. How?
Consider that many systems of production and consumption on Earth currently operate entirely without human involvement, while producing side effects for humans and other life. For instance, algal blooms consume energy from the sun and materials from the surrounding ocean, and as a side effect they sometimes produce toxins that are harmful to other sea life as well as human swimmers. It is important to consider the possibility that artificially intelligent systems, in the future, could also sustain fully self-contained loops of production and consumption that would yield negative side effects for humanity. The following diagram illustrates how a few industries, if fully automated through AI technology, could operate in a closed loop of production (and consumption) without any other inputs:
Figure 3: A hypothetical self-contained "production web" of companies operating with no human involvement; such a production web would make it possible to completely decouple economic activities from serving human values.
|
2306.06924#13
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 13 |
After the forward pass with the backbone LLM, the SideNet module then takes all current input hidden states from the backbone LLM, $\{H^{l'}_{\mathrm{LLM}}\}_{l'=1}^{L'}$, and the past key-value pairs in the Cached Memory Bank for computing memory-augmented representations. Specifically, our SideNet of LONGMEM consists of $(L-1)$ normal Transformer decoder layers and one special memory-augmented decoder layer. For efficiency, we mainly consider the case where the number of layers $L$ of the SideNet is smaller than that of the backbone LLM, i.e., $L < L'$. Our SideNet encodes $H^0$ into a memory-augmented contextual representation via $(L-1)$ normal Transformer decoder layers and a special memory-augmented layer.
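A structural sketch of such a SideNet follows, under the assumption that the memory-augmented layer can be dropped in at a chosen depth (all names are illustrative and the joint-attention fusion with retrieved key/value pairs is elided):

```python
import torch.nn as nn

class MemoryAugmentedLayer(nn.Module):
    """Placeholder for the special layer that would fuse retrieved key/value
    pairs with the current hidden states (fusion itself is omitted here)."""
    def __init__(self, dim, nhead):
        super().__init__()
        self.self_block = nn.TransformerEncoderLayer(dim, nhead, batch_first=True)

    def forward(self, h, retrieved_kv=None):
        return self.self_block(h)            # retrieved_kv would be attended to here

class SideNetSketch(nn.Module):
    """(L - 1) normal decoder layers plus one memory-augmented layer, with
    L smaller than the backbone depth L'."""
    def __init__(self, dim=16, nhead=4, num_layers=3, memory_layer_index=1):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            if i == memory_layer_index:
                self.layers.append(MemoryAugmentedLayer(dim, nhead))
            else:
                self.layers.append(nn.TransformerEncoderLayer(dim, nhead, batch_first=True))

    def forward(self, h0, retrieved_kv=None):
        h = h0
        for layer in self.layers:
            h = layer(h, retrieved_kv) if isinstance(layer, MemoryAugmentedLayer) else layer(h)
        return h
```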
|
2306.07174#13
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 13 |
[Figure 2, left (diagram residue), Stage 1: Interface Design. From a few seed requests (e.g., "China's GDP trend in the last decade", "Compare two stock returns last year"), the LLM is prompted to generate more diverse requests, to design interface definitions for them (e.g., Query_GDP_CPI, Query_stock, Plot_line), and to merge them into an interface library covering data acquisition, data processing, and data visualization; every interface tool is described in natural language with its function and its input and output arguments.]
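The self-request loop summarized above can be sketched as follows (a hypothetical sketch: the `llm` stub, the prompts, and the seed requests are invented placeholders for whatever model and data sources are plugged in):

```python
# Hypothetical sketch of the self-request interface-design loop; `llm` stands
# in for any text-completion callable and always returns a canned definition.

def llm(prompt: str) -> str:
    return "Interface: query_gdp_cpi -- acquire GDP and CPI series for a country and period"

seed_requests = ["China's GDP trend in the last decade",
                 "Compare two stock returns last year"]
interface_library = {}

for _ in range(2):                                     # a few design iterations
    # 1) Explore the data sources by asking the LLM for more diverse requests.
    new_requests = [llm(f"Given the data sources and this seed request, "
                        f"write a new, different request: {r}") for r in seed_requests]
    # 2) Ask the LLM to design (or reuse) an interface definition per request.
    for req in new_requests:
        definition = llm(f"Design a natural-language interface definition for: {req}\n"
                         f"Existing interfaces: {list(interface_library)}")
        name = definition.split("--")[0].replace("Interface:", "").strip()
        # 3) Merge: keep one definition per interface name.
        interface_library.setdefault(name, definition)

print(interface_library)
```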
|
2306.07209#13
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 14 |
[Figure 3 (diagram residue): a closed loop of fully automated industries (facilities, electricity, telecoms, computers, tools, robots, vehicles, delivery mechanisms, and raw and processed materials), each producing inputs for the others with no human involvement.]
Could such a self-contained "production web" ever pose a threat to humans? One might argue that, because AI technology will be created by humanity, it will always serve our best interests. However, consider how many human colonies have started out dependent upon a home nation, and eventually gained sufficient independence from the home nation to revolt against it. Could humanity create an "AI industry" that becomes sufficiently independent of us to pose a global threat?
|
2306.06924#14
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 14 |
The memory-augmented layer is an extension of the vanilla Transformer decoder layer that takes a memory-augmented input, including both the top relevant key-value pairs in memory and the hidden states from the current input. Here, the cached key-value pairs are recalled using a token-based memory retrieval module (§2.3). For each current input token $x_i$, the memory retrieval module $s_{rt}(\cdot)$ retrieves the top-$K$ relevant key-value pairs in the memory bank, $\{\tilde{k}_{ij}, \tilde{v}_{ij}\}_{j=1}^{K} = s_{rt}(x_i)$. Then SideNet computes the output using the memory-augmented input, $H^{m_s}_{\mathrm{Side}} = f_{\Theta^{m_s}_{\mathrm{Side}}}\big(H^{m_s-1}_{\mathrm{Side}}, \{\{\tilde{k}_{ij}, \tilde{v}_{ij}\}_{j=1}^{K}\}_{i=1}^{|x|}\big)$, where $m_s$ is the layer index where we inject the memory-augmentation layer.
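The per-token retrieval can be sketched as a dense dot-product top-K lookup over the cached keys (a toy version; any chunking or approximate-nearest-neighbour indexing used for efficiency in practice is omitted, and the tensor names are assumptions):

```python
import torch

def retrieve_topk(queries, cached_keys, cached_values, k=4):
    """queries:        [num_heads, cur_len, head_dim]  (current input tokens)
       cached_keys:    [num_heads, mem_len, head_dim]  (memory bank)
       cached_values:  [num_heads, mem_len, head_dim]
       returns per-token top-k keys/values: [num_heads, cur_len, k, head_dim]"""
    scores = torch.einsum("hqd,hmd->hqm", queries, cached_keys)    # dot-product relevance
    topk = scores.topk(k, dim=-1).indices                          # [num_heads, cur_len, k]
    idx = topk.unsqueeze(-1).expand(-1, -1, -1, cached_keys.size(-1))
    k_sel = torch.gather(cached_keys.unsqueeze(1).expand(-1, queries.size(1), -1, -1), 2, idx)
    v_sel = torch.gather(cached_values.unsqueeze(1).expand(-1, queries.size(1), -1, -1), 2, idx)
    return k_sel, v_sel

q = torch.randn(4, 8, 16)                                 # 4 heads, 8 current tokens
mk, mv = torch.randn(4, 64, 16), torch.randn(4, 64, 16)   # 64 cached memory tokens
k_sel, v_sel = retrieve_topk(q, mk, mv)
print(k_sel.shape)                                        # torch.Size([4, 8, 4, 16])
```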
|
2306.07174#14
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 14 |
[Figure 2, middle (diagram residue): the designed interfaces are implemented into a grammar-free interface library (data acquisition, processing, calculation, and visualization tools), and Stage 2: Interface Dispatch begins. Given a request such as "Compare the return of CSI 300, GEM Index and CSI 1000 Index this year", the LLM performs intent analysis (time, location, object, output format) and deploys a workflow step by step (acquisition, calculation, visualization) using the interface descriptions and in-context demonstrations, e.g., a parallel workflow across the three indices or a loop workflow over selected data.]
|
2306.07209#14
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 15 |
It might seem strange to consider something as abstract or diffuse as an industry posing a threat to the world. However, consider how the fossil fuel industry was built by humans, yet is presently very difficult to shut down or even regulate, due to patterns of regulatory interference exhibited by oil companies in many jurisdictions (Carpenter and Moss, 2013; Dal Bó, 2006). The same could be said for the tobacco industry for many years (Gilmore et al., 2019). The "AI industry", if unchecked, could behave similarly, but potentially much more quickly than the oil industry, in cases where AI is able to think and act much more quickly than humans.
Finally, consider how species of ants who feed on acacia trees eventually lose the ability to digest other foods, ending up "enslaved" to protecting the health of the acacia trees as their only food source (Ed Yong, 2013). If humanity comes to depend critically on AI technology to survive, it may not be so easy to do away with even if it begins to harm us, individually or collectively.
For an illustration of how this might happen, consider the story below:
|
2306.06924#15
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 15 |
Finally, the token probability is computed using the last SideNet hidden states, $P(x_i \mid x_1, \cdots, x_{i-1}) = \mathrm{softmax}(W H^{L}_{\mathrm{Side}})$, where $W$ is the frozen output embedding weight shared by both the backbone LLM and SideNet. We perform a memory-augmented adaptation training for LONGMEM to utilize the decoupled memory. Following the generative unsupervised pre-training [RNSS18], the training objective of LONGMEM is the standard left-to-right language modeling objective, which maximizes the likelihood of the next token based on the left context: $\max \sum_{x \in \mathcal{D}} \sum_{i} \log P(x_i \mid x_1, \cdots, x_{i-1})$, where $x$ is a randomly sampled sentence from the pre-training text corpus $\mathcal{D}$.
# 2.2 Residual SideNet
SideNet Architecture and Initialization. Here, we again implement SideNet based on the Transformer [VSP+17]. The number of decoder layers $L$ in SideNet is equal to the number of layers $L'$ in the backbone LLM divided by a reduction factor (a layer reduction factor of 2 throughout this work, i.e., $L' = 2L$). The weights of each decoder layer in SideNet are initialized from the corresponding pre-trained decoder layer of the backbone LLM with the same depth: $\Theta^{l'/2}_{\mathrm{Side}} = \Theta^{l'}_{\mathrm{LLM}}$.
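A minimal sketch of this initialization rule follows (assuming the SideNet layers are structurally identical copies of the backbone's decoder layers; the module types here are placeholders, and the shared embedding and output head are assumed to stay frozen):

```python
import copy
import torch.nn as nn

def init_sidenet_from_backbone(backbone_layers: nn.ModuleList, reduction: int = 2):
    """Build SideNet layers by copying every `reduction`-th backbone layer, i.e.
    SideNet layer l (1-indexed) is initialized from backbone layer l * reduction."""
    side_layers = nn.ModuleList()
    for l_prime in range(reduction - 1, len(backbone_layers), reduction):
        side_layers.append(copy.deepcopy(backbone_layers[l_prime]))   # trainable copies
    return side_layers

# Toy check with 8 "backbone" layers -> 4 SideNet layers
backbone = nn.ModuleList([nn.Linear(16, 16) for _ in range(8)])
side = init_sidenet_from_backbone(backbone)
print(len(side))   # 4
```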
|
2306.07174#15
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 15 |
[Figure 2, right (diagram residue), Stage 2: Interface Dispatch, two planned workflows. Parallel workflow: obtain data for the three indices in parallel, calculate their returns in parallel, then plot the return trends as a line chart. Loop workflow: get the index constituent stocks, select the name of each stock, loop through the financial indicator for each stock, then plot a bar graph.]
|
2306.07209#15
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 16 |
For an illustration of how this might happen, consider the story below:
Story 1b: The Production Web. Someday, AI researchers develop and publish an exciting new algorithm for combining natural language processing and planning capabilities. Various competing tech companies develop "management assistant" software tools based on the algorithm, which can analyze a company's cash flows, workflows, and communications to recommend more profitable business decisions that also yield positive PR and networking opportunities for managers. It turns out that managers are able to automate their own jobs almost entirely, by having the software manage their staff directly. Software tools based on variants of the algorithm sweep through companies in nearly every industry, automating and replacing jobs at various levels of management, sometimes even CEOs. One company develops an "engineer-assistant" version of the assistant software, capable of software engineering tasks, including upgrades to the management assistant software.
|
2306.06924#16
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 16 |
$\Theta^{l'/2}_{\mathrm{Side}} = \Theta^{l'}_{\mathrm{LLM}}$, i.e., from the pre-trained decoder layer of the backbone LLM with the same depth. As illustrated in Figure 2, the SideNet takes the output of the backbone LLM's embedding layer and reuses the language modeling head layer of the backbone LLM, which is also frozen during the continual adaption stage. During the memory-augmented adaptation stage, all other parameters of SideNet are updated accordingly based on the training signal. In this way, the lightweight SideNet achieves fast convergence with knowledge transferred from pre-trained parameters.
Cross-Network Residual Connections. To tap into knowledge from the pretrained backbone LLM, we resort to the proposed cross-network residual connections for fusing representations from the backbone LLM into SideNet. Specifically, we add the difference between the output hidden states at the $2l$-th and $(2l-2)$-th layers of the backbone LLM as a residual connection to the output hidden states at the $l$-th layer of SideNet. Then, the input to the next $(l+1)$-th layer of SideNet is the sum of the original hidden state forwarded through the previous layer, $f_{\Theta^{l}_{\mathrm{Side}}}(H^{l-1}_{\mathrm{Side}})$, and the cross-network residual connection of the hidden state difference from the backbone LLM.
|
2306.07174#16
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 16 |
Figure 2: Overview of Data-Copilot. Interface Design: We devise a self-request process allowing the LLM to generate sufficient requests from a few seed requests autonomously. Then the LLM iteratively designs and optimizes interfaces based on the generated requests. These interfaces are described in natural language, making them easily scalable and transferable across different platforms. Interface Dispatch: Upon receiving user requests, the LLM plans and invokes interface tools based on its self-designed interface descriptions and in-context demonstrations. This allows for the deployment of a logical workflow that fulfills user demands and presents the results to the user in multiple forms.
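To make the separation between natural-language interface descriptions and their implementations concrete, here is a hypothetical miniature sketch (the interface names, stub data, and prompt wording are invented and do not come from the Data-Copilot release):

```python
# Descriptions the LLM reasons over are kept separate from the concrete
# implementations it never needs to see.
INTERFACE_LIBRARY = {
    "query_index_data": "Acquire daily prices of a stock index for a date range.",
    "compute_return":   "Turn a price series into cumulative returns.",
    "plot_line":        "Visualize one or more series as a line chart.",
}

IMPLEMENTATIONS = {
    "query_index_data": lambda index, start, end: [100.0, 101.5, 103.2],  # stub data
    "compute_return":   lambda prices: [p / prices[0] - 1 for p in prices],
    "plot_line":        lambda series: f"<line chart of {len(series)} points>",
}

def build_dispatch_prompt(request: str) -> str:
    """Prompt skeleton for the online dispatch phase: the LLM sees only the
    natural-language descriptions and plans which interfaces to invoke."""
    tools = "\n".join(f"- {name}: {desc}" for name, desc in INTERFACE_LIBRARY.items())
    return (f"Available interfaces:\n{tools}\n\n"
            f"User request: {request}\n"
            "Plan the interface calls (in order, with arguments) that fulfill it.")

print(build_dispatch_prompt("Compare the return of CSI 300 this year."))

# In the real system the LLM emits the plan; here one is executed by hand.
prices = IMPLEMENTATIONS["query_index_data"]("CSI300", "2023-01-01", "2023-06-01")
returns = IMPLEMENTATIONS["compute_return"](prices)
print(IMPLEMENTATIONS["plot_line"](returns))
```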
excellent applications, such as AutoGPT 1, AgentGPT 2, BabyAGI 3, BMTools 4, LangChain 5, etc. Most of them are focused on daily tools and do not consider the specificity of data-related tasks.
1 https://github.com/Significant-Gravitas/Auto-GPT 2 https://github.com/reworkd/AgentGPT 3 https://github.com/yoheinakajima/babyagi 4 https://github.com/OpenBMB/BMTools 5 https://github.com/hwchase17/langchain
|
2306.07209#16
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 17 |
Within a few years, it becomes technologically feasible for almost any human job to be performed by a combination of software and robotic workers that can operate more quickly and cheaply than humans, and the global job market gradually begins to avail of this possibility. A huge increase in global economic productivity ensues. Despite the massive turnover in the job market, average quality of life also improves in almost every country, as products and services become cheaper to produce and provide. Most job losses come with generous severance packages, sometimes enough for a full retirement. Companies closer to becoming fully automated achieve faster turnaround times, deal bandwidth, and creativity of business-to-business negotiations. Some companies idealistically cling to the idea that human workers must remain integral to their operations; however, they quickly fall behind because they simply can't provide products and services as cheaply as their fully automated competitors. Eventually, almost all companies either fail and shut down or become fully automated.
|
2306.06924#17
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 17 |
residual connection of the hidden state difference from the backbone LLM:

$$H^{l}_{\mathrm{Side}} = f_{\Theta^{l}_{\mathrm{Side}}}(H^{l-1}_{\mathrm{Side}}) + (H^{2l}_{\mathrm{LLM}} - H^{2l-2}_{\mathrm{LLM}}), \quad \forall l \in [1, L], \qquad (1)$$

where $H^0$ is the output of the embedding layer. It is worth noting that the residual connections after the self-attention and feed-forward network of a decoder layer [VSP+17] will be performed as normal in $f_{\Theta^{l}_{\mathrm{Side}}}$.
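Equation (1) can be sketched directly in code (an illustrative sketch; the side layers and backbone hidden states below are placeholder modules and tensors):

```python
import torch

def sidenet_forward_with_residuals(h0, side_layers, backbone_hiddens):
    """Implements Eq. (1): H^l_Side = f_l(H^{l-1}_Side) + (H^{2l}_LLM - H^{2l-2}_LLM).
       h0:               output of the shared embedding layer, [B, T, E]
       side_layers:      list of L callables, side_layers[l-1] plays f_{Theta^l_Side}
       backbone_hiddens: list of backbone hidden states H^0_LLM ... H^{L'}_LLM."""
    h = h0
    for l, layer in enumerate(side_layers, start=1):
        cross_residual = backbone_hiddens[2 * l] - backbone_hiddens[2 * l - 2]
        h = layer(h) + cross_residual
    return h

# Toy check: 4 SideNet "layers" over a backbone with 8 layers (9 hidden states).
B, T, E = 1, 8, 16
h0 = torch.randn(B, T, E)
backbone_hiddens = [torch.randn(B, T, E) for _ in range(9)]
side_layers = [torch.nn.Linear(E, E) for _ in range(4)]
print(sidenet_forward_with_residuals(h0, side_layers, backbone_hiddens).shape)
```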
# 2.3 Memory Retrieval and Fusion
The long-term memory capability of LONGMEM is achieved via a memory-augmentation module for retrieval and fusion.
|
2306.07174#17
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 17 |
Beyond learning to operate existing tools, several contemporaneous studies [34, 35] have proposed to empower LLMs to create new tools for specific scenarios like mathematical problem solving and reasoning. These impressive studies have revealed the great potential of LLMs to handle specialized domain tasks.
Our Data-Copilot system is distinct from these approaches in the following aspects: (1) Data-Copilot is a general LLM-based system specifically designed for a variety of data-related tasks. It contemplates how LLMs can be exploited to access and manage large amounts of heterogeneous data for complex user demands, such as querying, analysis, and visualization. (2) Requiring only a few seed requests, Data-Copilot employs a self-request approach to design its interface tools independently. It separates the definition of an interface from its specific implementation, allowing the LLM to focus on accurately describing the functionality of the interface. This approach clearly differs from previous works and provides the community with a viable solution for automated tool creation. (3) Data-Copilot is capable of constructing more complex interface scheduling processes, such as parallel, sequential, and loop workflows, based on its own designed interfaces.
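As a toy illustration of the three workflow shapes mentioned in point (3) above (sequential, parallel, and loop), the sketch below wires up hypothetical interfaces in each pattern; in the actual system the plan would be produced by the LLM rather than hard-coded, and all function names and data are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def acquire(index):               # hypothetical data-acquisition interface (stub data)
    return {"index": index, "prices": [100, 102, 101, 105]}

def compute_return(data):         # hypothetical processing interface
    p = data["prices"]
    return {"index": data["index"], "return": p[-1] / p[0] - 1}

def visualize(results):           # hypothetical visualization interface
    return " | ".join(f"{r['index']}: {r['return']:.1%}" for r in results)

# Sequential workflow: acquire -> process -> visualize for one index.
print(visualize([compute_return(acquire("SSE50"))]))

# Parallel workflow: fetch and process three indices concurrently, then visualize.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda i: compute_return(acquire(i)),
                            ["CSI300", "GEM", "CSI1000"]))
print(visualize(results))

# Loop workflow: iterate over constituent stocks and collect an indicator for each.
constituents = ["StockA", "StockB", "StockC"]
indicator_table = {s: compute_return(acquire(s))["return"] for s in constituents}
print(indicator_table)
```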
# 3 Data-Copilot
|
2306.07209#17
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |