Learning Properties. The proposed TPC layer introduces time-varying convolutional kernels, as opposed to the fixed kernels that are commonly employed in traditional convolutional neural networks (CNNs). In other words, the employed kernels do not remain fixed throughout the whole length of the input series. This particular trait of TPC does not explicitly force weight sharing between different subsequences of the time series during convolution. Weight sharing is, however, implicitly modeled via the learnable representations of time that are used to initialize the kernel weights. This is based on the assumption that observations that are mapped to similar time embeddings will probably share similar weight values in the convolution operation. The proposed approach still maintains the ability to locally aggregate information by retaining the notion of a fixed kernel size in the convolution operation. This allows the output of the convolution to be locally aggregated, while still incorporating the benefits of time-varying convolutional kernels.
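The sketch below illustrates the general idea in PyTorch: the kernel weights are recomputed at every position from learnable functions of the observation times inside the receptive field, so no two windows share weights explicitly, yet windows with similar time patterns produce similar kernels. The class name, the specific combination of a linear and a sinusoidal time function, and the use of centre-relative times are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class TimeParameterizedConv1d(nn.Module):
    """Illustrative sketch (not the paper's exact layer): a 1D convolution whose
    kernel weights are produced by learnable functions of the observation times
    falling inside each window."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        assert kernel_size % 2 == 1, "use an odd kernel size so windows are centred"
        self.K = kernel_size // 2
        # learnable parameters of two simple time functions: a linear term and a sinusoid
        self.theta = nn.Parameter(torch.randn(4) * 0.1)

    def kernel(self, t_window: torch.Tensor) -> torch.Tensor:
        # t_window: (kernel_size,) observation times, expressed relative to the window centre
        th1, th2, th3, th4 = self.theta
        return th1 * t_window + th2 * torch.sin(th3 * t_window + th4)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: (L,) observed values, t: (L,) irregular observation times
        L = x.shape[0]
        out = torch.zeros(L)
        for i in range(self.K, L - self.K):
            idx = slice(i - self.K, i + self.K + 1)
            w = self.kernel(t[idx] - t[i])   # time-varying kernel for this window
            out[i] = (w * x[idx]).sum()      # local aggregation, as in a vanilla convolution
        return out

# usage on a toy irregular series
t = torch.tensor([0.0, 0.4, 1.1, 1.5, 2.7, 3.0, 4.2])
x = torch.sin(t)
print(TimeParameterizedConv1d(kernel_size=3)(x, t))
```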
Invariance Properties. If some patterns in the time series are identical, both in terms of the observations and in terms of the differences in time between the observations, then the TPC layer will produce the same output for those two patterns. For example, let xi = (xi−K, . . . , xi, . . . , xi+K) and xj = (xj−K, . . . , xj, . . . , xj+K) denote two sequences of values, and ti = (ti−K, . . . , ti, . . . , ti+K) and tj = (tj−K, . . . , tj, . . . , tj+K) denote their respective observation times. If xi = xj holds and Δti = Δtj also holds, where Δti denotes the vector of time differences within the window centered at ti (and Δtj is defined analogously), then the kernels produced for these two sequences of values are identical and, therefore, the layer produces the same output.
Furthermore, the different functions defined in the previous subsection make the kernels invariant to different transformations. For instance, in the above example, suppose that Δti = Δtj and that the k-th observation time of the second sequence is equal to the corresponding observation time of the first sequence plus (k + 1) · 2π, for k ∈ {0, 1, . . . , 2K + 1}, i. e., the two windows' observation times differ by integer multiples of 2π. Then, the TPC layer equipped with the h2 function (i. e., sin(·)) and with θ3 = 1 and θ4 = 0 would produce the same output for both patterns. Such a function can capture periodic temporal correlations.
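As a quick numerical illustration of this periodic invariance (a toy check, not code from the paper): with a purely sinusoidal kernel and θ3 = 1, θ4 = 0, shifting every observation time by an integer multiple of 2π leaves the kernel, and hence the convolution output, unchanged.

```python
import torch

K = 2
t_i = torch.tensor([0.3, 0.9, 1.4, 2.0, 2.6])              # 2K+1 observation times of the first window
t_j = t_i + 2 * torch.pi * torch.arange(1, 2 * K + 2)      # times shifted by integer multiples of 2*pi
x = torch.randn(2 * K + 1)                                  # identical observed values in both windows

def sin_kernel(t):                                          # h2 with theta3 = 1 and theta4 = 0
    return torch.sin(t)

out_i = (sin_kernel(t_i) * x).sum()
out_j = (sin_kernel(t_j) * x).sum()
print(torch.allclose(out_i, out_j, atol=1e-4))              # True: identical outputs for both patterns
```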
# 3.5 TPCNN Framework for Irregularly Sampled Time Series
We will next discuss how the TPC layer can be integrated into neural network architectures for dealing with various tasks that involve irregular time series, such as interpolation and classification. Following previous work, we propose an encoder-decoder framework, the so-called Time-Parameterized Convolutional Neural Network (TPCNN) framework. In what follows, we give more details about the two main components of the proposed framework, namely its encoder and its decoder.
TPCNN Encoder. This module is responsible for mapping the input time series into a latent vector which captures their overall shape and their specificities. The first layer of the encoder is an instance of the TPC layer introduced above. The TPC layer receives as input the irregular and multivariate series X(i) ∈ R^{m×L} and the corresponding vector of observation times t(i). The output of the TPC layer is then successively fed to vanilla convolution layers, which can capture longer-time dependencies of the continuous latent representation of the time series. A pooling layer follows each convolution layer, including the TPC layer. By down-sampling the output, such layers are expected to extract features that are good indicators of class membership or of the shape of the time series. Finally, a fully-connected layer is applied to the output of the last convolution layer to extract a low-dimensional representation z(i).
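A rough sketch of such an encoder is given below. For brevity the first (TPC) layer is stood in for by a standard `nn.Conv1d` applied to an already-continuous representation, and all layer sizes are illustrative assumptions rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class TPCNNEncoder(nn.Module):
    """Sketch of the encoder: (TPC-like) convolution -> pooling -> convolution -> pooling -> fully-connected."""
    def __init__(self, in_channels: int, hidden: int = 32, latent_dim: int = 16, length: int = 48):
        super().__init__()
        self.conv1 = nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1)  # stand-in for the TPC layer
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)       # vanilla convolution
        self.pool = nn.MaxPool1d(2)                                            # down-sampling after each conv
        self.fc = nn.Linear(hidden * (length // 4), latent_dim)                # low-dimensional representation z

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length)
        h = self.pool(torch.relu(self.conv1(x)))
        h = self.pool(torch.relu(self.conv2(h)))
        return self.fc(h.flatten(start_dim=1))

z = TPCNNEncoder(in_channels=12, length=48)(torch.randn(4, 12, 48))
print(z.shape)  # torch.Size([4, 16])
```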
TPCNN Decoder. This part of the architecture is responsible for reconstructing the multivariate input series from the latent vector that is produced by the encoder. The latent vector z that was produced by the encoder is first given as input to a fully-connected layer whose objective is to perform rescaling. The emerging vector is then passed to another fully-connected layer which produces a matrix X̂(i) that matches the dimensions of the input time series. These reconstructed time series are then compared against the input series to evaluate the autoencoder's performance.
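A correspondingly minimal decoder sketch, again with assumed layer sizes:

```python
import torch
import torch.nn as nn

class TPCNNDecoder(nn.Module):
    """Sketch of the decoder: two fully-connected layers mapping z back to an (m x L) series."""
    def __init__(self, latent_dim: int = 16, m: int = 12, length: int = 48, hidden: int = 64):
        super().__init__()
        self.rescale = nn.Linear(latent_dim, hidden)       # rescaling of the latent vector
        self.reconstruct = nn.Linear(hidden, m * length)   # output matches the input dimensions
        self.m, self.length = m, length

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.rescale(z))
        return self.reconstruct(h).view(-1, self.m, self.length)

x_hat = TPCNNDecoder()(torch.randn(4, 16))
print(x_hat.shape)  # torch.Size([4, 12, 48])
```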
Interpolation and Classification Setting. Note that some components of the TPCNN framework depend on the considered task, i. e., interpolation or classification. For instance, in the interpolation setting, each time a kernel of the TPC layer is applied to some subset of the input series, the observation that lies at the center of that subset is masked such that the model does not have direct access to it. On the other hand, such a masking is not performed in the case of the classification setting.
The reconstruction loss of a standard autoencoder is typically measured using the mean squared error (MSE) between the original input and the reconstructed output. For an input time series X(i), the MSE loss is computed as:
$$\mathcal{L}_{\text{interpolation}} = \frac{1}{|\mathcal{O}|} \sum_{j \in \mathcal{O}} \left\| \mathbf{x}^{(i)}_j - \hat{\mathbf{x}}^{(i)}_j \right\|^2$$
where O is a set that contains the indices of the observed values and X̂(i) denotes the reconstructed series produced by the decoder as a function of the latent vector z.
The encoder-decoder framework of Figure 1 (Right) is combined with the MSE loss for the interpolation task. Additionally, as already discussed, masking is performed on the center element of each slice of the input series, and the rest of the observed values of the slice are used for interpolation.
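A minimal sketch of this masked MSE objective is shown below; the random observation mask is a hypothetical stand-in for the actual irregular sampling pattern of a dataset.

```python
import torch

def interpolation_loss(x: torch.Tensor, x_hat: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
    """MSE restricted to the observed entries O; `observed` is a boolean mask with the same shape as x."""
    diff = (x - x_hat)[observed]
    return (diff ** 2).mean()

x = torch.randn(12, 48)                 # original multivariate series (m x L)
x_hat = torch.randn(12, 48)             # reconstruction produced by the decoder
observed = torch.rand(12, 48) > 0.5     # hypothetical observation mask
print(interpolation_loss(x, x_hat, observed))
```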
In the case of classification, the latent representation z that is generated by the encoder, and which preserves the information about the multivariate time series' dependencies, can be directly fed to a classifier module to make predictions. In the experiments that follow, we employ a 2-layer multi-layer perceptron (MLP) with a ReLU activation function. Thus, in the case of a classification problem with |C| classes, the output is computed as follows:
p = softmax(MLP(z))
Then, given a training set consisting of time series X(1), . . . , X(N), we use the negative log-likelihood of the correct labels as the training loss:
$$\mathcal{L}_{\text{classification}} = - \sum_{i=1}^{N} \sum_{j=1}^{|\mathcal{C}|} y^{(i)}_j \log p^{(i)}_j$$
where $y^{(i)}_j$ is equal to 1 if X(i) belongs to the j-th class, and 0 otherwise.
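The classification head and loss can be sketched as follows; the layer sizes are illustrative, and the 11 classes mirror the Human Activity setup described later.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# 2-layer MLP classifier head applied to the latent representation z
mlp = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 11))

z = torch.randn(4, 16)                  # latent vectors for a batch of N = 4 series
labels = torch.tensor([0, 3, 7, 10])    # correct classes
p = F.softmax(mlp(z), dim=-1)           # p = softmax(MLP(z))
loss = -torch.log(p[torch.arange(4), labels]).sum()  # negative log-likelihood of the correct labels
print(loss)
```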
The application of the TPCNN model to the above two scenarios is illustrated in Figure 1 (classification on the left and interpolation on the right).
# 4 Experiments
In this section, we describe the experimental setup and methodology used to evaluate the performance of our proposed time-parameterized convolutional layer on various tasks involving irregular time series, including interpolation and classification.
# 4.1 Datasets
We evaluate the performance of the proposed architecture and the baselines on the following real-world datasets:
PhysioNet: The PhysioNet Challenge 2012 dataset [28] comprises 8000 multivariate time series that correspond to records from the first 48 hours of a patient's admission to the intensive care unit (ICU). Measurements include 37 variables which can be missing at different steps and occur in irregular intervals. Half of the instances are labeled, with 13.8% of instances being in the positive class (in-hospital mortality). For the interpolation experiments, we used all 8000 instances and for the classification experiments, we used the 4000 labeled instances. We use the same experimental protocols and preprocessing steps as in [24].
MIMIC-III: The MIMIC-III dataset [13] consists of multivariate health records, that can have missing values, collected at Beth Israel Deaconess Medical Center between 2001 and 2012. Based again on the preprocessing strategy of [24], we extract 53211 samples including 12 features. Given the first 48 hours of data, the task is to predict in-hospital mortality, with 8.1% of the data samples in the positive class.
Human Activity: The human activity dataset contains time series data from five individuals performing various activities (such as walking, sitting, lying, standing, etc.), based on the 3D positions of tags attached to their belts, chest and ankles (12 features in total). Following the preprocessing procedures outlined by [24], a dataset of 6554 sequences and 50 time steps is extracted. The task for this dataset is to classify each time step in the series into one of the eleven activities.
# 4.2 Experimental Setting
We next explain the experimental setting we follow for interpolation and classification, similar to the work of [27]. In the case of interpolation, we study all instances (labeled and unlabeled) from the PhysioNet dataset. The dataset is partitioned into an 80% training set and a 20% test set, with a fraction (20%) of the training data serving as the validation set. The interpolation task is to predict values for the unobserved points based on a subset of the available data points. This is executed using different percentages of observed steps, which vary between 50% and 90% of the total available steps. For this experiment, we perform five different runs and report performance on the unobserved data using the mean squared error (MSE) metric.
We also use the labeled data from the PhysioNet, MIMIC-III and Human Activity datasets to conduct classification experiments. For the physiological data of PhysioNet and MIMIC-III, the classification task considers the entire time series, whereas, in the context of the human activity dataset, classification is performed for each time step in the series. We follow the same train, validation and test splitting procedure as described in the interpolation setting. For this experiment, we perform five different runs to provide the classification performance on the different datasets. For the PhysioNet and MIMIC-III datasets, we report performance using the area under the ROC curve (AUC) score, due to class imbalance. For the Human Activity dataset, we assess the model performance using the accuracy metric. The validation set is used to select the best set of hyperparameters for our models via grid search.
Table 1: Performance for interpolation with different percentages of observed time points on PhysioNet. We mention in bold the best-performing method(s) and underline the second best-performing method(s) based on statistical significance tests.
Mean Squared Error (×10⁻³):

| Model | 50% observed | 60% | 70% | 80% | 90% |
|---|---|---|---|---|---|
| RNN-VAE | 13.418 ± 0.008 | 12.594 ± 0.004 | 11.887 ± 0.007 | 11.133 ± 0.007 | 11.470 ± 0.006 |
| L-ODE-RNN | 8.132 ± 0.020 | 8.140 ± 0.018 | 8.171 ± 0.030 | 8.143 ± 0.025 | 8.402 ± 0.022 |
| L-ODE-ODE | 6.721 ± 0.109 | 6.816 ± 0.045 | 6.798 ± 0.143 | 6.850 ± 0.066 | 7.142 ± 0.066 |
| mTAND-Full | 4.139 ± 0.029 | 4.018 ± 0.048 | 4.157 ± 0.053 | 4.410 ± 0.149 | 4.798 ± 0.036 |
| TPCNN (ours) | 5.993 ± 0.058 | 5.797 ± 0.063 | 5.654 ± 0.108 | 5.624 ± 0.084 | 5.532 ± 0.140 |
# 4.3 Baseline Models
In this study, we conduct a thorough evaluation of several deep learning architectures as baseline models for performance comparison. These models are specifically designed to handle irregular time series and include variations of the Recurrent Neural Network (RNN), Attention modules and encoder-decoder architectures. The specific models evaluated in this study include:
(i) Basic RNN variants including: RNN-Impute, RNN-Δt, RNN-Decay, GRU-D. The RNN-Impute model employs a method to impute missing data points based on the weighted average between the last observation of the time series and the total mean of the variable in the training set [3]. In RNN-Δt the input to the RNN is extended with a missing indicator for the variable and the time interval Δt since the last observed point. The RNN-Decay is an RNN with hidden states that decay exponentially over time [18, 3], whereas GRU-D employs exponential decay on both hidden states and input [3].
(ii) Other RNN variants, such as Phased-LSTM, IP-Nets, SeFT, RNN-VAE. The Phased-LSTM model incorporates time irregularity through the use of a time gate that controls access to the hidden and cell states of the LSTM [19]. IP-Nets are Interpolation-Prediction Networks (IPN), which perform interpolation prior to prediction with an RNN on the transformed equally-spaced intervals, using semi-parametric interpolation layers [26]. The SeFT model employs learnable set functions for time series and combines the representations with an attention-based mechanism [11]. RNN-VAE is a standard variational RNN encoder-decoder.
(iii) ODE variants, such as ODE-RNN, L-ODE-RNN, L-ODE-ODE. In ODE-RNN, neural ODEs model the dynamics of the hidden state, and an RNN updates the hidden state in the presence of new observations [24]. Similarly, L-ODE-RNN and L-ODE-ODE are latent ODEs, with the former combining an RNN encoder and a neural ODE decoder [4], and the latter an ODE-RNN encoder and a neural ODE decoder [24].
(iv) Attention-based frameworks, including mTAND. The multi-time attention network, mTAND, interpolates missing data using a learnable attention similarity kernel between observations, which are accessed based on trainable temporal embeddings [27].
# 4.4 Results
Interpolation of missing data. In Table 1 we present the results of the experimental setting designed for interpolation, as described in Section 4.2. For different percentages of observed values (i. e., ranging from 50% to 90%), we record the interpolation performance on the reconstructed irregularly sampled multivariate time series of the PhysioNet dataset using the MSE metric.
Table 2: Performance for per-sequence classification on PhysioNet and MIMIC-III and per-time-point classification on Human Activity datasets. We mention in bold the best-performing method(s) and underline the second best-performing method(s) based on statistical significance tests.
| Model | AUC (PhysioNet) | AUC (MIMIC-III) | Accuracy (Human Activity) |
|---|---|---|---|
| RNN-Impute | 0.764 ± 0.016 | 0.8249 ± 0.0010 | 0.859 ± 0.004 |
| RNN-Δt | 0.787 ± 0.014 | 0.8364 ± 0.0011 | 0.857 |
| RNN-Decay | 0.807 ± 0.003 | 0.8392 ± 0.0012 | |
| GRU-D | 0.818 ± 0.008 | 0.8270 ± 0.0010 | |
| Phased-LSTM | 0.836 ± 0.003 | 0.8429 ± 0.0035 | |
| IP-Nets | 0.819 ± 0.006 | 0.8390 ± 0.0011 | |
| SeFT | 0.795 ± 0.015 | 0.8485 ± 0.0022 | |
| RNN-VAE | 0.515 ± 0.040 | 0.5175 ± 0.0312 | |
| ODE-RNN | 0.833 ± 0.009 | 0.8561 ± 0.0051 | |
| L-ODE-RNN | 0.781 ± 0.018 | 0.7734 ± 0.0030 | |
| L-ODE-ODE | 0.829 ± 0.004 | 0.8559 ± 0.0041 | |
| mTAND-Full | 0.858 ± 0.004 | 0.8544 ± 0.0024 | |
| TPCNN (ours) | 0.833 ± 0.001 | 0.8380 ± 0.0011 | |
We compare the proposed TPCNN model to different baseline methods designed for interpolation, including RNN-VAE, L-ODE-RNN, L-ODE-ODE and mTAND-Full (i. e., the mTAND encoder-decoder framework for interpolation). We mention in bold the best-performing method and underline the results for the second best-performing method. We also perform tests for measuring the statistical significance of the studied methods, which leads to highlighting two distinct models as achieving the highest performances. We can observe that the best-performing method is mTAND-Full, which is closely followed by the proposed TPCNN model. The rest of the baselines show significantly worse performance compared to the proposed TPCNN, including L-ODE-ODE, an ODE-based method that is highly accurate in the irregular setting. The performance of the proposed model ranges between roughly 5.5 and 6.0 (×10⁻³) in terms of MSE, showing a slightly improved performance as the percentage of missing observations decreases. On the other hand, mTAND-Full shows a slightly degrading performance for a smaller percentage of missing data, with RNN-VAE being the only baseline method that follows the same behavior.
Classification. We also report in Table 2 the results of the different baselines, as described in Section 4.3, and the proposed TPCNN model on classification for the labeled instances of PhysioNet, MIMIC-III and
Human Activity datasets. For the first two imbalanced datasets, we use AUC as an evaluation metric and perform per-sequence binary classification, whereas, for the Human Activity dataset, we report accuracy for the task of per-time-point classification. For all datasets, we mention in bold the best-performing methods and underline the results for the second best-performing methods. Due to several non-statistically significant differences in performances, we have several methods being among the first or second best-performing. For the PhysioNet and Human Activity datasets, our proposed TPCNN framework is the second-best method in terms of metrics, surpassed by the attention-based model mTAND-Full. More specifically, in PhysioNet the proposed model performs as well as the ODE variants (i. e., ODE-RNN, L-ODE-ODE), which are however significantly slow in terms of computational time, as mentioned in [27]. In Human Activity classification, TPCNN shows quite improved performance, being 1% worse than mTAND-Full.
However, in the MIMIC-III classification, the proposed TPCNN model lies among the third-best-performing methods, being surpassed by several baselines. In this dataset, ODE-RNN, L-ODE-ODE and mTAND-Full achieve the highest AUC scores, followed by the SeFT model, which however performs significantly worse in classification experiments for the other two datasets. The significant performance advantage of mTAND-Full in this task can be attributed to its design, which jointly performs interpolation and classification while directly attending only to observed time points. On the other hand, the proposed model handles missing data inside the convolutional kernel of the TPC layer by applying the mean aggregator of Equation 3. The aggregation neighborhood, however, is constrained by the kernel size and remains fixed throughout the series length. Extending the proposed architecture to incorporate size-varying kernels could further improve the learning capabilities of the TPC layer.
Computational cost. In Table 3 we provide a comparison in terms of memory and computational costs between the proposed TPCNN and its main competitor mTAND-Full. We report the size, i. e., the number of parameters, and the time per epoch in minutes for the two methods and the three real-world datasets. Comparisons of mTAND against previous state-of-the-art models, among which the ODE-based methods, as shown in [27], have demonstrated that the former is significantly faster (i. e., approximately 100 times) than ODE-based methods that make use of an ODE solver. As we can observe in Table 3, TPCNN is as fast as mTAND-Full in terms of time cost. When it comes to the size of the model, the proposed TPCNN uses significantly fewer parameters compared to mTAND-Full, while maintaining competitive performance. More specifically, TPCNN uses approximately some hundred thousand parameters, i. e., 100–350 thousand, while the mTAND-Full size scales to millions of parameters, i. e., approximately 1.5 million. This comparison highlights the high efficacy of convolutions in the irregular sampling setting, which allow the training of neural networks that are significantly smaller and faster compared to the baselines. Therefore, the proposed TPCNN can easily scale to larger datasets and remains efficient even when trained with fewer parameters.
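For reference, parameter counts of the kind reported in Table 3 can be read directly off a PyTorch model; the model below is a small stand-in, not the actual TPCNN implementation.

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 48, 16),
)
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params:,} trainable parameters")  # a few tens of thousands for this toy stand-in
```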
12 | 2308.03210#43 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 44 | 12
[Figure 3: three panels (PhysioNet, MIMIC-III, Human Activity) plotting test AUC / accuracy against the employed time functions and their combinations.]
Figure 3: Ablation study on different time functions for the parameterization of convolutional kernels for each dataset. Each plot captures the performance (AUC or Accuracy) for each function or combination of functions on the test set. | 2308.03210#44 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 45 | Experiments on synthetic data. Following the line of work of [27], we reproduce their synthetic sinusoidal dataset that consists of 1000 samples, each describing a time series of 100 time points where t ∈ [0, 1]. Given 10 reference points, an RBF kernel with bandwidth 100 is used to obtain local interpolations at the 100 time steps. For each sample, 20 time points are randomly selected so as to represent an irregularly spaced series. A split of 80% and 20% extracts the respective train and test sets. We employ the encoder-decoder interpolation framework of Figure 1 (Right). Contrary to the interpolation setting for PhysioNet, we give as input the 20 irregular time steps, without the missing points, and reconstruct each observation based on the rest using TPCNN with the functions h2(x) = sin(x) (blue points) and h5(x) = exp(x) (green points). We visualize the obtained reconstructions for 3 samples of the test set in Figure 2. Each plot consists of the true values (ground truth) for a test sample, while the dark markers represent the 20 observed input data points (observed data), the blue markers and the green markers the 20 predicted values (reconstruction) using sin( | 2308.03210#45 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
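Chunk 45 above spells out the synthetic interpolation setup reproduced from [27]: 1000 series of 100 time points in [0, 1], smoothed from 10 reference points with an RBF kernel of bandwidth 100, with 20 randomly kept observations per series and an 80/20 train/test split. The sketch below assembles that recipe in NumPy; the random reference amplitudes and the helper name `make_synthetic_dataset` are assumptions, since the exact generator of [27] is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_dataset(n_samples=1000, n_steps=100, n_ref=10,
                           bandwidth=100.0, n_obs=20):
    """RBF-smoothed synthetic series with irregular observation indices.

    Returns (values, times, obs_idx): values has shape (n_samples, n_steps),
    times are the shared grid in [0, 1], and obs_idx holds the 20 kept
    indices per sample. Reference amplitudes are drawn at random here,
    which is an assumption about the original generator.
    """
    times = np.linspace(0.0, 1.0, n_steps)
    ref_t = np.linspace(0.0, 1.0, n_ref)
    # RBF weights between every grid point and every reference point
    k = np.exp(-bandwidth * (times[:, None] - ref_t[None, :]) ** 2)
    k = k / k.sum(axis=1, keepdims=True)            # local interpolation weights
    ref_vals = rng.normal(size=(n_samples, n_ref))  # random reference amplitudes
    values = ref_vals @ k.T                         # (n_samples, n_steps)
    obs_idx = np.stack([rng.choice(n_steps, size=n_obs, replace=False)
                        for _ in range(n_samples)])
    return values, times, obs_idx

values, times, obs_idx = make_synthetic_dataset()
split = int(0.8 * len(values))                      # 80% train / 20% test
train, test = values[:split], values[split:]
print(train.shape, test.shape, obs_idx.shape)
```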
2308.03210 | 46 | the dark markers represent the 20 observed input data points (observed data), the blue markers and the green markers the 20 predicted values (reconstruction) using the sin(·) and exp(·) functions respectively. By employing the function h2(x) = sin(x), we are able to achieve a lower MSE loss compared to the ones achieved with the rest of the time functions defined in Section 3.2. We should mention here that in case domain knowledge is available, it can be incorporated into the proposed TPCNN method via the employed time function, which is likely to lead to performance improvements. | 2308.03210#46 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 47 | Ablation study. We also present in Figure 3 an ablation study on different time functions employed for parameterizing the weights of the convolutional kernels. The performance metric (AUC or accuracy) on the test set is reported on the classification task of the real-world datasets given a different time function or combination of time functions. For all three datasets, we examine a subset of the functions described in Section 3.2. More specifically, we employ h1(x), h2(x), h3(x), h5(x) (i.e., lin(·), sin(·), cos(·), exp(·)) and their combination (e.g., {sin(·), cos(·), lin(·)}). We observe that different functions may contribute more or less to the classification performance for the given dataset. In PhysioNet, while the linear function lin(·) leads to the lowest AUC on the test set, when combined with sin(·) and cos(·) it improves performance by ∼1%. Additionally, in MIMIC-III classification, cos(·) and exp(·) combined with lin(·) (i.e., the linear function) lead to a performance reduced by ∼4%. At the same time, the combination of functions improves performance but
13 | 2308.03210#47 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
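Chunk 47 above ablates the time functions (lin, sin, cos, exp) that parameterize the TPC kernel weights. The sketch below only illustrates the basic idea of mapping the observation times of one kernel window to weights through a learnable combination of such functions; the mixture form, the coefficient dictionary `alphas`, and the toy values are assumptions and are much simpler than the paper's actual TPC layer.

```python
import numpy as np

# candidate time functions from Section 3.2 (subset used in the ablation)
TIME_FUNCS = {
    "lin": lambda t: t,
    "sin": np.sin,
    "cos": np.cos,
    "exp": np.exp,
}

def time_parameterized_weights(window_times, alphas, funcs=("sin", "cos")):
    """Kernel weights as a learnable mixture of time functions.

    window_times : (k,) observation times covered by one kernel position
    alphas       : dict mapping function name -> scalar coefficient
                   (in a full model these would be trained with the network)
    Returns a (k,) weight vector w(t) = sum_f alphas[f] * f(t).
    """
    w = np.zeros_like(window_times, dtype=float)
    for name in funcs:
        w += alphas[name] * TIME_FUNCS[name](window_times)
    return w

# irregular times of one window and illustrative coefficients
t = np.array([0.02, 0.11, 0.35])
alphas = {"sin": 0.8, "cos": -0.3}
w = time_parameterized_weights(t, alphas, funcs=("sin", "cos"))
x = np.array([1.2, 0.4, -0.7])         # observed values in the window
print(w, float(np.dot(w, x)))          # time-aware convolution output for this window
```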
2308.03210 | 48 | â¼
13
does not surpass cos(·) and exp(·) when employed alone. Finally, on the Human Activity dataset, the cos(·) function and the combination {sin(·), cos(·), exp(·)} achieve the highest test accuracy. The linear lin(·) function again, in this case, leads to the lowest accuracy score compared to the rest of the time functions. During training, we can observe that the linear time function followed by a standard non-linear activation (e.g., ReLU), when used for the parameterization of the convolutional kernel weights, suffers from slow convergence and consequently worse performance. On the other hand, periodic time functions and the exponential function seem to more efficiently describe the time dynamics and lead to smoother training when used for parameterizing convolutions. This experiment highlights the explainability aspects of the proposed TPCNN model, since it allows us to determine which time functions better describe the considered time series. Furthermore, under certain conditions, the time series could be considered as a composition of such kinds of functions.
# 5 Conclusion
In this work, we carefully designed and experimentally evaluated a novel time-parameterized convolutional neural network, which incorporates learnable time functions into the weights of convolutional kernels. The proposed method generalizes well in different tasks involving irregularly sampled multivariate time series while being computationally efficient and interpretable.
# References | 2308.03210#48 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 49 | # References
[1] E Oran Brigham. The fast Fourier transform and its applications. Prentice-Hall, Inc., 1988.
[2] Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, and Yitan Li. Brits: Bidirectional recurrent imputation for time series. Advances in neural information processing systems, 31, 2018.
[3] Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. Scientific reports, 8(1):1–12, 2018.
[4] Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. In Advances on Neural Information Processing Systems, pages 6572–6583, 2018.
[5] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[6] Edward De Brouwer, Jaak Simm, Adam Arany, and Yves Moreau. Gru-ode-bayes: Continuous modeling of sporadically-observed time series. Advances in neural information processing systems, 32, 2019. | 2308.03210#49 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 50 | [7] Jan G De Gooijer and Rob J Hyndman. 25 years of time series forecasting. International Journal of Forecasting, 22(3):443–473, 2006.
[8] Cristóbal Esteban, Stephanie L Hyland, and Gunnar Rätsch. Real-valued (medical) time series generation with recurrent conditional gans. arXiv preprint arXiv:1706.02633, 2017.
[9] Joseph Futoma, Sanjay Hariharan, and Katherine Heller. Learning to detect sepsis with a multitask gaussian process rnn classifier. In International conference on machine learning, pages 1174–1182. PMLR, 2017.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[11] Max Horn, Michael Moor, Christian Bock, Bastian Rieck, and Karsten Borgwardt. Set functions for time series. In International Conference on Machine Learning, pages 4353–4363. PMLR, 2020.
14 | 2308.03210#50 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 51 | 14
[12] Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. Deep learning for time series classification: a review. Data mining and knowledge discovery, 33(4):917–963, 2019.
[13] Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1–9, 2016.
[14] Patrick Kidger, James Morrill, James Foster, and Terry Lyons. Neural controlled differential equations for irregular time series. Advances in Neural Information Processing Systems, 33:6696–6707, 2020.
[15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] Yonghong Luo, Xiangrui Cai, Ying Zhang, Jun Xu, et al. Multivariate time series imputation with generative adversarial networks. Advances in neural information processing systems, 31, 2018. | 2308.03210#51 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 52 | [17] Hongyuan Mei and Jason M Eisner. The neural hawkes process: A neurally self-modulating multivariate point process. Advances in neural information processing systems, 30, 2017.
[18] Michael C Mozer, Denis Kazakov, and Robert V Lindsey. Discrete event, continuous time rnns. arXiv preprint arXiv:1710.04110, 2017.
[19] Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased lstm: Accelerating recurrent network training for long or event-based sequences. Advances in neural information processing systems, 29, 2016.
[20] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 165–174, 2019.
[21] Trang Pham, Truyen Tran, Dinh Phung, and Svetha Venkatesh. Predicting healthcare trajectories from medical records: A deep learning approach. Journal of biomedical informatics, 69:218â229, 2017. | 2308.03210#52 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 53 | [22] Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M Dai, Nissan Hajaj, Michaela Hardt, Peter J Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, et al. Scalable and accurate deep learning with electronic health records. NPJ digital medicine, 1(1):1–10, 2018.
[23] David W Romero, Anna Kuzina, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. Ckconv: Continuous kernel convolution for sequential data. arXiv preprint arXiv:2102.02611, 2021.
[24] Yulia Rubanova, Ricky TQ Chen, and David K Duvenaud. Latent ordinary differential equations for irregularly-sampled time series. Advances in neural information processing systems, 32, 2019.
[25] Kristof Sch¨utt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert M¨uller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. Advances in neural information processing systems, 30, 2017. | 2308.03210#53 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 54 | [26] Satya Narayan Shukla and Benjamin M Marlin. Interpolation-prediction networks for irregularly sampled time series. arXiv preprint arXiv:1909.07782, 2019.
[27] Satya Narayan Shukla and Benjamin M Marlin. Multi-time attention networks for irregularly sampled time series. arXiv preprint arXiv:2101.10318, 2021.
[28] Ikaro Silva, George Moody, Daniel J Scott, Leo A Celi, and Roger G Mark. Predicting in-hospital mortality of icu patients: The physionet/computing in cardiology challenge 2012. In 2012 Computing in Cardiology, pages 245–248. IEEE, 2012.
15
[29] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33:7462–7473, 2020.
[30] Huan Song, Deepta Rajan, Jayaraman Thiagarajan, and Andreas Spanias. Attend and diagnose: Clinical time series analysis using attention models. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. | 2308.03210#54 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 55 | [31] Qingxiong Tan, Mang Ye, Baoyao Yang, Siqi Liu, Andy Jinhua Ma, Terry Cheuk-Fung Yip, Grace Lai-Hung Wong, and PongChi Yuen. Data-gru: Dual-attention time-aware gated recurrent unit for irregular multivariate time series. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 930–937, 2020.
[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[33] Shenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, and Raquel Urtasun. Deep parametric continuous convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2589–2597, 2018.
[34] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550â1560, 1990. | 2308.03210#55 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.03210 | 56 | [35] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Self-attention with functional time representation learning. Advances in neural information processing systems, 32, 2019.
[36] Jinsung Yoon, William R Zame, and Mihaela van der Schaar. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering, 66(5):1477–1490, 2018.
[37] Yuan Zhang. Attain: Attention-based time-aware lstm networks for disease progression modeling. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-2019), pp. 4369–4375, Macao, China, 2019.
16 | 2308.03210#56 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | [
{
"id": "1710.04110"
},
{
"id": "1909.07782"
},
{
"id": "2102.02611"
},
{
"id": "1706.02633"
},
{
"id": "2101.10318"
}
] |
2308.02773 | 0 | arXiv:2308.02773v1 [cs.CL] 5 Aug 2023
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education Yuhao Dan1∗, Zhikai Lei1∗, Yiyang Gu1∗, Yong Li1, Jianghao Yin1, Jiaju Lin1, Linhao Ye1, Zhiyan Tie1, Yougen Zhou1, Yilei Wang2, Aimin Zhou1,2, Ze Zhou4, Qin Chen1†, Jie Zhou1†, Liang He1, Xipeng Qiu3 1 School of Computer Science and Technology, East China Normal University, Shanghai, China 2 Institute of AI for Education, ECNU, Shanghai, China 3 School of Computer Science, Fudan University, Shanghai, China 4 ZhuQingTing Data Technology (Zhejiang) Co., Ltd., Zhejiang, China
# Abstract
sonalized, comprehensive, and timely support to teachers, students, and parents. | 2308.02773#0 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 1 | # Abstract
sonalized, comprehensive, and timely support to teachers, students, and parents.
EduChat1 is a large-scale language model (LLM)-based chatbot system in the education domain. Its goal is to support personalized, fair, and compassionate intelligent education, serving teachers, students, and parents. Guided by theories from psychology and education, it further strengthens educational functions such as open question answering, essay assessment, Socratic teaching, and emotional support based on the existing basic LLMs. Particularly, we learn domain-specific knowledge by pre-training on the educational corpus and stimulate various skills with tool use by fine-tuning on designed system prompts and instructions. Currently, EduChat is available online as an open-source project, with its code, data, and model parameters available on platforms (e.g., GitHub2, Hugging Face3). We also prepare a demonstration of its capabilities online4. This initiative aims to promote research and applications of LLMs for intelligent education.
# Introduction | 2308.02773#1 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 2 | # Introduction
Recently, large-scale language models (LLMs), such as ChatGPT (Schulman et al., 2022), LLaMa (Touvron et al., 2023a), have achieved great success in the field of natural language processing (Zhou et al., 2023b). LLMs obtained the ability of reasoning, long-range context modeling, and task generalization by training on large-scale textual corpus with some strategies, such as code pre-training (Chen et al., 2021), instruction tuning (Wei et al., 2022), and reinforcement learning from human feedback (RLHF) (Stiennon et al., 2020). With the advent of LLMs, they have the potential to revolutionize intelligent education by providing per- ∗ Equal contribution. † Corresponding author. 1https://www.educhat.top/ 2https://github.com/icalk-nlp/EduChat 3https://huggingface.co/ecnu-icalk 4https://vimeo.com/851004454?share=copy | 2308.02773#2 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 3 | However, there are several challenges of applying LLMs into the education domain. One challenge (C1) is that there is still a gap between the LLMs and the educational expert since LLMs are pre-trained on the general corpus, which lack sufficient educational knowledge and can not align well with real scenarios (e.g., essay assessment). The other challenge (C2) is that the knowledge in the field of education is updating, while LLMs can not learn up-to-date knowledge due to the training mechanism. Moreover, LLMs suffer from the hallucination problem, and may generate responses that are not truthful. | 2308.02773#3 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 4 | To address these problems, we propose EduChat, an LLM-based chatbot system for intelligent education. For C1, we pre-train LLMs on a large number of educational books (e.g., psychology, ancient poetry) and 4 million cleaned diverse instructions to learn the fundamental knowledge. Then, we fine-tune the model on 500 thousand high-quality customized instructions to activate education-specific functions (e.g., essay assessment, Socratic teaching and emotional support), by aligning with the feedbacks from psychology experts and frontline teachers. For C2, we explore a retrieval-augmented technology, which enables LLMs to automatically judge the helpfulness of the retrieved information, and generate the response based on the relevant information and knowledge stored in LLMs. In this way, our EduChat can access the latest information from the internet, ensuring that the responses are accurate and credible. As an open-source project, EduChat improves the performance of education-specific functions while maintaining comparable foundational capabilities to other large-scale models with equivalent parameter size. The main contributions are as follows:
• We explore the potential of incorporating theories of psychology and education into LLMs, which
sheds light on how to adapt general LLMs to specific domains; | 2308.02773#4 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
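Chunk 4 above describes EduChat's retrieval-augmented technology, in which the model first judges whether each retrieved passage is helpful and only then generates a response. A minimal sketch of that flow is given below; `search`, `llm`, the prompt wording, and the toy stand-ins are placeholders and assumptions, not EduChat's actual components or API.

```python
from typing import Callable, List

def answer_with_retrieval(question: str,
                          search: Callable[[str], List[str]],
                          llm: Callable[[str], str],
                          max_passages: int = 3) -> str:
    """Retrieve, let the model filter unhelpful passages, then answer."""
    passages = search(question)[:max_passages]
    kept = []
    for p in passages:
        verdict = llm(f"Question: {question}\nPassage: {p}\n"
                      "Is this passage helpful for answering? Reply yes or no.")
        if verdict.strip().lower().startswith("yes"):
            kept.append(p)
    context = "\n".join(kept) if kept else "(no helpful passage retrieved)"
    return llm(f"Answer the question using the context when it helps.\n"
               f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

# toy stand-ins so the sketch runs without any external service
fake_search = lambda q: ["Gaokao dates are announced by provincial authorities.",
                         "Unrelated sports news."]
fake_llm = lambda prompt: "yes" if "helpful" in prompt else "A grounded answer."
print(answer_with_retrieval("When is the Gaokao held?", fake_search, fake_llm))
```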
2308.02773 | 5 | • We explore the potential of incorporating theories of psychology and education into LLMs, which
sheds light on how to adapt general LLMs to specific domains;
• Diverse system prompts and instructions are designed to control the tool use and stimulate different skills, which alleviates the problem of hallucination and is more applicable in real education scenarios;
• We develop and release the EduChat system with various educational functions, thus developers and researchers can help speed up the research and applications of intelligent education.
# 2 Related Work
Recently, LLMs like ChatGPT (Schulman et al., 2022), ChatGLM (Du et al., 2022), and LLaMA2- Chat (Touvron et al., 2023b) have emerged as a breakthrough technology in natural language pro- cessing, achieving strong performance on language generation and understanding through pre-training on massive text and instruction tuning. | 2308.02773#5 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 6 | While LLMs demonstrate impressive capabili- ties in general domains, their lack of subject-matter expertise becomes apparent when applied to spe- cialized verticals. For instance, we can find spe- cialized language models catering to various do- mains, such as ChatDoctor (Li et al., 2023) and HuaTuoGPT (Zhang et al., 2023b) in healthcare, FinGPT (Yang et al., 2023) in finance, and Chat- Law (Cui et al., 2023) in the legal domain. These niche fields inherently necessitate models to pos- sess comprehensive domain knowledge to address relevant queries, especially when assisting real users in practical scenarios. In education, Baladn et al. (2023) tune open-source LLMs for generating better teacher responses in BEA 2023 Shared Task (Tack et al., 2023). But challenges still exist, such as the lack of domain knowledge in general LLMs and the necessity for them to align with educational abilities (e.g., essay assessment, emotional support, and Socratic teaching). EduChat is pre-trained on a diverse education corpus to ensure the alignment of EduChat with educational abilities.
# 3 Core Functions of EduChat | 2308.02773#6 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 7 | # 3 Core Functions of EduChat
Retrieval-Augmented Open Question Answering (QA) The education domain demands high accuracy and real-time updates regarding knowledge and related policies. However, existing generative LLMs suffer from issues like fabricating information and lagging behind in knowledge updates. To address this, we explore retrieval-augmented
open QA methods. By utilizing real-time updated corpora from the internet as an external knowledge source, we enable LLMs to autonomously assess the relevance of retrieved information to answer a given question and decide which information to incorporate for generating responses. Through extensive experimental analysis, we discover that our model exhibits significant advantages over general LLMs in terms of eliminating fabrications and maintaining up-to-date knowledge. | 2308.02773#7 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 8 | Fine-grained Essay Assessment In essay assessment, teachers meticulously annotate grammar errors, provide scores, and offer feedback on standout sentences. Existing language models often have coarse granularity in grading, limiting students' writing skill improvement. Our research focuses on more fine-grained and comprehensive essay assessment. Combining frontline teaching professionals' expertise, we provide overall scores, aspect-level ratings, and detailed comments on content, expression, paragraph, and overall evaluation. Our model can identify standout sentences, highlighting strengths and areas for improvement, enabling personalized guidance for students' essay writing skills. This ensures timely and professional support in all aspects of writing.
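To make the shape of this feedback concrete, the sketch below models one assessment record; the field names and types are our own assumptions, and only the assessed aspects and the notion of standout sentences come from the description above.

```python
# Illustrative sketch of a fine-grained essay-assessment record.
# Field names are assumptions; the aspects (content, expression, paragraph,
# overall evaluation) and standout sentences follow the text above.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EssayAssessment:
    overall_score: float                   # single overall grade
    aspect_scores: Dict[str, float]        # e.g. content, expression, paragraph
    comments: Dict[str, str]               # per-aspect comments plus an overall evaluation
    standout_sentences: List[str] = field(default_factory=list)  # highlighted strengths
```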
Socratic Teaching We focus on developing Socratic teaching capabilities in LLMs rather than providing direct answers to students. We adopt the Socratic dialogue method, engaging in multi-step question-and-answer interactions to encourage independent thinking. By stimulating discussions, debates, evaluations, and analyses, we aim to foster advanced cognitive skills and cultivate students' autonomy in learning. Our ultimate goal is to enhance critical thinking and innovation abilities to their fullest extent. | 2308.02773#8 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 9 | Psychology-based Emotional Support Adolescents and children face more severe psychological pressures due to their immature cognitive development. However, current LLMs usually provide generic advice that does not fit the specific emotional problem well. To address this, we develop a psychological inquiry framework based on emotion psychology, such as Rational Emotive Behavior Therapy (REBT) and the ABC theory (Ellis, 1991). Our fine-tuned model can simulate a psychological counselor, providing personalized diagnoses and emotional support for users. EduChat fosters a deeper understanding of users' emotional states
and offers accurate and professional assistance.
# 4 Data Construction
# 4.1 Pre-training Data | 2308.02773#9 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 10 | and offers accurate and professional assistance.
# 4 Data Construction
# 4.1 Pre-training Data
Textbooks Data In our research, we gather a vast amount of educational textbook and online question bank data from Chinese middle and high school exams for pre-training. Additionally, we enrich our model with over 70,000 Chinese poems, providing detailed information on authors, backgrounds, and poetry appreciation to enhance its poetry creation and appreciation capabilities. To facilitate empathetic emotional support dialogues, we carefully select 60 famous works from hundreds of psychology books. These selected books belong to two main categories. The first category consists of 15 branches of psychological theory, including developmental and educational psychology, social psychology, behavioral psychology, counseling psychology, and others. The second category contains various psychological practices, which offer practical cases of psychological consultation and emotional support dialogues. By incorporating this diverse fundamental data into pre-training, our model gains a deeper understanding of education and psychology, enabling it to generate more helpful responses. | 2308.02773#10 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 11 | Fundamental Instruction Data To achieve a more natural human-computer interaction, we collect a large volume of bilingual instruction tuning data from reputable open-source repositories like Alpaca5, BELLE (Ji et al., 2023), GPT4All6, OpenAssistant7, FLANCoT8, and Firefly9. The data spans various task types, enabling our models to acquire foundational instruction-following capabilities for diverse instruction types. In addition, we source high-quality multi-turn dialogue data from MOSS (Sun et al., 2023), BELLE (Ji et al., 2023), COIG (Zhang et al., 2023a), LIMA (Zhou et al., 2023a), and ShareGPT10. This data covers various dialogue contexts, including role-playing, creative writing, and code-related discussions, ensuring our models' competence in engaging and sustaining meaningful multi-turn conversations.
# 5https://github.com/tatsu-lab/stanford_alpaca 6https://github.com/nomic-ai/gpt4all 7https://github.com/LAION-AI/Open-Assistant 8https://huggingface.co/datasets/lucasmccabe-lmi/FLAN_CoT_alpaca_style | 2308.02773#11 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 12 | # 9https://github.com/yangjianxin1/Firefly 10https://huggingface.co/datasets/gozfarb/ShareGPT_Vicuna_unfiltered
[Figure 1: pie chart showing the proportions of Essay Assessment, Emotional Support, Retrieval-Augmented Open QA, and Socratic Teaching data]
Figure 1: Distribution of educational data.
# 4.2 Fine-tuning Data
To enhance the educational capability, we construct the Educational Instruction Data for fine-tuning, which covers retrieval-augmented open QA, emotional support, Socratic teaching and essay assessment. The distribution is shown in Figure 1.
Retrieval-Augmented Open QA Data To address hallucination and timely-knowledge issues in open QA, we design a retrieval-augmented open QA technique. We sample high-quality data through ChatGPT scoring on relevant Open QA and Subject QA datasets. To tackle irrelevant retrieved content, we introduce self-checking: ChatGPT assesses whether the retrieved content helps answer the question and then generates the answer with a self-check, incorporating the useful retrieved content and the question. To maintain data quality, we manually verify the data during this process. | 2308.02773#12 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 13 | Emotional Support Data To overcome the scarcity of Chinese emotional support dialogue data, we adopt a translation and expansion approach. We translate the widely-used English emotional support dataset, ESConv (Liu et al., 2021), into Chinese as ESConv-zh. After manual review and cleaning, we simulate multi-agent dialogues based on various patient scenarios within ESConv-zh and also collect real-life Chinese psychological counseling consultation data, incorporating patient information and diagnosis results. By training our models on diverse datasets, we empower them to provide robust emotional support and act as compassionate counselors during consultations.
Socratic Teaching Data Teachers play a key role in guiding and encouraging heuristic exploration rather than just providing answers. To support this, we generate dialogues simulating the Socratic teaching method by incorporating multi-step Q&A involving counter-questions, challenges, and inquiries. These dialogues are manually evaluated | 2308.02773#13 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 14 | [Figure 2 diagram: the EduChat framework. The system prompt combines a personal profile ("I am an AI assistant named EduChat", "EduChat is a conversational language model developed by ECNU"), tool usage (web search, calculators), and skill selection (psychology, Socrates); a self-check module filters retrieved results. Pre-training uses textbooks and fundamental instruction data (e.g., a poem with title and appreciation), and fine-tuning uses fundamental and educational instruction data (e.g., a multiple-choice biology question about the Hepatitis B virus).] | 2308.02773#14 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 15 | [Figure 2 diagram, continued: the educational instruction example continues (the answer is B; the Hepatitis B virus belongs to the family Hepadnaviridae, a hepatotropic DNA virus), followed by a retrieval-augmented open QA example ("In which country will the 2024 Olympic Games be held?") in which the self-check marks each search-engine result as useful or useless before the model uses the useful information to answer.] | 2308.02773#15 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 16 | Figure 2: The overall framework of EduChat.
for accuracy, fluency, and progression from easy to complex questions. Integrating this dataset into training equips our model with a strong capability in Socratic teaching, distinguishing it from other LLMs that only offer direct answers.
Essay Assessment Data The lack of timely and detailed feedback often hinders students' writing improvement. To tackle this issue, we create a high-quality essay assessment dataset. Initially, we collect essays and employ ChatGPT to evaluate them in terms of content, expression, and overall quality. To ensure data quality, we invite pedagogical experts to manually curate the comments. This dataset empowers EduChat with the ability to provide students with high-quality feedback, aiding in the enhancement of their writing skills.
GPU acceleration, speeding up the process.
# 5 EduChat
EduChat is an LLM designed for the education domain (Figure 2). We first pre-train on a large-scale education corpus (e.g., textbooks, instructions for foundational tasks) to learn domain-specific and foundational knowledge. We then learn pedagogical skills by fine-tuning EduChat on task-specific instruction datasets. Moreover, we leverage online retrieval to enhance the accuracy and timeliness of knowledge in its responses. To control skills, we design various system prompts to unlock different scenes with tool usage.
# 5.1 Training Procedure of EduChat | 2308.02773#16 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 17 | # 5.1 Training Procedure of EduChat
# 4.3 Data Preprocessing
To enhance data quality, we conduct semantic-level deduplication on the dataset. Using the sentence-transformers model (Reimers and Gurevych, 2019), we obtain sentence embeddings for each data point and calculate cosine similarity between all pairs of embeddings. For similarities exceeding a threshold of 0.7, we remove one of the duplicates. We implement the similarity calculation using CUDA for
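A minimal sketch of this deduplication step is given below; it assumes the sentence-transformers package, an illustrative model name, and a plain list of text samples; only the 0.7 similarity threshold and the GPU-accelerated cosine similarity come from the description above.

```python
# Sketch of semantic-level deduplication with sentence embeddings.
# The model name and data format are assumptions; the 0.7 threshold follows the text.
from sentence_transformers import SentenceTransformer, util

def deduplicate(samples, threshold=0.7, device="cuda"):
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2", device=device)
    # Normalized embeddings make cosine similarity a simple dot product on the GPU.
    emb = model.encode(samples, convert_to_tensor=True, normalize_embeddings=True)
    sim = util.cos_sim(emb, emb)  # (N, N) cosine-similarity matrix

    keep, removed = [], set()
    for i in range(len(samples)):
        if i in removed:
            continue
        keep.append(samples[i])
        # Drop every later sample that is too similar to the one we keep.
        for j in range(i + 1, len(samples)):
            if sim[i, j] >= threshold:
                removed.add(j)
    return keep
```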
The training of EduChat is mainly divided into two stages: fundamental capabilities acquisition and educational skills acquisition. In the first stage, we pre-train the model on educational books and Q&A pairs (detailed in Section 4.1) to equip it with foundational knowledge across disciplines. In addition, large-scale instruction tuning and open-domain dialogue datasets are also incorporated to enable basic instruction-following ability and dialogue ability (detailed in Section 4.2). In the second | 2308.02773#17 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 18 | [Figure 3 screenshot: the EduChat demo in Open QA mode. The user asks "Please introduce East China Normal University" and EduChat returns a retrieval-grounded introduction of ECNU (a comprehensive public research university in Shanghai, founded in 1951 through the merger of two institutions, known for strong research and academic programs, industry ties, and its campus).] | 2308.02773#18 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 19 | [Figure 3 screenshot, continued: the answer closes with reference links (East China Normal University - Wikipedia; Introduction to ECNU - East China Normal University), and the sidebar lists the available functions: Open QA, Emotional Support, Essay Assessment, Socratic Teaching, Course Guidance, and Career Planning.] | 2308.02773#19 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 20 | Figure 3: Demo of EduChat.
stage, we develop EduChat's pedagogical skills by fine-tuning the model on our carefully curated data, including the retrieval-augmented open QA, emotional support, Socratic teaching and essay assessment datasets mentioned in Section 4.2.
# 5.2 Online Knowledge Retrieval
"Calculator: Disable" signifies the model's inability to utilize a calculator; 3) Skill Selection: Teachers in various settings possess unique communication skills, such as Socratic Teaching or Psychology-based Emotional Support. To cater to specific scenarios, we include function names at the end of the system prompt, which activates corresponding abilities based on the scene's requirements.
Existing generative LLMs all suffer from the issues of generating hallucinations and outdated information, which is detrimental to an educational model. To mitigate this problem, we introduce self-check as shown in Figure 2. Specifically, when online knowledge retrieval is enabled, the model picks useful retrieval results by asking itself "Is this helpful for answering the question?" and appends the filtered snippets before the dialogue history.
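A sketch of this self-check filtering loop is shown below; it assumes a generic chat(messages) wrapper around the underlying model and an already-retrieved list of snippets; only the self-check question and the idea of prepending the filtered snippets to the dialogue history come from the text.

```python
# Sketch of the self-check step: keep only snippets the model judges helpful,
# then place them before the dialogue history. `chat` is an assumed wrapper
# that sends a list of {"role", "content"} messages and returns a string reply.
from typing import Callable, Dict, List

Message = Dict[str, str]

def self_check_retrieval(question: str,
                         snippets: List[str],
                         history: List[Message],
                         chat: Callable[[List[Message]], str]) -> List[Message]:
    useful = []
    for snippet in snippets:
        reply = chat([{
            "role": "user",
            "content": (f"Question: {question}\nRetrieved result: {snippet}\n"
                        "Is this helpful for answering the question? Answer yes or no."),
        }])
        if reply.strip().lower().startswith("yes"):
            useful.append(snippet)
    # Filtered snippets are appended before the dialogue history as reference context.
    context = ([{"role": "system", "content": "Reference:\n" + "\n".join(useful)}]
               if useful else [])
    return context + history + [{"role": "user", "content": question}]
```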
# 5.3 System Prompt Design | 2308.02773#20 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 21 | # 5.3 System Prompt Design
Teachers always utilize various tools with different skills to enhance their teaching across different subjects. To enable EduChat to emulate an authentic teacher-student interaction, we carefully craft a system prompt that consists of a personal profile, tool usage and skill selection (see Figure 2). Detailed settings can be found in Table 2.
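A small sketch of how such a three-part prompt might be assembled is given below; the profile sentence and the "Enable"/"Disable" tool flags follow the descriptions in this section, while the skill tags, helper name, and layout are illustrative assumptions.

```python
# Illustrative assembly of the three-part system prompt (profile, tools, skills).
def build_system_prompt(tools: dict, skills: list) -> str:
    profile = ("EduChat is a conversational language model "
               "developed by East China Normal University.")
    tool_lines = ["EduChat's tools:"]
    for name, enabled in tools.items():
        tool_lines.append(f"{name}: {'Enable' if enabled else 'Disable'}")
    # Function names appended at the end activate scene-specific abilities.
    return "\n".join([profile, *tool_lines, " ".join(skills)])

# Example: an emotional-support scene with retrieval on and the calculator off.
print(build_system_prompt(
    tools={"Web search": True, "Calculator": False},
    skills=["Psychology-based Emotional Support"],
))
```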
# 5.4 Demonstration
We also develop a user-friendly demo system for EduChat (see Figure 3). Upon logging in, users can select from various functions, such as Open QA and Emotional Support, each offering a scene-specific system prompt to activate the corresponding ability. With this intuitive interface, users can easily engage in interactive conversations with EduChat to assist students, teachers and parents. Additionally, the system is designed to be adaptive, continuously learning from user interactions to further improve its capabilities and provide more personalized and effective assistance over time.
# 6 Experimental Results
1) Personal Profile: To remind the model of its own identity, the system prompt begins with: "EduChat is a conversational language model developed by East China Normal University."; 2) Tool Usage: To regulate tool availability, the second part of the system prompt commences with "EduChat's tools:", listing all tool names and their respective accessibility. For instance, "Web search: Enable" indicates the model's ability to use retrieval, while | 2308.02773#21 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 22 | # 6.1 Results of C-Eval
Table 1 presents the results of our model on the C-Eval benchmark (Huang et al., 2023), a comprehensive Chinese evaluation suite for foundation models. The dataset consists of 13,948 multi-choice questions, spanning 52 diverse disciplines and categorized into four difficulty levels. Analyzing the table, we observe that our model achieves commendable performance compared to models with
# Chinese
# Translate to English
# Retrieval-Augmented Open QA
User: [the same question asked in Chinese]
User: Who is the author of the View the Sea?
EduChat: [the same answer given in Chinese]
# Reference:
View_the_sea_Baidu_Baike | 2308.02773#22 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 23 | # Reference:
View_the_sea_Baidu_Baike
EduChat: The author of the View the Sea was Cao Cao, a poet of the late Eastern Han Dynasty. This poem was written by Cao Cao when he climbed Jieshi Mountain and looked out at the sea. It depicts the magnificence of the rivers and mountains of the motherland. It not only depicts the magnificence of the mountains and the sea, but also expresses the poet's enterprising spirit, with the scenery and the world in mind. This poem is regarded as a classic of ancient Chinese poetry and one of Cao Cao's representative works.
# Reference:
# View_the_sea_ Baidu_Baike | 2308.02773#23 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 24 | [Socratic Teaching panel of Figure 4, in Chinese: a multi-turn dialogue in which EduChat guides a student step by step from the definition of an arithmetic sequence to the formulas Sn = (1/2) * n * (a1 + an) and an = a1 + (N-1) * d, ending with the worked example 3, 5, 7, 9, 11, 13; the English translation follows.] | 2308.02773#24 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 25 | [end of the Chinese panel of Figure 4] User: How do you find the first n terms of an arithmetic sequence? EduChat: Are you asking for the formula for the sum of the first n terms of an arithmetic sequence, or how to apply the formula to find the sum of the first n terms? User: I don't know the formula for the sum of the first n terms of an arithmetic sequence. EduChat: Well, do you know what an arithmetic sequence is? User: Not really. EduChat: Okay, let's first understand what an arithmetic sequence is. An arithmetic sequence is a sequence in which the difference between any term and the one before it is constant. For example, 1, 3, 5, 7, 9 is an arithmetic sequence because the difference between each term and the one before it is 2. User: So how do you find the sum of the first N terms? EduChat: We can use the formula for the sum of an arithmetic sequence to solve it. The formula for the sum of the first N terms of an arithmetic sequence is: Sn = 1/2 * n * (a1 + an), where a1 is the first term of the arithmetic sequence, and an is the | 2308.02773#25 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 26 | sum of the first N terms of an arithmetic sequence is: Sn = 1/2 * n * (a1 + an), where a1 is the first term of the arithmetic sequence, and an is the Nth term. User: But I don't know what the Nth term is. EduChat: Well, we can derive the Nth term from known conditions. If we know the first term of an arithmetic sequence a1, the common difference d, and the Nth term an, then the formula for the Nth term is an = a1 + (N-1) * d. User: What do you mean by common difference? EduChat: The common difference refers to the difference between any two terms in an arithmetic sequence. In an arithmetic sequence, the difference between each term and the one before it is the same, and this equal difference is the common difference. User: Can you give me an example? EduChat: Of course. For example, if the first term of an arithmetic sequence is 3 and the common difference is 2, what is the 6th term? According to the formula, an = 3 + (6-1) * 2, which gives an = 13. Therefore, the first 6 terms of the sequence are 3, 5, 7, 9, 11, 13. User: I understand now, thank you. EduChat: You're welcome, feel free to ask if you have any other questions. | 2308.02773#26 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 27 | Figure 4: Cases of retrieval-augmented open QA and socratic teaching.
Model                  STEM  Social Science  Humanities  Others  Avg(hard)  Avg
GPT-4                  67.1  77.6            64.5        67.8    54.9       68.7
ChatGPT                52.9  61.8            50.9        53.6    41.4       54.4
Baichuan-13B           47.0  66.8            57.3        49.8    36.7       53.6
InternLM-7B            48.0  67.4            55.4        45.8    37.1       52.8
ChatGLM2-6B            48.6  60.5            51.3        49.8    37.1       51.7
WestlakeLM-19B         41.6  51.0            44.3        44.5    34.9       44.6
Baichuan-7B            38.2  52.0            46.2        39.3    31.5       42.8
Chinese-Alpaca-33B     37.0  51.6            42.3        40.3    30.3       41.6
Chinese-Alpaca-13B     31.6  37.2            33.6        32.8    27.3       33.3
EduChat                36.2  50.7            42.9        37.7    28.3       40.7
EduChat (w Retrieval)  43.5  59.3            53.7        46.6    33.1       49.3 | 2308.02773#27 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 28 | For emotional support, EduChat can interact like a psychological counselor rather than giving generic advice. Due to space limitations, we provide more cases of psychology-based emotional support and fine-grained essay assessment in the Appendix (Figure 5).
Table 1: Results of C-Eval.
# 7 Conclusion
similar parameter scales, such as Chinese-Alpaca-13B and WestlakeLM. Notably, both EduChat and Chinese-Alpaca-13B are built on the LLaMA-13B base model. However, EduChat outperforms Chinese-Alpaca-13B by over seven points. Furthermore, our integration of retrieval into LLMs proves to be highly effective, demonstrating the power of our retrieval-augmented open QA technique in enhancing model performance.
# 6.2 Case Studies
Figure 4 shows cases of EduChat on retrieval-augmented open QA and Socratic teaching. EduChat can provide precise answers with retrieved relevant information, and it learns to guide the student to solve problems step by step, like a teacher. | 2308.02773#28 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
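The retrieval-augmented open QA described in the record above pairs a sentence-embedding retriever with the chat model. The sketch below is a minimal illustration under stated assumptions, not EduChat's released pipeline: the embedding model name, the toy corpus, and the `build_prompt` helper are placeholders; only the sentence-transformers API calls are standard.

```python
# Minimal retrieval-augmented QA sketch (assumptions: model name, corpus, and
# prompt wording are placeholders; EduChat's actual retriever may differ).
from sentence_transformers import SentenceTransformer, util

# Hypothetical knowledge snippets standing in for a real educational corpus.
CORPUS = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "The Treaty of Westphalia (1648) ended the Thirty Years' War in Europe.",
    "Newton's second law states that force equals mass times acceleration.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
corpus_emb = encoder.encode(CORPUS, convert_to_tensor=True)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top-k corpus snippets most similar to the question."""
    q_emb = encoder.encode(question, convert_to_tensor=True)
    order = util.cos_sim(q_emb, corpus_emb)[0].argsort(descending=True)[:top_k]
    return [CORPUS[i] for i in order]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt; the instruction wording is illustrative only."""
    context = "\n".join(f"- {s}" for s in retrieve(question))
    return (
        "Answer the student's question using only the retrieved facts below.\n"
        f"Retrieved facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("What does Newton's second law say?"))
```

The assembled prompt would then be passed to the chat model; retrieval grounding is what the C-Eval table above reports as "EduChat (w Retrieval)".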
2308.02773 | 29 | In this paper, we introduce EduChat, an LLM-based chatbot system for intelligent education. Our goal is to provide personalized, fair, and compassionate support to teachers, students, and parents. By leveraging psychology and education theories, we enhance educational functions like open QA, essay assessment, Socratic teaching, and emotional support. Through pre-training on an educational corpus and fine-tuning with task-specific instructions, EduChat demonstrates great performance on the C-Eval benchmark. Overall, EduChat exhibits great potential towards revolutionizing intelligent education. In future work, we aim to expand EduChat to more functions, such as career planning, course guidance, and question generation.
# References
Alexis Baladán, Ignacio Sastre, Luis Chiruzzo, and Aiala Rosá. 2023. RETUYT-InCo at BEA 2023 shared task: Tuning open-source LLMs for generating teacher responses. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 756–765, Toronto, Canada. Association for Computational Linguistics. | 2308.02773#29 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 30 | Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Jiaxi Cui, Zongjia Li, Yang Yan, Bohua Chen, and Li Yuan. 2023. Chatlaw: Open-source legal large language model with integrated external knowledge bases. ArXiv, abs/2306.16092.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335.
Albert Ellis. 1991. The revised ABC's of rational-emotive therapy (RET). Journal of Rational-Emotive and Cognitive-Behavior Therapy, 9(3):139–172. | 2308.02773#30 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 31 | Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, and Xiangang Li. 2023. Exploring the impact of instruction data scaling on large language models: An empirical study on real-world use cases. arXiv preprint arXiv:2303.14742.
Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, Steve Jiang, and You Zhang. 2023. Chatdoctor: A medical chat model fine-tuned on a large language model meta-ai (llama) using medical domain knowledge. Cureus, 15(6). | 2308.02773#31 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 32 | Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469–3483, Online. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.
John Schulman, B Zoph, C Kim, J Hilton, J Menick, J Weng, JFC Uribe, L Fedus, L Metz, M Pokorny, et al. 2022. ChatGPT: Optimizing language models for dialogue. In OpenAI blog.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, volume 33, pages 3008–3021. | 2308.02773#32 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 33 | Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, and Xipeng Qiu. 2023. MOSS: Training conversational language models from synthetic data.
Anaïs Tack, Ekaterina Kochmar, Zheng Yuan, Serge Bibauw, and Chris Piech. 2023. The BEA 2023 shared task on generating AI teacher responses in educational dialogues. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 785–795, Toronto, Canada. Association for Computational Linguistics. | 2308.02773#33 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 35 | Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cris- tian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hos- seini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi | 2308.02773#35 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 36 | Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, An- gela Fan, Melanie Kambadur, Sharan Narang, Aure- lien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288. | 2308.02773#36 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 37 | Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In The Tenth
International Conference on Learning Representations.
Hongyang Yang, Xiao-Yang Liu, and Chris Wang. 2023. Fingpt: Open-source financial large language models. ArXiv, abs/2306.06031.
Ge Zhang, Yemin Shi, Ruibo Liu, Ruibin Yuan, Yizhi Li, Siwei Dong, Yu Shu, Zhaoqun Li, Zekun Wang, Chenghua Lin, Wenhao Huang, and Jie Fu. 2023a. Chinese open instruction generalist: A preliminary release.
Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhi- hong Chen, Jianquan Li, Guimin Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, Xiang Wan, Benyou Wang, and Haizhou Li. 2023b. Huatuogpt, towards taming language model to be a doctor. ArXiv, abs/2305.15075. | 2308.02773#37 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 38 | Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023a. Lima: Less is more for alignment.
Jie Zhou, Pei Ke, Xipeng Qiu, Minlie Huang, and Junping Zhang. 2023b. ChatGPT: Potential, prospects, and limitations. Frontiers of Information Technology & Electronic Engineering, pages 1–6.
Tool Skill Retrieval-Augmented Open QA Fine-grained Essay Assessment Psychology-based Emotional Support Socratic Teaching Retrieval Self-check General Psychology Socrates True True False False True True False False True False False False False True True/False True/False False False False False
Table 2: Illustration of system prompt design.
# Chinese
Translate to English
# Psychology-based Emotional Support
User: Hello, I feel quite stressed recently. | 2308.02773#38 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 39 | # Chinese
Translate to English
# Psychology-based Emotional Support
User: Hello, I feel quite stressed recently.
EduChat: 87% As EPR EE AG FES 5 LRA E At a] a aG TEA FE, BEI AB TY WFR ER 6 | 2308.02773#39 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 40 | EduChat: 87% As EPR EE AG FES 5 LRA E At a] a aG TEA FE, BEI AB TY WFR ER 6
EduChat: Hello, I can understand your feelings. Stress is an emotion we often encounter in life, but if we are under high pressure for a long time, it will have negative effects on our physical and mental health. Can you tell me some of the reasons for your increased stress?
User: I've been studying hard recently, but my grades aren't improving. EduChat: Academic stress is indeed a common source of stress. Improving grades requires time and effort, but if you feel a lot of pressure, it may be because you have high expectations for yourself, or because you feel that your efforts are not being rewarded. You can try to adjust your mindset, don't be too demanding of yourself, and believe that your efforts will be rewarded. In addition, you can try some relaxation methods, such as sports, listening to music, chatting with friends, etc., to relieve stress.
User: Okay, thank you
EduChat: You're welcome, I hope my suggestions can help you. If you have any other questions or need further help, you can find me at any time.
# Multi-grained Essay Assessment
User: First Encounter with Fireflies | 2308.02773#40 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 42 | POU PSUEML IT, MRT STE CEI, ZAR 3, MATA, UR. 23. " ae Vee T RAF. DEEN, TEP OP aE âFM AME aia] BEAM Hh, WOES HECK LE RHE, Ti Bit Ae TBR te TESS BATS Bh. ti PBA OPiS ees BOK SURSIT 2 AGAR AACN HEF MET ee, B Melee BH Bale Ks, fiâ Bota te is i BH ae Kc Ba Wai Fi ne ke Be BIR BETIS een MEK aR, BASHA TER, BIS REET. RAT A, WUT LAT, BAM AS at AE o BEAR Hae BALIN 8 7 TIS ReR OFey TT a
# EduChat:
+ AE: 7/8 AL CASA âSENN He Hâ RRIF. FOR, SRR. SOC HT RT RAN PEAS ESC PS A AS TEE MTT A H H8e AD >, Be SCREAMS | 2308.02773#42 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 43 | Today, my mother said she would take me to see the fireflies at Liu Hui Pavilion after work. I looked forward to it all day, and finally, it was night. As we walked into Zijinshan Road, the air was unusually moist and fresh. The trees on both sides of the road were tall and dense, and their leaves were rustling pleasantly. The road was quiet, and occasionally, we could see people leisurely strolling and others sweating heavily from their night runs. Suddenly, a motorcycle whizzed past, breaking the tranquility of the night. Cicadas, frogs, and cuckoos composed the summer choir, playing the moving melody of nature. As we turned into Liu Hui Pavilion, it was pitch black in front of us - the fireflies had gathered. Not far away, we could hear the sounds of children's surprise, excitement, and laughter. We walked a short distance with our flashlight and saw something twinkling in the bushes, appearing and disappearing. "Mom! I see the fireflies!" I was extremely excited. Under the moonlight, some fireflies slowly rose from the bushes, floating mid-air, turning into "little twinkling stars." At that | 2308.02773#43 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 44 | I was extremely excited. Under the moonlight, some fireflies slowly rose from the bushes, floating mid-air, turning into "little twinkling stars." At that moment, my mother said, "A firefly has landed on your head!" Just then, another firefly flew from mid-air and landed on my hand. [held my breath, held my hand steady, and stood still, until this little elf flew away. I watched for a long time until my mother reminded me that it was time to go home. I reluctantly said goodbye to the fireflies. Unexpectedly, the fireflies gave me a surprise. On the way back, there were often two or three fireflies, like lantern bearers leading us along the roadside. When we reached the Post Office Museum, a group of fireflies were dancing in the southern bushes, as if to say goodbye to me. | 2308.02773#44 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 45 | This firefly-watching experience made my night both wonderfull and unforgettable. Please kindly rate the composition above.
# EduChat:
+ Content Evaluation: 7/8 This article revolves around the theme of "Beautiful Fireflies". The content of the essay is specific, clear, and vividly portrays details, expressing a wealth of emotions that are sincere and touching. There are numerous wonderful sentences throughout the text that add a literary charm to the article.
+ Expression Evaluation: 8/8 The fluidity of this article is high, demonstrating the young author's logical thinking ability. It has clear and accurate sentences, showing excellent expressive ability.
+ Highlight Sentence: Cicadas, frogs, and cuckoos formed a choir of summer, singing the touching melody of nature. | 2308.02773#45 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 46 | + Highlight Sentence: Cicadas, frogs, and cuckoos formed a choir of summer, singing the touching melody of nature.
+ Paragraph Evaluation: Second Paragraph: The author unfolds the paragraph around the firefly viewing ceremony at Liu Hui Xie. It is vivid and smooth, with clear writing ideas. Third Paragraph: The paragraph is too long, the topics within the same paragraph are not unified enough, it could be divided into multiple paragraphs. The excellent description of details within the paragraph makes the narrative vivid and real. Fourth Paragraph: The concluding paragraph revisits the beautiful fireflies, echoing the theme. It is vivid, smooth, and has clear writing ideas. | 2308.02773#46 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02773 | 47 | + Overall Evaluation: This article is excellent. The essay revolves around "Beautiful Fireflies". The text closely follows the topic, with unified and complete content. In writing, more attention should be paid to topic review issues to ensure that the text is closely related to the theme and the text serves the center. From the perspective of writing objects, this is a descriptive article, which requires focusing on the description method of scenery, whether it is rich and real. From a methodological perspective, this is a realistic article that strives for rich descriptions that conform to narrative logic and are lively and vivid.
Figure 5: Cases of psychology-based emotional support and fine-grained essay assessment. | 2308.02773#47 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | [
{
"id": "2302.13971"
},
{
"id": "2107.03374"
},
{
"id": "2303.14742"
},
{
"id": "2305.08322"
},
{
"id": "1908.10084"
}
] |
2308.02151 | 0 | arXiv:2308.02151v1 [cs.CL] 4 Aug 2023
# RETROFORMER: RETROSPECTIVE LARGE LANGUAGE AGENTS WITH POLICY GRADIENT OPTIMIZATION
# Weiran Yao† Shelby Heinecke† Juan Carlos Niebles† Zhiwei Liu† Yihao Feng† Le Xue† Rithesh Murthy† Zeyuan Chen† Jianguo Zhang† Devansh Arpit† Ran Xu† Phil Mui‡ Huan Wang† Caiming Xiong† Silvio Savarese†
# †Salesforce Research ‡CTO Office, Salesforce. Corresponding Authors: {huan.wang, cxiong, ssavarese}@salesforce.com
# ABSTRACT | 2308.02151#0 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02151 | 1 | # ABSTRACT
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the lan- guage agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably out- performs baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
# INTRODUCTION | 2308.02151#1 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
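The Retroformer abstract in the record above describes an agent loop in which a retrospective model turns environment feedback into a revised prompt, and is later fine-tuned with policy gradient on the resulting rewards. The sketch below is a schematic illustration only: `run_agent`, `environment_reward`, and `retrospect` are hypothetical stubs, and the actual system fine-tunes a separate retrospective language model rather than using the fixed template shown here.

```python
# Schematic retrospective-agent loop (all components are illustrative stubs).
from dataclasses import dataclass, field

@dataclass
class Episode:
    prompt: str
    actions: list[str] = field(default_factory=list)
    reward: float = 0.0
    reflection: str = ""

def run_agent(prompt: str) -> list[str]:
    """Stand-in for the frozen actor LLM producing a trajectory of actions."""
    return [f"action derived from: {prompt[:40]}..."]

def environment_reward(actions: list[str]) -> float:
    """Stand-in for the task environment returning a scalar reward."""
    return 0.0  # e.g., 1.0 on task success, 0.0 on failure

def retrospect(episode: Episode) -> str:
    """Stand-in for the retrospective model summarizing the failure root cause."""
    return ("Previous attempt failed. Plan: decompose the task and verify each "
            "intermediate result before acting.")

def reinforce_loop(task: str, max_trials: int = 3) -> list[Episode]:
    """Retry the task, folding each reflection back into the next prompt.

    The (reflection, reward-improvement) pairs collected here are what a
    policy-gradient step would later use to fine-tune the retrospective model.
    """
    history: list[Episode] = []
    prompt = task
    for _ in range(max_trials):
        ep = Episode(prompt=prompt)
        ep.actions = run_agent(prompt)
        ep.reward = environment_reward(ep.actions)
        history.append(ep)
        if ep.reward >= 1.0:
            break
        ep.reflection = retrospect(ep)
        prompt = f"{task}\n\nReflection from last failed attempt:\n{ep.reflection}"
    return history

if __name__ == "__main__":
    for i, ep in enumerate(reinforce_loop("Find the author of 'Walden'.")):
        print(i, ep.reward, ep.reflection[:60])
```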
2308.02490 | 1 | # MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
# Weihao Yu1∗ Zhengyuan Yang2∗ Linjie Li2 Jianfeng Wang2 Kevin Lin2 Zicheng Liu2 Xinchao Wang1† Lijuan Wang2†
# 1National University of Singapore 2Microsoft Azure AI
# [email protected] {zhengyang,lindsey.li,jianfw,keli,zliu,lijuanw}@microsoft.com
# Abstract | 2308.02490#1 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 2 | # INTRODUCTION
Recently, we have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing tasks on their own, ultimately in the service of a goal, rather than simply responding to queries from human users. Prominent studies, including ReAct (Yao et al., 2023), Toolformer (Schick et al., 2023), Hug- gingGPT (Shen et al., 2023), generative agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023), have suc- cessfully showcased the viability of creating autonomous decision-making agents by leveraging the capabilities of LLMs. These approaches use LLMs to generate text-based outputs and actions that can be further employed for making API calls and executing operations within a given environment. | 2308.02151#2 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 2 | # Abstract
We propose MM-Vet1, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown var- ious intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the compli- cated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capa- bilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
# Introduction | 2308.02490#2 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 3 | Given the immense scale of LLMs with an extensive parameter count, the behaviors of most existing language agents, however, are not optimized or aligned with environment reward functions. An exception is a very recent language agent architecture, namely Reflexion (Shinn et al., 2023), and several other related works, e.g., Self-Refine (Madaan et al., 2023b) and Generative Agent (Park et al., 2023), which use verbal feedback, namely self-reflection, to help agents learn from prior failures. These reflective agents convert binary or scalar reward from the environment into verbal feedback in the form of a textual summary, which is then added as additional context to the prompt for the language agent. The self-reflection feedback acts as a semantic signal by providing the agent
with a concrete direction to improve upon, helping it learn from prior mistakes and prevent repetitive errors to perform better in the next attempt. | 2308.02151#3 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 3 | # Introduction
The breakthroughs in large language models (LLMs) [11, 59, 20, 5, 73, 36] bring generalist AI models that can solve a wide range of complicated natural language tasks, many approaching the human-expert-level performance [59, 13]. Large multimodal models (LMMs) aim to achieve even stronger general intelligence via extending LLMs with multimodal inputs. Since more than 80% of our human being's perception, learning, cognition, and activities are mediated through vision [65], it is natural to start the exploration by equipping LLMs with "eyes." One main thread of LMM works, represented by Frozen [75], Flamingo [4], PaLM-E [25], GPT-4 [59], extends LLMs with the visual understanding capability via end-to-end tuning. There also exists the exploration [83, 89, 85, 70, 30] on the modular combination of LLMs and image-to-text vision-language models. Recently, thanks to the open-sourcing of powerful LLMs like LLaMA [73], more open-sourced LMMs are built, including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], and many more [33, 52, 87]. These studies showcase the intriguing ability to solve various complicated | 2308.02490#3 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 4 | with a concrete direction to improve upon, helping it learn from prior mistakes and prevent repetitive errors to perform better in the next attempt.
Although the self-reflection operation enables iterative refinement, generating useful reflective feedback from a pre-trained, frozen LLM is challenging, as showcased in Fig. 1, since it requires the LLM to have a good understanding of where the agent made mistakes in a specific environment, i.e., the credit assignment problem (Sutton & Barto, 2018), as well as the ability to generate a summary containing actionable insights for improvement. The verbal reinforcement cannot be optimal if the frozen language model has not been properly fine-tuned to specialize in credit assignment problems for the tasks in given environments. Furthermore, the existing language agents do not reason and plan in ways that are compatible with differentiable, gradient-based learning from rewards by exploiting the existing abundant reinforcement learning techniques. To address these limitations, this paper introduces Retroformer, a principled framework for reinforcing language agents by learning a plug-in retrospective model, which automatically refines the language agent prompts from environment feedback through policy optimization. Specifically, our proposed agent architecture can learn from arbitrary reward information across multiple environments and tasks, for iteratively fine-tuning a pre-trained language model, which refines the language agent prompts by reflecting on failed attempts and assigning credits of actions taken by the agent on future rewards.
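The policy-gradient view of this setup can be sketched with a simple REINFORCE-style update: sample a reflection from the retrospective model, let the frozen actor LLM retry with that reflection in its prompt, and weight the log-likelihood of the sampled reflection by the resulting episode return. The base model, the `run_actor_episode` helper, and the use of plain REINFORCE instead of a PPO-style objective are assumptions made for brevity, not the exact training recipe.

```python
# A hedged REINFORCE-style sketch of tuning a retrospective model from episode
# returns. The base model, actor interface, and reward definition are illustrative
# assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stands in for the retrospective LM
retro = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.Adam(retro.parameters(), lr=1e-5)

def run_actor_episode(reflection: str) -> float:
    """Frozen actor LLM retries the task with `reflection` in its prompt; returns the reward."""
    raise NotImplementedError

def reinforce_step(failure_log: str, baseline: float = 0.0) -> float:
    # 1) Sample a reflection (root cause + plan) conditioned on the failed trial.
    inp = tok(failure_log, return_tensors="pt")
    gen = retro.generate(**inp, max_new_tokens=64, do_sample=True,
                         pad_token_id=tok.eos_token_id)
    prompt_len = inp["input_ids"].shape[1]
    reflection = tok.decode(gen[0, prompt_len:], skip_special_tokens=True)

    # 2) Reward: how well the actor does when this reflection is added to its prompt.
    advantage = run_actor_episode(reflection) - baseline

    # 3) REINFORCE: scale the NLL of the sampled reflection tokens by the advantage.
    labels = gen.clone()
    labels[:, :prompt_len] = -100                      # ignore prompt tokens in the loss
    loss = advantage * retro(input_ids=gen, labels=labels).loss
    opt.zero_grad(); loss.backward(); opt.step()
    return advantage
```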
# 1. Task instruction | 2308.02151#4 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 4 | *Equal contribution. †Corresponding authors. 1Short for "Multimodal Veterinarian."
Panels (a)-(f): (a) COCO caption, GT: "The man at bat readies to swing at the pitch while the umpire looks on." Required capabilities: recognition, language generation. (b) TextVQA, Q: What is the largest denomination on the table? GT: 500. Required capabilities: recognition, OCR. (c) VQA, Q: Is the boy happy? GT: Yes. Required capability: recognition. (d) Q: What will the girl on the right write on the board? GT: 14. Required capabilities: recognition, spatial awareness, OCR, math. (e) Q: Where is this photo taken? GT: Qatar. Required capabilities: recognition, knowledge. (f) Q: Can you explain this meme? (meme text: "Me: I'll do it at 8 / Time: 8.05 / Me: looks like I gotta wait till 9 now") GT: This meme is a humorous take on procrastination and the tendency to delay tasks until a specific time ... Required capabilities: OCR, recognition, knowledge, language generation.
Figure 1: Required capabilities of different benchmarks. Different from conventional VL benchmarks, which only require one or two capabilities, MM-Vet focuses on the integration of different core VL capabilities, including recognition, OCR, knowledge, language generation, spatial awareness, and math. | 2308.02490#4 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 5 | # 1. Task instruction
Lollipop Chainsaw featured Juliet Starling, who was voiced by a Canadian-American actress who has done voice roles for what Teen Titans spinoff series?
2. Action sequences in prior trial: Action 1: Search[Juliet Starling]; Action 2: Search[Lollipop Chainsaw]; Action 3: Search[Tara Strong]; Action 4: Finish[Teen Titans and Teen Titans Go!]. 4. Action sequences in next trial: Action 1: Search[Lollipop Chainsaw]; Action 2: Search[Tara Strong]; Action 3: Finish[Teen Titans, Teen Titans Go!]. 3. Verbal feedback (self-reflection), added to the agent prompt:
I should have searched for Lollipop Chainsaw first and looked up the Canadian-American actress who voiced Juliet Starling afterwards. I also should have looked up Tara Strong's filmography and searched for any voice roles she did specifically for Teen Titans or Teen Titans Go! | 2308.02151#5 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02151 | 6 | Figure 1: An example of uninformative self-reflections from a frozen LLM. The root cause of failure in the prior trial is that the agent should have responded with only the spinoff series "Teen Titans Go!", not "Teen Titans", as the answer. The agent forgot its goal during a chain of thought, actions and lengthy observations. The verbal feedback from the frozen LLM, however, only rephrases the prior action sequences as the proposed new plan, resulting in the same, incorrect actions in the next trial.
We conduct experiments on open-source simulation and real-world environments including HotPotQA (Yang et al., 2018), which consists of search-based question-answering tasks that test the tool-use abilities of a web agent which needs to call Wikipedia APIs in multiple steps to answer a question. We observe that Retroformer agents are faster learners compared with Reflexion, which does not use gradients for reasoning and planning, and better decision-makers and reasoners. More concretely, Retroformer agents improve the success rate of search-based question-answering tasks in HotPotQA (Yang et al., 2018) by 18% in 4 attempts, which demonstrates the effectiveness of gradient-based reasoning and planning for tool use under a large state-action space.
To summarize, our contributions are the following: | 2308.02151#6 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 6 | Despite the promising qualitative results on LMMs' capabilities, it remains unclear how to systematically evaluate those showcased complicated multimodal tasks and what the relationships among evaluated tasks are, which is the first step in developing a quantitative evaluation benchmark. As shown in Figure 1, existing vision-language benchmarks [6, 17, 72] focus on simple Vision-Language (VL) tasks that require one or two specific capabilities, such as recognition, language generation, or OCR, but fall short in benchmarking more complicated tasks. Alternatively, we examine the arbitrary integration of core VL capabilities for complicated tasks, with the insight that the intriguing ability to solve complicated multimodal tasks can be achieved by a generalist model mastering and integrating different core capabilities. Following this insight, we propose a new benchmark for evaluating LMMs, namely MM-Vet. MM-Vet defines six core VL capabilities, including recognition, OCR, knowledge, language generation, spatial awareness, and math, which integrate to solve various complicated multimodal tasks. MM-Vet contains 16 tasks for quantitative evaluation. For example, in Figure 1(d), answering the question "What will the girl on the right write on the board?" in MM-Vet requires recognizing the genders of the three kids, locating the queried girl spatially, recognizing the scene text written by the girl, and finally calculating the result. | 2308.02490#6 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 7 | To summarize, our contributions are the following:
• The paper introduces Retroformer, which iteratively refines the prompts given to large language agents based on environmental feedback to improve learning speed and task completion. We take a policy gradient approach with the Actor LLM being part of the environment, allowing learning from a wide range of reward signals for diverse tasks and environments.
• The proposed method focuses on enhancing the retrospective model in the language agent architecture, without accessing the Actor LLM parameters or needing to propagate gradients through it. The agnostic nature of Retroformer makes it a flexible plug-in module for various types of cloud-based LLMs, such as GPT or Bard.
# 2 RELATED WORK | 2308.02151#7 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 7 | Other than the evaluation category definition, the evaluation metrics are another challenge in benchmark development, given the diverse answer styles and question types. Specifically: (1) The desired outputs in different multimodal tasks have diverse formats, e.g., Figure 1(d)'s math problem can be
answered by a single word, while outputs for the essay writing question are hundreds of words long; (2) The core aspect to evaluate in different tasks varies, e.g., text generation focuses more on text quality, while recognition can be considered correct as long as the key concept is recognized. Most integrated tasks would require comprehensive evaluations from multiple dimensions. Inspired by recent NLP studies [19, 53, 28] that use LLMs for model evaluation, we propose an LLM-based evaluator as the evaluation metric for open-ended model outputs. As shown in Table 1, we prompt GPT-4 [59] with few-shot evaluation prompts to obtain an evaluation score ranging from 0 to 1. Instead of manually defining the possible answer styles and question types, we include different sample types as few-shot examples and let LLMs infer the scoring criteria automatically. Such metric design eases the future extension to more question types, such as box localization [16, 82, 77]. | 2308.02490#7 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 8 | # 2 RELATED WORK
Autonomous Language Agents We summarize in Table 1 the recent language agent literature related to our work from five perspectives and differentiate our method from them. The completion of a complex task typically involves numerous stages. An agent must possess knowledge of these stages and plan accordingly. Chain-of-Thought or CoT (Wei et al., 2022) is the pioneering work that prompts the agent to decompose challenging reasoning tasks into smaller, more manageable steps. ReAct (Yao et al., 2023), on the other hand, exploits this reasoning and acting proficiency within the LLM to encourage interaction with the environment (e.g., using the Wikipedia search API) by mapping observations to the generation of reasoning and action traces or API calls in natural language. This agent architecture has spawned various applications, such as HuggingGPT (Shen et al., 2023), generative agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023).
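A ReAct-style agent loop of the kind described above can be sketched as follows; the action grammar (Search[...]/Finish[...]) follows the examples in Figure 1, while the `call_llm` and `wiki_search` helpers are assumed stand-ins for a chat-completion client and a Wikipedia lookup, not a specific library API.

```python
# Hedged sketch of a ReAct-style loop: the LLM emits thought/action text, actions
# such as Search[...] are mapped to tool calls, and Finish[...] ends the episode.
# `call_llm` and `wiki_search` are assumed stand-ins.
import re

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion client that continues the agent prompt."""
    raise NotImplementedError

def wiki_search(query: str) -> str:
    """Stand-in for a Wikipedia API lookup returning a short text observation."""
    raise NotImplementedError

def react_episode(question: str, max_steps: int = 6) -> str:
    prompt = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        step_text = call_llm(prompt + f"Thought {step}:")
        action = re.search(r"(Search|Finish)\[(.*?)\]", step_text)
        if action is None:
            break                                   # the model produced no parsable action
        verb, arg = action.group(1), action.group(2)
        if verb == "Finish":
            return arg                              # final answer, e.g., "Teen Titans Go!"
        observation = wiki_search(arg)
        prompt += f"Thought {step}: {step_text}\nObservation {step}: {observation}\n"
    return ""                                       # no answer within the step budget
```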
Table 1: Related work on large language agents. | 2308.02151#8 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 8 | MM-Vet's evaluation category and metric designs allow users to obtain capability insights for different LMMs. Such model analyses are more informative than a single overall ranking, which highly depends on the dataset sample composition and might be biased. We evaluate two sets of multimodal systems, i.e., the end-to-end tuned LMMs including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], etc., and the LLM-tool-using systems [85, 70, 30, 38] such as MM-ReAct [85]. Despite not knowing model details, we also evaluate industry solutions such as Bard [34]. We first discuss the capability analyses of these two system paradigms and the representative models. We then dive deeper into the open-sourced LMMs and examine how the training data, vision encoder, and LLM selection influence the performance on different capabilities.
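One simple way to picture such capability-level insights is to average the per-sample evaluator scores over every sample tagged with a given capability, as in the small sketch below; the sample format, tag names, and scores are illustrative assumptions about how such a report could be computed, not the benchmark's exact tooling.

```python
# Hedged sketch: aggregate per-sample evaluator scores into per-capability scores
# by averaging over all samples tagged with each capability. The sample format and
# the scores shown are illustrative assumptions.
from collections import defaultdict

samples = [
    {"capabilities": {"recognition", "ocr"}, "score": 0.8},
    {"capabilities": {"recognition", "spatial", "ocr", "math"}, "score": 0.0},
    {"capabilities": {"recognition", "knowledge"}, "score": 1.0},
]

def capability_report(samples):
    totals, counts = defaultdict(float), defaultdict(int)
    for sample in samples:
        for cap in sample["capabilities"]:
            totals[cap] += sample["score"]
            counts[cap] += 1
    return {cap: round(totals[cap] / counts[cap], 3) for cap in totals}

print(capability_report(samples))  # e.g., {'recognition': 0.6, 'ocr': 0.4, ...}
```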
Our contributions are summarized as follows. | 2308.02490#8 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 9 | Table 1: Related work on large language agents.
The table compares six approaches, CoT (Wei et al., 2022), ReAct (Yao et al., 2023), Self-refine (Madaan et al., 2023b), RAP (Hao et al., 2023), Reflexion (Shinn et al., 2023), and Retroformer (our method), along six dimensions: gradient learning, arbitrary reward, iterative refinement, hidden constraints, decision making, and memory. | 2308.02151#9 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 9 | Our contributions are summarized as follows.
We propose MM-Vet to evaluate LMMs' ability on complicated multimodal tasks. MM-Vet defines 16 emergent tasks of interest, integrated from the six defined core VL capabilities. • We propose an LLM-based evaluator for open-ended outputs of LMMs, which unifies the evaluation across different answer styles and question types. The evaluation metrics ensure the thorough evaluation of both the factual correctness and text quality of the responses. • We benchmark representative LMMs on MM-Vet, revealing the relative strengths and
weaknesses of different system paradigms and models, as summarized in Section 4.5.
# 2 Related work | 2308.02490#9 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 10 | However, these approaches fail to learn from valuable feedback, such as environment rewards, to enhance the agent's behaviors, resulting in performances that are solely dependent on the quality of the pre-trained LLM. Self-refine (Madaan et al., 2023a) addresses this limitation by employing a single LLM as a generator, refiner, and provider of feedback, allowing for iterative refinement of outputs. However, it is not specifically tailored for real-world task-based interaction with the environment. On the other hand, RAP (Hao et al., 2023) repurposes the LLM to function as both a world model and a reasoning agent. It incorporates Monte Carlo Tree Search for strategic exploration within the extensive realm of reasoning with environment rewards. This approach enables effective navigation and decision-making in complex domains. Recently, Shinn et al. (2023) present Reflexion, a framework that equips agents with dynamic memory and self-reflection capabilities, enhancing their reasoning skills. Self-reflection plays a pivotal role, allowing autonomous agents to iteratively refine past actions, make improvements, and prevent repetitive errors. | 2308.02151#10 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
2308.02490 | 10 | weaknesses of different system paradigms and models, as summarized in Section 4.5.
# 2 Related work
Multimodal models. Vision-language models [17, 35, 56, 18, 48, 40, 79, 76, 82, 29] approach multimodal intelligence of jointly understanding and generating vision and language signals. Inspired by the impressive quality and genericity in recent large language models (LLMs) [12, 59, 20, 73], researchers explore large multimodal models (LMMs) that seamlessly integrate different vision-language capabilities to solve complicated multimodal tasks. In approaching such multimodal generalist systems, one direction is to extend LLMs with the multi-sensory ability, such as pioneer works Frozen [75], Flamingo [4], PaLM-E [25], GPT-4 [59]. Recent open-sourced LLMs [91, 73, 64] also facilitate various research studies including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], and so on [33, 52, 87]. On the other hand, multimodal agents [85, 70, 38, 30] explore chaining different vision tools with LLMs [12, 59] to achieve integrated vision-language capabilities. | 2308.02490#10 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02151 | 11 | Transformer Reinforcement Learning. Reinforcement learning with a provided reward function or a reward-labeled dataset, commonly referred to as RLHF, has become a standard practice within the LLM fine-tuning pipeline. These endeavors have convincingly demonstrated the efficacy of RL as a means to guide language models towards desired behaviors that align with predefined reward functions across various domains, including machine translation, summarization, and generating favorable reviews. Among the prevalent transformer RL methods are online RL algorithms such as Proximal Policy Optimization or PPO (Schulman et al., 2017), and offline RL techniques such as Implicit Language Q-Learning or ILQL (Snell et al., 2022) and Direct Preference Optimization or DPO (Rafailov et al., 2023). These methods have been implemented in the TRL/TRLX (von Werra et al., 2020) distributed training framework (a minimal DPO-loss sketch follows this entry).
# 3 CHALLENGES
Although LLMs are not designed to handle tool use or take actions, it has been observed (Gravitas, 2023; Nakajima, 2023; Chase, 2023) that, empirically, LLMs work surprisingly well in text-rich environments, especially when the actions and states are accurately described using natural language. However, there are still plenty of challenges in applying LLM-based agents. Here we list several below. | 2308.02151#11 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | [
{
"id": "2303.17580"
},
{
"id": "2305.18290"
},
{
"id": "2302.07867"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2305.14992"
},
{
"id": "2206.11871"
}
] |
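To make the preference-optimization method named in the chunk above concrete, the following is a minimal PyTorch sketch of the DPO objective (Rafailov et al., 2023). The function name, the per-sequence log-probability inputs, and the toy usage are illustrative assumptions, not code from the cited TRL/TRLX framework.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss (Rafailov et al., 2023).

    Inputs are per-sequence log-probabilities (token log-probs summed over
    the response) of the preferred ("chosen") and dispreferred ("rejected")
    responses under the trained policy and a frozen reference model.
    """
    # Implicit reward of each response: beta-scaled log-ratio between
    # the policy and the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry style objective: push the chosen reward above the
    # rejected reward for every preference pair, then average.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random per-sequence log-probs for a batch of 4 pairs.
if __name__ == "__main__":
    lp = torch.randn(4)
    print(dpo_loss(lp, lp - 1.0, lp - 0.5, lp - 0.5))
```

Unlike PPO, this objective needs no online rollouts or explicit reward model: the reward signal is implicit in the log-probability ratios over a fixed preference dataset, which is why DPO is grouped with the offline techniques above.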
2308.02490 | 11 | VL benchmarks. Classic VL benchmarks focus on specific capabilities of interest, such as visual recognition [35], image description [17, 3], as well as other benchmarks for specialized capabilities such as scene text understanding [72, 71, 86], commonsense reasoning [88], outside knowledge [58]. The recent development of generalist LMMs poses a strong need for modernized VL benchmarks, which contain complicated multimodal tasks that require integrated VL capabilities.
Our MM-Vet is most related to the concurrent evaluation studies [27, 54, 45, 80, 50] such as MME and MMBench, which design comprehensive evaluation samples to facilitate the LMM evaluation. One major difference is that MM-Vet defines and studies the integrated VL capabilities, allowing the evaluation to provide insights beyond the overall model ranking.
LLM-based evaluation. MM-Vet adopts an open-ended LLM-based evaluator, allowing the evaluation across answer styles and question types without requiring binary or multiple answer choices. The technique of prompting LLMs for model evaluation is related to the explorations in NLP [19, 53, 28]. We show that the technique extends well to multimodal tasks, and presents a unified prompt to evaluate samples with different answer styles and question types (an illustrative evaluator sketch follows this entry).
| 2308.02490#11 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
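As an illustration of the LLM-based evaluator described in the chunk above, here is a minimal Python sketch that builds a scoring prompt and parses a 0.0-1.0 correctness score. The prompt wording and the `query_llm` callable are assumptions for illustration, not MM-Vet's released prompt or evaluation code.

```python
# Minimal sketch of an LLM-based evaluator for open-ended answers, in the
# spirit of MM-Vet's GPT-4 scoring. The prompt text and `query_llm` are
# illustrative placeholders, not MM-Vet's exact prompt or API.
from typing import Callable

def build_eval_prompt(question: str, ground_truth: str, prediction: str) -> str:
    # Ask the LLM to grade correctness on a continuous 0.0-1.0 scale so the
    # same prompt works for short answers, lists, and free-form explanations.
    return (
        "Compare the ground truth and the model prediction, then reply with "
        "a single correctness score between 0.0 and 1.0.\n"
        f"Question: {question}\n"
        f"Ground truth: {ground_truth}\n"
        f"Prediction: {prediction}\n"
        "Correctness score:"
    )

def score_sample(question: str, ground_truth: str, prediction: str,
                 query_llm: Callable[[str], str]) -> float:
    """`query_llm` is any callable that sends a prompt to an LLM
    (e.g. GPT-4) and returns its text completion."""
    reply = query_llm(build_eval_prompt(question, ground_truth, prediction))
    try:
        # Clamp to [0, 1] so a malformed reply cannot break score aggregation.
        return min(max(float(reply.strip()), 0.0), 1.0)
    except ValueError:
        return 0.0
```

In practice, such an evaluator prompt would also carry a handful of few-shot scoring examples so that partial credit is assigned consistently across question types and answer styles.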