doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1710.04087 | 6 | In summary, this paper makes the following main contributions:
• We present an unsupervised approach that reaches or outperforms state-of-the-art supervised approaches on several language pairs and on three different evaluation tasks, namely word translation, sentence translation retrieval, and cross-lingual word similarity. On a standard word translation retrieval benchmark, using 200k vocabularies, our method reaches 66.2% accuracy on English-Italian while the best supervised approach is at 63.7%.
• We introduce a cross-domain similarity adaptation to mitigate the so-called hubness problem (points tending to be nearest neighbors of many points in high-dimensional spaces). It is inspired by the self-tuning method from Zelnik-manor & Perona (2005), but adapted to our two-domain scenario in which we must consider a bi-partite graph for neighbors. This approach significantly improves the absolute performance, and outperforms the state of the art both in supervised and unsupervised setups on word-translation benchmarks.
• We propose an unsupervised criterion that is highly correlated with the quality of the mapping, which can be used both as a stopping criterion and to select the best hyper-parameters.
• We release high-quality dictionaries for 12 oriented language pairs, as well as the corresponding supervised and unsupervised word embeddings. | 1710.04087#6 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 6 |
# Discrete Event, Continuous Time RNNs
include time stamps as additional inputs; these inputs give deep learning models in principle all the necessary flexibility to handle time. However, the flexibility may simply be too great, in the same way that fully connected deep nets are too flexible to match convolutional net performance in vision tasks (Lecun et al., 1998). The architectural biases serve to constrain learning in a helpful manner. | 1710.04110#6 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 7 | • We release high-quality dictionaries for 12 oriented language pairs, as well as the corresponding supervised and unsupervised word embeddings.
• We demonstrate the effectiveness of our method using an example of a low-resource language pair where parallel corpora are not available (English-Esperanto), for which our method is particularly suited.
The paper is organized as follows. Section 2 describes our unsupervised approach with adversarial training and our refinement procedure. We then present our training procedure with unsupervised model selection in Section 3. We report in Section 4 our results on several cross-lingual tasks for several language pairs and compare our approach to supervised methods. Finally, we explain how our approach differs from recent related work on learning cross-lingual word embeddings.
# 2 MODEL
In this paper, we always assume that we have two sets of embeddings trained independently on monolingual data. Our work focuses on learning a mapping between the two sets such that translations are close in the shared space. Mikolov et al. (2013b) show that they can exploit the similarities of monolingual embedding spaces to learn such a mapping. For this purpose, they use a known dictionary of $n = 5000$ pairs of words $\{x_i, y_i\}_{i\in\{1,n\}}$, and learn a linear mapping $W$ between the source and the target space such that | 1710.04087#7 | Word Translation Without Parallel Data |
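The supervised baseline described in the preceding chunk fits a linear map W from a seed dictionary of word pairs. Below is a minimal sketch of that fit by ordinary least squares; it is illustrative only (the function name and the column-major layout of X and Y are assumptions, not the authors' code).

```python
import numpy as np

def fit_linear_mapping(X, Y):
    """Least-squares fit of W such that W @ X approximates Y.

    X, Y: (d, n) matrices whose columns are the embeddings of the
    n seed dictionary pairs (x_i, y_i).
    """
    # Solve min_W ||W X - Y||_F by solving X^T W^T = Y^T in the least-squares sense.
    Wt, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    return Wt.T

# Toy usage with random "embeddings": recover a known linear map exactly.
rng = np.random.default_rng(0)
d, n = 300, 5000
X = rng.standard_normal((d, n))
W_true = rng.standard_normal((d, d))
Y = W_true @ X
W = fit_linear_mapping(X, Y)
print(np.allclose(W @ X, Y, atol=1e-6))
```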
1710.04110 | 7 | Given the similarity between the spatial regularities incorporated into the convolutional net and the temporal regularities we described, it seems natural to use a convolutional architecture for sequences, essentially remapping time into space (Waibel et al., 1990; Lockett and Miikkulainen, 2009; Nguyen et al., 2016; Taylor et al., 2010; Kalchbrenner et al., 2014; Sainath et al., 2015; Zeng et al., 2016). In terms of the biases that we conjecture to be helpful, convolutional nets can check all the boxes, and some recent work has begun to investigate multiscale convolutional nets for time series to capture scale interactions (Cui et al., 2016). However, convolutional architectures poorly address the continuous nature of time and the potential wide range of time scales. Consider a domain such as network intrusion detection: event patterns of relevance can occur on a time scale of microseconds to weeks (Mukherjee et al., 1994; Palanivel and Duraiswamy, 2014). It is difficult to conceive how a convolutional architecture could accommodate this dynamic range. | 1710.04110#7 | Discrete Event, Continuous Time RNNs |
1710.04110 | 8 | RNN architectures have been proposed to address the multiscale nature of time series and to handle interactions of temporal scale, but these approaches have been focused on ordinal sequences and indexing is based on sequence position rather than chronological time. This work includes clockwork RNNs (Koutník et al., 2014), gated feedback RNNs (Chung et al., 2015), and hierarchical multiscale RNNs (Chung et al., 2016).
A wide range of probabilistic methods have been applied to event sequences, including hidden semi-Markov models and survival analysis (Kapoor et al., 2014, 2015; Zhang et al., 2016), temporal point processes (Dai et al., 2016; Du et al., 2015, 2016; Wang et al., 2016b), nonstationary bandits (Komiyama and Qin, 2014), and time-sensitive latent-factor models (Koren, 2010). All probabilistic methods properly treat chronological time as time, and therefore naturally incorporate temporal locality and position homogeneity biases. These methods also tend to permit a wide dynamic range of time scales. However, they are limited by strong generative assumptions. Our aim is to combine the strength of probabilistic methods (having an explicit theory of temporal dynamics) with the strength of deep learning (having the ability to discover representations).
# 2. Continuous-time recurrent networks | 1710.04110#8 | Discrete Event, Continuous Time RNNs |
1710.04087 | 9 | Figure 1: Toy illustration of the method. (A) There are two distributions of word embeddings, English words in red denoted by X and Italian words in blue denoted by Y, which we want to align/translate. Each dot represents a word in that space. The size of the dot is proportional to the frequency of the words in the training corpus of that language. (B) Using adversarial learning, we learn a rotation matrix W which roughly aligns the two distributions. The green stars are randomly selected words that are fed to the discriminator to determine whether the two word embeddings come from the same distribution. (C) The mapping W is further refined via Procrustes. This method uses frequent words aligned by the previous step as anchor points, and minimizes an energy function that corresponds to a spring system between anchor points. The refined mapping is then used to map all words in the dictionary. (D) Finally, we translate by using the mapping W and a distance metric, dubbed CSLS, that expands the space where there is high density of points (like the area around the word "cat"), so that "hubs" (like the word "cat") become less close to other word vectors than they would otherwise (compare to the same region in panel (A)). | 1710.04087#9 | Word Translation Without Parallel Data |
1710.04110 | 9 | # 2. Continuous-time recurrent networks
All dynamical event-sequence models must construct memories that encapsulate information from the past that is relevant for future prediction, action, or classification. This information may have a limited lifetime of utility, and stale information which is no longer relevant should be forgotten. LSTM (Hochreiter and Schmidhuber, 1997) was originally designed to operate without forgetting, but adding a mechanism of forgetting improved the architecture (Gers et al., 2000). The intrinsic dynamics of the newer GRU (Chung et al., 2014) architecture incorporate forgetting: storage of new information is balanced against the forgetting of old. In this section, we summarize the GRU architecture and we characterize its forgetting mechanism from a novel perspective that facilitates generalizing the architecture to handling sequences in continuous time. For exposition's sake, we present our approach in terms of the
| 1710.04110#9 | Discrete Event, Continuous Time RNNs |
1710.04087 | 10 | In practice, Mikolov et al. (2013b) obtained better results on the word translation task using a simple linear mapping, and did not observe any improvement when using more advanced strategies like multilayer neural networks. Xing et al. (2015) showed that these results are improved by enforcing an orthogonality constraint on W. In that case, the equation boils down to the Procrustes problem, which advantageously offers a closed form solution obtained from the singular value decomposition (SVD) of $YX^T$:

$$W^\star = \underset{W \in O_d(\mathbb{R})}{\operatorname{argmin}} \; \|WX - Y\|_\mathrm{F} = UV^T, \quad \text{with } U\Sigma V^T = \mathrm{SVD}(YX^T). \tag{2}$$
In this paper, we show how to learn this mapping W without cross-lingual supervision; an illustration of the approach is given in Fig. 1. First, we learn an initial proxy of W by using an adversarial criterion. Then, we use the words that match the best as anchor points for Procrustes. Finally, we improve performance over less frequent words by changing the metric of the space, which spreads out more of those points lying in dense regions. Next, we describe the details of each of these steps.
2.1 DOMAIN-ADVERSARIAL SETTING | 1710.04087#10 | Word Translation Without Parallel Data |
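As a companion to Eq. (2), here is a minimal numerical sketch of the closed-form Procrustes solution; the function name and the column-major layout of X and Y are illustrative assumptions, not the authors' code.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal Procrustes: argmin over orthogonal W of ||W X - Y||_F.

    X, Y: (d, n) matrices whose columns are paired source/target embeddings.
    Closed form (Eq. 2): W = U V^T, where U S V^T = SVD(Y X^T).
    """
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

# Usage: recover a random rotation from paired points.
rng = np.random.default_rng(0)
d, n = 300, 5000
X = rng.standard_normal((d, n))
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # a random orthogonal matrix
Y = Q @ X
print(np.allclose(procrustes(X, Y), Q, atol=1e-6))
```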
1710.04110 | 10 | Figure 1: A schematic of the GRU (left) and CT-GRU (right). Color coding of the elements matches the background color used in the tables presenting activation dynamics. For the CT-GRU, the large rectangle with segments represents a multiscale hidden representation. The intrinsic temporal decay of this representation, as well as the recurrent self-connections, is not depicted in the schematic.
GRU, but it could be cast in terms of LSTM just as well. There appears to be no functional difference between the two architectures with proper initialization (Jozefowicz et al., 2015).
# 2.1 Gated recurrent unit (GRU)
The most basic architecture using gated-recurrent units (GRUs) involves an input layer, a recurrent hidden GRU layer, and an output layer. A schematic of the GRU units is shown in the left panel of Figure 1. The reset gate, r, shunts the activation of the previous hidden state, h. The shunted state, in conjunction with the external input, x, is used to detect the presence of task-relevant events (q). The update gate, s, then determines what proportion of the old hidden state should be retained and what proportion of the detected event should be stored. Formally, given an external input $x_k$ at step k and the previous hidden state $h_{k-1}$, the GRU layer updates as follows: | 1710.04110#10 | Discrete Event, Continuous Time RNNs |
1710.04087 | 11 | 2.1 DOMAIN-ADVERSARIAL SETTING
In this section, we present our domain-adversarial approach for learning W without cross-lingual supervision. Let X = {x1, ..., xn} and Y = {y1, ..., ym} be two sets of n and m word embeddings coming from a source and a target language respectively. A model is trained to discriminate between elements randomly sampled from W X = {W x1, ..., W xn} and Y. We call this model the discriminator. W is trained to prevent the discriminator from making accurate predictions. As a result, this is a two-player game, where the discriminator aims at maximizing its ability to identify the origin of an embedding, and W aims at preventing the discriminator from doing so by making W X and Y as similar as possible. This approach is in line with the work of Ganin et al. (2016), who proposed to learn latent representations invariant to the input domain, where in our case, a domain is represented by a language (source or target).
Discriminator objective We refer to the discriminator parameters as $\theta_D$. We consider the probability $P_{\theta_D}(\mathrm{source} = 1 \mid z)$ that a vector z is the mapping of a source embedding (as opposed to a target embedding) according to the discriminator. The discriminator loss can be written as: | 1710.04087#11 | Word Translation Without Parallel Data |
1710.04110 | 11 |
1. Determine reset gate settings: $r_k \leftarrow \mathrm{logistic}(W^R x_k + U^R h_{k-1} + b^R)$
2. Detect relevant event signals: $q_k \leftarrow \tanh(W^Q x_k + U^Q (r_k \odot h_{k-1}) + b^Q)$
3. Determine update gate settings: $s_k \leftarrow \mathrm{logistic}(W^S x_k + U^S h_{k-1} + b^S)$
4. Update hidden state: $h_k \leftarrow (1 - s_k) \odot h_{k-1} + s_k \odot q_k$
where $W^*$, $U^*$, and $b^*$ are model parameters, $\odot$ denotes the Hadamard product, and $h_0 = 0$. Readers who are familiar with GRUs may notice that our depiction of GRUs in Figure 1 looks a bit different than the depiction in the originating article (Chung et al., 2014). Our intention is to highlight the fact that the "update" gate is actually making a decision about what to store in the memory (hence the notation s), and the "reset" gate is actually making a decision about what to retrieve from the memory (hence the notation r). The schematic in Figure 1 makes obvious the store and retrieval operations via the gate placement on the input to and output from the hidden state, h, respectively. | 1710.04110#11 | Discrete Event, Continuous Time RNNs |
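A small numpy sketch of GRU steps 1-4 above follows; the parameter dictionary and its key names are illustrative, and no claim is made about the authors' implementation.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_k, h_prev, p):
    """One GRU update following steps 1-4 above.

    p is a dict holding weight matrices W_R, U_R, W_Q, U_Q, W_S, U_S
    and bias vectors b_R, b_Q, b_S.
    """
    r_k = logistic(p["W_R"] @ x_k + p["U_R"] @ h_prev + p["b_R"])         # retrieval ("reset") gate
    q_k = np.tanh(p["W_Q"] @ x_k + p["U_Q"] @ (r_k * h_prev) + p["b_Q"])  # detected event signal
    s_k = logistic(p["W_S"] @ x_k + p["U_S"] @ h_prev + p["b_S"])         # storage ("update") gate
    return (1.0 - s_k) * h_prev + s_k * q_k                               # new hidden state h_k
```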
1710.04087 | 12 | $$\mathcal{L}_D(\theta_D \mid W) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_D}(\mathrm{source} = 1 \mid Wx_i) \;-\; \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_D}(\mathrm{source} = 0 \mid y_i). \tag{3}$$
Mapping objective In the unsupervised setting, W is now trained so that the discriminator is unable to accurately predict the embedding origins:
$$\mathcal{L}_W(W \mid \theta_D) = -\frac{1}{n}\sum_{i=1}^{n} \log P_{\theta_D}(\mathrm{source} = 0 \mid Wx_i) \;-\; \frac{1}{m}\sum_{i=1}^{m} \log P_{\theta_D}(\mathrm{source} = 1 \mid y_i). \tag{4}$$
Learning algorithm To train our model, we follow the standard training procedure of deep adversarial networks of Goodfellow et al. (2014). For every input sample, the discriminator and the mapping matrix W are trained successively with stochastic gradient updates to respectively minimize $\mathcal{L}_D$ and $\mathcal{L}_W$. The details of training are given in the next section.
2.2 REFINEMENT PROCEDURE | 1710.04087#12 | Word Translation Without Parallel Data |
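A minimal sketch of losses (3) and (4), computed from discriminator outputs; the variable names are illustrative, and the alternating SGD updates on $\theta_D$ and W described above are left out.

```python
import numpy as np

def adversarial_losses(p_src, p_tgt, eps=1e-12):
    """Monte-Carlo estimates of Eq. (3) and Eq. (4).

    p_src: discriminator probabilities P(source=1 | W x_i) for a batch of mapped source words.
    p_tgt: discriminator probabilities P(source=1 | y_i) for a batch of target words.
    """
    # Eq. (3): the discriminator should label mapped source words 1 and target words 0.
    loss_D = -np.mean(np.log(p_src + eps)) - np.mean(np.log(1.0 - p_tgt + eps))
    # Eq. (4): the mapping W is trained to make the discriminator predict the opposite.
    loss_W = -np.mean(np.log(1.0 - p_src + eps)) - np.mean(np.log(p_tgt + eps))
    return loss_D, loss_W
```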
1710.04110 | 12 | To incorporate time into the GRU, we observe that the storage operation essentially splits each new event, $q_k$, into a portion $s_k$ that is stored indefinitely and a portion $1 - s_k$ that
is stored for only an infinitesimally short period of time. Similarly, the retrieval operation reassembles a memory by taking a proportion, $r_k$, of a long-lasting memory (via the product $r_k \odot h_{k-1}$) and a complementary proportion, $1 - r_k$, of a very short-term memory, a memory so brief that it has decayed to 0. The retrieval operation is thus equivalent to computing the mixture $r_k \odot h_{k-1} + (1 - r_k) \odot 0$. | 1710.04110#12 | Discrete Event, Continuous Time RNNs |
1710.04087 | 13 | 2.2 REFINEMENT PROCEDURE
The matrix W obtained with adversarial training gives good performance (see Table 1), but the results are still not on par with the supervised approach. In fact, the adversarial approach tries to align all words irrespective of their frequencies. However, rare words have embeddings that are less updated and are more likely to appear in different contexts in each corpus, which makes them harder to align. Under the assumption that the mapping is linear, it is then better to infer the global mapping using only the most frequent words as anchors. Besides, the accuracy on the most frequent word pairs is high after adversarial training.
To refine our mapping, we build a synthetic parallel vocabulary using the W just learned with adversarial training. Specifically, we consider the most frequent words and retain only mutual nearest neighbors to ensure a high-quality dictionary. Subsequently, we apply the Procrustes solution in (2) on this generated dictionary. Considering the improved solution generated with the Procrustes algorithm, it is possible to generate a more accurate dictionary and apply this method iteratively, similarly to Artetxe et al. (2017). However, given that the synthetic dictionary obtained using adversarial training is already strong, we only observe small improvements when doing more than one iteration, i.e., the improvements on the word translation task are usually below 1%. | 1710.04087#13 | Word Translation Without Parallel Data |
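A sketch of the mutual-nearest-neighbor dictionary extraction used in the refinement step above; the function name, the row layout, and the frequency cutoff are illustrative assumptions.

```python
import numpy as np

def mutual_nn_dictionary(WX, Y, n_frequent=10000):
    """Synthetic dictionary from mutual nearest neighbors.

    WX: (n, d) mapped source embeddings with L2-normalized rows.
    Y:  (m, d) target embeddings with L2-normalized rows.
    Only the n_frequent most frequent words (assumed to be the first rows)
    are considered, as described in the refinement procedure.
    """
    S = WX[:n_frequent] @ Y[:n_frequent].T          # cosine similarities
    src2tgt = S.argmax(axis=1)                      # best target for each source word
    tgt2src = S.argmax(axis=0)                      # best source for each target word
    pairs = [(s, t) for s, t in enumerate(src2tgt) if tgt2src[t] == s]
    return pairs                                    # anchor pairs for the Procrustes solution in (2)
```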
1710.04110 | 13 | The essential idea of the model we will introduce, the CT-GRU, is to endow each hidden unit with multiple memory traces that span a range of time scales, in contrast to the GRU which can be conceived of as having just two time scales: one infinitely long and one infinitesimally short. We define time scale in the standard sense of a linear time-invariant system, operating according to the differential equation $dh/dt = -h/\tau$, where h is the memory, t is continuous time, and $\tau$ is a (nonnegative) time constant or time scale. These dynamics yield exponential decay, i.e., $h(t) = e^{-t/\tau} h(0)$, and $\tau$ is the time for the state to decay to a proportion $e^{-1} \approx 0.37$ of its initial level. The short and long time scales of the GRU correspond to the limits $\tau \to 0$ and $\tau \to \infty$, respectively.
# 2.2 Continuous-time gated recurrent unit (CT-GRU) | 1710.04110#13 | Discrete Event, Continuous Time RNNs |
1710.04087 | 14 | 2.3 CROSS-DOMAIN SIMILARITY LOCAL SCALING (CSLS)
In this subsection, our motivation is to produce reliable matching pairs between two languages: we want to improve the comparison metric such that the nearest neighbor of a source word, in the target language, is more likely to have as a nearest neighbor this particular source word.
Nearest neighbors are by nature asymmetric: y being a K-NN of x does not imply that x is a K-NN of y. In high-dimensional spaces (Radovanović et al., 2010), this leads to a phenomenon that is detrimental to matching pairs based on a nearest neighbor rule: some vectors, dubbed hubs, are with high probability nearest neighbors of many other points, while others (anti-hubs) are not nearest neighbors of any point. This problem has been observed in different areas, from matching image features in vision (Jegou et al., 2010) to translating words in text understanding applications (Dinu et al., 2015). Various solutions have been proposed to mitigate this issue, some being reminiscent of pre-processing already existing in spectral clustering algorithms (Zelnik-manor & Perona, 2005). | 1710.04087#14 | Word Translation Without Parallel Data |
1710.04110 | 14 | # 2.2 Continuous-time gated recurrent unit (CT-GRU)
We argued that the storage (or update) gate of the GRU decides how to distribute the memory of a new event across time scales, and the retrieval (or reset) gate decides how to collect information previously stored across time scales. Binding memory operations to a time scale is sensible for any intelligent agent because different activities require different memory durations. To use human cognition as an example, when you are told a phone number, you need remember it only for a few seconds to enter it in your phone; when making a mental shopping list, you need remember the items only until you get to the store; but when a colleague goes on sabbatical and returns a year later, you should still remember her name. Although individuals typically do not wish to forget, forgetting can be viewed as adaptive (Anderson and Milson, 1989): when information becomes stale or is no longer relevant, it only interferes with ongoing processing and clutters memory. Indeed, cognitive scientists have shown that when an attribute must be updated frequently in memory, its current value decays more rapidly (Altmann and Gray, 2002). This phenomenon is related to the benefit of distributed practice on human knowledge retention: when study is spaced versus massed in time, memories are more durable (Mozer et al., 2009). | 1710.04110#14 | Discrete Event, Continuous Time RNNs |
1710.04087 | 15 | However, most studies aiming at mitigating hubness consider a single feature distribution. In our case, we have two domains, one for each language. This particular case is taken into account by Dinu et al. (2015), who propose a pairing rule based on reverse ranks, and the inverted soft-max (ISF) by Smith et al. (2017), which we evaluate in our experimental section. These methods are not fully satisfactory because the similarity updates are different for the words of the source and target languages. Additionally, ISF requires cross-validating a parameter, whose estimation is noisy in an unsupervised setting where we do not have a direct cross-validation criterion.
In contrast, we consider a bi-partite neighborhood graph, in which each word of a given dictionary is connected to its K nearest neighbors in the other language. We denote by $\mathcal{N}_T(Wx_s)$ the neighborhood, on this bi-partite graph, associated with a mapped source word embedding $Wx_s$. All K elements of $\mathcal{N}_T(Wx_s)$ are words from the target language. Similarly we denote by $\mathcal{N}_S(y_t)$ the neighborhood associated with a word t of the target language. We consider the mean similarity of a source embedding $x_s$ to its target neighborhood as

$$r_T(Wx_s) = \frac{1}{K} \sum_{y_t \in \mathcal{N}_T(Wx_s)} \cos(Wx_s, y_t), \tag{5}$$ | 1710.04087#15 | Word Translation Without Parallel Data |
1710.04110 | 15 | Returning to the CT-GRU, our goal is to develop a model that, consistent with the GRU, stores each new event at a time scale deemed appropriate for it, and similarly retrieves information from an appropriate time scale. Thus, we wish to replace the GRU storage and retrieval gates with storage and retrieval scales, computed from the external input and the current hidden state. The scale is expressed in terms of a time constant.
k (the superscript S denotes 'storage'), each would require a separate trace. Because separate traces are not feasible, we propose instead a fixed set of traces with predefined time scales, and each to-be-stored event is distributed among the available traces. Specifically, we propose a fixed set of M traces with log-linear spaced time scales, $\tilde{T} \equiv \{\tilde\tau_1, \tilde\tau_2, \ldots, \tilde\tau_M\}$, and we approximate the storage of a single trace at scale $\tau^S_k$ with a mixture of traces from $\tilde{T}$. Of course, an exponential curve with an arbitrary decay rate $\tau^S_k$ cannot necessarily be modeled as a mixture of exponentials with predefined decay rates. However, we can attempt to ensure that the half life of the mixture matches the half life of
| 1710.04110#15 | Discrete Event, Continuous Time RNNs |
1710.04087 | 16 | $$r_T(Wx_s) = \frac{1}{K} \sum_{y_t \in \mathcal{N}_T(Wx_s)} \cos(Wx_s, y_t), \tag{5}$$
where cos(., .) is the cosine similarity. Likewise we denote by $r_S(y_t)$ the mean similarity of a target word $y_t$ to its neighborhood. These quantities are computed for all source and target word vectors with the efficient nearest neighbors implementation by Johnson et al. (2017). We use them to define a similarity measure CSLS(., .) between mapped source words and target words, as
$$\mathrm{CSLS}(Wx_s, y_t) = 2\cos(Wx_s, y_t) - r_T(Wx_s) - r_S(y_t). \tag{6}$$
Intuitively, this update increases the similarity associated with isolated word vectors. Conversely, it decreases the similarity of vectors lying in dense areas. Our experiments show that CSLS significantly increases the accuracy for word translation retrieval, while not requiring any parameter tuning.
# 3 TRAINING AND ARCHITECTURAL CHOICES
3.1 ARCHITECTURE | 1710.04087#16 | Word Translation Without Parallel Data |
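A compact sketch of Eqs. (5) and (6); the row layout, the function name, and K = 10 are illustrative choices rather than details taken from the text.

```python
import numpy as np

def csls_scores(WX, Y, k=10):
    """CSLS similarities between mapped source words and target words.

    WX: (n, d) mapped source embeddings with L2-normalized rows.
    Y:  (m, d) target embeddings with L2-normalized rows.
    Returns the (n, m) matrix of CSLS(W x_s, y_t) values from Eq. (6).
    """
    sims = WX @ Y.T                                   # cosine similarities
    r_T = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # Eq. (5): mean similarity to the K target neighbors
    r_S = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # mean similarity of each target to its K mapped-source neighbors
    return 2.0 * sims - r_T[:, None] - r_S[None, :]

# Word translation retrieval: for each source word, pick the target with the highest CSLS score.
# predictions = csls_scores(WX, Y).argmax(axis=1)
```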
1710.04110 | 16 |
Figure 2: (a) Half life for a range of time scales: true value (dashed black line) and mixture approximation (blue line). (b) Decay curves for time scales $\tau \in [10, 100]$ (solid lines) and the mixture approximation (dashed lines).
the target. Through experimentation, we have achieved a match with high fidelity when $s_{ki}$, the proportion of the to-be-stored signal allocated to fixed scale i, is:
$$s_{ki} = \frac{e^{-[\ln(\tilde\tau_i / \tau^S_k)]^2}}{\sum_j e^{-[\ln(\tilde\tau_j / \tau^S_k)]^2}} \tag{1}$$ | 1710.04110#16 | Discrete Event, Continuous Time RNNs |
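A small numpy sketch of the mixture weighting reconstructed as Eq. (1) above; the fixed scale grid and the example value of $\tau^S_k$ are illustrative.

```python
import numpy as np

def scale_mixture_weights(tau_k, tau_grid):
    """Distribute storage at desired time scale tau_k over the fixed scales tau_grid (Eq. 1)."""
    log_ratio = np.log(np.asarray(tau_grid) / tau_k)
    w = np.exp(-log_ratio ** 2)
    return w / w.sum()          # weights sum to 1 and peak at the scale closest to tau_k

# Log-linearly spaced scales with ratio 10**0.5, spanning 1 to ~3162.
tau_grid = 10.0 ** np.arange(0, 3.5, 0.5)
print(scale_mixture_weights(30.0, tau_grid).round(3))
```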
1710.04087 | 17 | # 3 TRAINING AND ARCHITECTURAL CHOICES
3.1 ARCHITECTURE
We use unsupervised word vectors that were trained using fastText. These correspond to monolingual embeddings of dimension 300 trained on Wikipedia corpora; therefore, the mapping W has size 300 × 300. Words are lower-cased, and those that appear fewer than 5 times are discarded for training. As a post-processing step, we only select the first 200k most frequent words in our experiments.
For our discriminator, we use a multilayer perceptron with two hidden layers of size 2048, and Leaky-ReLU activation functions. The input to the discriminator is corrupted with dropout noise with a rate of 0.1. As suggested by Goodfellow (2016), we include a smoothing coefficient s = 0.2 in the discriminator predictions. We use stochastic gradient descent with a batch size of 32, a learning rate of 0.1 and a decay of 0.95 both for the discriminator and W. We divide the learning rate by 2 every time our unsupervised validation criterion decreases.
3.2 DISCRIMINATOR INPUTS | 1710.04087#17 | Word Translation Without Parallel Data |
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 17 | Figure 2a shows that the half life of arbitrary time scales can be well approximated by a finite mixture of time scales. The graph plots half life as a function of time scale, with the veridical mapping shown as a dashed black line, and the approximation shown as the solid blue line for T̃ consisting of the open circles on the graph. Obviously, one cannot extrapolate to time scales outside the range in T̃, but enough should be known about a problem domain to determine a bounding range of time scales. We have found that constraining separation among constants in T̃ such that τ̃_{i+1} = 10^{1/2} τ̃_i achieves a high-fidelity match. Figure 2b plots memory decay as a function of time for time scales {10, 20, . . . , 100} (solid lines), along with the mixture approximation (dashed lines) using the set of scales in Figure 2a. Corresponding solid and dashed curves match well in their half lives (open circles), despite the fact that the approximation is more like a power function, with faster decay early on and slower decay later on. The heavy tails should, if anything, be helpful for preserving state and error gradients. For the sake of modeling, the important point is that a continuous change in τ^S_k produces continuous changes in both the weightings s_ki and the effective decay function. | 1710.04110#17 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 18 | 3.2 DISCRIMINATOR INPUTS
The embedding quality of rare words is generally not as good as that of frequent words (Luong et al., 2013), and we observed that feeding the discriminator with rare words had a small but non-negligible negative impact. As a result, we only feed the discriminator with the 50,000 most frequent words. At each training step, the word embeddings given to the discriminator are sampled uniformly. Sampling them according to the word frequency did not have any noticeable impact on the results.
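A discriminator batch can then be drawn as in the sketch below (ours, for illustration): word indices are sampled uniformly from the 50,000 most frequent words on each side, assuming embeddings are stored in frequency order.

```python
import numpy as np

def sample_discriminator_batch(src_emb, tgt_emb, mapping_W, batch_size=32, max_rank=50_000):
    """Uniformly sample frequent words; return mapped source and raw target embeddings."""
    rng = np.random.default_rng()
    src_ids = rng.integers(0, max_rank, size=batch_size)
    tgt_ids = rng.integers(0, max_rank, size=batch_size)
    return src_emb[src_ids] @ mapping_W.T, tgt_emb[tgt_ids]   # rows: W x_s and y_t
```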
3.3 ORTHOGONALITY
Smith et al. (2017) showed that imposing an orthogonal constraint on the linear operator led to better performance. Using an orthogonal matrix has several advantages. First, it ensures that the monolingual quality of the embeddings is preserved. Indeed, an orthogonal matrix preserves the dot product of vectors, as well as their ℓ2 distances, and is therefore an isometry of the Euclidean space (such as a rotation). Moreover, it made the training procedure more stable in our experiments. In this work, we propose to use a simple update step to ensure that the matrix W stays close to an orthogonal matrix during training. Specifically, we alternate the update of our model with the following update rule on the matrix W:
W ← (1 + β)W − β(W W^T)W (7) | 1710.04087#18 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 18 | Just as the GRU determines the position of the storage (update) gate from its input, the CT-GRU determines the time scale of storage, τ^S_k, from its input. We use an exponential transform to ensure nonnegative τ^S_k:
τ^S_k ← exp(W^S x_k + U^S h_{k−1} + b^S). (2)
Functionally, the {s_ki} derived from τ^S_k serve as gates on each of the fixed-scale traces. The retrieval operation mirrors the storage operation. A time scale of retrieval, τ^R_k, is computed from the input, and a half-life-matching mixture of the stored traces serves as the retrieved value from the CT-GRU memory. The right panel of Figure 1 shows a schematic of the
Discrete Event, Continuous Time RNNs | 1710.04110#18 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 19 | W ← (1 + β)W − β(W W^T)W (7)
where β = 0.01 is usually found to perform well. This method ensures that the matrix stays close to the manifold of orthogonal matrices after each update. In practice, we observe that the eigenvalues of our matrices all have a modulus close to 1, as expected.
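As a concrete illustration, the update in Equation (7) is a single matrix expression. The NumPy sketch below (ours, not the paper's code) applies it repeatedly to a slightly perturbed orthogonal matrix and checks that the deviation from orthogonality shrinks.

```python
import numpy as np

def orthogonalize_step(W, beta=0.01):
    """One application of Eq. (7), pulling W back toward the orthogonal manifold."""
    return (1 + beta) * W - beta * (W @ W.T) @ W

# Start from a perturbed rotation, as would happen after a gradient update of W.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((300, 300)))
W = Q + 0.01 * rng.standard_normal((300, 300))
for _ in range(5):
    W = orthogonalize_step(W)
    print(np.linalg.norm(W @ W.T - np.eye(300)))   # deviation from orthogonality decreases
```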
3.4 DICTIONARY GENERATION
The refinement step requires generating a new dictionary at each iteration. In order for the Procrustes solution to work well, it is best to apply it on correct word pairs. As a result, we use the CSLS method described in Section 2.3 to select more accurate translation pairs in the dictionary. To further increase the quality of the dictionary, and ensure that W is learned from correct translation pairs, we only consider mutual nearest neighbors, i.e. pairs of words that are mutually nearest neighbors of each other according to CSLS. This significantly decreases the size of the generated dictionary, but improves its accuracy, as well as the overall performance. A minimal sketch of this dictionary-generation step is given below.
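The sketch is illustrative rather than the released MUSE code: it assumes L2-normalized embeddings held in dense arrays, scores pairs with CSLS (cosine similarity penalized by each word's average similarity to its K nearest neighbors in the other language), and keeps only mutual nearest neighbors.

```python
import numpy as np

def csls_scores(src, tgt, k=10):
    """CSLS similarity between mapped source rows and target rows.
    Embeddings are assumed L2-normalized, so dot products are cosines.
    Brute-force for clarity; in practice this is computed in batches."""
    sims = src @ tgt.T
    r_src = np.mean(np.sort(sims, axis=1)[:, -k:], axis=1)   # source-side hubness penalty
    r_tgt = np.mean(np.sort(sims, axis=0)[-k:, :], axis=0)   # target-side hubness penalty
    return 2 * sims - r_src[:, None] - r_tgt[None, :]

def build_dictionary(src, tgt, k=10):
    """Keep only pairs that are mutual nearest neighbors under CSLS."""
    scores = csls_scores(src, tgt, k)
    s2t = scores.argmax(axis=1)        # best target word for each source word
    t2s = scores.argmax(axis=0)        # best source word for each target word
    return [(i, j) for i, j in enumerate(s2t) if t2s[j] == i]
```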
# 3.5 VALIDATION CRITERION FOR UNSUPERVISED MODEL SELECTION
Selecting the best model is a challenging, yet important task in the unsupervised setting, as it is not possible to use a validation set (using a validation set would mean that we possess parallel data). To | 1710.04087#19 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 19 |
CT-GRU with s and r used to select the storage and retrieval scales from a multiscale memory trace. The time lag between events, Δt_k, is an explicit input to the memory, used to determine the amount of decay between discrete events. The CT-GRU and GRU updates are composed of the identical steps, and in fact the CT-GRU with just two scales, T̃ = {0, ∞}, and fixed Δt_k, is identical to the GRU. The dynamics of storage and retrieval simplify because the logarithmic term in Equation 1 cancels with the exponentiation in Equation 2, yielding the elegant CT-GRU update: | 1710.04110#19 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 20 | 2Word vectors downloaded from: https://github.com/facebookresearch/fastText
Figure 2: Unsupervised model selection. Correlation between our unsupervised validation criterion (black line) and actual word translation accuracy (blue line). In this particular experiment, the selected model is at epoch 10. Observe how our criterion is well correlated with translation accuracy. | 1710.04087#20 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 20 | 1. Determine retrieval scale and weighting:  ln τ^R_k ← W^R x_k + U^R h_{k−1} + b^R ;  r_ki ← softmax_i(−(ln τ^R_k − ln τ̃_i)²)
2. Detect relevant event signals:  q_k ← tanh(W^Q x_k + U^Q (Σ_i r_ki ∘ ĥ_{k−1,i}) + b^Q)
3. Determine storage scale and weighting:  ln τ^S_k ← W^S x_k + U^S h_{k−1} + b^S ;  s_ki ← softmax_i(−(ln τ^S_k − ln τ̃_i)²)
4. Update multiscale state:  ĥ_{k,i} ← [(1 − s_ki) ∘ ĥ_{k−1,i} + s_ki ∘ q_k] e^(−Δt_k/τ̃_i)
5. Combine time scales:  h_k ← Σ_i ĥ_{k,i}
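Written out in code, one CT-GRU step is a direct transcription of the five update steps above. The NumPy sketch below is illustrative (our own variable names; parameters are assumed to be learned elsewhere): h_hat holds one trace per time scale, and the lag dt drives the decay.

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ct_gru_step(x, dt, h_hat, tau, p):
    """One CT-GRU update (sketch). h_hat: (n_scales, n_hidden) multiscale trace;
    tau: fixed time scales; p: dict of weight matrices/biases (Wr, Ur, br, ...)."""
    h_prev = h_hat.sum(axis=0)                                        # previous combined state
    log_tau = np.log(tau)[:, None]
    # 1. retrieval scale and weighting over the fixed scales
    r = softmax(-((p['Wr'] @ x + p['Ur'] @ h_prev + p['br'])[None, :] - log_tau) ** 2)
    # 2. detect relevant event signals from the retrieved mixture
    q = np.tanh(p['Wq'] @ x + p['Uq'] @ (r * h_hat).sum(axis=0) + p['bq'])
    # 3. storage scale and weighting
    s = softmax(-((p['Ws'] @ x + p['Us'] @ h_prev + p['bs'])[None, :] - log_tau) ** 2)
    # 4. update multiscale state, decaying each trace over the lag dt
    h_hat = ((1 - s) * h_hat + s * q[None, :]) * np.exp(-dt / tau)[:, None]
    # 5. combine time scales
    return h_hat, h_hat.sum(axis=0)
```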
# 3. Experiments
We compare the CT-GRU to a standard GRU that receives additional real-valued Δt inputs. Although the CT-GRU is derived from the GRU, the CT-GRU is wired with a specific form of continuous time dynamics, whereas the GRU is free to use the Δt input in an arbitrary manner. The conjecture that motivated our work is that the inductive bias built into the CT-GRU would enable it to better leverage temporal information and therefore outperform the overly flexible, poorly constrained GRU. | 1710.04110#20 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 21 | address this issue, we perform model selection using an unsupervised criterion that quantifies the closeness of the source and target embedding spaces. Specifically, we consider the 10k most frequent source words, and use CSLS to generate a translation for each of them. We then compute the average cosine similarity between these deemed translations, and use this average as a validation metric. We found that this simple criterion is better correlated with the performance on the evaluation tasks than optimal transport distances such as the Wasserstein distance (Rubner et al., 2000). Figure 2 shows the correlation between the evaluation score and this unsupervised criterion (without stabilization by learning rate shrinkage). We use it as a stopping criterion during training, and also for hyper-parameter selection in all our experiments.
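Concretely, the criterion can be computed as in the sketch below (ours, for illustration): it assumes the embeddings are L2-normalized and that the most frequent source words occupy the first rows of the matrix, retrieves a translation for each query with CSLS, and averages the resulting cosine similarities.

```python
import numpy as np

def validation_criterion(mapped_src, tgt, n_queries=10_000, k=10):
    """Mean cosine similarity between the n_queries most frequent source words
    and their CSLS-retrieved translations (brute-force sketch)."""
    q = mapped_src[:n_queries]
    sims = q @ tgt.T                                           # cosines (normalized embeddings)
    r_q = np.mean(np.sort(sims, axis=1)[:, -k:], axis=1)       # query-side hubness penalty
    r_t = np.mean(np.sort(sims, axis=0)[-k:, :], axis=0)       # target-side hubness penalty
    best = (2 * sims - r_q[:, None] - r_t[None, :]).argmax(axis=1)
    return float(np.sum(q * tgt[best], axis=1).mean())
```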
# 4 EXPERIMENTS
In this section, we empirically demonstrate the effectiveness of our unsupervised approach on several benchmarks, and compare it with state-of-the-art supervised methods. We first present the cross-lingual evaluation tasks that we consider to evaluate the quality of our cross-lingual word embeddings. Then, we present our baseline model. Last, we compare our unsupervised approach to our baseline and to previous methods. In the appendix, we offer a complementary analysis on the alignment of several sets of English embeddings trained with different methods and corpora. | 1710.04087#21 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 21 | We have conducted experiments on a diverse variety of event-sequence data sets, synthetic and natural. The synthetic sets were designed to reveal the types of temporal structure that each architecture could discover. We have explored a range of classification and prediction tasks. The punch line of our work is this: Although the CT-GRU and GRU handle time in very different manners, the two architectures perform essentially identically. We found almost no empirical difference between the models. Where one makes errors, the other makes the same errors. Both models perform significantly above sensible baselines, and both models leverage time, albeit in a different manner. Nonetheless, we will argue that the CT-GRU has interesting dynamics and offers lessons for future research.
# 3.1 Methodology | 1710.04110#21 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 22 | # 4.1 EVALUATION TASKS
Word translation The task considers the problem of retrieving the translation of given source words. The problem with most available bilingual dictionaries is that they are generated using online tools like Google Translate, and do not take into account the polysemy of words. Failing to capture word polysemy in the vocabulary leads to a wrong evaluation of the quality of the word embedding space. Other dictionaries are generated using phrase tables of machine translation systems, but they are very noisy or trained on relatively small parallel corpora. For this task, we create high-quality | 1710.04087#22 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 22 | # 3.1 Methodology
In all simulations, we present sequences of symbolic event labels. The input is a one-hot representation of the current event, x_k. For the CT-GRU, Δt_k, the lag between events k and k + 1, is provided as a special input that modulates decay (see Figure 1b). For the GRU, Δt_{k−1} and Δt_k are included as standard real-valued inputs. The output layer representation and activation function depend on the task. For event-label prediction, the task is to predict the next event, x_{k+1}; the output layer is a one-hot representation with a softmax activation
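For concreteness, one way to build the GRU baseline's inputs from an event sequence is sketched below (a hypothetical helper of our own, not the authors' code): each step concatenates a one-hot event label with the two lags Δt_{k−1} and Δt_k.

```python
import numpy as np

def encode_for_gru(events, times, n_labels):
    """events: integer labels; times: event time stamps (same length).
    Returns an array of shape (len(events), n_labels + 2): one-hot label,
    lag from the previous event, and lag to the next event."""
    n = len(events)
    x = np.zeros((n, n_labels + 2))
    for k, (e, t) in enumerate(zip(events, times)):
        x[k, e] = 1.0
        x[k, n_labels] = t - times[k - 1] if k > 0 else 0.0              # dt_{k-1}
        x[k, n_labels + 1] = times[k + 1] - t if k + 1 < n else 0.0      # dt_k
    return x

x = encode_for_gru([2, 0, 1], [0.0, 5.0, 25.0], n_labels=3)   # shape (3, 5)
```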
Figure 3: Working memory task: (a) CT-GRU (blue) and GRU (orange) response to probes on sequences like {l/0, x/0, . . . , x/t} for a range of t. (b) Storage timescales, log10(τ^S_k), and event-detection weights, W^Q. The CT-GRU modulates the storage time scale of a symbol based on the context. | 1710.04110#22 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 23 | Table 1: Word translation retrieval P@1 for our released vocabularies in various language pairs. We consider 1,500 source test queries, and 200k target words for each language pair. We use fastText embeddings trained on Wikipedia. NN: nearest neighbors. ISF: inverted softmax. ('en' is English, 'fr' is French, 'de' is German, 'ru' is Russian, 'zh' is classical Chinese and 'eo' is Esperanto). Columns are language pairs such as en-de and de-en; rows are methods with cross-lingual supervision and fastText embeddings (Procrustes - NN, Procrustes - ISF, Procrustes - CSLS) and methods without cross-lingual supervision and fastText embeddings (Adv - NN, Adv - CSLS, Adv - Refine - NN, Adv - Refine - CSLS).
Italian to English P@1 P@5 P@10 P@1 P@5 P@10 | 1710.04087#23 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 23 | function. For event-polarity prediction, the task is to predict a binary property of the next event. For this task, the output consists of one logistic unit per event label; only the event that actually occurs is provided a target (0 or 1) value. For classification, the task is to map a complete sequence to one of two classes, 0 or 1, via a logistic output unit.
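The rule that only the occurring event receives a target amounts to a masked loss. A minimal sketch (ours, for illustration) of the per-step loss for event-polarity prediction:

```python
import numpy as np

def polarity_loss(logits, next_event, target):
    """Binary cross-entropy on the single output unit corresponding to the event
    that actually occurs next; the other labels contribute no loss (and no gradient).
    logits: (n_labels,) raw outputs; next_event: int label; target: 0 or 1."""
    p = 1.0 / (1.0 + np.exp(-logits[next_event]))
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))
```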
We constructed independent Theano (Theano Development Team, 2016) and TensorFlow (Abadi et al., 2015) implementations as a means of verifying the code. For all data sets, 15% of the training set is used as validation data for model selection, performed via early stopping and selection from a range of hidden layer sizes. We assess test-set performance via three measures: accuracy, log likelihood, and a discriminability measure, AUC (Green and Swets, 1966). We report accuracy because it closely mirrors log likelihood and AUC on all data sets, and accuracy is most intuitive. More details of the simulation methodology and a complete description of data sets can be found in the Supplementary Materials.
# 3.2 Discovery of temporal patterns in synthetic data | 1710.04110#23 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 24 | Table 2 data, English to Italian and Italian to English (P@1 P@5 P@10 each):
Methods with cross-lingual supervision (WaCky): Mikolov et al. (2013b)† 33.8 48.3 53.9 and 24.9 41.0 47.4; Dinu et al. (2015)† 38.5 56.4 63.9 and 24.6 45.4 54.1; CCA† 36.1 52.7 58.1 and 31.0 49.9 57.0; Artetxe et al. (2017) 39.7 54.7 60.5 and 33.8 52.4 59.1; Smith et al. (2017)† 43.1 60.7 66.4 and 38.0 58.5 63.6; Procrustes - CSLS 44.9 61.8 66.6 and 38.5 57.2 63.0.
Methods without cross-lingual supervision (WaCky): Adv - Refine - CSLS 45.1 60.7 65.1 and 38.3 57.8 62.8.
Methods with cross-lingual supervision (Wiki): Procrustes - CSLS 63.7 78.6 81.1 and 56.3 76.2 80.6.
Methods without cross-lingual supervision (Wiki): Adv - Refine - CSLS 66.2 80.4 83.4 and 58.7 76.5 80.9. | 1710.04087#24 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 24 | To illustrate the operation of the CT-GRU, we devised a Working memory task requiring limited-duration information storage. The input sequence consists of commands to store, for a duration of 1, 10, or 100 time units (specified by the commands s, m, or l), a specific symbol: a, b, or c. The input sequence also contains symbols a-c in isolation to probe memory for whether the symbol is currently stored. For example, with x/t denoting event x at time t, consider the sequence: {m/0, b/0, b/5}. The first two events instruct the memory to store b for 10 time units. The third probes for b at time 5, which should produce a response of 1, whereas probes b/25 or a/5 should produce 0. Both GRU and CT-GRU with 15 hidden units learn the task well, with 98.8% and 98.7% test-set accuracy, respectively. Figure 3a plots probe response to sequences of the form {l/0, x/0, . . . , x/t} for various durations t. The scatterplot represents individual test sequences; the dashed line is a logistic | 1710.04110#24 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 25 | Table 2: English-Italian word translation average precisions (@1, @5, @10) from 1.5k source word queries using 200k target words. Results marked with the symbol † are from Smith et al. (2017). Wiki means the embeddings were trained on Wikipedia using fastText. Note that the method used by Artetxe et al. (2017) does not use the same supervision as other supervised methods, as they only use numbers in their initial parallel dictionary.
dictionaries of up to 100k pairs of words using an internal translation tool to alleviate this issue. We make these dictionaries publicly available as part of the MUSE library3.
We report results on these bilingual dictionaries, as well on those released by Dinu et al. (2015) to allow for a direct comparison with previous approaches. For each language pair, we consider 1,500 query source and 200k target words. Following standard practice, we measure how many times one of the correct translations of a source word is retrieved, and report precision@k for k = 1, 5, 10. | 1710.04087#25 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04087 | 26 | Cross-lingual semantic word similarity We also evaluate the quality of our cross-lingual word embedding space using word similarity tasks. This task aims at evaluating how well the cosine similarity between two words of different languages correlates with a human-labeled score. We use the SemEval 2017 competition data (Camacho-Collados et al. (2017)) which provides large, high-quality and well-balanced datasets composed of nominal pairs that are manually scored according to a well-defined similarity scale. We report Pearson correlation.
Sentence translation retrieval Going from the word to the sentence level, we consider bag-of-words aggregation methods to perform sentence retrieval on the Europarl corpus. We consider 2,000 source sentence queries and 200k target sentences for each language pair and report the precision@k for k = 1, 5, 10, which accounts for the fraction of pairs for which the correct translation of the source words is in the k-th nearest neighbors. We use the idf-weighted average to merge word embeddings into sentence embeddings. The idf weights are obtained using 300k other sentences from Europarl.
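A minimal sketch of this idf-weighted bag-of-words sentence embedding is given below (illustrative; the idf definition is one standard choice, estimated on a held-out set of sentences as stated above).

```python
import numpy as np
from collections import Counter

def idf_weights(held_out_sentences):
    """idf estimated from a held-out set of tokenized sentences."""
    df = Counter(w for sent in held_out_sentences for w in set(sent))
    n = len(held_out_sentences)
    return {w: np.log(n / df[w]) for w in df}

def sentence_embedding(sentence, word_emb, idf, dim=300):
    """idf-weighted average of the word vectors of in-vocabulary tokens."""
    words = [w for w in sentence if w in word_emb and w in idf]
    if not words:
        return np.zeros(dim)
    weights = np.array([idf[w] for w in words])
    vecs = np.stack([word_emb[w] for w in words])
    if weights.sum() == 0:
        return vecs.mean(axis=0)
    return (weights[:, None] * vecs).sum(axis=0) / weights.sum()
```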
4.2 RESULTS AND DISCUSSION | 1710.04087#26 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 26 | 8
scale, the CT-GRU is amenable to dissection and interpretation. The bottom of Figure 3b shows weights W Q for the ï¬fteen CT-GRU hidden units, arranged such that the units which respond more strongly to symbols aâcâand thus will serve as memory for these symbolsâare further to the right (blue negative, red positive). The top of the Figure shows the storage k , for a symbol aâc when preceded by commands s, m, or l. timescale, expressed as log10 Ï S In accordance with task demands, the CT-GRU modulates the storage time scale based on the command context. | 1710.04110#26 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 27 | 4.2 RESULTS AND DISCUSSION
In what follows, we present the results on word translation retrieval using our bilingual dictionaries in Table 1 and our comparison to previous work in Table 2, where we significantly outperform previous approaches. We also present results on the sentence translation retrieval task in Table 3 and the cross-lingual word similarity task in Table 4. Finally, we present results on word-by-word translation for English-Esperanto in Table 5.
Baselines In our experiments, we consider a supervised baseline that uses the solution of the Procrustes formula given in (2), and trained on a dictionary of 5,000 source words. This baseline can be combined with different similarity measures: NN for nearest neighbor similarity, ISF for Inverted SoftMax and the CSLS approach described in Section 2.2.
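For reference, the closed-form Procrustes solution used by this supervised baseline is a few lines of linear algebra. The sketch below assumes X and Y stack the dictionary's source and target embeddings row-wise; the optimal orthogonal map comes from an SVD.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal W minimizing sum_i ||W x_i - y_i||^2 for paired rows of
    X (source) and Y (target): W = U V^T, where U S V^T = SVD(Y^T X)."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt
```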
Cross-domain similarity local scaling This approach has a single parameter K deï¬ning the size of the neighborhood. The performance is very stable and therefore K does not need cross-validation: the results are essentially the same for K = 5, 10 and 50, therefore we set K = 10 in all experiments. In Table 1, we observe the impact of the similarity metric with the Procrustes supervised approach. Looking at the difference between Procrustes-NN and Procrustes-CSLS, one can see that CSLS | 1710.04087#27 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 27 | Moving on to more systematic investigations, we devised three synthetic data sets for which inter-event times are required to attain optimal performance. Data set Cluster classiï¬es 100-element sequences according to whether three speciï¬c events occur in any order within a given time span. Figure 4a shows a sample sequence with critical elements that both satisfy and fail to satisfy the time-span requirement, indicated by the solid and outline rectangle, respectively. Remembering outputs a binary value for each event label indicating whether the lag from the last occurrence of the event is below or above a critical time threshold. Rhythm classiï¬es 100-element sequences according to whether the inter-event timings follow a set of event-contingent rules, like a type of musical notation. The CT-GRU performs no better than the GRU with ât inputs, although both outperform the GRU without ât (Figures 5a-c), indicating that both architectures are able to use the temporal lags. For these and all other simulations reported, errors produced by the CT-GRU and the GRU are almost perfectly correlated. (Henceforth, we refer to the GRU with ât as the GRU.) | 1710.04110#27 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04110 | 28 | We ran ten replications of Cluster with diï¬erent initializations and diï¬erent example sequences, and found no reliable diï¬erence between CT-GRU and GRU by a two-sided Wilcoxon sign rank test (p = .43). Because our data sets are almost all largeâwith between 10k to 100k training and test examplesâand because our aim is not to argue that the CT-GRU outperforms the GRU, we report outcomes from a single simulation run in Figures 5a-i.
Having demonstrated that the GRU is able to leverage the Δt inputs, we conducted two simulations to show that the CT-GRU requires decay dynamics to achieve its performance. We created a version of the CT-GRU in which the traces did not decay with the passage of time. In principle, such an architecture could be used as a flexible memory, where a unit
Figure 4: Event sequences for (a) Cluster, (b) Hawkes process, and (c) Reddit. Time is on horizontal axis. Color denotes event label; in (a), irrelevant labels are rendered as dashed black lines.
Mozer, Kazakov, & Lindsey | 1710.04110#28 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 29 | 7
English to Italian and Italian to English sentence translation retrieval (P@1 P@5 P@10 each):
Methods with cross-lingual supervision: Mikolov et al. (2013b)† 10.5 18.7 22.8 and 12.0 22.1 26.7; Dinu et al. (2015)† 45.3 72.4 80.7 and 48.9 71.3 78.3; Smith et al. (2017)† 54.6 72.7 78.2 and 42.9 62.2 69.2; Procrustes - NN 42.6 54.7 59.0 and 53.5 65.5 69.5; Procrustes - CSLS 66.1 77.1 80.7 and 69.5 79.6 83.5.
Methods without cross-lingual supervision: Adv - CSLS 42.5 57.6 63.6 and 47.0 62.1 67.8; Adv - Refine - CSLS 65.9 79.7 83.1 and 69.0 79.7 83.1.
Table 3: English-Italian sentence translation retrieval. We report the average P@k from 2,000 source queries using 200,000 target sentences. We use the same embeddings as in Smith et al. (2017). Their results are marked with the symbol †. | 1710.04087#28 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 29 | 9
Figure 5: Comparison of GRU, CT-GRU, and variants. Data sets (a)-(i) consist of at least 10k training and test examples and thus a single train/test split is adequate for evaluation. Smaller data set (j) is tested via 8-fold cross validation. Solid black lines represent a reference baseline performance level, and dashed lines indicate optimal performance (where known). | 1710.04110#29 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 30 | provides a strong and robust gain in performance across all language pairs, with up to 7.2% in en- eo. We observe that Procrustes-CSLS is almost systematically better than Procrustes-ISF, while being computationally faster and not requiring hyper-parameter tuning. In Table 2, we compare our Procrustes-CSLS approach to previous models presented in Mikolov et al. (2013b); Dinu et al. (2015); Smith et al. (2017); Artetxe et al. (2017) on the English-Italian word translation task, on which state-of-the-art models have been already compared. We show that our Procrustes-CSLS approach obtains an accuracy of 44.9%, outperforming all previous approaches. In Table 3, we also obtain a strong gain in accuracy in the Italian-English sentence retrieval task using CSLS, from 53.5% to 69.5%, outperforming previous approaches by an absolute gain of more than 20%. | 1710.04087#30 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
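The Procrustes baseline compared against in the chunk above has a well-known closed-form solution. A small sketch under the usual setup, where the columns of X and Y are the paired source and target vectors of the seed dictionary; names are illustrative.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal mapping W minimizing ||W X - Y||_F subject to W W^T = I.

    X, Y: (d, n) matrices whose columns are paired source/target word vectors.
    The minimizer is W = U V^T, where U S V^T is the SVD of Y X^T.
    """
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt
```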
1710.04110 | 30 | decides which memory slot to use for information storage and retrieval. However, in practice removing decay dynamics for intrinsically temporal tasks harms the CT-GRU (Figures 5d,e). Data set Hawkes process consists of parallel event streams generated by independent Hawkes processes operating over a range of time scales; an example sequence is shown in Figure 4b. Data set Disperse classiï¬es event streams according to whether two speciï¬c events occur in a precise but distant temporal relationship.
# 3.3 Naturalistic data sets | 1710.04110#30 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
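The Hawkes-process data set mentioned in the chunk above can be illustrated with a generic sampler. This is not the authors' generator; it is a standard Ogata-thinning sketch for a univariate Hawkes process with an exponential kernel, with illustrative parameter names.

```python
import numpy as np

def sample_hawkes(mu, alpha, beta, t_max, seed=0):
    """Sample a univariate Hawkes process by Ogata thinning.

    Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)),
    with alpha < beta for stability. Returns the array of event times.
    """
    rng = np.random.default_rng(seed)
    events = []
    t = 0.0

    def intensity(s):
        if not events:
            return mu
        return mu + alpha * np.exp(-beta * (s - np.array(events))).sum()

    while True:
        lam_bar = intensity(t)               # dominates lambda until the next event
        t += rng.exponential(1.0 / lam_bar)  # propose the next candidate time
        if t >= t_max:
            return np.array(events)
        if rng.uniform() <= intensity(t) / lam_bar:
            events.append(t)                 # accept the candidate event
```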
1710.04087 | 31 | Impact of the monolingual embeddings For the word translation task, we obtained a signiï¬cant boost in performance when considering fastText embeddings trained on Wikipedia, as opposed to previously used CBOW embeddings trained on the WaCky datasets (Baroni et al. (2009)), as can been seen in Table 2. Among the two factors of variation, we noticed that this boost in performance was mostly due to the change in corpora. The fastText embeddings, which incorporates more syn- tactic information about the words, obtained only two percent more accuracy compared to CBOW embeddings trained on the same corpus, out of the 18.8% gain. We hypothesize that this gain is due to the similar co-occurrence statistics of Wikipedia corpora. Figure 3 in the appendix shows results on the alignment of different monolingual embeddings and concurs with this hypothesis. We also obtained better results for monolingual evaluation tasks such as word similarities and word analogies when training our embeddings on the Wikipedia corpora. | 1710.04087#31 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 31 | # 3.3 Naturalistic data sets
We experimented with ï¬ve real-world event-sequence data sets, described in detail in the Supplementary Materials. Reddit is the timeseries of subreddit postings of 30k users, with sequences spanning up to several years and a thousand postings (Figure 4c). Last.fm has 300 time-tagged artist selections of 30k users, spanning a time range from hours to months. Msnbc, from the UCI repository (Lichman, 2013), has the sequence of categorized MSNBC web pages viewed in 120k sessions. Spanish and Japanese are data sets of students practicing foreign language vocabulary over a period of up to 4 months, with the lag between practice of a vocabulary item ranging from seconds to months. Reddit, Last.fm, and Msnbc are event-label prediction tasks; Spanish and Japanese require event-polarity prediction (whether students successfully translated a vocabulary item given their study history). Because students forget with the passage of time, we expected that CT-GRU would be particularly eï¬ective for modeling human memory strength.
Figures 5f-j reveal no meaningful performance diï¬erence between the GRU and CT-GRU architectures, and both architectures outperform a baseline measure (depicted as the solid black line in the Figures). For Reddit, Last.fm, and Msnbc, the baseline is obtained by
Discrete Event, Continuous Time RNNs | 1710.04110#31 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 32 | Adversarial approach Table 1 shows that the adversarial approach provides a strong system for learning cross-lingual embeddings without parallel data. On the es-en and en-fr language pairs, Adv-CSLS obtains a P@1 of 79.7% and 77.8%, which is only 3.2% and 3.3% below the supervised approach. Additionally, we observe that most systems still obtain decent results on distant languages that do not share a common alphabet (en-ru and en-zh), for which methods exploiting identical character strings are just not applicable (Artetxe et al. (2017)). This method allows us to build a strong synthetic vocabulary using similarities obtained with CSLS. The gain in absolute accuracy observed with CSLS on the Procrustes method is even more important here, with differences between Adv-NN and Adv-CSLS of up to 8.4% on es-en. As a simple baseline, we tried to match the first two moments of the projected source and target embeddings, which amounts to solving $W^\star = \operatorname{argmin}_W \|(WX)^\top (WX) - Y^\top Y\|_F$ and solving the sign ambiguity. This attempt was not successful, which we | 1710.04087#32 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
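A sketch of the moment-matching baseline mentioned in the chunk above. It is an illustrative reimplementation, not the authors' code: here the "second moment" is read as the d x d covariance of each embedding set, the objective is minimized by plain gradient descent, and the sign ambiguity discussed in the chunk is ignored.

```python
import numpy as np

def moment_match(X, Y, steps=500, lr=0.05, seed=0):
    """Illustrative second-moment matching between two embedding spaces.

    X: (d, n) source embeddings, Y: (d, m) target embeddings (columns = words).
    Finds W minimizing ||(W X)(W X)^T / n - Y Y^T / m||_F^2, i.e. it aligns the
    d x d covariance of the projected source with that of the target.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[0]
    Cx = X @ X.T / X.shape[1]
    Cy = Y @ Y.T / Y.shape[1]
    W = np.eye(d) + 0.01 * rng.standard_normal((d, d))
    for _ in range(steps):
        D = W @ Cx @ W.T - Cy       # residual between the two covariances
        grad = 4.0 * D @ W @ Cx     # gradient of ||D||_F^2 with respect to W
        W -= lr * grad
    return W
```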
1710.04110 | 32 | 10
predicting the next event label is the same as the current label; for Spanish and Japanese, the baseline is obtained by predicting the same success or failure for a vocabulary item as on the previous trial. Signiï¬cantly beating baseline is quite diï¬cult for each of these tasks because they involve modeling human behavior that is governed by many factors external to event history.
The most distressing result, which we do not show in the Figures, is that for each of these tasks, removing the Δt inputs from the GRU has only a tiny impact on performance, at most a 5% drop toward baseline. Thus, neither GRU nor CT-GRU is able to leverage the timing information in the event stream. One possibility is that the stochasticity of human behavior overwhelms any signal in event timing. If so, time tags may provide more leverage for event sequences obtained from alternative sources (e.g., computer systems, physical processes). However, we are not hopeful given that our synthetic data sets also failed to show an advantage for the CT-GRU, and those data sets were crafted to benefit an architecture like CT-GRU with intrinsic temporal dynamics.
# 3.4 Summary of other investigations | 1710.04110#32 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04110 | 33 | We conducted a variety of additional investigations that we summarize here. First, we hoped that with smaller data sets, the value of the inductive bias in the CT-GRU would give it an advantage over the GRU, but it did not. Second, we tested other natural and synthetic data sets, but the pattern of results is as we report here. Third, we considered additional tasks that might reveal an advantage of the CT-GRU such as sequence extrapolation and event-timing prediction. And ï¬nally, we developed literally dozens of alternative neural net architectures that, like the CT-GRU, incorporate the forms of inductive bias described in the introduction that we expected to be helpful for event-sequence processing. All of these architectures share intrinsic time-based decay whose dynamics are modulated by information contained in the event sequence. These architectures include: variants of the CT-GRU in which the retrieved state is also used for output and computation of the storage and retrieval scales; the LSTM analog of the CT-GRU, with multiple temporal scales; and a variety of memory mechanisms whose internal dynamics are designed to mimic mean-ï¬eld | 1710.04110#33 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 34 | Reï¬nement: closing the gap with supervised approaches The reï¬nement step on the synthetic bilingual vocabulary constructed after adversarial training brings an additional and signiï¬cant gain in performance, closing the gap between our approach and the supervised baseline. In Table 1, we observe that our unsupervised method even outperforms our strong supervised baseline on en-it and en-es, and is able to retrieve the correct translation of a source word with up to 83% accuracy. The better performance of the unsupervised approach can be explained by the strong similarity of co- occurrence statistics between the languages, and by the limitation in the supervised approach that uses a pre-deï¬ned ï¬xed-size vocabulary (of 5,000 unique source words): in our case the reï¬nement step can potentially use more anchor points. In Table 3, we also observe a strong gain in accuracy
Published as a conference paper at ICLR 2018 | 1710.04087#34 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
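The refinement step described in the chunk above can be summarized in a short loop: translate the most frequent words with CSLS, keep mutual matches as a synthetic dictionary, and re-solve the orthogonal Procrustes problem. A hedged sketch that reuses the csls_scores and procrustes helpers sketched earlier in this document; all names are illustrative.

```python
import numpy as np

def refine(W, src, tgt, n_iters=5, top=10000, k=10):
    """Iterative Procrustes refinement on a synthetic dictionary.

    src, tgt: row-normalized embedding matrices (words as rows), sorted by
    frequency so the first `top` rows are the most frequent words.
    W: initial mapping (e.g. from adversarial training or a seed dictionary).
    """
    for _ in range(n_iters):
        scores = csls_scores(src[:top] @ W.T, tgt[:top], k=k)
        fwd = scores.argmax(axis=1)     # best target for each source word
        bwd = scores.argmax(axis=0)     # best source for each target word
        pairs = [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]  # mutual matches
        X = np.stack([src[i] for i, _ in pairs], axis=1)   # (d, n) anchor columns
        Y = np.stack([tgt[j] for _, j in pairs], axis=1)
        W = procrustes(X, Y)            # closed-form orthogonal update
    return W
```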
1710.04110 | 34 | the CT-GRU, with multiple temporal scales; and a variety of memory mechanisms whose internal dynamics are designed to mimic mean-ï¬eld approximations to stochastic processes, including survival processes and self-excitatory and self-inhibitory point processes (e.g., Hawkes processes). Some of these models are easier to train than others, but, in the end, none beat the performance of generic LSTM or GRU architectures provided with additional ât inputs. | 1710.04110#34 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 35 | 8
en-es SemEval 2017 Methods with cross-lingual supervision 0.65 0.64 NASARI our baseline 0.71 0.72 Methods without cross-lingual supervision 0.67 0.69 Adv 0.71 0.71 Adv - Refine Table 4: Cross-lingual wordsim task. NASARI (Camacho-Collados et al. (2016)) refers to the official SemEval2017 baseline. We report Pearson correlation.
0.60 0.72 0.70 0.71
         Dictionary - NN    Dictionary - CSLS
en-eo    6.1                11.1
eo-en    11.9               14.3
Table 5: BLEU score on English-Esperanto. Although it is a naive approach, word-by-word translation is enough to get a rough idea of the input sentence. The quality of the generated dictionary has a significant impact on the BLEU score.
(up to 15%) on sentence retrieval using bag-of-words embeddings, which is consistent with the gain observed on the word retrieval task. | 1710.04087#35 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 35 | # 4. Discussion
Our work is premised on the hypothesis that event-sequence processing in RNN architectures could be improved by incorporating domain-appropriate inductive bias. Despite a concerted, year-long eï¬ort, we found no support for this hypothesis. Selling a null result is challenging. We have demonstrated that there is no trivial or pathological explanation for the null result, such as implementation issues with the CT-GRU or the possibility that both architectures simply ignore time. Our methodology is sound and careful, our simulations extensive and thorough. Nevertheless, negative results can be inï¬uential, e.g., the failure to learn long-term temporal dependencies (Hochreiter et al., 2001; Hochreiter, 1998; Bengio et al., 1994; Mozer, 1992) led to the discovery of novel RNN architectures. Further, this report may save others
from a duplication of eï¬ort. We also note, somewhat cynically, that a large fraction of the novel architectures that are claimed to yield promising results one year seem to fall by the wayside a year later. | 1710.04110#35 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 36 | (up to 15%) on sentence retrieval using bag-of-words embeddings, which is consistent with the gain observed on the word retrieval task.
Application to a low-resource language pair and to machine translation Our method is par- ticularly suited for low-resource languages for which there only exists a very limited amount of parallel data. We apply it to the English-Esperanto language pair. We use the fastText embeddings trained on Wikipedia, and create a dictionary based on an online lexicon. The performance of our unsupervised approach on English-Esperanto is of 28.2%, compared to 29.3% with the supervised method. On Esperanto-English, our unsupervised approach obtains 25.6%, which is 1.3% better than the supervised method. The dictionary we use for that language pair does not take into account the polysemy of words, which explains why the results are lower than on other language pairs. Peo- ple commonly report the P@5 to alleviate this issue. In particular, the P@5 for English-Esperanto and Esperanto-English is of 46.5% and 43.9% respectively. | 1710.04087#36 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
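The P@1 and P@5 figures quoted in the chunk above come from a precision-at-k retrieval measure. A small sketch of that measure, with hypothetical helper names; it is not the released evaluation script.

```python
import numpy as np

def precision_at_k(scores, gold, k=5):
    """Precision@k for word translation retrieval.

    scores: (n_queries, n_targets) similarity matrix (e.g. CSLS scores).
    gold:   dict mapping a query index to the set of acceptable target indices
            (a word may have several valid translations).
    A query counts as correct if any gold translation appears in its top-k list.
    """
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = sum(1 for q, row in enumerate(topk)
               if gold.get(q, set()) & set(row.tolist()))
    return hits / len(gold)
```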
1710.04110 | 36 | One possible explanation for our null result may come from the fact that the CT-GRU has no more free parameters than the GRU. In fact, the GRU has more parameters because the inter-event times are treated as additional inputs with associated weights in the GRU. The CT-GRU and GRU have diï¬erent sorts of ï¬exibility via their free parameters, but perhaps the space of solutions they can encode is roughly the same. Nonetheless, we are a bit mystiï¬ed as to how they could admit the same solution space, given the very diï¬erent manners in which they encode and utilize time.
Our work has two key insights that ought to have value for future research. First, we cast the popular LSTM and GRU architectures in terms of time-scale selection rather than in terms of gating information ï¬ow. Second, we show that a simple mechanism with a ï¬nite set of time scales is capable of storing and retrieving information from a continuous range of time scales. | 1710.04110#36 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
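The claim in the chunk above, that a finite set of time scales can cover a continuous range of scales, can be illustrated numerically: a desired decay scale is spread over a log-spaced grid of scales and the resulting mixture of exponentials roughly tracks the target decay. The allocation rule below (a softmax over squared log-scale distance) is one simple choice in the spirit of the CT-GRU, not necessarily the paper's exact scheme.

```python
import numpy as np

def scale_weights(tau, tau_grid):
    """Distribute storage of a desired decay scale tau over a fixed grid of scales."""
    d = (np.log(tau_grid) - np.log(tau)) ** 2
    w = np.exp(-d)
    return w / w.sum()

tau_grid = 1.0 * 10 ** (0.5 * np.arange(6))   # half-decade ladder: 1, 3.16, 10, ...
tau = 7.0                                     # a scale that is not on the grid
t = np.linspace(0, 30, 7)
w = scale_weights(tau, tau_grid)
approx = (w[None, :] * np.exp(-t[:, None] / tau_grid[None, :])).sum(axis=1)
exact = np.exp(-t / tau)
print(np.round(approx, 3))   # mixture of grid-scale exponentials
print(np.round(exact, 3))    # target decay; closest at short and intermediate lags
```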
1710.04087 | 37 | To show the impact of such a dictionary on machine translation, we apply it to the English-Esperanto Tatoeba corpora (Tiedemann, 2012). We remove all pairs containing sentences with unknown words, resulting in about 60k pairs. Then, we translate sentences in both directions by doing word-by- word translation. In Table 5, we report the BLEU score with this method, when using a dictionary generated using nearest neighbors, and CSLS. With CSLS, this naive approach obtains 11.1 and 14.3 BLEU on English-Esperanto and Esperanto-English respectively. Table 6 in the appendix shows some examples of sentences in Esperanto translated into English using word-by-word translation. As one can see, the meaning is mostly conveyed in the translated sentences, but the translations contain some simple errors. For instance, the âmiâ is translated into âsorryâ instead of âiâ, etc. The translations could easily be improved using a language model.
# 5 RELATED WORK | 1710.04087#37 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
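The word-by-word translation baseline described in the chunk above is a one-liner once a dictionary has been induced. A minimal sketch; the dictionary entries in the usage comment are hypothetical and no reordering or language model is applied, as in the BLEU baseline discussed above.

```python
def word_by_word_translate(sentence, dictionary):
    """Naive word-by-word translation with an induced bilingual dictionary.

    dictionary: dict mapping a source word to its single best target word
    (e.g. the CSLS nearest neighbor). Unknown words are kept as-is.
    """
    return " ".join(dictionary.get(w, w) for w in sentence.lower().split())

# Example with hypothetical dictionary entries:
# word_by_word_translate("mi estas felicha", {"mi": "i", "estas": "am", "felicha": "happy"})
# -> "i am happy"
```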
1710.04110 | 37 | To end on a more positive note, incorporating continuous-time dynamics into neural architectures has led us to some observations worthy of further pursuit. For example, consider the possibility of multiple events occurring simultaneously, e.g., a stream of outgoing emails might be coded in terms of the recipients, and a single message may be sent to multiple individuals. The state of an LSTM, GRU, or CT-GRU will depend on the order that the individuals are presented. However, we can incorporate into the CT-GRU absorption time dynamics for an input x, via the closed-form solution to diï¬erential equations dh = âh/Ïhid + x/Ïin and dx = âx/Ïin, yielding a model whose dynamics are invariant to order for simultaneous events, and relatively insensitive to order for events arriving closely in time. Such behavior could have signiï¬cant beneï¬ts for event sequences with measurement noise or random factors inï¬uencing arrival times.
# Acknowledgments
This research was supported by NSF grants DRL-1631428 and SES-1461535.
# Appendix A.
# A.1 Simulation methodology | 1710.04110#37 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
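For the absorption dynamics sketched in the chunk above, the closed form can be written out directly. A worked solution assuming initial values x(0) = x_0 and h(0) = h_0 and tau_in distinct from tau_hid; the symbols follow the chunk, and the derivation is ours rather than quoted from the paper's appendix.

```latex
% dx/dt = -x/\tau_{in}                 =>  x(t) = x_0 \, e^{-t/\tau_{in}}
% dh/dt = -h/\tau_{hid} + x/\tau_{in}, solved with the integrating factor e^{t/\tau_{hid}}:
\begin{aligned}
h(t) &= h_0\, e^{-t/\tau_{hid}}
      + \frac{x_0}{\tau_{in}} \int_0^{t} e^{-(t-s)/\tau_{hid}}\, e^{-s/\tau_{in}}\, \mathrm{d}s \\
     &= h_0\, e^{-t/\tau_{hid}}
      + x_0\, \frac{\tau_{hid}}{\tau_{in}-\tau_{hid}}
        \left( e^{-t/\tau_{in}} - e^{-t/\tau_{hid}} \right).
\end{aligned}
```

Because each input enters h only through the linear variable x, events that arrive at the same time simply add in x_0, so the resulting state does not depend on the order in which they are presented.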
1710.04087 | 38 | # 5 RELATED WORK
Work on bilingual lexicon induction without parallel corpora has a long tradition, starting with the seminal works by Rapp (1995) and Fung (1995). Similar to our approach, they exploit the Harris (1954) distributional structure, but using discrete word representations such as TF-IDF vectors. Fol- lowing studies by Fung & Yee (1998); Rapp (1999); Schafer & Yarowsky (2002); Koehn & Knight (2002); Haghighi et al. (2008); Irvine & Callison-Burch (2013) leverage statistical similarities be- tween two languages to learn small dictionaries of a few hundred words. These methods need to be initialized with a seed bilingual lexicon, using for instance the edit distance between source and tar- get words. This can be seen as prior knowledge, only available for closely related languages. There is also a large amount of studies in statistical decipherment, where the machine translation problem is reduced to a deciphering problem, and the source language is considered as a ciphertext (Ravi & Knight, 2011; Pourdamghani & Knight, 2017). Although initially not based on distributional se- mantics, recent studies show that the use of word embeddings can bring signiï¬cant improvement in statistical decipherment (Dou et al., 2015). | 1710.04087#38 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 38 | 12
# Appendix A.
# A.1 Simulation methodology
We constructed independent theano (Theano Development Team, 2016) and tensorflow (Abadi et al., 2015) implementations as a means of verifying the code. For all data sets, 15% of the training set is used as validation data for model selection, performed via early stopping and selection from a range of hidden layer sizes. Optimization was performed via RMSPROP. Dropout was not used as it appeared to have little impact on results. We assessed performance on a test set via three measures: accuracy of prediction/classification, log likelihood of correct prediction/classification, and AUC (a discriminability measure) (Green and Swets, 1966). Because accuracy mirrored the other two measures for our data sets and because it is the most intuitive, we report accuracy. For event-label prediction tasks, a response is correct if the highest output probability label is the correct label. For classification tasks and event-polarity prediction tasks, a response is correct if the error magnitude is less than 0.5 for outputs in [0, 1].
# A.1.1 GRU initialization | 1710.04110#38 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 39 | The rise of distributed word embeddings has revived some of these approaches, now with the goal of aligning embedding spaces instead of just aligning vocabularies. Cross-lingual word embeddings can be used to extract bilingual lexicons by computing the nearest neighbor of a source word, but also allow other applications such as sentence retrieval or cross-lingual document classiï¬cation (Kle- mentiev et al., 2012). In general, they are used as building blocks for various cross-lingual language processing systems. More recently, several approaches have been proposed to learn bilingual dictio- naries mapping from the source to the target space (Mikolov et al., 2013b; Zou et al., 2013; Faruqui
& Dyer, 2014; Ammar et al., 2016). In particular, Xing et al. (2015) showed that adding an or- thogonality constraint to the mapping can signiï¬cantly improve performance, and has a closed-form solution. This approach was further referred to as the Procrustes approach in Smith et al. (2017). | 1710.04087#39 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 39 | # A.1.1 GRU initialization
The GRU U∗ and W∗ weights are initialized with L2 norm 1 and such that the fan-in weights across hidden units are mutually orthogonal. The GRU b∗ are initialized to zero. Other weights, including the mapping from input to hidden and hidden to output, are initialized by draws from a N(0, .01) distribution.
A.1.2 CT-GRU initialization
The CT-GRU requires specifying a range of time scales in advance. These scales are denoted ËT â¡ {ËÏ1, ËÏ2, . . . ËÏM } in the main article. We picked a range of scales that spanned the shortest inter-event times to the duration of the longest event sequence, allowing information from early in a sequence to be retained until the end of the sequence. The time constants were chosen in steps such that ËÏi+1 = 101/2ËÏi, as noted in the main article. The range of time scales yielded M â {4, 5, ..., 9}. We note that domain knowledge can be useful in picking time scales to avoid unnecessarily short and long scales. | 1710.04110#39 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
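The time-scale ladder and bias initialization described in the chunk above are easy to compute explicitly. A short sketch with hypothetical function names, following the rule that consecutive scales differ by a factor of 10^(1/2) and that the storage/retrieval biases start at the log of the geometric mid-point of the scale range.

```python
import numpy as np

def ct_gru_time_scales(shortest, longest):
    """Half-decade ladder of time scales: tau[i+1] = 10**0.5 * tau[i]."""
    n = int(np.ceil(np.log10(longest / shortest) / 0.5)) + 1
    return shortest * 10 ** (0.5 * np.arange(n))

scales = ct_gru_time_scales(1.0, 1000.0)              # e.g. 1, 3.16, 10, ..., 1000
bias_init = np.log(np.sqrt(scales[0] * scales[-1]))   # mid-range bias, in log scale
print(len(scales), np.round(scales, 2), round(float(bias_init), 3))
```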
1710.04087 | 40 | The hubness problem for cross-lingual word embedding spaces was investigated by Dinu et al. (2015). The authors added a correction to the word retrieval algorithm by incorporating a nearest neighbors reciprocity term. More similar to our cross-domain similarity local scaling approach, Smith et al. (2017) introduced the inverted-softmax to down-weight similarities involving often- retrieved hub words. Intuitively, given a query source word and a candidate target word, they esti- mate the probability that the candidate translates back to the query, rather than the probability that the query translates to the candidate. | 1710.04087#40 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
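The inverted softmax mentioned in the chunk above normalizes over source queries rather than target candidates, which is what "estimating the probability that the candidate translates back to the query" amounts to. A small sketch following the formulation in Smith et al. (2017); the temperature value here is only illustrative.

```python
import numpy as np

def inverted_softmax(cos, beta=30.0):
    """Inverted-softmax retrieval scores.

    cos: (n_src, n_tgt) cosine similarities between mapped source and target words.
    Each column (candidate target y) is normalized over the source queries,
    which down-weights hub targets that are close to many queries at once.
    """
    e = np.exp(beta * cos)
    return e / e.sum(axis=0, keepdims=True)
```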
1710.04110 | 40 | The CT-GRU U â, W â, and bQ parameters are initialized in the same manner as the GRU. The bS and bR parameters are initialized to ln(ËÏ1ËÏM )1/2 to bias the storage and retrieval scales to the middle of the scale range.
# A.2 Data sets
We explored a total of 11 data sets, 6 synthetic and 5 natural.
# A.2.1 Synthetic Data Sets
All synthetic data sets consisted of 10,000 training and 10,000 testing examples.
Working memory. We devised a simple task requiring a duration-limited or working memory. The input sequence consists of commands to store a symbol (a, b, or c) for a short (s), medium (m), or long (l) time interval: 1, 10, or 100 time units, respectively. The input sequence also contains the symbols a-c in isolation to probe the memory for whether the
# Mozer, Kazakov, & Lindsey | 1710.04110#40 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
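A simplified generator for the working-memory task described in the chunk above: two distinct symbols are stored with durations drawn from the short/medium/long set, and one of them is probed later. The lags and balancing here are illustrative; the authors balanced positives and negatives by choosing the lags, which this sketch does not reproduce.

```python
import random

DURATIONS = {"s": 1.0, "m": 10.0, "l": 100.0}

def make_working_memory_example(rng=random.Random(0)):
    """Generate one (events, label) working-memory trial for illustration."""
    a, b = rng.sample(["a", "b", "c"], 2)
    dur_a, dur_b = rng.choice("sml"), rng.choice("sml")
    t1 = rng.uniform(0.5, 5.0)      # gap between the two store commands
    t2 = rng.uniform(0.5, 150.0)    # gap between the second store and the probe
    probe = rng.choice([a, b])
    events = [(0.0, f"store:{a}:{dur_a}"),
              (t1, f"store:{b}:{dur_b}"),
              (t1 + t2, f"probe:{probe}")]
    expiry = {a: DURATIONS[dur_a], b: t1 + DURATIONS[dur_b]}
    label = int(t1 + t2 <= expiry[probe])   # 1 if the probed symbol is still stored
    return events, label
```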
1710.04087 | 41 | Recent work by Smith et al. (2017) leveraged identical character strings in both source and target languages to create a dictionary with low supervision, on which they applied the Procrustes al- gorithm. Similar to this approach, recent work by Artetxe et al. (2017) used identical digits and numbers to form an initial seed dictionary, and performed an update similar to our reï¬nement step, but iteratively until convergence. While they showed they could obtain good results using as little as twenty parallel words, their method still needs cross-lingual information and is not suitable for languages that do not share a common alphabet. For instance, the method of Artetxe et al. (2017) on our dataset does not work on the word translation task for any of the language pairs, because the digits were ï¬ltered out from the datasets used to train the fastText embeddings. This iterative EM- based algorithm initialized with a seed lexicon has also been explored in other studies (Haghighi et al., 2008; Kondrak et al., 2017). | 1710.04087#41 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
1710.04110 | 41 | symbol is currently stored. For example, consider the sequence: {0, m}, {0, b}, {5, b}, where {t, x} denotes event x in the input sequence at time t. The first 2 events instruct the memory to store b for 10 time units. The third event probes for b at time 5. This probe should produce a response of 1, whereas queries {25, b} or {5, a} should produce a response of 0. The specific form of sequences generated consisted of two commands to store distinct symbols, separated in time by t1 units, followed by a probe of one of the symbols following t2 units. The lags t1 and t2 were chosen in order to balance the training and test sets with half positive and half negative examples. Only fifteen hidden units were used for this task in order to interpret model behavior.
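For illustration, a minimal generator of trials of this form might look as follows (a sketch of the description above; the exact lag-balancing scheme is not specified here, so lags are simply sampled at random):

```python
import random

DURATION = {'s': 1, 'm': 10, 'l': 100}   # store for short / medium / long

def make_trial(rng):
    # Two store commands for distinct symbols, then a probe of one of them.
    sym1, sym2 = rng.sample(['a', 'b', 'c'], 2)
    d1, d2 = rng.choice('sml'), rng.choice('sml')
    t1 = rng.choice([1, 10, 100])    # lag between the two store commands
    t2 = rng.choice([1, 10, 100])    # lag from the second store to the probe
    probe_first = rng.random() < 0.5
    probe_sym = sym1 if probe_first else sym2
    probe_time = t1 + t2
    events = [(0, d1), (0, sym1), (t1, d2), (t1, sym2), (probe_time, probe_sym)]
    # Target: is the probed symbol still held in memory at probe time?
    held = (probe_time <= DURATION[d1]) if probe_first else (t2 <= DURATION[d2])
    return events, int(held)

trial, target = make_trial(random.Random(0))
```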
1710.04087 | 42 | There have been a few attempts to align monolingual word vector spaces with no supervision. Similar to our work, Zhang et al. (2017b) employed adversarial training, but their approach is different from ours in multiple ways. First, they rely on sharp drops of the discriminator accuracy for model selection. In our experiments, their model selection criterion does not correlate with the overall model performance, as shown in Figure 2. Furthermore, it does not allow for hyper-parameter tuning, since it selects the best model over a single experiment. We argue this is a serious limitation, since the best hyper-parameters vary significantly across language pairs. Despite considering small vocabularies of a few thousand words, their method obtained weak results compared to supervised approaches. More recently, Zhang et al. (2017a) proposed to minimize the earth-mover distance after adversarial training. They compare their results only to their supervised baseline trained with a small seed lexicon, which is one to two orders of magnitude smaller than what we report here.
1710.04110 | 42 | Cluster. We generated sequences of 100 events drawn uniformly from 12 labels, a-l, with inter-event times drawn from an exponential distribution with mean 1. The task is to classify the sequence depending on the occurrence of events a, b, and c in any order within a 6 time unit window. The data set was balanced with half positive and half negative examples. The positive examples had one or more occurrences of the target pattern at a random position within the sequence. We tested a 20 hidden unit architecture.
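The labelling rule for the Cluster task can be stated compactly; the following brute-force sketch is one reading of it (function name and signature are placeholders):

```python
def cluster_label(events, targets=('a', 'b', 'c'), window=6.0):
    """events: list of (time, label). Returns 1 if every target label occurs
    at least once within some span of length `window`, else 0."""
    times = {lab: [t for t, l in events if l == lab] for lab in targets}
    if any(len(ts) == 0 for ts in times.values()):
        return 0
    # Anchor the window at each occurrence of a target label: if a qualifying
    # window exists, anchoring it at its earliest target event also qualifies.
    anchors = sorted(t for ts in times.values() for t in ts)
    for a in anchors:
        if all(any(a <= t <= a + window for t in ts) for ts in times.values()):
            return 1
    return 0
```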
Remembering. We generated sequences of 100 events drawn uniformly from 12 labels with inter-event time lags drawn uniformly from {1, 10, 100}. Each time a symbol is presented, the task is to remember that symbol for 310 time steps. If the next occurrence of the symbol is within this threshold, the target output for that symbol should be 1, otherwise 0. The threshold of 310 time steps was chosen in order that the target outputs are roughly balanced. The target output for the first presentation of a symbol is 0. We tested 20 and 40 hidden unit architectures.
1710.04087 | 43 | # 6 CONCLUSION
In this work, we show for the first time that one can align word embedding spaces without any cross-lingual supervision, i.e., solely based on unaligned datasets of each language, while reaching or outperforming the quality of previous supervised approaches in several cases. Using adversarial training, we are able to initialize a linear mapping between a source and a target space, which we also use to produce a synthetic parallel dictionary. It is then possible to apply the same techniques proposed for supervised approaches, namely a Procrustean optimization. Two key ingredients contribute to the success of our approach: First, we propose a simple criterion that is used as an effective unsupervised validation metric. Second, we propose the similarity measure CSLS, which mitigates the hubness problem and drastically increases the word translation accuracy. As a result, our approach produces high-quality dictionaries between different pairs of languages, with up to 83.3% on the Spanish-English word translation task. This performance is on par with supervised approaches. Our method is also effective on the English-Esperanto pair, thereby showing that it works for low-resource language pairs, and can be used as a first step towards unsupervised machine translation.
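As a reminder of what CSLS computes: each cosine similarity between a mapped source word and a target word is discounted by the mean similarity of each word to its k nearest neighbours in the other language, which penalises hubs. A brute-force NumPy sketch of such a score (an illustration assuming L2-normalised vectors and a small vocabulary, not the released implementation):

```python
import numpy as np

def csls(Wx, Y, k=10):
    """Wx: (n_src, d) mapped source vectors; Y: (n_tgt, d) target vectors.
    Both are assumed L2-normalised, so dot products are cosine similarities."""
    sims = Wx @ Y.T                                      # (n_src, n_tgt)
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # mean sim of each source word to its k nearest targets
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # mean sim of each target word to its k nearest sources
    return 2 * sims - r_src[:, None] - r_tgt[None, :]    # argmax over axis 1 gives the retrieved translation
```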
1710.04110 | 43 | Rhythm. This classification task involved sequences of 100 symbols drawn uniformly from A-D and terminated by E. The target output at the end of the sequence is 1 if the sequence follows a fixed rhythmic pattern, such that the lags following A-D are 1, 2, 4, and 8, respectively. The positive sequences follow the pattern exactly. The negative sequences double or halve between one and four of the lags. The training and test sets are balanced between positive and negative examples. Note that this task cannot be performed above chance without knowing the inter-event lags. We tested 20 and 40 hidden unit architectures. Hawkes process. We generated interspersed event sequences for 12 labels from independent Hawkes processes. A Hawkes process is a self-excitatory point process whose intensity (event rate) at time t depends on its history: λ(t) = µ + (α/τ) ∑_{t_i < t} e^{−(t − t_i)/τ}, where {t_i} is the set of previously generated event times. Using the algorithm of (2013), we synthesized sequences with
1710.04087 | 44 | # ACKNOWLEDGMENTS
We thank Juan Miguel Pino, Moustapha Cissé, Nicolas Usunier, Yann Ollivier, David Lopez-Paz, Alexandre Sablayrolles, and the FAIR team for useful comments and discussions.
REFERENCES
Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925, 2016.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. Proceedings of EMNLP, 2016.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 451–462. Association for Computational Linguistics, 2017.
1710.04110 | 44 | α = .5, µ = .02, and τ ∈ {1, 2, 4, 8, ..., 4096}. For each sequence, we assigned a random permutation of the possible τ scales to event labels. The intensity function ensures that the event rate is identical across scales, but labels with shorter time constants are more concentrated and bursty. The task here is to predict the next event label given the time to the next event, Δt, and the complete event history. Sequences ranged from 240 to 1020 events. Optimal performance for this data set was determined via maximum likelihood inference on the parameters of the model that generated the data. We tested 10, 20, 40, and 80 hidden unit architectures.
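To make the generative process concrete, the following is a generic thinning-style simulator for one such univariate Hawkes process (a sketch of Ogata-style thinning under the intensity given above; it is not necessarily the synthesis algorithm cited in the text):

```python
import numpy as np

def simulate_hawkes(mu, alpha, tau, n_events, seed=0):
    """Sample event times for intensity lam(t) = mu + (alpha/tau) * sum_i exp(-(t - t_i)/tau)."""
    rng = np.random.default_rng(seed)
    times = []
    t = 0.0
    while len(times) < n_events:
        hist = np.asarray(times)
        # Current intensity; it upper-bounds lam(s) for s > t until the next accepted event.
        lam_bar = mu + (alpha / tau) * np.exp(-(t - hist) / tau).sum()
        t += rng.exponential(1.0 / lam_bar)
        lam_t = mu + (alpha / tau) * np.exp(-(t - hist) / tau).sum()
        if rng.uniform() * lam_bar <= lam_t:   # accept candidate with probability lam(t)/lam_bar
            times.append(t)
    return np.asarray(times)

# e.g. simulate_hawkes(mu=0.02, alpha=0.5, tau=8.0, n_events=500)
```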
1710.04087 | 45 | Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209–226, 2009.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146, 2017.
José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. Artificial Intelligence, 240:36–64, 2016.
Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. Semeval-2017 task 2: Multilingual and cross-lingual semantic word similarity. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval 2017), 2017.
1710.04110 | 45 | Disperse. We generated sequences of 100 events drawn from 12 labels, a-l, with inter-event times drawn from an exponential distribution with mean 1. The task is to classify a sequence according to whether a and b occur separated by 10 time units anywhere in the sequence. The target output is 1 if they occur at a lag ranging in [9, 11], or 0 otherwise. The training and test sets are balanced with half positive and half negative examples. We tested 20 and 40 hidden unit architectures.
1710.04087 | 46 | Hailong Cao, Tiejun Zhao, Shu Zhang, and Yao Meng. A distribution-based model to learn bilingual word embeddings. Proceedings of COLING, 2016.
Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. International Conference on Machine Learning, pp. 854–863, 2017.
Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. Improving zero-shot learning by mitigating the hubness problem. International Conference on Learning Representations, Workshop Track, 2015.
Qing Dou, Ashish Vaswani, Kevin Knight, and Chris Dyer. Unifying bayesian inference and vector space models for improved decipherment. 2015.
Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. Learning crosslingual word embeddings without bilingual corpora. Proceedings of EMNLP, 2016.
Manaal Faruqui and Chris Dyer. Improving vector space word representations using multilingual correlation. Proceedings of EACL, 2014.
Pascale Fung. Compiling bilingual lexicon entries from a non-parallel english-chinese corpus. In Proceedings of the Third Workshop on Very Large Corpora, pp. 173–183, 1995.
1710.04110 | 46 | A.2.2 Naturalistic data sets
Reddit. We collected sequences of subreddit postings from 30,733 users, and divided the users into 15,000 for training and 15,733 for testing. The posting sequences ranged from 30 subreddits to 976, with a mean length of 61.0. (We excluded users who posted fewer than 30 times.) Each posting was considered an event and the task is to predict the next event label, i.e., the next subreddit to which the user will post. To focus on the temporal pattern of selections rather than the popularity of specific subreddits, we re-indexed each sequence such that each subreddit was mapped to the order in which it appeared in a sequence. Consequently, the first posting for any user will correspond to label 1; the second posting could either be a repetition of 1 or a new subreddit, 2. If the user posted to more than 50 subreddits, the 51st and beyond were assigned to label 50. Baseline performance is obtained by predicting that event k + 1 will be the same as event k.
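The re-indexing step described here (and reused for the Last.fm data below) amounts to the following sketch (an illustration; `max_labels` plays the role of the 50-label cap):

```python
def reindex_by_first_appearance(events, max_labels=50):
    """Map each distinct item to the order of its first appearance (1, 2, ...),
    collapsing anything beyond `max_labels` onto the last label."""
    first_seen = {}
    out = []
    for item in events:
        if item not in first_seen:
            first_seen[item] = min(len(first_seen) + 1, max_labels)
        out.append(first_seen[item])
    return out

# reindex_by_first_appearance(['r/aww', 'r/aww', 'r/python', 'r/aww'])  ->  [1, 1, 2, 1]
```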
1710.04087 | 47 | Pascale Fung and Lo Yuen Yee. An IR approach for translating new words from nonparallel, comparable texts. In Proceedings of the 17th International Conference on Computational Linguistics - Volume 1, COLING '98, pp. 414–420. Association for Computational Linguistics, 1998.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
Ian Goodfellow. Nips 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, pp. 2672–2680, 2014.
1710.04110 | 47 | Last.fm. We collected sequences of musical artist selections from 30,000 individuals, split evenly into training and testing sets. We picked a span of time wide enough to encompass exactly 300 selections. This span ranged from under an hour to more than six years, with a mean span of 76.3 days. To focus on the temporal pattern of selections rather than the popularity of specific artists, we re-indexed each sequence such that each artist was mapped to the order in which it appeared in a sequence. Any sequence with more than 50 distinct artists was rejected. Baseline performance is obtained by predicting that event k + 1 will be the same as event k.
Msnbc. This data set was obtained from the UCI repository and consists of the sequence of requests a user makes for web pages on the MSNBC site. The pages are classified into one of 17 categories, such as frontpage, news, tech, local. The sequences ranged from 9 selections to 99 selections with a mean length of 17.6. Unfortunately, time tags were not available for these data, and thus we treated the event sequences as ordinal sequences. We were interested in including one data set with ordinal sequences in order to examine whether such sequences might show an advantage or disadvantage for the CT-GRU. Baseline performance is obtained by predicting that event k + 1 will be the same as event k.
1710.04087 | 48 | Stephan Gouws, Yoshua Bengio, and Greg Corrado. Bilbowa: Fast bilingual distributed representations without word alignments. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 748–756, 2015.
Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. Learning bilingual lexicons from monolingual corpora. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, 2008.
Zellig S Harris. Distributional structure. Word, 10(2-3):146–162, 1954.
Ann Irvine and Chris Callison-Burch. Supervised bilingual lexicon induction with multiple monolingual signals. In HLT-NAACL, 2013.
Herve Jegou, Cordelia Schmid, Hedi Harzallah, and Jakob Verbeek. Accurate image search using the contextual dissimilarity measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1):2–11, 2010.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734, 2017.
1710.04110 | 48 | Spanish. This data set consists of retrieval practice trials from 180 native English speaking students studying 221 Spanish language vocabulary items over the time span of a semester (Lindsey et al., 2014). On each trial, students were shown an English word or phrase to translate to Spanish, and correct or incorrect performance was recorded. The sequences consist of a student's entire study history for a single item, and the task is to predict trial-to-trial accuracy. The data set consists of 37601 sequences split randomly into 18800 for training and 18801 for testing. Sequences had a mean length of 15.9 and a maximum length of 190. The input consisted of 221 × 2 units, each of which represents the current trial: the Cartesian product of item practiced and incorrect/correct performance. The output consisted of 221 logistic units with 0/1 values for the prediction of incorrect/correct performance on each of the 221 items. Training and test set error is based only on the item actually practiced. Baseline performance is obtained by predicting that the accuracy of a student's response on trial k + 1 is the same as on trial k.
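One plausible encoding of a single trial under this scheme is sketched below (the ordering of the 442 input units and the masking convention are assumptions for illustration, not details given in the text):

```python
import numpy as np

N_ITEMS = 221

def encode_trial(item_idx, correct):
    """One-hot over the (item, incorrect/correct) Cartesian product -> 2 * 221 inputs."""
    x = np.zeros(2 * N_ITEMS)
    x[2 * item_idx + int(correct)] = 1.0
    return x

# The 221 output units would be trained with a mask so that only the unit for
# the item actually practiced contributes to the loss.
```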
1710.04087 | 49 | Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. Inducing crosslingual distributed representations of words. Proceedings of COLING, pp. 1459–1474, 2012.
Philipp Koehn and Kevin Knight. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition - Volume 9, pp. 9–16. Association for Computational Linguistics, 2002.
Grzegorz Kondrak, Bradley Hauer, and Garrett Nicolai. Bootstrapping unsupervised bilingual lexicon induction. In EACL, 2017.
Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, 2015.
Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. Advances in neural information processing systems, pp. 2177–2185, 2014.
Thang Luong, Richard Socher, and Christopher D Manning. Better word representations with recursive neural networks for morphology. CoNLL, pp. 104–113, 2013.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. Proceedings of Workshop at ICLR, 2013a.
1710.04110 | 49 | Japanese. This data set is from a controlled laboratory study of learning Japanese vocabulary with 32 participants studying 60 vocabulary items over an 84 day period, with times between practice trials ranging from seconds to 50 days. For this data set, we formed one sequence per subject; the sequences ranged from 654 to 659 trials. Because of the small number of subjects, we made an 8-fold split, each time training on 25 subjects, validating on 3, and testing on the remaining 4. Baseline performance is obtained by predicting that the accuracy of a student's response on trial k + 1 is the same as on trial k.
1710.04087 | 50 | Tomas Mikolov, Quoc V Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168, 2013b.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, pp. 3111–3119, 2013c.
Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English gigaword. Linguistic Data Consortium, 2011.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. Proceedings of EMNLP, 14:1532–1543, 2014.
N. Pourdamghani and K. Knight. Deciphering related languages. In EMNLP, 2017.
Miloš Radovanović, Alexandros Nanopoulos, and Mirjana Ivanović. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11(Sep):2487–2531, 2010.
1710.04087 | 51 | Reinhard Rapp. Identifying word translations in non-parallel texts. In Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, ACL '95, pp. 320–322. Association for Computational Linguistics, 1995.
Reinhard Rapp. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, ACL '99. Association for Computational Linguistics, 1999.
S. Ravi and K. Knight. Deciphering foreign language. In ACL, 2011.
Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover's distance as a metric for image retrieval. International journal of computer vision, 40(2):99–121, 2000.
Charles Schafer and David Yarowsky. Inducing translation lexicons via diverse similarity measures and bridge languages. In Proceedings of the 6th Conference on Natural Language Learning - Volume 20, COLING-02. Association for Computational Linguistics, 2002.
1710.04087 | 52 | Peter H Schönemann. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10, 1966.
Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. International Conference on Learning Representations, 2017.
Jörg Tiedemann. Parallel data, tools and interfaces in OPUS. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey, May 2012. European Language Resources Association (ELRA). ISBN 978-2-9517408-7-7.
Shinji Umeyama. An eigendecomposition approach to weighted graph matching problems. IEEE transactions on pattern analysis and machine intelligence, 10(5):695–703, 1988.
Ivan Vulic and Marie-Francine Moens. Bilingual word embeddings from non-parallel document-aligned data applied to bilingual lexicon induction. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015), pp. 719–725, 2015.
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
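The summary attached to the record above describes aligning two monolingual embedding spaces without any parallel data. A minimal sketch of the adversarial part of that idea follows; the embeddings, network sizes, and hyper-parameters are toy placeholders, and the full method additionally keeps the mapping near-orthogonal and refines it with a Procrustes step.

```python
import torch
import torch.nn as nn

# Sketch only (not the authors' released code): a linear map W is trained so that a
# discriminator cannot tell mapped source embeddings from target embeddings.
d = 300
src = torch.randn(5000, d)   # hypothetical monolingual source embeddings
tgt = torch.randn(5000, d)   # hypothetical monolingual target embeddings

W = nn.Linear(d, d, bias=False)                    # the mapping to be learned
disc = nn.Sequential(nn.Linear(d, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    xs = src[torch.randint(len(src), (32,))]
    xt = tgt[torch.randint(len(tgt), (32,))]
    # 1) discriminator learns to label mapped-source as 0 and target as 1
    d_loss = bce(disc(W(xs).detach()), torch.zeros(32, 1)) + bce(disc(xt), torch.ones(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) mapping tries to fool the discriminator (mapped-source labeled 1)
    w_loss = bce(disc(W(xs)), torch.ones(32, 1))
    opt_w.zero_grad(); w_loss.backward(); opt_w.step()
```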
1710.04110 | 52 | Pierre Buyssens, Abderrahim Elmoataz, and Olivier Lézoray. Multiscale convolutional neural networks for vision-based classification of cells. In Kyoung Mu Lee, Yasuyuki Matsushita, James M. Rehg, and Zhanyi Hu, editors, Computer Vision – ACCV 2012: 11th Asian Conference on Computer Vision, Daejeon, Korea, November 5-9, 2012, Revised Selected Papers, Part II, pages 342–352, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg. doi: 10.1007/978-3-642-37444-9_27.
Edward Choi, Mohammad Taha Bahadori, Andy Schuetz, Walter F. Stewart, and Jimeng Sun. Doctor AI: Predicting clinical events via recurrent neural networks. In Finale Doshi-Velez, Jim Fackler, David Kale, Byron Wallace, and Jenna Wiens, editors, Proceedings of the 1st Machine Learning for Healthcare Conference, volume 56 of Proceedings of Machine Learning Research, pages 301–318, Northeastern University, Boston, MA, USA, 18–19 Aug 2016. PMLR. | 1710.04110#52 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 53 | Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. Normalized word embedding and orthogonal transform for bilingual word translation. Proceedings of NAACL, 2015.
Lihi Zelnik-manor and Pietro Perona. Self-tuning spectral clustering. In L. K. Saul, Y. Weiss, and L. Bottou (eds.), Advances in Neural Information Processing Systems 17, pp. 1601–1608. MIT Press, 2005.
Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. Earth mover's distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1924–1935. Association for Computational Linguistics, 2017a.
Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. Adversarial training for unsupervised bilingual lexicon induction. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, 2017b.
Will Y Zou, Richard Socher, Daniel M Cer, and Christopher D Manning. Bilingual word embeddings for phrase-based machine translation. Proceedings of EMNLP, 2013.
# 7 APPENDIX | 1710.04087#53 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 53 | Junyoung Chung, Çağlar Gülçehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014. URL http://arxiv.org/abs/1412.3555.
Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 2067–2075. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045338.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016. URL http://arxiv.org/abs/1609.01704.
| 1710.04110#53 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 54 |
# 7 APPENDIX
In order to gain a better understanding of the impact of using similar corpora or similar word embedding methods, we investigated merging two English monolingual embedding spaces using either Wikipedia or the Gigaword corpus (Parker et al. (2011)), and either Skip-Gram, CBOW or fastText methods (see Figure 3).
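Merging two embedding spaces as described above requires a linear map from one space to the other; given a set of matched word pairs, the orthogonal Procrustes solution (cf. the Schönemann reference cited earlier in this dump) gives such a map in closed form. A minimal sketch with synthetic data, assuming row-wise paired embedding matrices:

```python
import numpy as np

def procrustes_align(X, Y):
    """Closed-form orthogonal map W minimizing ||X @ W.T - Y||_F.

    X, Y: (n, d) arrays of paired embeddings (source rows aligned with target rows).
    Returns an orthogonal (d, d) matrix W such that X @ W.T approximates Y.
    """
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

# Hypothetical toy usage: align a randomly rotated copy of a space back onto itself.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))   # random orthogonal "ground-truth" rotation
Y = X @ Q.T
W = procrustes_align(X, Y)
print(np.allclose(X @ W.T, Y, atol=1e-6))        # the recovered mapping reproduces Y
```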
[Figure 3 bar charts: word translation accuracy (%) per word-frequency bin (5k-7k, 10k-12k, 50k-52k, 100k-102k, 150k-152k), comparing NN and CSLS retrieval.]
(a) skip-gram-seed1(Wiki) → skip-gram-seed2(Wiki) | 1710.04087#54 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 54 |
Zhicheng Cui, Wenlin Chen, and Yixin Chen. Multi-scale convolutional neural networks for time series classification. CoRR, abs/1603.06995, 2016. URL http://arxiv.org/abs/1603.06995.
Hanjun Dai, Yichen Wang, Rakshit Trivedi, and Le Song. Recurrent coevolutionary latent feature processes for continuous-time recommendation. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, DLRS 2016, pages 29–34, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4795-2. doi: 10.1145/2988450.2988451. URL http://doi.acm.org/10.1145/2988450.2988451.
Angelos Dassios and Hongbiao Zhao. Exact simulation of Hawkes process with exponentially decaying intensity. Electron. Commun. Probab., 18:13 pp., 2013. doi: 10.1214/ECP.v18-2717. URL http://dx.doi.org/10.1214/ECP.v18-2717. | 1710.04110#54 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 55 | (a) skip-gram-seed1(Wiki) → skip-gram-seed2(Wiki)
# (b) skip-gram(Wiki) → CBOW(Wiki)
[Figure 3 bar charts (continued): word translation accuracy (%) per word-frequency bin, comparing NN and CSLS retrieval.]
(c) fastText(Wiki) → fastText(Giga) (d) skip-gram(Wiki) → fastText(Giga) | 1710.04087#55 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 55 | Nan Du, Yichen Wang, Niao He, and Le Song. Time-sensitive recommendation from recurrent user activities. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15, pages 3492–3500, Cambridge, MA, USA, 2015. MIT Press. URL http://dl.acm.org/citation.cfm?id=2969442.2969629.
Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 1555–1564, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939875. URL http://doi.acm.org/10.1145/2939672.2939875.
Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36:193–202, 1980. | 1710.04110#55 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 56 | (c) fastText(Wiki) → fastText(Giga) (d) skip-gram(Wiki) → fastText(Giga)
Figure 3: English to English word alignment accuracy. Evolution of word translation retrieval accuracy with regard to word frequency, using either Wikipedia (Wiki) or the Gigaword corpus (Giga), and either skip-gram, continuous bag-of-words (CBOW) or fastText embeddings. The model can learn to perfectly align embeddings trained on the same corpus but with different seeds (a), as well as embeddings learned using different models (overall, when employing CSLS which is more accurate on rare words) (b). However, the model has more trouble aligning embeddings trained on different corpora (Wikipedia and Gigaword) (c). This can be explained by the difference in co-occurrence statistics of the two corpora, particularly on the rarer words. Performance can be further deteriorated by using both different models and different types of corpus (d). | 1710.04087#56 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
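The Figure 3 caption in the record above contrasts plain nearest-neighbor (NN) retrieval with CSLS. A minimal sketch of CSLS scoring as defined in the paper (twice the cosine similarity, minus the average similarity of each point to its K nearest neighbors in the other domain, with K = 10), on hypothetical pre-normalized embeddings:

```python
import numpy as np

def csls_scores(S, T, k=10):
    """Cross-domain similarity local scaling between mapped source rows S and target rows T.

    S, T: L2-normalized (n_s, d) and (n_t, d) arrays. Returns an (n_s, n_t) score matrix
    CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y), where r_* is the mean cosine similarity
    of a point to its k nearest neighbors in the other domain.
    """
    cos = S @ T.T                                       # (n_s, n_t) cosine similarities
    r_T = np.sort(cos, axis=1)[:, -k:].mean(axis=1)     # avg sim of each source word to its k NN targets
    r_S = np.sort(cos, axis=0)[-k:, :].mean(axis=0)     # avg sim of each target word to its k NN sources
    return 2 * cos - r_T[:, None] - r_S[None, :]

# Hypothetical usage: translate each source word by the highest-scoring target word.
rng = np.random.default_rng(0)
S = rng.normal(size=(100, 50)); S /= np.linalg.norm(S, axis=1, keepdims=True)
T = rng.normal(size=(200, 50)); T /= np.linalg.norm(T, axis=1, keepdims=True)
translations = csls_scores(S, T).argmax(axis=1)         # index of best target word per source word
```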
1710.04110 | 56 | Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Comput., 12(10):2451–2471, October 2000. ISSN 0899-7667. doi: 10.1162/089976600300015015. URL http://dx.doi.org/10.1162/089976600300015015.
D. M. Green and J. A. Swets. Signal detection theory and psychophysics. John Wiley and Sons, New York, 1966.
Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. Session-based recommendations with recurrent neural networks. CoRR, abs/1511.06939, 2015. URL http://arxiv.org/abs/1511.06939.
Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. Int. J. Uncertain. Fuzziness Knowl.-Based Syst., 6(2):107–116, April 1998. ISSN 0218-4885. doi: 10.1142/S0218488598000094. URL http://dx.doi.org/10.1142/S0218488598000094. | 1710.04110#56 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04087 | 57 | mi kelkfoje parolas kun mia najbaro tra la barilo .
sorry sometimes speaks with my neighbor across the barrier .
i sometimes talk to my neighbor across the fence .
la viro malantaŭ ili ludas la pianon .
the man behind they plays the piano .
the man behind them is playing the piano .
bonvole protektu min kontraŭ tiuj malbonaj viroj .
gratefully protects hi against those worst men .
please defend me from such bad men .
Table 6: Esperanto-English. Examples of fully unsupervised word-by-word translations. The translations reflect the meaning of the source sentences, and could potentially be improved using a simple language model.
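The Table 6 outputs are produced by translating each source token independently. A minimal sketch of that decoding step, using a small hypothetical dictionary in place of one induced from the aligned embedding spaces:

```python
# Sketch of word-by-word decoding as illustrated in Table 6; the toy dictionary below is
# hypothetical, whereas the real one would be extracted via CSLS retrieval over the
# aligned embedding spaces.
toy_dictionary = {
    "mi": "i", "kelkfoje": "sometimes", "parolas": "talk",
    "kun": "with", "mia": "my", "najbaro": "neighbor",
    "tra": "across", "la": "the", "barilo": "fence",
}

def translate_word_by_word(sentence, dictionary):
    """Replace each token with its dictionary translation, keeping unknown tokens as-is."""
    return " ".join(dictionary.get(tok, tok) for tok in sentence.split())

print(translate_word_by_word("mi kelkfoje parolas kun mia najbaro tra la barilo .", toy_dictionary))
# -> "i sometimes talk with my neighbor across the fence ."
```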
| 1710.04087#57 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 | [
{
"id": "1701.00160"
},
{
"id": "1602.01925"
},
{
"id": "1702.08734"
}
] |
1710.04110 | 57 | Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In J. F. Kolen and S. Kremer, editors, A field guide to dynamical recurrent neural networks. IEEE Press, Los Alamitos, 2001.
Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350, 2015.
Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. CoRR, abs/1404.2188, 2014. URL http://arxiv.org/abs/1404.2188. | 1710.04110#57 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |