doi (stringlengths 10-10) | chunk-id (int64 0-936) | chunk (stringlengths 401-2.02k) | id (stringlengths 12-14) | title (stringlengths 8-162) | summary (stringlengths 228-1.92k) | source (stringlengths 31-31) | authors (stringlengths 7-6.97k) | categories (stringlengths 5-107) | comment (stringlengths 4-398, ⌀) | journal_ref (stringlengths 8-194, ⌀) | primary_category (stringlengths 5-17) | published (stringlengths 8-8) | updated (stringlengths 8-8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1711.05101 | 57 | SuppFigure 4 is the equivalent of Figure 3 in the main paper, but for ImageNet32x32 instead of CIFAR-10. The qualitative results are identical: weight decay leads to better training loss (cross-entropy) than L2 regularization, and to an even greater improvement of test error.
SuppFigure 5 and SuppFigure 6 are the equivalents of Figure 4 in the main paper but supplemented with training loss curves in the bottom row. The results show that Adam and its variants with decoupled weight decay converge faster (in terms of training loss) on CIFAR-10 than the corresponding SGD variants (the difference for ImageNet32x32 is small). As discussed in the main paper, when the same values of training loss are considered, AdamW demonstrates better test error than Adam. Interestingly, SuppFigure 5 and SuppFigure 6 show that the restart variants AdamWR and SGDWR also demonstrate better generalization than AdamW and SGDW, respectively (a sketch of the decoupled update is given below).
[Figure (heatmap): "Adam without cosine annealing"; final test error over initial learning rate (1e-2 to 1e-6) vs. L2 regularization factor] | 1711.05101#57 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
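The abstract above describes decoupling the weight decay from the gradient-based update. As a minimal, hedged sketch (NumPy pseudocode with illustrative hyper-parameter values, not the authors' released implementation), the two update rules can be contrasted as follows:

```python
import numpy as np

def adam_l2_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, l2=1e-4):
    """Adam with L2 regularization: the penalty is folded into the gradient,
    so it is rescaled by the adaptive denominator like any other gradient term."""
    g = grad + l2 * w                              # L2 term enters the moment estimates
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def adamw_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
               wd=1e-4, schedule=1.0):
    """AdamW: the weight decay is decoupled and applied directly to the weights,
    outside the adaptive gradient update (both terms scaled by the schedule)."""
    m = beta1 * m + (1 - beta1) * grad             # moments see only the loss gradient
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - schedule * (lr * m_hat / (np.sqrt(v_hat) + eps) + wd * w), m, v
```

SGDW follows the same pattern for SGD with momentum: the momentum buffer accumulates only the loss gradient, and the decay term wd * w is subtracted from the weights separately.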
1711.05101 | 58 | SuppFigure 2: Performance of "standard Adam": Adam with L2 regularization and a fixed learning rate. We show the final test error of a 26 2x96d ResNet on CIFAR-10 after 1800 epochs of the original Adam for different settings of learning rate and weight decay used for L2 regularization.
[Figure (heatmaps): "AdamW on CIFAR-10 after 25 / 100 / 400 epochs"; final test error over initial learning rate (×0.01) vs. raw weight decay (×0.001)] | 1711.05101#58 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 59 | [Figure (heatmaps): "AdamW on CIFAR-10 after 25 / 100 epochs", "AdamW on ImageNet32x32 after 1 / 16 epochs", and "SGDW on ImageNet32x32 after 16 epochs"; final test error over initial learning rate (×0.01) vs. raw or normalized weight decay] | 1711.05101#59 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 60 | [Figure (heatmaps): "AdamW on ImageNet32x32 after 16 / 64 epochs", "SGDW on ImageNet32x32 after 16 / 64 epochs", and "AdamW on CIFAR-10 after 400 epochs"; final test error over initial learning rate (×0.01) vs. normalized weight decay] | 1711.05101#60 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 61 | [Figure (heatmaps): "AdamW on CIFAR-10 after 25 / 100 / 400 epochs" and "AdamW on ImageNet32x32 after 16 / 64 epochs"; final test error over initial learning rate (×0.01) vs. normalized weight decay] | 1711.05101#61 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 62 | [Figure (heatmaps): "AdamW on ImageNet32x32 after 1 epoch" and "SGDW on ImageNet32x32 after 16 / 64 epochs"; final test error over initial learning rate (×0.01) vs. normalized weight decay] | 1711.05101#62 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 63 | SuppFigure 3: Effect of normalized weight decay. We show the final test Top-1 error on CIFAR-10 (first two rows, for AdamW without and with normalized weight decay) and Top-5 error on ImageNet32x32 (last two rows, for AdamW and SGDW, both with normalized weight decay) of a 26 2x64d ResNet after different numbers of epochs (see columns). While the optimal settings of the raw weight decay change significantly for different runtime budgets (see the first row), the values of the normalized weight decay remain very similar across budgets (see the second row), across datasets (here, CIFAR-10 and ImageNet32x32), and even across AdamW and SGDW (a sketch of this normalization follows below).
[Figure: Adam and AdamW with LR=0.001 and different weight decays; training loss (cross-entropy) vs. epochs] | 1711.05101#63 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
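SuppFigure 3 above concerns the normalized weight decay, which makes the decay factor roughly budget-independent by scaling it with the square root of the batch size over the total amount of training (batch size b, training-set size B, epochs T). The helper below is a hedged sketch of that scaling; the variable names and the example values are illustrative assumptions, not the released code:

```python
import math

def raw_weight_decay(lambda_norm, batch_size, train_size, epochs):
    """lambda = lambda_norm * sqrt(b / (B * T)): longer budgets get a smaller raw decay."""
    return lambda_norm * math.sqrt(batch_size / (train_size * epochs))

for epochs in (25, 100, 400):  # the budgets used in SuppFigure 3
    print(epochs, raw_weight_decay(lambda_norm=0.025, batch_size=128,
                                   train_size=50_000, epochs=epochs))
```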
1711.05101 | 64 | [SuppFigure 4 panels: Adam and AdamW with LR=0.001 and different weight decays; training loss (cross-entropy) vs. epochs, and Top-5 test error (%) vs. weight decay for Adam / normalized weight decay for AdamW] | 1711.05101#64 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 65 | SuppFigure 4: Learning curves (top row) and generalization results (Top-5 errors in bottom row) obtained by a 26 2x96d ResNet trained with Adam and AdamW on ImageNet32x32.
[SuppFigure 5 panels: test error (%) and training loss (cross-entropy) vs. epochs (0 to 1800)]
SuppFigure 5: Test error curves (top row) and training loss curves (bottom row) for CIFAR-10.
[SuppFigure 6 panels: test error (%) and training loss (cross-entropy) vs. epochs (0 to 150) for Adam, AdamW, SGDW, AdamWR, and SGDWR]
SuppFigure 6: Test error curves (top row) and training loss curves (bottom row) for ImageNet32x32. | 1711.05101#65 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
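The restart variants AdamWR and SGDWR shown in SuppFigures 5 and 6 combine the decoupled-decay optimizers with cosine annealing and warm restarts (SGDR-style schedules). A hedged sketch of such a schedule multiplier, with illustrative parameters:

```python
import math

def cosine_warm_restarts(step, t_0=10, t_mult=2.0, eta_min=0.0, eta_max=1.0):
    """Cosine-annealed multiplier with warm restarts: the cycle length starts
    at t_0 steps and grows by t_mult after every restart."""
    t_i, t_cur = float(t_0), float(step)
    while t_cur >= t_i:                       # locate the current cycle
        t_cur -= t_i
        t_i *= t_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))

# The multiplier scales both the Adam/SGD step and the decoupled decay term
# (cf. the `schedule` argument in the AdamW sketch earlier in this section).
print([round(cosine_warm_restarts(s), 2) for s in range(0, 31, 5)])
```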
1711.04289 | 0 | arXiv:1711.04289v3 [cs.CL] 23 Jun 2018
# Neural Natural Language Inference Models Enhanced with External Knowledge
Qian Chen University of Science and Technology of China [email protected]
Xiaodan Zhu ECE, Queen's University [email protected]
Zhen-Hua Ling University of Science and Technology of China [email protected]
Diana Inkpen University of Ottawa [email protected]
# Si Wei iFLYTEK Research [email protected]
# Abstract | 1711.04289#0 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 1 | Diana Inkpen University of Ottawa [email protected]
# Si Wei iFLYTEK Research [email protected]
# Abstract
Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have shown to achieve the state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we enrich the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models to achieve the state-of-the-art performance on the SNLI and MultiNLI datasets. | 1711.04289#1 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 2 | Recently, large annotated datasets were made available, e.g., the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017) datasets, which made it feasible to train rather complicated neural-network-based models that fit a large set of parameters to better model NLI. Such models have shown to achieve the state-of-the-art performance (Bowman et al., 2015, 2016; Yu and Munkhdalai, 2017b; Parikh et al., 2016; Sha et al., 2016; Chen et al., 2017a,b; Tay et al., 2018).
While neural networks have been shown to be very effective in modeling NLI with large training data, they have often focused on end-to-end training by assuming that all inference knowledge is learnable from the provided training data. In this paper, we relax this assumption and explore whether external knowledge can further help NLI. Consider an example:
• p: A lady standing in a wheat field.
• h: A person standing in a corn field.
# Introduction | 1711.04289#2 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 3 | • p: A lady standing in a wheat field.
• h: A person standing in a corn field.
# Introduction
Reasoning and inference are central to both human and artificial intelligence. Natural language inference (NLI), also known as recognizing textual entailment (RTE), is an important NLP problem concerned with determining the inferential relationship (e.g., entailment, contradiction, or neutral) between a premise p and a hypothesis h. In general, modeling informal inference in language is a very challenging and basic problem towards achieving true natural language understanding.
In this simplified example, when computers are asked to predict the relation between these two sentences and if training data do not provide the knowledge of the relationship between "wheat" and "corn" (e.g., if one of the two words does not appear in the training data or they are not paired in any premise-hypothesis pairs), it will be hard for computers to correctly recognize that the premise contradicts the hypothesis.
In general, although in many tasks learning tabula rasa achieved state-of-the-art performance, we believe complicated NLP problems such as NLI | 1711.04289#3 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 4 | In general, although in many tasks learning tabula rasa achieved state-of-the-art performance, we believe complicated NLP problems such as NLI could benefit from leveraging knowledge accumulated by humans, particularly in a foreseeable future when machines are unable to learn it by themselves.
In this paper we enrich neural-network-based NLI models with external knowledge in the co-attention, local inference collection, and inference composition components. We show the proposed model improves the state-of-the-art NLI models to achieve better performance on the SNLI and MultiNLI datasets. The advantage of using external knowledge is more significant when the size of training data is restricted, suggesting that if more knowledge can be obtained, it may bring more benefit. In addition to attaining the state-of-the-art performance, we are also interested in understanding how external knowledge contributes to the major components of typical neural-network-based NLI models.
# 2 Related Work | 1711.04289#4 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 6 | More recently the availability of much larger annotated data, e.g., SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017), has made it possible to train more complex models. These models mainly fall into two types of approaches: sentence-encoding-based models and models that also use inter-sentence attention. Sentence-encoding-based models use a Siamese architecture (Bromley et al., 1993). The parameter-tied neural networks are applied to encode both the premise and the hypothesis. Then a neural network classifier is applied to decide the relationship between the two sentences. Different neural networks have been utilized for sentence encoding, such as LSTM (Bowman et al., 2015), GRU (Vendrov et al., 2015), CNN (Mou et al., 2016), BiLSTM and its variants (Liu et al., 2016c; Lin et al., 2017; Chen et al., 2017b; Nie and Bansal, 2017), self-attention networks (Shen et al., 2017, 2018), and more complicated neural networks (Bowman et al., 2016; Yu and Munkhdalai, 2017a,b; Choi et al., 2017). Sentence-encoding-based models | 1711.04289#6 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 7 | transform sentences into fixed-length vector representations, which may help a wide range of tasks (Conneau et al., 2017).
The second set of models use inter-sentence attention (Rocktäschel et al., 2015; Wang and Jiang, 2016; Cheng et al., 2016; Parikh et al., 2016; Chen et al., 2017a). Among them, Rocktäschel et al. (2015) were among the first to propose neural attention-based models for NLI. Chen et al. (2017a) proposed an enhanced sequential inference model (ESIM), which is one of the best models so far and is used as one of our baselines in this paper. | 1711.04289#7 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 8 | In this paper we enrich neural-network-based NLI models with external knowledge. Unlike early work on NLI (Jijkoun and de Rijke, 2005; MacCartney et al., 2008; MacCartney, 2009) that explores external knowledge in conventional NLI models on relatively small NLI datasets, we aim to merge the advantage of the powerful modeling ability of neural networks with extra external inference knowledge. We show that the proposed model improves the state-of-the-art neural NLI models to achieve better performance on the SNLI and MultiNLI datasets. The advantage of using external knowledge is more significant when the size of training data is restricted, suggesting that if more knowledge can be obtained, it may have more benefit. In addition to attaining the state-of-the-art performance, we are also interested in understanding how external knowledge affects major components of neural-network-based NLI models. | 1711.04289#8 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 9 | In general, external knowledge has shown to be effective in neural networks for other NLP tasks, including word embedding (Chen et al., 2015; Faruqui et al., 2015; Liu et al., 2015; Wieting et al., 2015; Mrksic et al., 2017), machine translation (Shi et al., 2016; Zhang et al., 2017b), language modeling (Ahn et al., 2016), and dialogue systems (Chen et al., 2016b).
# 3 Neural-Network-Based NLI Models with External Knowledge
In this section we propose neural-network-based NLI models that incorporate external knowledge, which, as we will show later in Section 5, achieve the state-of-the-art performance. In addition to attaining the leading performance, we are also interested in investigating the effects of external knowledge on major components of neural-network-based NLI modeling. | 1711.04289#9 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 10 | Figure 1 shows a high-level general view of the proposed framework. While specific NLI systems vary in their implementation, typical state-of-the-art NLI models contain the main components (or equivalents) of representing premise and hypothesis sentences, collecting local (e.g., lexical) inference information, and aggregating and composing local information to make the global decision at the sentence level. We incorporate and investigate external knowledge accordingly in these major NLI components: computing co-attention, collecting local inference information, and composing inference to make the final decision.
# 3.1 External Knowledge | 1711.04289#10 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 11 | # 3.1 External Knowledge
As discussed above, although there exist relatively large annotated data for NLI, can machines learn all inference knowledge needed to perform NLI from the data? If not, how can neural-network-based NLI models benefit from external knowledge and how to build NLI models to leverage it? We incorporate external, inference-related knowledge in the major components of neural networks for natural language inference. For example, intuitively, knowledge about synonymy, antonymy, hypernymy and hyponymy between given words may help model soft-alignment between premises and hypotheses; knowledge about hypernymy and hyponymy may help capture entailment; knowledge about antonymy and co-hyponyms (words sharing the same hypernym) may benefit the modeling of contradiction (see the relation-feature sketch below). | 1711.04289#11 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
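As a rough illustration of how lexical relation features r_ij might be assembled from a resource such as WordNet (the paper's actual feature definition, including graded hypernymy/hyponymy values, differs; this simplified binary sketch assumes NLTK's WordNet interface and an installed WordNet corpus):

```python
from nltk.corpus import wordnet as wn
import numpy as np

def relation_features(w1, w2):
    """Return a small relation vector r_ij for a word pair:
    [synonymy, antonymy, hypernymy, hyponymy, co-hyponyms]."""
    s1, s2 = set(wn.synsets(w1)), set(wn.synsets(w2))
    r = np.zeros(5, dtype=np.float32)
    if not s1 or not s2:
        return r
    r[0] = float(bool(s1 & s2))                          # share a synset: synonyms
    ants = {a.synset() for s in s1 for l in s.lemmas() for a in l.antonyms()}
    r[1] = float(bool(ants & s2))                        # antonymy
    hypers_of_2 = {h for s in s2 for h in s.closure(lambda x: x.hypernyms())}
    r[2] = float(bool(s1 & hypers_of_2))                 # w1 is a hypernym of w2
    hypers_of_1 = {h for s in s1 for h in s.closure(lambda x: x.hypernyms())}
    r[3] = float(bool(s2 & hypers_of_1))                 # w1 is a hyponym of w2
    shared_parent = {h for s in s1 for h in s.hypernyms()} & \
                    {h for s in s2 for h in s.hypernyms()}
    r[4] = float(bool(shared_parent) and not r[0])       # co-hyponyms
    return r

print(relation_features("wheat", "corn"))  # likely co-hyponyms under a cereal sense
```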
1711.04289 | 12 | In this section, we discuss the incorporation of basic, lexical-level semantic knowledge into neural NLI components. Specifically, we consider external lexical-level inference knowledge between word wi and wj, which is represented as a vector rij and is incorporated into three specific components shown in Figure 1. We will discuss the details of how rij is constructed later in the experiment setup section (Section 4) but instead focus on the proposed model in this section. Note that while we study lexical-level inference knowledge in the paper, if inference knowledge about larger pieces of text pairs (e.g., inference relations between phrases) is available, the proposed model can be easily extended to handle that. In this paper, we instead let the NLI models compose lexical-level knowledge to obtain inference relations between larger pieces of texts.
# 3.2 Encoding Premise and Hypothesis | 1711.04289#12 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 13 | # 3.2 Encoding Premise and Hypothesis
Same as much previous work (Chen et al., 2017a,b), we encode the premise and the hypothesis with bidirectional LSTMs (BiLSTMs). The premise is represented as $a = (a_1, \ldots, a_m)$ and the hypothesis as $b = (b_1, \ldots, b_n)$, where $m$ and $n$ are the lengths of the sentences. Then $a$ and $b$ are embedded into $d_e$-dimensional vectors $[E(a_1), \ldots, E(a_m)]$ and $[E(b_1), \ldots, E(b_n)]$ using the embedding matrix $E \in \mathbb{R}^{d_e \times |V|}$, where $|V|$ is the vocabulary size and $E$ can be initialized with pre-trained word embeddings. To represent words in their context, the premise and the hypothesis are fed into BiLSTM encoders (Hochreiter and Schmidhuber, 1997) to obtain context-dependent hidden states $a^s$ and $b^s$:

$a^s_i = \text{Encoder}(E(a), i)$ ,   (1)
$b^s_j = \text{Encoder}(E(b), j)$ ,   (2)

where $i$ and $j$ indicate the i-th word in the premise and the j-th word in the hypothesis, respectively (a code sketch of this encoding step is given below).
# 3.3 Knowledge-Enriched Co-Attention | 1711.04289#13 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
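A minimal sketch of this encoding step (PyTorch-style; layer sizes and variable names are illustrative assumptions, not the authors' configuration):

```python
import torch
import torch.nn as nn

class InputEncoder(nn.Module):
    """Embed tokens and run a BiLSTM to get context-dependent states a^s / b^s."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # E, may load pre-trained vectors
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> states: (batch, seq_len, 2 * hidden_dim)
        states, _ = self.bilstm(self.embed(token_ids))
        return states

# Usage: the same parameter-shared encoder is applied to premise and hypothesis.
encoder = InputEncoder(vocab_size=10000)
a_s = encoder(torch.randint(0, 10000, (2, 7)))   # premise states
b_s = encoder(torch.randint(0, 10000, (2, 9)))   # hypothesis states
```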
1711.04289 | 14 | (2)
where i and j indicate the i-th word in the premise and the j-th word in the hypothesis, respectively.
# 3.3 Knowledge-Enriched Co-Attention
As discussed above, soft-alignment of word pairs between the premise and the hypothesis may benefit from a knowledge-enriched co-attention mechanism. Given the relation features $r_{ij} \in \mathbb{R}^{d_r}$ between the premise's i-th word and the hypothesis's j-th word derived from the external knowledge, the co-attention is calculated as (a code sketch is given below):

$e_{ij} = (a^s_i)^{\mathrm{T}} b^s_j + F(r_{ij})$ .   (3)

The function $F$ can be any non-linear or linear function. In this paper, we use $F(r_{ij}) = \lambda \mathbb{1}(r_{ij})$, where $\lambda$ is a hyper-parameter tuned on the development set and $\mathbb{1}$ is the indicator function defined as follows:

$\mathbb{1}(r_{ij}) = \begin{cases} 1 & \text{if } r_{ij} \text{ is not a zero vector;} \\ 0 & \text{if } r_{ij} \text{ is a zero vector.} \end{cases}$   (4) | 1711.04289#14 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
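A minimal sketch of Equations (3)-(4): the content term is the dot product of the BiLSTM states, and the knowledge term adds lambda whenever the pair has a non-zero relation vector (PyTorch-style; shapes follow the notation above; not the released implementation):

```python
import torch

def coattention_scores(a_s, b_s, r, lam=1.0):
    """Eq. (3)-(4): e_ij = (a_s_i)^T b_s_j + lambda * 1[r_ij != 0].

    a_s: (m, d) premise states, b_s: (n, d) hypothesis states,
    r:   (m, n, d_r) relation features from external knowledge.
    """
    dot = a_s @ b_s.t()                                   # (m, n) content term
    knowledge = lam * (r.abs().sum(dim=-1) > 0).float()   # indicator of a non-zero r_ij
    return dot + knowledge

# Toy usage with random tensors (shapes only; not trained representations).
e = coattention_scores(torch.randn(7, 600), torch.randn(9, 600),
                       torch.zeros(7, 9, 5))
```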
1711.04289 | 15 | $\mathbb{1}(r_{ij}) = \begin{cases} 1 & \text{if } r_{ij} \text{ is not a zero vector;} \\ 0 & \text{if } r_{ij} \text{ is a zero vector.} \end{cases}$   (4)
Intuitively, word pairs with a semantic relationship, e.g., synonymy, antonymy, hypernymy, hyponymy and co-hyponyms, are probably aligned together. We will discuss how we construct the external knowledge later in Section 4. We have also tried a two-layer MLP as a universal function approximator in function F to learn the underlying combination function but did not observe further improvement over the best performance we obtained on the development datasets.
[Figure 1 diagram: Input Encoding → Knowledge-Enriched Co-attention → Local Inference Collection with External Knowledge → Knowledge-Enhanced Inference Composition → Multilayer Perceptron Classifier; example premise "The child is getting a pedicure" and hypothesis "The kid is getting a manicure", with external knowledge SameHyper: [pedicure, manicure] and Synonymy: [child, kid]]
Figure 1: A high-level view of neural-network-based NLI models enriched with external knowledge in co-attention, local inference collection, and inference composition. | 1711.04289#15 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 16 | Figure 1: A high-level view of neural-network-based NLI models enriched with external knowledge in co-attention, local inference collection, and inference composition.
Soft-alignment is determined by the co-attention matrix $e \in \mathbb{R}^{m \times n}$ computed in Equation (3), which is used to obtain the local relevance between the premise and the hypothesis. For the hidden state of the i-th word in the premise, i.e., $a^s_i$ (already encoding the word itself and its context), the relevant semantics in the hypothesis is identified into a context vector $a^c_i$ using $e_{ij}$, more specifically with Equation (5).

$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{n} \exp(e_{ik})} , \quad a^c_i = \sum_{j=1}^{n} \alpha_{ij} b^s_j ,$   (5)

$\beta_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{m} \exp(e_{kj})} , \quad b^c_j = \sum_{i=1}^{m} \beta_{ij} a^s_i ,$   (6)

where $\alpha \in \mathbb{R}^{m \times n}$ and $\beta \in \mathbb{R}^{m \times n}$ are the normalized attention weight matrices with respect to the 2-axis and 1-axis. The same calculation is performed for each word in the hypothesis, i.e., $b^s_j$, with Equation (6) to obtain the context vector $b^c_j$ (a sketch of this soft alignment is given below).
# 3.4 Local Inference Collection with External Knowledge
By way of comparing the inference-related semantic relation between $a_i^s$ (the individual word representation in the premise) and $a_i^c$ (the context representation from the hypothesis which is aligned to word $a_i^s$), we can model local inference (i.e., word-level inference) between aligned word pairs. Intuitively, for example, knowledge about hypernymy or hyponymy may help model entailment, and knowledge about antonymy and co-hyponyms may help model contradiction. Through comparing $a_i^s$ and $a_i^c$, in addition to their relation from external knowledge, we can obtain word-level inference information for each word. The same calculation is performed for $b_j^s$. Thus, we collect knowledge-enriched local inference information:
$$a_i^m = G\big(\big[a_i^s;\, a_i^c;\, a_i^s - a_i^c;\, a_i^s \circ a_i^c;\, \textstyle\sum_{j=1}^{n} \alpha_{ij} r_{ij}\big]\big), \qquad (7)$$
$$b_j^m = G\big(\big[b_j^s;\, b_j^c;\, b_j^s - b_j^c;\, b_j^s \circ b_j^c;\, \textstyle\sum_{i=1}^{m} \beta_{ij} r_{ji}\big]\big), \qquad (8)$$

where a heuristic matching trick with difference and element-wise product is used (Mou et al., 2016; Chen et al., 2017a). The last terms in Equations (7) and (8) are used to obtain word-level inference information from external knowledge. Take Equation (7) as an example: $r_{ij}$ is the relation feature between the $i$-th word in the premise and the $j$-th word in the hypothesis, but we care more about the semantic relation between aligned word pairs between the premise and the hypothesis. Thus, we use a soft-aligned version through the soft-alignment weight $\alpha_{ij}$. For the $i$-th word in the premise, the last term in Equation (7) is word-level inference information based on external knowledge between the $i$-th word and the aligned words. The same calculation for the hypothesis is performed in Equation (8). $G$ is a non-linear mapping function to reduce dimensionality. Specifically, we use a 1-layer feed-forward neural network with the ReLU activation function with a shortcut connection, i.e., we concatenate the hidden states after ReLU with the input $\sum_{j=1}^{n} \alpha_{ij} r_{ij}$ (or $\sum_{i=1}^{m} \beta_{ij} r_{ji}$) as the output $a_i^m$ (or $b_j^m$).
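A hedged PyTorch sketch of this local inference collection step is given below; the module name, dimensions, and the exact placement of the shortcut are assumptions made for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class KnowledgeEnrichedLocalInference(nn.Module):
    """Builds a_i^m as in Equation (7); the same module applies to Equation (8)."""
    def __init__(self, d, d_r, d_out):
        super().__init__()
        self.G = nn.Sequential(nn.Linear(4 * d + d_r, d_out), nn.ReLU())

    def forward(self, a_s, a_c, alpha, r):
        # a_s, a_c: (m, d); alpha: (m, n); r: (m, n, d_r)
        knowledge = torch.einsum('mn,mnk->mk', alpha, r)         # soft-aligned relation features
        feats = torch.cat([a_s, a_c, a_s - a_c, a_s * a_c, knowledge], dim=-1)
        # shortcut: concatenate the ReLU hidden states with the knowledge term
        return torch.cat([self.G(feats), knowledge], dim=-1)

m, n, d, d_r = 7, 9, 300, 5
module = KnowledgeEnrichedLocalInference(d, d_r, d_out=300)
a_m = module(torch.randn(m, d), torch.randn(m, d),
             torch.softmax(torch.randn(m, n), dim=-1), torch.rand(m, n, d_r))
```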
# 3.5 Knowledge-Enhanced Inference Composition

$$a_i^v = \mathrm{Composition}(a^m, i), \qquad (9)$$
$$b_j^v = \mathrm{Composition}(b^m, j). \qquad (10)$$
Here, we also use BiLSTMs as building blocks for the composition layer, but the responsibility of BiLSTMs in the inference composition layer is completely different from that in the input encoding layer. The BiLSTMs here read local inference vectors ($a^m$ and $b^m$) and learn to judge the types of local inference relationship and distinguish crucial local inference vectors for the overall sentence-level inference relationship. Intuitively, the final prediction is likely to depend on word pairs appearing in external knowledge that have some semantic relation. Our inference model converts the output hidden vectors of the BiLSTMs to a fixed-length vector with pooling operations and puts it into the final classifier to determine the overall inference class. Particularly, in addition to using mean pooling and max pooling similarly to ESIM (Chen et al., 2017a), we propose to use weighted pooling based on external knowledge to obtain a fixed-length vector as in Equations (11) and (12).

$$a^w = \sum_{i=1}^{m} \frac{\exp\big(H(\sum_{j=1}^{n} \alpha_{ij} r_{ij})\big)}{\sum_{i=1}^{m} \exp\big(H(\sum_{j=1}^{n} \alpha_{ij} r_{ij})\big)}\, a_i^v, \qquad (11)$$
$$b^w = \sum_{j=1}^{n} \frac{\exp\big(H(\sum_{i=1}^{m} \beta_{ij} r_{ji})\big)}{\sum_{j=1}^{n} \exp\big(H(\sum_{i=1}^{m} \beta_{ij} r_{ji})\big)}\, b_j^v. \qquad (12)$$
In our experiments, we regard the function $H$ as a 1-layer feed-forward neural network with the ReLU activation function. We concatenate all pooling vectors, i.e., mean, max, and weighted pooling, into the fixed-length vector and then put the vector into the final multilayer perceptron (MLP) classifier. The MLP has one hidden layer with tanh activation and a softmax output layer in our experiments. The entire model is trained end-to-end, through minimizing the cross-entropy loss.
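The knowledge-based weighted pooling of Equations (11) and (12), together with the concatenation of the three pooled vectors, can be sketched as follows; `weighted_pool`, `H`, and the tensor shapes are illustrative assumptions rather than the published implementation.

```python
import torch
import torch.nn as nn

H = nn.Sequential(nn.Linear(5, 1), nn.ReLU())        # scores the soft-aligned d_r = 5 relation features

def weighted_pool(a_v, alpha, r):
    # a_v: (m, d) composed states; alpha: (m, n); r: (m, n, d_r)
    scores = H(torch.einsum('mn,mnk->mk', alpha, r)).squeeze(-1)  # one score per premise position
    weights = torch.softmax(scores, dim=0)                        # Eq. (11): exp / sum of exp
    return (weights.unsqueeze(-1) * a_v).sum(dim=0)

def pooled_representation(a_v, alpha, r):
    # mean, max, and knowledge-weighted pooling, concatenated for the MLP classifier
    return torch.cat([a_v.mean(dim=0), a_v.max(dim=0).values, weighted_pool(a_v, alpha, r)])

a_v, alpha, r = torch.randn(7, 300), torch.softmax(torch.randn(7, 9), dim=-1), torch.rand(7, 9, 5)
features = pooled_representation(a_v, alpha, r)       # 3 * 300 = 900-dimensional vector
```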
# 4 Experiment Set-Up
# 4.1 Representation of External Knowledge
Lexical Semantic Relations. As described in Section 3.1, to incorporate external knowledge (as a knowledge vector $r_{ij}$) into the state-of-the-art neural network-based NLI models, we first explore semantic relations in WordNet (Miller, 1995), motivated by MacCartney (2009). Specifically, the relations of lexical pairs are derived as described in (1)-(4) below. Instead of using the Jiang-Conrath WordNet distance metric (Jiang and Conrath, 1997), which does not improve the performance of our models on the development sets, we add a new feature, i.e., co-hyponyms, which consistently benefits our models.
(1) Synonymy: It takes the value 1 if the words in the pair are synonyms in WordNet (i.e., belong to the same synset), and 0 otherwise. For example, [felicitous, good] = 1, [dog, wolf] = 0.
(2) Antonymy: It takes the value 1 if the words in the pair are antonyms in WordNet, and 0 otherwise. For example, [wet, dry] = 1.

(3) Hypernymy: It takes the value 1 − n/8 if one word is a (direct or indirect) hypernym of the other word in WordNet, where n is the number of edges between the two words in the hierarchies, and 0 otherwise. Note that we ignore pairs in the hierarchy which have more than 8 edges in between. For example, [dog, canid] = 0.875, [wolf, canid] = 0.875, [dog, carnivore] = 0.75, [canid, dog] = 0.
(4) Hyponymy: It is simply the inverse of the hypernymy feature. For example, [canid, dog] = 0.875, [dog, canid] = 0.
(5) Co-hyponyms: It takes the value 1 if the two words have the same hypernym but they do not belong to the same synset, and 0 otherwise. For example, [dog, wolf] = 1.

As discussed above, we expect features like synonymy, antonymy, hypernymy, hyponymy and co-hyponyms would help model co-attention alignment between the premise and the hypothesis. Knowledge of hypernymy and hyponymy may help capture entailment; knowledge of antonymy and co-hyponyms may help model contradiction. Their final contributions will be learned in end-to-end model training. We regard the vector $r \in \mathbb{R}^{d_r}$ as the relation feature derived from external knowledge, where $d_r$ is 5 here. In addition, Table 1 reports some key statistics of these features.
Feature        #Words     #Pairs
Synonymy       84,487     237,937
Antonymy        6,161       6,617
Hypernymy      57,475     753,086
Hyponymy       57,475     753,086
Co-hyponyms    53,281   3,674,700
Table 1: Statistics of lexical relation features.
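To make the construction of the five-dimensional relation vector concrete, here is a hypothetical helper built on NLTK's WordNet interface; it is a simplified reading of definitions (1)-(5) above (and assumes the `nltk` WordNet data is installed), not the script used to produce Table 1.

```python
from nltk.corpus import wordnet as wn

def hypernym_score(x_synsets, y_synsets, max_edges=8):
    """1 - n/8 when some synset of the second word is an ancestor of the first word's synset."""
    best = 0.0
    for a in x_synsets:
        ancestors = set(a.closure(lambda s: s.hypernyms()))
        for b in y_synsets:
            if b in ancestors:
                n = a.shortest_path_distance(b)
                if n is not None and n <= max_edges:
                    best = max(best, 1.0 - n / max_edges)
    return best

def relation_features(w1, w2):
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    synonymy = float(any(a == b for a in s1 for b in s2))
    antonym_synsets = {ant.synset() for s in s1 for lem in s.lemmas() for ant in lem.antonyms()}
    antonymy = float(any(b in antonym_synsets for b in s2))
    hypernymy = hypernym_score(s1, s2)     # the second word is a (transitive) hypernym of the first
    hyponymy = hypernym_score(s2, s1)      # the inverse direction
    co_hyponyms = float(synonymy == 0.0 and any(
        set(a.hypernyms()) & set(b.hypernyms()) for a in s1 for b in s2))
    return [synonymy, antonymy, hypernymy, hyponymy, co_hyponyms]

print(relation_features("dog", "wolf"))    # expected to activate the co-hyponym feature
```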
In addition to the above relations, we also use more relation features in WordNet, including instance, instance of, same instance, entailment, member meronym, member holonym, substance meronym, substance holonym, part meronym, part holonym, summing up to 15 features, but these additional features do not bring further improvement on the development dataset, as also discussed in Section 5.

Relation Embeddings. In recent years, graph embedding has been widely employed to learn representations for vertices and their relations in a graph. In our work here, we also capture the relation between any two words in WordNet through relation embeddings. Specifically, we employed TransE (Bordes et al., 2013), a widely used graph embedding method, to capture the relation embedding between any two words. We used two typical approaches to obtaining the relation embedding. The first directly uses 18 relation embeddings pretrained on the WN18 dataset (Bordes et al., 2013). Specifically, if a word pair has a certain type of relation, we take the corresponding relation embedding. Sometimes, if a word pair has multiple relations among the 18 types, we take an average of the relation embeddings. The second approach uses TransE's word embeddings (trained on WordNet) to obtain the relation embedding, through the objective function used in TransE, i.e., $l \approx t - h$, where $l$ indicates the relation embedding, $t$ indicates the tail entity embedding, and $h$ indicates the head entity embedding.

Note that in addition to relation embeddings trained on WordNet, other relational embedding resources exist, e.g., those trained on Freebase (WikiData) (Bollacker et al., 2007), but such knowledge resources are mainly about facts (e.g., the relationship between Bill Gates and Microsoft) and are less for commonsense knowledge used in
general natural language inference (e.g., the color yellow potentially contradicts red).
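The second approach above amounts to reading a relation vector off pretrained TransE entity embeddings; a minimal sketch is shown below, where `entity_vec` is a hypothetical lookup table standing in for embeddings trained on WordNet.

```python
import numpy as np

def transe_relation_embedding(head_word, tail_word, entity_vec):
    """TransE models h + l ≈ t, so the implied relation vector is l ≈ t - h."""
    return entity_vec[tail_word] - entity_vec[head_word]

# toy usage with random vectors standing in for pretrained TransE embeddings
rng = np.random.default_rng(0)
entity_vec = {word: rng.normal(size=50) for word in ["child", "kid"]}
r_child_kid = transe_relation_embedding("child", "kid", entity_vec)
```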
# 4.2 NLI Datasets
In our experiments, we use the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015) and the Multi-Genre Natural Language Inference (MultiNLI) dataset (Williams et al., 2017), which focus on three basic relations between a premise and a potential hypothesis: the premise entails the hypothesis (entailment), they contradict each other (contradiction), or they are not related (neutral). We use the same data split as in previous work (Bowman et al., 2015; Williams et al., 2017) and classification accuracy as the evaluation metric. In addition, we test our models (trained on the SNLI training set) on a new test set (Glockner et al., 2018), which assesses the lexical inference abilities of NLI systems and consists of 8,193 samples. WordNet 3.0 (Miller, 1995) is used to extract semantic relation features between words. The words are lemmatized using Stanford CoreNLP 3.7.0 (Manning et al., 2014). The premise and the hypothesis sentences fed into the input encoding layer are tokenized.

# 4.3 Training Details

For reproducibility, we release our code1. All our models were strictly selected on the development set of the SNLI data and the in-domain development set of MultiNLI and were then tested on the corresponding test sets. The main training details are as follows: the dimensions of the hidden states of the LSTMs and the word embeddings are 300. The word embeddings are initialized by 300D GloVe 840B (Pennington et al., 2014), and out-of-vocabulary words among them are initialized randomly. All word embeddings are updated during training. Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate of 0.0004. The mini-batch size is set to 32. Note that the above hyperparameter settings are the same as those used in the baseline ESIM (Chen et al., 2017a) model. ESIM is a strong NLI baseline framework with the source code made available at https://github.com/lukecq1231/nli (the ESIM core code has also been adapted to summarization (Chen et al., 2016a) and question-answering tasks (Zhang et al., 2017a)).

The trade-off λ for calculating
1https://github.com/lukecq1231/kim
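For concreteness, the optimization settings above can be written down as in the toy PyTorch snippet below; the two-layer network merely stands in for the full KIM model so that the snippet stays self-contained, and it is not the released implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
stand_in_model = nn.Sequential(nn.Linear(900, 300), nn.Tanh(), nn.Linear(300, 3))  # tanh MLP, 3 classes
optimizer = torch.optim.Adam(stand_in_model.parameters(), lr=4e-4)                 # Adam, lr = 0.0004
criterion = nn.CrossEntropyLoss()                                                  # cross-entropy loss

features = torch.randn(32, 900)          # one mini-batch of 32 pooled sentence-pair features
labels = torch.randint(0, 3, (32,))      # entailment / contradiction / neutral
optimizer.zero_grad()
loss = criterion(stand_in_model(features), labels)
loss.backward()
optimizer.step()
```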
1711.04289 | 30 | Table 2 shows the results of state-of-the-art models on the SNLI dataset. Among them, ESIM (Chen et al., 2017a) is one of the previous state-of-the-art systems with an 88.0% test-set accuracy. The pro- posed model, namely Knowledge-based Inference Model (KIM), which enriches ESIM with external knowledge, obtains an accuracy of 88.6%, the best single-model performance reported on the SNLI dataset. The difference between ESIM and KIM is statistically signiï¬cant under the one-tailed paired t-test at the 99% signiï¬cance level. Note that the KIM model reported here uses ï¬ve semantic rela- tions described in Section 4. In addition to that, we also use 15 semantic relation features, which does not bring additional gains in performance. These results highlight the effectiveness of the ï¬ve se- mantic relations described in Section 4. To further investigate external knowledge, we add TransE re- lation embedding, and again no further improve- ment is observed on both the development and test sets when TransE relation embedding is used (con- catenated) with the semantic relation vectors. We consider this is due to the fact | 1711.04289#30 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
1711.04289 | 32 | Table 3 shows the performance of models on the MultiNLI dataset. The baseline ESIM achieves 76.8% and 75.8% on in-domain and cross-domain test set, respectively. If we extend the ESIM with external knowledge, we achieve signiï¬cant gains to 77.2% and 76.4% respectively. Again, the gains are consistent on SNLI and MultiNLI, and we ex- pect they would be orthogonal to other factors when external knowledge is added into other state- of-the-art models.
# 5.2 Ablation Results | 1711.04289#32 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
1711.04289 | 33 | # 5.2 Ablation Results
Figure 2 displays the ablation analysis of different components when using the external knowledge. To compare the effects of external knowledge under different training data scales, we ran-

Model                                      Test
LSTM Att. (Rocktäschel et al., 2015)       83.5
DF-LSTMs (Liu et al., 2016a)               84.6
TC-LSTMs (Liu et al., 2016b)               85.1
Match-LSTM (Wang and Jiang, 2016)          86.1
LSTMN (Cheng et al., 2016)                 86.3
Decomposable Att. (Parikh et al., 2016)    86.8
NTI (Yu and Munkhdalai, 2017b)             87.3
Re-read LSTM (Sha et al., 2016)            87.5
BiMPM (Wang et al., 2017)                  87.5
DIIN (Gong et al., 2017)                   88.0
BCN + CoVe (McCann et al., 2017)           88.1
CAFE (Tay et al., 2018)                    88.5
ESIM (Chen et al., 2017a)                  88.0
KIM (This paper)                           88.6
Table 2: Accuracies of models on SNLI. | 1711.04289#33 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
1711.04289 | 34 | Table 2: Accuracies of models on SNLI.
Model                                In     Cross
CBOW (Williams et al., 2017)         64.8   64.5
BiLSTM (Williams et al., 2017)       66.9   66.9
DiSAN (Shen et al., 2017)            71.0   71.4
Gated BiLSTM (Chen et al., 2017b)    73.5   73.6
SS BiLSTM (Nie and Bansal, 2017)     74.6   73.6
DIIN* (Gong et al., 2017)            77.8   78.8
CAFE (Tay et al., 2018)              78.7   77.9
ESIM (Chen et al., 2017a)            76.8   75.8
KIM (This paper)                     77.2   76.4
Table 3: Accuracies of models on MultiNLI. * indicates models using extra SNLI training set.
1711.04289 | 35 | domly sample different ratios of the entire training set, i.e., 0.8%, 4%, 20% and 100%. âAâ indicates adding external knowledge in calculating the co- attention matrix as in Equation (3), âIâ indicates adding external knowledge in collecting local in- ference information as in Equation (7)(8), and âCâ indicates adding external knowledge in compos- ing inference as in Equation (11)(12). When we only have restricted training data, i.e., 0.8% train- ing set (about 4,000 samples), the baseline ESIM has a poor accuracy of 62.4%. When we only add external knowledge in calculating co-attention (âAâ), the accuracy increases to 66.6% (+ absolute 4.2%). When we only utilize external knowledge in collecting local inference information (âIâ), the accuracy has a signiï¬cant gain, to 70.3% (+ ab- solute 7.9%). When we only add external knowl- edge in inference composition (âCâ), the accuracy gets a smaller gain to 63.4% (+ absolute 1.0%). The comparison indicates that âIâ plays the most important role | 1711.04289#35 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
1711.04289 | 36 | the accuracy gets a smaller gain to 63.4% (+ absolute 1.0%). The comparison indicates that âIâ plays the most important role among the three components in us- ing external knowledge. Moreover, when we compose the three components (âA,I,Câ), we obtain the best result of 72.6% (+ absolute 10.2%). When we use more training data, i.e., 4%, 20%, 100% of the training set, only âIâ achieves a signiï¬cant gain, but âAâ or âCâ does not bring any signiï¬- cant improvement. The results indicate that ex- ternal semantic knowledge only helps co-attention and composition when limited training data is lim- ited, but always helps in collecting local inference information. Meanwhile, for less training data, λ is usually set to a larger value. For example, the optimal λ on the development set is 20 for 0.8% training set, 2 for the 4% training set, 1 for the 20% training set and 0.2 for the 100% training set. Figure 3 displays the results of using different ratios of external knowledge (randomly keep dif- ferent percentages of whole lexical semantic rela- tions) under different | 1711.04289#36 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
1711.04289 | 37 | Figure 3 displays the results of using different ratios of external knowledge (randomly keep dif- ferent percentages of whole lexical semantic rela- tions) under different sizes of training data. Note that here we only use external knowledge in col- lecting local inference information as it always works well for different scale of the training set. Better accuracies are achieved when using more external knowledge. Especially under the condi- tion of restricted training data (0.8%), the model obtains a large gain when using more than half of external knowledge. | 1711.04289#37 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
[Figure 2 plot: test accuracy (%) against the ratio of the training set used, on a log scale.]
Figure 2: Accuracies of models of incorporating external knowledge into different NLI components, under different sizes of training data (0.8%, 4%, 20%, and the entire training data).
# 5.3 Analysis on the (Glockner et al., 2018) Test Set
In addition, Table 4 shows the results on a newly published test set (Glockner et al., 2018). Compared with the performance on the SNLI test
[Figure 3 plot: accuracy (%) against the ratio of external knowledge used, with one curve per training-set size (0.8%, 4%, 20%, and 100%).]
Figure 3: Accuracies of models under different sizes of external knowledge. More external knowledge corresponds to higher accuracies.
Model                      SNLI   Glockner's (Δ)
(Parikh et al., 2016)*     84.7   51.9 (-32.8)
(Nie and Bansal, 2017)*    86.0   62.2 (-23.8)
ESIM*                      87.9   65.6 (-22.3)
KIM (This paper)           88.6   83.5 (-5.1)
Table 4: Accuracies of models on the SNLI and (Glockner et al., 2018) test set. * indicates the results taken from (Glockner et al., 2018).
set, the performance of the three baseline models dropped substantially on the (Glockner et al., 2018) test set, with the differences ranging from 22.3% to 32.8% in accuracy. Instead, the proposed KIM achieves 83.5% on this test set (with only a 5.1% drop in performance), which demonstrates its better ability of utilizing lexical-level inference and hence better generalizability.
Table 5 displays the accuracy of ESIM and KIM in each replacement-word category of the (Glockner et al., 2018) test set. KIM outperforms ESIM in 13 out of 14 categories, and only performs worse on synonyms.
# 5.4 Analysis by Inference Categories
We perform more analysis (Table 6) using the supplementary annotations provided by the MultiNLI dataset (Williams et al., 2017), which have 495 samples (about 1/20 of the entire development set) for both the in-domain and out-domain sets. We compare against the model outputs of the ESIM model across 13 categories of inference. Table 6 reports the results. We can see that KIM outperforms ESIM on overall accuracies on both the in-domain and
Category            Instance   ESIM   KIM
Antonyms               1,147   70.4   86.5
Cardinals                759   75.5   93.4
Nationalities            755   35.9   73.5
Drinks                   731   63.7   96.6
Antonyms WordNet         706   74.6   78.8
Colors                   699   96.1   98.3
Ordinals                 663   21.0   56.6
Countries                613   25.4   70.8
Rooms                    595   69.4   77.6
Materials                397   89.7   98.7
Vegetables               109   31.2   79.8
Instruments               65   90.8   96.9
Planets                   60    3.3    5.0
Synonyms                 894   99.7   92.1
Overall                8,193   65.6   83.5
Table 5: The number of instances and accuracy per category achieved by ESIM and KIM on the (Glockner et al., 2018) test set.
Category          In-domain         Cross-domain
                  ESIM     KIM      ESIM     KIM
Active/Passive    93.3     93.3     100.0    100.0
Antonym           76.5     76.5     70.0     75.0
Belief            72.7     75.8     75.9     79.3
Conditional       65.2     65.2     61.5     69.2
Coreference       80.0     76.7     75.9     75.9
Long sentence     82.8     78.8     69.7     73.4
Modal             80.6     79.9     77.0     80.2
Negation          76.7     79.8     73.1     71.2
Paraphrase        84.0     72.0     86.5     89.2
Quantity/Time     66.7     66.7     56.4     59.0
Quantifier        79.2     78.4     73.6     77.1
Tense             74.5     78.4     72.2     66.7
Word overlap      89.3     85.7     83.8     81.1
Overall           77.1     77.9     76.7     77.4
Table 6: Detailed Analysis on MultiNLI.
cross-domain subsets of the development set. KIM outperforms or equals ESIM in 10 out of 13 categories in the cross-domain setting, but only in 7 out of 13 categories in the in-domain setting. This indicates that external knowledge helps more in the cross-domain setting. Especially, for the antonym category in the cross-domain set, KIM outperforms ESIM significantly (+ absolute 5.0%) as expected, because the antonym features captured by external knowledge help on unseen cross-domain samples.
# 5.5 Case Study
Table 7 includes some examples from the SNLI test set, where KIM successfully predicts the inference relation and ESIM fails. In the first exam-

P/G   Sentences
e/c   p: An African person standing in a wheat field.
      h: A person standing in a corn field.
e/c   p: Little girl is flipping an omelet in the kitchen.
      h: A young girl cooks pancakes.
c/e   p: A middle eastern marketplace.
      h: A middle eastern store.
c/e   p: Two boys are swimming with boogie boards.
      h: Two boys are swimming with their floats.
Table 7: Examples. Words in bold are key words in making the final prediction. P indicates a predicted label and G indicates the gold-standard label. e and c denote entailment and contradiction, respectively.
ple, the premise is "An African person standing in a wheat field" and the hypothesis "A person standing in a corn field". As the KIM model knows that "wheat" and "corn" are both a kind of cereal, i.e., the co-hyponym relationship in our relation features, KIM therefore predicts that the premise contradicts the hypothesis. However, the baseline ESIM cannot learn the relationship between "wheat" and "corn" effectively due to the lack of enough samples in the training sets. With the help of external knowledge, i.e., "wheat" and "corn" having the same hypernym "cereal", KIM predicts contradiction correctly.
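This case can be checked directly against WordNet; the snippet below (using NLTK, and assuming the WordNet corpus is available) tests whether "wheat" and "corn" share a hypernym, which is the co-hyponym signal KIM relies on here.

```python
from nltk.corpus import wordnet as wn

wheat_hypernyms = {h for s in wn.synsets("wheat") for h in s.hypernyms()}
corn_hypernyms = {h for s in wn.synsets("corn") for h in s.hypernyms()}
shared = wheat_hypernyms & corn_hypernyms
print(sorted(h.name() for h in shared))   # non-empty if the two words are co-hyponyms in WordNet
```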
# 6 Conclusions
Our neural-network-based model for natural language inference with external knowledge, namely KIM, achieves the state-of-the-art accuracies. The model is equipped with external knowledge in its main components, specifically, in calculating co-attention, collecting local inference, and composing inference. We provide detailed analyses of our model and results. The proposed model of infusing neural networks with external knowledge may also help shed some light on tasks other than NLI.
# Acknowledgments
We thank Yibo Sun and Bing Qin for early helpful discussion.
# References
Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. CoRR, abs/1608.00318.
Kurt D. Bollacker, Robert P. Cook, and Patrick Tufts. 2007. Freebase: A shared database of structured general human knowledge. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, July 22-26, 2007, Vancouver, British Columbia, Canada, pages 1962–1963.
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787–2795.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 632–642.
Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. 1993. Signature verification using a siamese time delay neural network. In Advances in Neural Information Processing Systems 6, [7th NIPS Conference, Denver, Colorado, USA, 1993], pages 737–744.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016a. Distraction-based neural networks for modeling document. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2754–2760.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017a. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1657–1668.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017b. Recurrent neural network-based sentence encoder with gated attention for natural language inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space
Representations for NLP, RepEval@EMNLP 2017, Copenhagen, Denmark, September 8, 2017, pages 36–40.

Yun-Nung Chen, Dilek Z. Hakkani-Tür, Gökhan Tür, Asli Çelikyilmaz, Jianfeng Gao, and Li Deng. 2016b. Knowledge as a teacher: Knowledge-guided structural attention networks. CoRR, abs/1609.03286.
Zhigang Chen, Wei Lin, Qian Chen, Xiaoping Chen, Si Wei, Hui Jiang, and Xiaodan Zhu. 2015. Revisiting word embedding for contrasting meaning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 106–115.
Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 551–561.
Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2017. Unsupervised learning of task-specific tree structures with tree-lstms. CoRR, abs/1707.02786.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 670–680. | 1711.04289#48 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 49 | Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, pages 177–190.
Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1606–1615.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In The 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia.
Yichen Gong, Heng Luo, and Jian Zhang. 2017. Natural language inference over interaction space. CoRR, abs/1709.04348. | 1711.04289#49 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 50 | Yichen Gong, Heng Luo, and Jian Zhang. 2017. Natural language inference over interaction space. CoRR, abs/1709.04348.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Adrian Iftene and Alexandra Balahur-Dobrescu. 2007. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, chapter Hypothesis Transformation and Semantic Variability Rules Used in Recognizing Textual Entailment. Association for Computational Linguistics.
Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the 10th Research on Computational Linguistics International Conference, ROCLING 1997, Taipei, Taiwan, August 1997, pages 19–33.
Valentin Jijkoun and Maarten de Rijke. 2005. Recognizing textual entailment using lexical similarity. In Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. | 1711.04289#50 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 51 | Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Zhouhan Lin, Minwei Feng, Cícero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. CoRR, abs/1703.03130.
Pengfei Liu, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016a. Deep fusion LSTMs for text semantic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
Pengfei Liu, Xipeng Qiu, Yaqian Zhou, Jifan Chen, and Xuanjing Huang. 2016b. Modelling interaction of sentence pair with coupled-LSTMs. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1703–1712. | 1711.04289#51 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 52 | Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1501–1511.
Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016c. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR, abs/1605.09090.
Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford University.
Bill MacCartney, Michel Galley, and Christopher D. Manning. 2008. A phrase-based alignment model for natural language inference. In 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, Proceedings of the Conference, 25-27 October 2008, Honolulu, Hawaii, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 802–811. | 1711.04289#52 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 53 | Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, System Demonstrations, pages 55–60.
Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6297–6308.
George A. Miller. 1995. WordNet: A lexical database for English. Commun. ACM, 38(11):39–41.
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. | 1711.04289#53 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 54 | Nikola Mrksic, Ivan Vulic, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gasic, Anna Korhonen, and Steve J. Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. CoRR, abs/1706.00374.
Shortcut-stacked sentence encoders for multi-domain inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, RepEval@EMNLP 2017, Copenhagen, Denmark, September 8, 2017, pages 41–45.
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2249–2255.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, | 1711.04289#54 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 55 | 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR, abs/1509.06664.
Lei Sha, Baobao Chang, Zhifang Sui, and Sujian Li. 2016. Reading and thinking: Re-read LSTM unit for textual entailment recognition. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 2870–2879.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017. Disan: Directional self-attention network for rnn/cnn-free language understanding. CoRR, abs/1709.04696.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018. Reinforced self-attention network: a hybrid of hard and soft attention for sequence modeling. CoRR, abs/1801.10296. | 1711.04289#55 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 56 | Chen Shi, Shujie Liu, Shuo Ren, Shi Feng, Mu Li, Ming Zhou, Xu Sun, and Houfeng Wang. 2016. Knowledge-based semantic embedding for machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. A compare-propagate architecture with alignment factorization for natural language inference. CoRR, abs/1801.00102.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. CoRR, abs/1511.06361.
Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 1442–1451. | 1711.04289#56 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 57 | Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4144–4150.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL, 3:345–358.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. CoRR, abs/1704.05426.
Hong Yu and Tsendsuren Munkhdalai. 2017a. Neural semantic encoders. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 397–407. | 1711.04289#57 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.04289 | 58 | Hong Yu and Tsendsuren Munkhdalai. 2017b. Neural tree indexers for text understanding. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 11–21.
Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, Si Wei. Exploring question understanding and adaptation in neural-network-based question answering. CoRR, abs/arXiv:1703.04617v2.
Shiyue Zhang, Gulnigar Mahmut, Dong Wang, and Askar Hamdulla. 2017b. Memory-augmented Chinese-Uyghur neural machine translation. In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2017, Kuala Lumpur, Malaysia, December 12-15, 2017, pages 1092–1096. | 1711.04289#58 | Neural Natural Language Inference Models Enhanced with External Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have shown
to achieve the state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets. | http://arxiv.org/pdf/1711.04289 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei | cs.CL | Accepted by ACL 2018 | null | cs.CL | 20171112 | 20180623 | [
{
"id": "1703.04617"
}
] |
1711.02255 | 0 | arXiv:1711.02255v2 [cs.LG] 9 Jul 2018
# Convolutional Normalizing Flows
# Guoqing Zheng 1 Yiming Yang 1 Jaime Carbonell 1
# Abstract
Bayesian posterior inference is prevalent in various machine learning problems. Variational inference provides one way to approximate the posterior distribution, however its expressive power is limited and so is the accuracy of the resulting approximation. Recently, there has been a trend of using neural networks to approximate the variational posterior distribution due to the flexibility of neural network architecture. One way to construct flexible variational distributions is to warp a simple density into a complex one by normalizing flows, where the resulting density can be analytically evaluated. However, there is a trade-off between the flexibility of a normalizing flow and the computation cost for efficient transformation. In this paper, we propose a simple yet effective architecture of normalizing flows, ConvFlow, based on convolution over the dimensions of the random input vector. Experiments on synthetic and real world posterior inference problems demonstrate the effectiveness and efficiency of the proposed method.
# 1. Introduction | 1711.02255#0 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 1 | # 1. Introduction
Posterior inference is the key to Bayesian modeling, where we are interested to see how our belief over the variables of interest changes after observing a set of data points. Predictions can also benefit from Bayesian modeling, as every prediction will be equipped with confidence intervals representing how sure the prediction is. Compared to the maximum a posteriori estimator of the model parameters, which is a point estimator, the posterior distribution provides richer information about the model parameters, hence enabling more justified prediction.
Among the various inference algorithms for posterior estimation, variational inference (VI) and Monte Carlo Markov
1 School of Computer Science, Carnegie Mellon University, Pittsburgh PA, USA. Correspondence to: Guoqing Zheng <[email protected]>.
Presented at the ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models. Copyright 2018 by the author(s). | 1711.02255#1 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 2 | Presented at the ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models. Copyright 2018 by the author(s).
chain (MCMC) are the two most widely used ones. It is well known that MCMC suffers from slow mixing time, though asymptotically the samples from the chain will be distributed from the true posterior. VI, on the other hand, facilitates faster inference, since it is optimizing an explicit objective function and convergence can be measured and controlled, and it's been widely used in many Bayesian models, such as Latent Dirichlet Allocation (Blei et al., 2003), etc. However, one drawback of VI is that it makes strong assumptions about the shape of the posterior, such as that the posterior can be decomposed into multiple independent factors. Though faster convergence can be achieved by parameter learning, the approximating accuracy is largely limited. | 1711.02255#2 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 3 | The above drawbacks stimulate the interest for richer function families to approximate posteriors while maintaining acceptable learning speed. Specifically, neural networks are one such model family, with large modeling capacity and efficient learning. (Rezende & Mohamed, 2015) proposed normalizing flow, where the neural network is set up to learn an invertible transformation from one known distribution, which is easy to sample from, to the true posterior. Model learning is achieved by minimizing the KL divergence between the empirical distribution of the generated samples and the true posterior. After being properly trained, the model will generate samples which are close to the true posterior, so that Bayesian predictions are made possible. Other methods based on modeling random variable transformation, but based on different formulations, have also been explored, including NICE (Dinh et al., 2014), the Inverse Autoregressive Flow (Kingma et al., 2016), and Real NVP (Dinh et al., 2016). | 1711.02255#3 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 4 | One key component for normalizing flow to work is to compute the determinant of the Jacobian of the transformation, and in order to maintain fast Jacobian computation, either a very simple function is used as the transformation, such as the planar flow in (Rezende & Mohamed, 2015), or complex tweaking of the transformation layer is required. Alternatively, in this paper we propose a simple and yet effective architecture of normalizing flows, based on convolution on the random input vector. Due to the nature of convolution, a bijective mapping between the input and output vectors can be easily established; meanwhile, efficient computation of the determinant of the convolution Jacobian is achieved in linear time. We further propose to incorporate dilated convo-
lution (Yu & Koltun, 2015; Oord et al., 2016a) to model long range interactions among the input dimensions. The resulting convolutional normalizing flow, which we term Convolutional Flow (ConvFlow), is simple and yet effective in warping simple densities to match complex ones. | 1711.02255#4 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 5 | The remainder of this paper is organized as follows: We briefly review the principles of normalizing flows in Section 2, and then present our proposed normalizing flow architecture based on convolution in Section 3. Empirical evaluations and analysis on both synthetic and real world data sets are carried out in Section 4, and we conclude this paper in Section 5.
# 2. Preliminaries
where ψ(z) = h'(w^T z + b) w. The computation cost of the determinant is hence reduced from O(d^3) to O(d).
Applying f to z can be viewed as feeding the input variable z to a neural network with only one single hidden unit, followed by a linear output layer which has the same dimension as the input layer. Obviously, because of the bottleneck caused by the single hidden unit, the capacity of the family of transformed densities is hence limited.
# 3. A new transformation unit
In this section, we first propose a general extension to the above mentioned planar normalizing flow, and then propose a restricted version of that, which actually turns out to be convolution over the dimensions of the input random vector.
# 2.1. Transformation of random variables | 1711.02255#5 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 6 | # 2.1. Transformation of random variables
Given a random variable z ∈ R^d with density p(z), consider a smooth and invertible function f : R^d → R^d operated on z. Let z' = f(z) be the resulting random variable; the density of z' can be evaluated as
p(z') = p(z) |det(∂f^{-1}/∂z')| = p(z) |det(∂f/∂z)|^{-1}   (1)
# 3.1. Normalizing flow with d hidden units
Instead of having a single hidden unit as suggested in planar flow, consider d hidden units in the process. We denote the weights associated with the edges from the input layer to the output layer as W ∈ R^{d×d} and the vector to adjust the magnitude of each dimension of the hidden layer activation as u, and the transformation is defined as
thus
log p(z') = log p(z) − log |det(∂f/∂z)|   (2)
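As a quick illustration of Eqs. (1)–(2), the following sketch (not from the paper; it assumes PyTorch and uses a simple elementwise affine map as the invertible function) evaluates the transformed log-density by subtracting the log-determinant of the Jacobian from the base log-density.

```python
# Minimal sketch (illustrative only): change of variables, Eqs. (1)-(2),
# for an elementwise affine map f(z) = a * z + c, whose Jacobian is diag(a),
# so log|det(df/dz)| = sum_i log|a_i|.
import math
import torch

def log_standard_normal(z):
    # log density of a standard Gaussian base distribution, summed over dims
    return (-0.5 * z ** 2 - 0.5 * math.log(2 * math.pi)).sum(-1)

d = 4
z = torch.randn(3, d)                        # samples from the base density p(z)
a, c = torch.rand(d) + 0.5, torch.randn(d)   # parameters of the invertible map
z_new = a * z + c                            # z' = f(z)
log_det = torch.log(a.abs()).sum()           # log |det(df/dz)|

# Eq. (2): log p(z') = log p(z) - log|det(df/dz)|
log_p_znew = log_standard_normal(z) - log_det
print(log_p_znew)
```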
f(z) = u ⊙ h(Wz + b)   (6)
where ⊙ denotes point-wise multiplication. The Jacobian matrix of this transformation is
# 2.2. Normalizing flows | 1711.02255#6 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 7 | where ⊙ denotes point-wise multiplication. The Jacobian matrix of this transformation is
# 2.2. Normalizing flows
Normalizing flows consider successively transforming z_0 with a series of transformations {f_1, f_2, ..., f_K} to construct arbitrarily complex densities for z_K = f_K ∘ f_{K−1} ∘ ... ∘ f_1(z_0) as
∂f/∂z = diag(u ⊙ h'(Wz + b)) W   (7)
det(∂f/∂z) = det(diag(u ⊙ h'(Wz + b))) det(W)   (8)
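The sketch below (our own illustration in PyTorch, assuming h = tanh) computes the transformation of Eq. (6) and its log-determinant via the factorization in Eqs. (7)–(8); the det(W) term is exactly the O(d^3) cost that motivates the convolutional construction introduced later.

```python
# Sketch of the d-hidden-unit transformation f(z) = u ⊙ h(Wz + b) and its
# log|det| following Eqs. (7)-(8): det(∂f/∂z) = det(diag(u ⊙ h'(Wz+b))) det(W).
import torch

def full_matrix_flow(z, W, u, b):
    pre = z @ W.T + b                       # Wz + b for a batch of z
    h = torch.tanh(pre)
    h_prime = 1.0 - h ** 2                  # derivative of tanh
    out = u * h
    # log|det diag(u ⊙ h'(Wz+b))| = Σ_i log|u_i h'(·)_i|
    log_det_diag = torch.log((u * h_prime).abs() + 1e-9).sum(-1)
    # log|det W| is shared across the batch but costs O(d^3) to compute
    log_det_W = torch.slogdet(W)[1]
    return out, log_det_diag + log_det_W

d = 8
z = torch.randn(5, d)
W, u, b = torch.randn(d, d), torch.randn(d), torch.randn(d)
f_z, log_det = full_matrix_flow(z, W, u, b)
```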
log p(z_K) = log p(z_0) − Σ_{k=1}^{K} log |det(∂f_k/∂z_{k−1})|   (3)
Hence the complexity lies in computing the determinant of the Jacobian matrix. Without further assumption about f, the general complexity for that is O(d^3) where d is the dimension of z. In order to accelerate this, (Rezende & Mohamed, 2015) proposed the following family of transformations that they termed as planar flow: | 1711.02255#7 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 8 | As det(diag(u ⊙ h'(Wz + b))) is linear, the complexity of computing the above transformation lies in computing det(W). Essentially the planar flow restricts W to be a vector of length d instead of a matrix; however, we can relax that assumption while still maintaining linear complexity of the determinant computation, based on the very simple fact that the determinant of a triangular matrix is just the product of the elements on its diagonal.
# 3.2. Convolutional Flow
f(z) = z + u h(w^T z + b)   (4)
where w ∈ R^d, u ∈ R^d, b ∈ R are parameters and h(·) is a univariate non-linear function with derivative h'(·). For this family of transformations, the determinant of the Jacobian matrix can be computed as
det(∂f/∂z) = det(I + u ψ(z)^T) = 1 + u^T ψ(z)   (5)
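For concreteness, a minimal planar-flow layer following Eqs. (4)–(5) might look as follows (our own illustrative PyTorch sketch with h = tanh, not the authors' code).

```python
# Planar flow sketch: f(z) = z + u h(w^T z + b),
# det(∂f/∂z) = 1 + u^T ψ(z) with ψ(z) = h'(w^T z + b) w.
import torch

def planar_flow(z, w, u, b):
    pre = z @ w + b                              # w^T z + b, shape (batch,)
    h = torch.tanh(pre)
    psi = (1.0 - h ** 2).unsqueeze(-1) * w       # ψ(z) = h'(w^T z + b) w
    f_z = z + h.unsqueeze(-1) * u
    log_det = torch.log((1.0 + psi @ u).abs() + 1e-9)   # log|1 + u^T ψ(z)|
    return f_z, log_det

d = 2
z = torch.randn(10, d)
w, u, b = torch.randn(d), torch.randn(d), torch.randn(())
f_z, log_det = planar_flow(z, w, u, b)
```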
Since a normalizing flow with a fully connected layer may not be bijective, and generally requires O(d^3) computations for the determinant of the Jacobian even if it is, we propose to use 1-d convolution to transform random vectors. | 1711.02255#8 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 9 | Figure 1(a) illustrates how 1-d convolution is performed over an input vector and outputs another vector. We propose to perform a 1-d convolution on an input random vector z, followed by a non-linearity and necessary post operation
Figure 1: (a) Illustration of 1-D convolution, where the dimensions of the input/output variable are both 8 (the input vector is padded with 0), the width of the convolution filter is 3 and dilation is 1; (b) A block of ConvFlow layers stacked with different dilations.
after activation to generate an output vector. Specifically,
f(z) = z + u ⊙ h(conv(z, w))   (9)
where w ∈ R^k is the parameter of the 1-d convolution filter (k is the convolution kernel width), conv(z, w) is the 1d convolution operation as shown in Figure 1(a), h(·) is a monotonic non-linear activation function¹, ⊙ denotes point-wise multiplication, and u ∈ R^d is a vector adjusting the magnitude of each dimension of the activation from h(·). We term this normalizing flow as Convolutional Flow (ConvFlow).
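A ConvFlow layer could be sketched as follows (this is our own illustrative PyTorch implementation under the padding convention suggested by Figure 1(a), not the authors' released code): a right-padded 1-d convolution makes output dimension i depend only on z_i and later dimensions, and the log-determinant is obtained from the diagonal terms 1 + u_i w_1 h'(conv(z, w)_i) of Eq. (10) below.

```python
# Minimal ConvFlow layer sketch: f(z) = z + u ⊙ h(conv(z, w)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvFlow(nn.Module):
    def __init__(self, dim, kernel_size=3, dilation=1):
        super().__init__()
        self.k, self.dilation = kernel_size, dilation
        self.w = nn.Parameter(torch.randn(1, 1, kernel_size) * 0.1)
        self.u = nn.Parameter(torch.randn(dim) * 0.1)

    def forward(self, z):                        # z: (batch, dim)
        # zero-pad on the right so output i only sees z_i, z_{i+dilation}, ...
        x = F.pad(z.unsqueeze(1), (0, (self.k - 1) * self.dilation))
        pre = F.conv1d(x, self.w, dilation=self.dilation).squeeze(1)
        h = torch.tanh(pre)
        f_z = z + self.u * h
        # diagonal of the Jacobian: 1 + u_i * w_1 * h'(pre_i)
        diag = 1.0 + self.u * self.w[0, 0, 0] * (1.0 - h ** 2)
        log_det = torch.log(diag.abs() + 1e-9).sum(-1)
        return f_z, log_det

layer = ConvFlow(dim=8)
z = torch.randn(4, 8)
f_z, log_det = layer(z)
```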
ConvFlow enjoys the following properties | 1711.02255#9 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 10 | ConvFlow enjoys the following properties
⢠Bi-jectivity can be easily achieved with standard and fast 1d convolution operator if proper padding and a monotonic activation function with bounded gradients are adopted (Minor care is needed to guarantee strict invertibility, see Appendix A for details);
⢠Due to local connectivity, the Jacobian determinant of ConvFlow only takes O(d) computation independent from convolution kernel width k since
the Jacobian matrix of the 1d convolution conv(z, w) is
∂conv(z, w)/∂z =
[ w_1 w_2 w_3  0  ...  0
   0  w_1 w_2 w_3 ...  0
   ...
   0  ...  0  w_1 w_2
   0  ...  0   0  w_1 ]   (11)
which is a triangular matrix whose determinant can be easily computed; | 1711.02255#10 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 11 | =
which is a triangular matrix whose determinant can be easily computed;
⢠ConvFlow is much simpler than previously proposed variants of normalizing ï¬ows. The total number of parameters of one ConvFlow layer is only d + k where generally k < d, particularly efï¬cient for high dimen- sional cases. Notice that the number of parameters in the planar ï¬ow in (Rezende & Mohamed, 2015) is 2d and one layer of Inverse Autoregressive Flow (IAF) (Kingma et al., 2016) and Real NVP (Dinh et al., 2016) require even more parameters. In Section 3.3, we discuss the key differences of ConvFlow from IAF in detail.
A series of K ConvFlows can be stacked to generate complex output densities. Further, since convolutions are only visible to inputs from adjacent dimensions, we propose to incorporate dilated convolution (Yu & Koltun, 2015; Oord et al., 2016a) into the flow to accommodate interactions among dimensions that are long distances apart. Figure 1(b) presents a block of 3 ConvFlows stacked, with different dilations for each layer. A larger receptive field is achieved without increasing the number of parameters. We term this a ConvBlock. | 1711.02255#11 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 12 | From the block of ConvFlow layers presented in Figure 1(b), it is easy to verify that dimension i (1 ≤ i ≤ d) of the output vector only depends on succeeding dimensions, but not preceding ones. In other words, dimensions with larger indices tend to end up getting little warping compared to the ones with smaller indices. Fortunately, this can be easily resolved by a Revert Layer, which simply outputs a reversed version of its input vector. Specifically, a Revert Layer g operates as
∂f/∂z = I + diag(w_1 u ⊙ h'(conv(z, w)))   (10)
g(z) := g([z_1, z_2, ..., z_d]^T) = [z_d, z_{d−1}, ..., z_1]^T   (12)
where w_1 denotes the first element of w. For example, for the illustration in Figure 1(a), the
1 Examples of valid h(x) include all conventional activations, including sigmoid, tanh, softplus, rectifier (ReLU), leaky rectifier (Leaky ReLU) and exponential linear unit (ELU). | 1711.02255#12 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 13 | It's easy to verify that a Revert Layer is bijective and that the Jacobian of g is a d × d matrix with 1s on its anti-diagonal and 0 otherwise, thus log |det(∂g/∂z)| is 0. Therefore, we can append a Revert Layer after each ConvBlock to accommodate warping for dimensions with larger indices without
additional computation cost for the Jacobian as follows
z → ConvBlock → Revert → ConvBlock → Revert → ... → f(z)   (repetitions of ConvBlock + Revert for K times)   (13)
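A sketch of this composition is given below (it reuses the illustrative ConvFlow class from the sketch after Eq. (9); the dilations and K are arbitrary choices of ours). It stacks ConvBlocks and Revert layers while accumulating the log-determinants as in Eq. (3); the Revert layer just flips the vector and contributes 0 to the log-determinant.

```python
# ConvBlock + Revert stacking sketch for Eq. (13), accumulating log|det| per Eq. (3).
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, dim, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList([ConvFlow(dim, dilation=d) for d in dilations])

    def forward(self, z):
        total_log_det = torch.zeros(z.shape[0])
        for layer in self.layers:
            z, log_det = layer(z)
            total_log_det = total_log_det + log_det
        return z, total_log_det

def convblock_revert_stack(z, blocks):
    total_log_det = torch.zeros(z.shape[0])
    for block in blocks:
        z, log_det = block(z)
        total_log_det = total_log_det + log_det
        z = torch.flip(z, dims=[-1])   # Revert layer: log|det| = 0
    return z, total_log_det

blocks = [ConvBlock(dim=8) for _ in range(2)]   # K = 2 repetitions
z = torch.randn(4, 8)
f_z, log_det = convblock_revert_stack(z, blocks)
```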
these singularity transforms in the autoregressive NN are somewhat mitigated by their final coupling with the input z, IAF still performs slightly worse in empirical evaluations than ConvFlow as no singular transform is involved in ConvFlow.
# 3.3. Connection to Inverse Autoregressive Flow
Inspired by the idea of constructing complex tractable densities from simpler ones with bijective transformations, different variants of the original normalizing flow (NF) (Rezende & Mohamed, 2015) have been proposed. Perhaps the one most related to ConvFlow is Inverse Autoregressive Flow (Kingma et al., 2016), which employs autoregressive transformations over the input dimensions to construct output densities. Specifically, one layer of IAF works as follows | 1711.02255#13 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 14 | • Lastly, despite the similar nature of modeling variable dimensions in an autoregressive manner, ConvFlow is much more efficient since the computation of the flow weights w and the input z is carried out by fast native 1-d convolutions, whereas IAF in its simplest form needs to maintain a masked feed forward network (if not maintaining an RNN). A similar idea of using convolution operators for efficient modeling of data dimensions is also adopted by PixelCNN (Oord et al., 2016b).
# 4. Experiments
where
f(z) = μ(z) + σ(z) ⊙ z   (14)
[μ(z), σ(z)] ← AutoregressiveNN(z)   (15)
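To make Eqs. (14)–(15) concrete, here is a toy IAF-style layer (our own simplified sketch; a single masked linear map stands in for the autoregressive network, whereas real IAF uses a MADE-style masked autoencoder). Because μ_i and σ_i depend only on z_1, ..., z_{i−1}, the Jacobian of f is triangular and log|det(∂f/∂z)| = Σ_i log σ_i(z).

```python
# Toy IAF layer sketch: f(z) = μ(z) + σ(z) ⊙ z with autoregressive μ, σ.
import torch
import torch.nn as nn

class ToyIAFLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_mu = nn.Parameter(torch.randn(dim, dim) * 0.1)
        self.W_s = nn.Parameter(torch.randn(dim, dim) * 0.1)
        # strictly lower-triangular mask: output i only sees dims < i
        self.register_buffer("mask", torch.tril(torch.ones(dim, dim), diagonal=-1))

    def forward(self, z):
        mu = z @ (self.W_mu * self.mask).T
        sigma = torch.sigmoid(z @ (self.W_s * self.mask).T) + 1e-3
        f_z = mu + sigma * z
        log_det = torch.log(sigma).sum(-1)   # triangular Jacobian: Σ_i log σ_i
        return f_z, log_det

layer = ToyIAFLayer(dim=8)
z = torch.randn(4, 8)
f_z, log_det = layer(z)
```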
We test the performance of the proposed ConvFlow in two settings: one on synthetic data to infer an unnormalized target density, and the other on density estimation for handwritten digits and characters.
are outputs from an autoregressive neural network over the dimensions of z. There are two drawbacks of IAF compared to the proposed ConvFlow: | 1711.02255#14 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 15 | are outputs from an autoregressive neural network over the dimensions of z. There are two drawbacks of IAF compared to the proposed ConvFlow:
• The autoregressive neural network over the input dimensions in IAF is represented by a Masked Autoencoder (Germain et al., 2015), which generally requires O(d²) parameters per layer, where d is the input dimension, while each layer of ConvFlow is much more parameter efficient, needing only k + d parameters (k is the kernel size of the 1d convolution and k < d). | 1711.02255#15 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 16 | • More importantly, due to the coupling of σ(z) and z in the IAF transformation, in order to make the computation of the overall Jacobian determinant det ∂f/∂z linear in d, the Jacobian of the autoregressive NN transformation is assumed to be strictly triangular (equivalently, the Jacobian determinants of µ and σ w.r.t. z are both always 0; this is achieved by letting the ith dimension of µ and σ depend only on dimensions 1, 2, ..., i − 1 of z). In other words, the mappings from z onto µ(z) and σ(z) via the autoregressive NN are always singular, no matter how their parameters are updated, and because of this, µ and σ will only be able to cover a subspace of the input space z belongs to, which is obviously less desirable for a normalizing flow.2 Though
2Since the singular transformations will only lead to subspace coverage of the resulting variables µ and σ, one could try to alleviate the subspace issue by modifying IAF to set both µ and σ as free parameters to be learned; the resulting normalizing flow is exactly a version of the planar flow proposed in (Rezende & Mohamed, 2015). | 1711.02255#16 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 17 | # 4.1. Synthetic data
We conduct experiments on using the proposed ConvFlow to approximate an unnormalized target density of z with dimension 2 such that p(z) ∝ exp(−U(z)). We adopt the same set of energy functions U(z) as in (Rezende & Mohamed, 2015) for a fair comparison, which are reproduced below
U1(z) = ½((‖z‖ − 2)/0.4)² − log( exp(−½[(z1 − 2)/0.6]²) + exp(−½[(z1 + 2)/0.6]²) ),    U2(z) = ½[(z2 − w1(z))/0.4]²
where w1(z) = sin(2πz1/4). The target densities of z are plotted in the leftmost column of Figure 2, and we test whether the proposed ConvFlow can transform a two-dimensional standard Gaussian to the target density by minimizing the KL divergence
KL(qK(zK) ‖ p(z)) = E_zK[log qK(zK)] − E_zK[log p(zK)] = E_z0[log q0(z0)] − E_z0[ Σ_{k=1..K} log |det ∂fk/∂z_{k−1}| ] − E_z0[log p(zK)] + const    (16) | 1711.02255#17 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
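For concreteness, a small PyTorch sketch of the two energy functions above and a Monte-Carlo estimate of objective (16), up to an additive constant. It assumes a `flow` callable that returns the warped sample together with the summed log-determinant, as in the earlier composition sketch, and uses log p(z) = −U(z) + const; the function names are illustrative.

```python
import math
import torch

def U1(z):
    # Ring-shaped energy with two modes along z1 (Rezende & Mohamed, 2015).
    z1 = z[:, 0]
    ring = 0.5 * ((z.norm(dim=1) - 2.0) / 0.4) ** 2
    modes = torch.exp(-0.5 * ((z1 - 2.0) / 0.6) ** 2) + torch.exp(-0.5 * ((z1 + 2.0) / 0.6) ** 2)
    return ring - torch.log(modes + 1e-9)

def U2(z):
    # Sine-shaped ridge energy.
    w1 = torch.sin(2.0 * math.pi * z[:, 0] / 4.0)
    return 0.5 * ((z[:, 1] - w1) / 0.4) ** 2

def kl_estimate(flow, energy, n_samples=256):
    """Monte-Carlo estimate of Eq. (16) up to a constant:
    E[log q0(z0)] - E[sum of log-dets] + E[U(zK)], with z0 ~ N(0, I)."""
    q0 = torch.distributions.Normal(torch.zeros(2), torch.ones(2))
    z0 = q0.sample((n_samples,))
    log_q0 = q0.log_prob(z0).sum(dim=1)
    zK, logdet = flow(z0)
    return (log_q0 - logdet + energy(zK)).mean()
```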
1711.02255 | 18 | where all expectations are evaluated with samples taken from q0(z0). We use a 2-d standard Gaussian as q0(z0) and we test different numbers of ConvBlocks stacked together in this task. Each ConvBlock in this case consists of a ConvFlow layer with kernel size 2 and dilation 1, followed by another ConvFlow layer with kernel size 2 and dilation 2. A Revert Layer is appended after each ConvBlock, and the tanh activation function is adopted by ConvFlow. The Autoregressive NN in
IAF is implemented as a two layer masked fully connected neural network (Germain et al., 2015). | 1711.02255#18 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 19 | where p(z) and p(z1) are the priors defined over z and z1 for G1 and G2, respectively. All other conditional densities are specified with their parameters θ defined by neural networks, therefore ending up with two stochastic neural networks. This network could have any number of layers; however, in this paper we focus on the ones with only one and two stochastic layers, i.e., G1 and G2, to conduct a fair comparison with previous methods on similar network architectures, such as VAE, IWAE and Normalizing Flows.
We use the same network architectures for both G1 and G2 as in (Burda et al., 2015), specifically shown as follows
Figure 2: (a) True density; (b) Density learned by IAF (16 layers); (c) Density learned by ConvFlow. (8 blocks with each block consisting of 2 layers)
G1 : A single Gaussian stochastic layer z with 50 units. In between the latent variable z and observation x there are two deterministic layers, each with 200 units; | 1711.02255#19 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 20 | G1 : A single Gaussian stochastic layer z with 50 units. In between the latent variable z and observation x there are two deterministic layers, each with 200 units;
Experimental results are shown in Figure 2 for IAF (middle column) and ConvFlow (right column) approximating the target density (left column). Even with 16 layers, IAF puts most of the density on one mode, confirming our analysis of the singular transform problem in IAF: as the data dimension is only two, the subspace modeled by µ(z) and σ(z) in Eq. (14) lies in a 1-d space, i.e., a straight line, which is shown in the middle column. The effect of the singular transform on IAF will be less severe for higher dimensions. In contrast, with 8 ConvBlocks (each block consisting of two 1d convolution layers), ConvFlow already approximates the target density quite well, despite a minor underestimation of the density around the boundaries.
G2 : Two Gaussian stochastic layers z1 and z2 with 50 and 100 units, respectively. Two deterministic layers with 200 units connect the observation x and latent variable z2, and two deterministic layers with 100 units are in between z2 and z1. | 1711.02255#20 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 21 | where a Gaussian stochastic layer consists of two fully connected linear layers, with one outputting the mean and the other outputting the logarithm of the diagonal covariance. All other deterministic layers are fully connected with tanh nonlinearity. Bernoulli observation models are assumed for both MNIST and OMNIGLOT. For MNIST, we employ the static binarization strategy as in (Larochelle & Murray, 2011), while dynamic binarization is employed for OMNIGLOT.
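A minimal sketch of such a Gaussian stochastic layer with reparameterized sampling; the class name and interface are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GaussianStochasticLayer(nn.Module):
    """Two linear heads produce the mean and the log of the diagonal covariance;
    sampling uses the reparameterization trick."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mean = nn.Linear(in_dim, z_dim)
        self.log_var = nn.Linear(in_dim, z_dim)

    def forward(self, h):
        mu, log_var = self.mean(h), self.log_var(h)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps  # z ~ N(mu, diag(exp(log_var)))
        return z, mu, log_var
```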
# 4.2. Handwritten digits and characters
4.2.1. SETUPS
To test the proposed ConvFlow for variational inference we use the standard benchmark datasets MNIST3 and OMNIGLOT4 (Lake et al., 2013). Our method is general and can be applied to any formulation of the generative model pθ(x, z); for simplicity and fair comparison, in this paper we focus on densities defined by stochastic neural networks, i.e., a broad family of flexible probabilistic generative models with their parameters defined by neural networks. Specifically, we consider the following two families of generative models | 1711.02255#21 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 23 | (18)
(17)
The inference networks q(z|x) for G1 and G2 have similar architectures to the generative models, with details in (Burda et al., 2015). ConvFlow is hence used to warp the output of the inference network q(z|x), assumed to be Gaussian conditioned on the input x, to match complex true posteriors. Our baseline models include VAE (Kingma & Welling, 2013), IWAE (Burda et al., 2015) and Normalizing Flows (Rezende & Mohamed, 2015). Since our proposed method involves adding more layers to the inference network, we also include another enhanced version of VAE with more deterministic layers added to its inference network, which we term VAE+.5 With the same VAE architectures, we also test the abilities of constructing complex variational posteriors with IAF and ConvFlow, respectively. All models are implemented in PyTorch. Parameters of both the variational distribution and the generative distribution of all models are optimized with Adam (Kingma & Ba, 2014) for 2000 epochs, with a fixed learning rate of 0.0005 and exponential decay rates for the 1st and 2nd moments at 0.9 and 0.999, respectively. Batch normalization (Ioffe & Szegedy, 2015) | 1711.02255#23 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 24 | 3Data downloaded from http://www.cs.toronto.
edu/~larocheh/public/datasets/binarized_mnist/
4Data downloaded from https://github.com/yburda/iwae/raw/master/datasets/OMNIGLOT/chardata.mat
5VAE+ adds more layers before the stochastic layer of the inference network, while the proposed method adds convolutional flow layers after the stochastic layer.
and linear annealing of the KL divergence term between the variational posterior and the prior is employed for the first 200 epochs, as it has been shown to help training multi-layer stochastic neural networks (Sønderby et al., 2016). Code to reproduce all reported results will be made publicly available.
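A training-loop sketch matching the reported setup (Adam with learning rate 5e-4, moment decay rates 0.9 and 0.999, 2000 epochs, and linear KL annealing over the first 200 epochs); `model`, `elbo_terms`, and `train_loader` are hypothetical placeholders, not part of the released code.

```python
import torch

def train(model, elbo_terms, train_loader, epochs=2000, anneal_epochs=200):
    """`elbo_terms(model, x)` is assumed to return (log p(x|z) term, KL term) per example."""
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999))
    for epoch in range(epochs):
        beta = min(1.0, (epoch + 1) / anneal_epochs)  # KL weight ramps linearly to 1
        for x in train_loader:
            log_px, kl = elbo_terms(model, x)
            loss = -(log_px - beta * kl).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```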
For inference models with a latent variable z of 50 dimensions, a ConvBlock consists of the following ConvFlow layers
[ConvFlow(kernel size = 5, dilation = 1), ConvFlow(kernel size = 5, dilation = 2), ConvFlow(kernel size = 5, dilation = 4), ConvFlow(kernel size = 5, dilation = 8), ConvFlow(kernel size = 5, dilation = 16), ConvFlow(kernel size = 5, dilation = 32)] | 1711.02255#24 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 26 | (19)
variational posterior further close to the true posterior. We also observe that VAE with Inverse Autoregressive Flows (VAE+IAF) improves over VAE and VAE+, due to its model- ing of complex densities, however the improvements are not as signiï¬cant as ConvFlow. The limited improvement might be explained by our analysis on the singular transformation and subspace issue in IAF. Lastly, combining convolutional normalizing ï¬ows with multiple importance weighted sam- ples, as shown in last row of Table 1, further improvement on the test set log-likelihood is achieved. Overall, the method combining ConvFlow and importance weighted samples achieves best NLL on both settings, outperforming IWAE signiï¬cantly by 7.1 nats on G1 and 5.7 nats on G2. No- tice that, ConvFlow combined with IWAE achieves an NLL of 79.11, comparable to the best published result of 79.10, achieved by PixelRNN (Oord et al., 2016b) with a much more sophisticated architecture. Also itâs about 0.8 nat bet- ter than the best IAF result of 79.88 reported in (Kingma et al., 2016), which demonstrates the representative power of ConvFlow compared to IAF6. | 1711.02255#26 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 27 | [ConvFlow(kernel size = 5, dilation = 1), ConvFlow(kernel size = 5, dilation = 2), ConvFlow(kernel size = 5, dilation = 4), ConvFlow(kernel size = 5, dilation = 8), ConvFlow(kernel size = 5, dilation = 16), ConvFlow(kernel size = 5, dilation = 32), ConvFlow(kernel size = 5, dilation = 64)]
(20)
Results on OMNIGLOT are presented in Table 2 where similar trends can be observed as on MNIST. One ob- servation different from MNIST is that, the gain from IWAE+ConvFlow over IWAE is not as large as it is on MNIST, which could be explained by the fact that OM- NIGLOT is a more difï¬cult set compared to MNIST, as there are 1600 different types of symbols in the dataset with roughly 20 samples per type. Again on OMNIGLOT we ob- serve IAF with VAE improves over VAE and VAE+, while doesnât perform as well as ConvFlow.
A Revert layer is appended after each ConvBlock and leaky ReLU with a negative slope of 0.01 is used as the activation function in ConvFlow. For IAF, the autoregressive neural network is implemented as a two layer masked fully con- nected neural network. | 1711.02255#27 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 28 | 4.2.2. GENERATIVE DENSITY ESTIMATION
For MNIST, models are trained and tuned on the 60,000 training and validation images, and the estimated log-likelihood on the test set, computed with 128 importance weighted samples, is reported. Table 1 presents the performance of all models when the generative model is assumed to be from either G1 or G2.
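The importance weighted log-likelihood estimate referred to here is the standard IWAE-style bound; a small sketch follows, with the per-example importance weights assumed to be computed elsewhere (the helper name is hypothetical).

```python
import math
import torch

def iwae_log_likelihood(log_w):
    """Importance-weighted estimate of log p(x) from K importance weights per example.
    log_w has shape (batch, K), with log_w[i, k] = log p(x_i, z_k) - log q(z_k | x_i)."""
    K = log_w.shape[1]
    return (torch.logsumexp(log_w, dim=1) - math.log(K)).mean()
```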
4.2.3. LATENT CODE VISUALIZATION
We visualize the inferred latent codes z of 5000 digits in the MNIST test set with respect to their true class labels in Fig- ure 3 from different models with tSNE (Maaten & Hinton, 2008). We observe that on generative model G2, all three models are able to infer latent codes of the digits consistent with their true classes. However, VAE and VAE+IAF both show disconnected cluster of latent codes from the same class (e.g., digits 0 and digits 1). Latent codes inferred by VAE for digit 3 and 5 tend to mix with each other. Overall, VAE equipped with ConvFlow produces clear separable la- tent codes for different classes while also maintaining high in-class density (notably for digit classes 0, 1, 2, 7, 8, 9 as | 1711.02255#28 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
Firstly, VAE+ achieves higher log-likelihood estimates than vanilla VAE due to the added layers in the inference network, implying that a better posterior approximation is learned (which is still assumed to be a Gaussian). Second, we observe that VAE with ConvFlow achieves much better density estimates than VAE+, which confirms our expectation that warping the variational distribution with convolutional flows enforces the resulting variational posterior to match the true non-Gaussian posterior. Also, adding more blocks of convolutional flows to the network makes the
6The results in (Kingma et al., 2016) are not directly comparable, as they are achieved with a much more sophisticated VAE architecture and a much higher dimension of the latent code (d = 1920 for the best NLL of 79.88). However, in this paper we only assume a relatively simple VAE architecture composed of fully connected layers and a relatively low dimension of the latent code, 50 or 100, depending on the generative model in the VAE. One could expect the performance of ConvFlow to improve even further if a similarly complex VAE architecture and a higher-dimensional latent code were used.
Table 1: MNIST test set NLL with generative models G1 and G2 (lower is better K is number of ConvBlocks) | 1711.02255#29 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
MNIST (static binarization)            − log p(x) on G1    − log p(x) on G2
VAE (Burda et al., 2015)               88.37               85.66
IWAE (IW = 50) (Burda et al., 2015)    86.90               84.26
VAE+NF (Rezende & Mohamed, 2015)       -                   ≤ 85.10
VAE+ (K = 1)                           88.20               85.41
VAE+ (K = 4)                           88.08               85.26
VAE+ (K = 8)                           87.98               85.16
VAE+IAF (K = 1)                        87.70               85.03
VAE+IAF (K = 2)                        87.30               84.74
VAE+IAF (K = 4)                        87.02               84.55
VAE+IAF (K = 8)                        86.62               84.26
VAE+ConvFlow (K = 1)                   86.91               85.45
VAE+ConvFlow (K = 2)                   86.40               85.37
VAE+ConvFlow (K = 4)                   84.78               81.64
VAE+ConvFlow (K = 8)                   83.89               81.21
IWAE+ConvFlow (K = 8, IW = 50)         79.78               79.11
C©OINHRUAWNHO | 1711.02255#30 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 31 | C©OINHRUAWNHO
Figure 3: Left: VAE, Middle: VAE+IAF, Right: VAE+ConvFlow. (best viewed in color)
shown in the rightmost figure).
4.2.4. GENERATION
After the models are trained, generative samples can be obtained by feeding z ∼ N(0, I) to the learned generative model G1 (or z2 ∼ N(0, I) to G2). Since higher log-likelihood estimates are obtained on G2, Figure 4 shows three sets of random generative samples from our proposed method trained with G2 on both MNIST and OMNIGLOT, compared to real samples from the training sets. We observe that the generated samples are visually consistent with the training data.
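A minimal sketch of this sampling procedure under the Bernoulli observation model; `decoder` stands in for the learned generative network and is a hypothetical placeholder.

```python
import torch

def generate_samples(decoder, n=64, z_dim=50):
    """Draw latents from the standard normal prior and decode to Bernoulli pixel samples."""
    z = torch.randn(n, z_dim)      # z ~ N(0, I)
    x_probs = decoder(z)           # Bernoulli means over pixels
    return torch.bernoulli(x_probs)
```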
# 5. Conclusions
This paper presents a simple and yet effective architecture to compose normalizing ï¬ows based on 1d convolution on the input vectors. ConvFlow takes advantage of the effective computation of convolution to warp a simple density to the | 1711.02255#31 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 32 | possibly complex target density, as well as maintaining as few parameters as possible. To further accommodate long-range interactions among the dimensions, dilated convolution is incorporated into the framework without increasing model computational complexity. A Revert Layer is used to maximize the opportunity that all dimensions get as much warping as possible. Experimental results on inferring complex target densities and on density estimation for generative modeling of real-world handwritten digit data demonstrate the strong performance of ConvFlow. In particular, density estimates on MNIST show significant improvements over state-of-the-art methods, validating the power of ConvFlow in warping multivariate densities. It remains an interesting question how ConvFlow can be directly combined with powerful observation models such as PixelRNN to further advance generative modeling with tractable density evaluation. We hope to address these challenges in future work.
Table 2: OMNIGLOT test set NLL with generative models G1 and G2 (lower is better, K is number of ConvBlocks) | 1711.02255#32 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 33 | Table 2: OMNIGLOT test set NLL with generative models G1 and G2 (lower is better, K is number of ConvBlocks)
OMNIGLOT                               − log p(x) on G1    − log p(x) on G2
VAE (Burda et al., 2015)               108.22              106.09
IWAE (IW = 50) (Burda et al., 2015)    106.08              104.14
VAE+ (K = 1)                           108.30              106.30
VAE+ (K = 4)                           108.31              106.48
VAE+ (K = 8)                           108.31              106.05
VAE+IAF (K = 1)                        107.31              105.78
VAE+IAF (K = 2)                        106.93              105.34
VAE+IAF (K = 4)                        106.69              105.56
VAE+IAF (K = 8)                        106.33              105.00
VAE+ConvFlow (K = 1)                   106.42              105.33
VAE+ConvFlow (K = 2)                   106.08              104.85
VAE+ConvFlow (K = 4)                   105.21              104.30
VAE+ConvFlow (K = 8)                   104.86              103.49
IWAE+ConvFlow (K = 8, IW = 50)         104.21              103.02 | 1711.02255#33 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 35 | A as + as Ci ere ere 2ywSyansa SMO woah ~ Ot Hâ HHO ~sWPOGg~w~o rSJ MP Ko oO Gg â Pw AO Qu Awe OCHO Bean Te FHA Vw Low o 6 i 3 g 4 Fa a & | Cnt me C7 INE OL % EN UNSW OW oe ya Wa Cand ty SX On ~~ WM Fe & & S- SFP OG-5 8H (es paso sg Lat ~Ey orf nwry nw SH BA how wD oe eS] NO&o2\ rte ner Wo Derma TGR ~-OV we 2 ~âH- Ao ee MW LW A fn eS we OND te HO BONA SWOâAKAD ⢠Te â Ch WU AD oy om wy et oN own wh ge) > OC OG oT WOOK BDAWHY pla tw OQw ns home OT A We Seve S#vanret-< ww Sw OO Cat GA cake ey So Rees > Nn oq a | 1711.02255#35 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
(a) MNIST Training data (b) Random samples 1 from IWAE-ConvFlow (K = 8) (c) Random samples 2 from IWAE-ConvFlow (K = 8) (d) Random samples 3 from IWAE-ConvFlow (K = 8)
m£tepRutachn +OwpI# CHEER
m£tepRutachn +OwpI# CHEER | 1711.02255#36 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 37 | (e) OMNIGLOT Training data (f) Random samples from IWAE- ConvFlow (K = 8) (g) Random samples from IWAE- ConvFlow (K = 8) (h) Random samples from IWAE- ConvFlow (K = 8)
Figure 4: Training data and generated samples
# References
Blei, David M., Ng, Andrew Y., and Jordan, Michael I. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
Burda, Yuri, Grosse, Roger, and Salakhutdinov, Ruslan. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
Maaten, Laurens van der and Hinton, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
Oord, Aaron van den, Dieleman, Sander, Zen, Heiga, Si- monyan, Karen, Vinyals, Oriol, Graves, Alex, Kalch- brenner, Nal, Senior, Andrew, and Kavukcuoglu, Ko- ray. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a. | 1711.02255#37 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |
1711.02255 | 38 | Dinh, Laurent, Krueger, David, and Bengio, Yoshua. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Oord, Aaron van den, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016b.
Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. Density estimation using real NVP. arXiv preprint arXiv:1605.08803, 2016.
Germain, Mathieu, Gregor, Karol, Murray, Iain, and Larochelle, Hugo. MADE: Masked autoencoder for distribution estimation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 881–889, 2015.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448–456, 2015. | 1711.02255#38 | Convolutional Normalizing Flows | Bayesian posterior inference is prevalent in various machine learning
problems. Variational inference provides one way to approximate the posterior
distribution, however its expressive power is limited and so is the accuracy of
resulting approximation. Recently, there has a trend of using neural networks
to approximate the variational posterior distribution due to the flexibility of
neural network architecture. One way to construct flexible variational
distribution is to warp a simple density into a complex by normalizing flows,
where the resulting density can be analytically evaluated. However, there is a
trade-off between the flexibility of normalizing flow and computation cost for
efficient transformation. In this paper, we propose a simple yet effective
architecture of normalizing flows, ConvFlow, based on convolution over the
dimensions of random input vector. Experiments on synthetic and real world
posterior inference problems demonstrate the effectiveness and efficiency of
the proposed method. | http://arxiv.org/pdf/1711.02255 | Guoqing Zheng, Yiming Yang, Jaime Carbonell | cs.LG | ICML 2018 Workshop on Theoretical Foundations and Applications of
Deep Generative Models | null | cs.LG | 20171107 | 20180709 | [
{
"id": "1511.07122"
},
{
"id": "1605.08803"
},
{
"id": "1509.00519"
},
{
"id": "1609.03499"
},
{
"id": "1601.06759"
}
] |