doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1609.07061 | 4 | # Approach
The most common approach is to compress a trained (full precision) network. HashedNets (Chen et al., 2015) reduce model sizes by using a hash function to randomly group connection weights and force them to share a single parameter value. Gong et al. (2014) compressed deep convnets using vector quantization, which resulted in only a 1% accuracy loss. However, both methods focused only on the fully connected layers. A recent work by Han and Dally (2015) successfully pruned several state-of-the-art large scale networks and showed that the number of parameters could be reduced by an order of magnitude. | 1609.07061#4 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 5 | Recent works have shown that more computationally efficient DNNs can be constructed by quantizing some of the parameters during the training phase. In most cases, DNNs are trained by minimizing some error function using Back-Propagation (BP) or related gradient descent methods. However, such an approach cannot be directly applied if the weights are restricted to binary values. Soudry et al. (2014) used a variational Bayesian approach with Mean-Field and Central Limit approximations to calculate the posterior distribution of the weights (the probability of each weight being +1 or -1). During the inference stage (test phase), their method samples one binary network from this distribution and uses it to predict the targets of the test set (more than one binary network can also be used). Courbariaux et al. (2015b) similarly used two sets of weights, real-valued and binary. They, however, updated the real-valued version of the weights using gradients computed by applying forward and backward propagation with the set of binary weights (which was obtained by quantizing the real-valued weights to +1 and -1).
1609.07061 | 6 | This study proposes a more advanced technique, referred to as Quantized Neural Network (QNN), for quantizing the neurons and weights during inference and training. In such networks, all MAC operations can be replaced with XNOR and population count (i.e., counting the number of ones in the binary number) operations. This is especially useful in
QNNs with extremely low precision, for example, when only 1 bit is used per weight and activation, leading to a Binarized Neural Network (BNN). The proposed method is particularly beneficial for implementing large convolutional networks whose neuron-to-weight ratio is very large.
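To make the MAC replacement concrete, the following NumPy sketch (ours, not the paper's optimized GPU kernel) computes a dot product between two ±1 vectors with XNOR and popcount on bit-packed operands; the helper names `pack_signs` and `xnor_popcount_dot` are illustrative, not from the paper.

```python
import numpy as np

_POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.int64)

def pack_signs(v):
    """Pack a +/-1 vector into a uint8 array, one bit per element (bit 1 encodes +1)."""
    return np.packbits((np.asarray(v) > 0).astype(np.uint8))

def xnor_popcount_dot(xp, wp, n):
    """Dot product of two length-n +/-1 vectors given their packed bit patterns.

    XNOR marks the positions where the two signs agree; with a agreements among
    n positions the dot product is a - (n - a) = 2a - n. The zero bits that
    np.packbits pads onto both operands always 'agree', so they are subtracted.
    """
    agree = np.bitwise_not(np.bitwise_xor(xp, wp))   # agreement bitmap, uint8
    agreements = int(_POPCOUNT[agree].sum()) - (xp.size * 8 - n)
    return 2 * agreements - n

# Sanity check against an ordinary dot product.
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=1000)
w = rng.choice([-1, 1], size=1000)
assert xnor_popcount_dot(pack_signs(x), pack_signs(w), x.size) == int(x @ w)
```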
This paper makes the following contributions:
⢠We introduce a method to train Quantized-Neural-Networks (QNNs), neural networks with low precision weights and activations, at run-time, and when computing the parameter gradients at train-time. In the extreme case QNNs use only 1-bit per weight and activation(i.e., Binarized NN; see Section 2). | 1609.07061#6 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
1609.07061 | 7 | ⢠We conduct two sets of experiments, each implemented on a diï¬erent framework, namely Torch7 and Theano, which show that it is possible to train BNNs on MNIST, CIFAR-10 and SVHN and achieve near state-of-the-art results (see Section 4). More- over, we report results on the challenging ImageNet dataset using binary weights/activations as well as quantized version of it (more than 1-bit).
⢠We present preliminary results on quantized gradients and show that it is possible to use only 6-bits with only small accuracy degradation.
⢠We present results for the Penn Treebank dataset using language models (vanilla RNNs and LSTMs) and show that with 4-bit weights and activations Recurrent QNNs achieve similar accuracies as their 32-bit ï¬oating point counterparts.
⢠We show that during the forward pass (both at run-time and train-time), QNNs drastically reduce memory consumption (size and number of accesses), and replace most arithmetic operations with bit-wise operations. A substantial increase in power eï¬ciency is expected as a result (see Section 5). Moreover, a binarized CNN can lead to binary convolution kernel repetitions; we argue that dedicated hardware could reduce the time complexity by 60% . | 1609.07061#7 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
1609.07061 | 8 | ⢠Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suï¬ering any loss in classiï¬cation accuracy (see Section 6).
⢠The code for training and applying our BNNs is available on-line (both the Theano 1 and the Torch framework 2).
# 2. Binarized Neural Networks
In this section, we detail our binarization function, show how we use it to compute the parameter gradients, and show how we backpropagate through it.
1. https://github.com/MatthieuCourbariaux/BinaryNet
2. https://github.com/itayhubara/BinaryNet
# 2.1 Deterministic vs Stochastic Binarization
When training a BNN, we constrain both the weights and the activations to either +1 or -1. Those two values are very advantageous from a hardware perspective, as we explain in Section 6. In order to transform the real-valued variables into those two values, we use two different binarization functions, as proposed by Courbariaux et al. (2015a). The first binarization function is deterministic:
$$x^b = \mathrm{Sign}(x) = \begin{cases} +1 & \text{if } x > 0, \\ -1 & \text{otherwise,} \end{cases} \qquad (1)$$
1609.07061 | 9 | $$x^b = \mathrm{Sign}(x) = \begin{cases} +1 & \text{if } x > 0, \\ -1 & \text{otherwise,} \end{cases} \qquad (1)$$
where $x^b$ is the binarized variable (weight or activation) and $x$ the real-valued variable. It is very straightforward to implement and works quite well in practice. The second binarization function is stochastic:
$$x^b = \begin{cases} +1 & \text{with probability } p = \sigma(x), \\ -1 & \text{with probability } 1 - p, \end{cases} \qquad (2)$$
where Ï is the âhard sigmoidâ function:
Ï(x) = clip( x + 1 2 , 0, 1) = max(0, min(1, x + 1 2 )). (3)
This stochastic binarization is more appealing theoretically (see Section 4) than the sign function, but somewhat harder to implement as it requires the hardware to generate random bits when quantizing (Torii et al., 2016). As a result, we mostly use the deterministic binarization function (i.e., the sign function), with the exception of activations at train-time in some of our experiments.
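As a concrete illustration of Eqs. (1)-(3), here is a minimal NumPy sketch (ours, not the authors' code) of the two binarization functions; following the extracted text's "x > 0" condition, an input of exactly zero is mapped to -1.

```python
import numpy as np

def hard_sigmoid(x):
    """sigma(x) = clip((x + 1) / 2, 0, 1), Eq. (3)."""
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def binarize_deterministic(x):
    """Eq. (1): sign binarization, mapping x > 0 to +1 and everything else to -1."""
    return np.where(x > 0, 1.0, -1.0)

def binarize_stochastic(x, rng=None):
    """Eq. (2): +1 with probability sigma(x), -1 otherwise."""
    rng = np.random.default_rng() if rng is None else rng
    p = hard_sigmoid(x)
    return np.where(rng.random(x.shape) < p, 1.0, -1.0)

x = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(binarize_deterministic(x))   # [-1. -1. -1.  1.  1.]
print(binarize_stochastic(x))      # random for the entries inside (-1, 1)
```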
# 2.2 Gradient Computation and Accumulation
1609.07061 | 10 | # 2.2 Gradient Computation and Accumulation
Although our BNN training method utilizes binary weights and activations to compute the parameter gradients, the real-valued gradients of the weights are accumulated in real-valued variables, as per Algorithm 1. Real-valued weights are likely required for Stochastic Gradient Descent (SGD) to work at all. SGD explores the space of parameters in small and noisy steps, and that noise is averaged out by the stochastic gradient contributions accumulated in each weight. Therefore, it is important to maintain sufficient resolution for these accumulators, which at first glance suggests that high precision is absolutely required. Moreover, adding noise to weights and activations when computing the parameter gradients provides a form of regularization that can help to generalize better, as previously shown with variational weight noise (Graves, 2011), Dropout (Srivastava et al., 2014) and DropConnect (Wan et al., 2013). Our method of training BNNs can be seen as a variant of Dropout, in which instead of randomly setting half of the activations to zero when computing the parameter gradients, we binarize both the activations and the weights.
# 2.3 Propagating Gradients Through Discretization
1609.07061 | 11 | # 2.3 Propagating Gradients Through Discretization
The derivative of the sign function is zero almost everywhere, making it apparently incompatible with back-propagation, since the exact gradients of the cost with respect to the
quantities before the discretization (pre-activations or weights) are zero. Note that this limitation remains even if stochastic quantization is used. Bengio (2013) studied the question of estimating or propagating gradients through stochastic discrete neurons. He found that the fastest training was obtained when using the "straight-through estimator," previously introduced in Hinton's lectures (Hinton, 2012). We follow a similar approach but use the version of the straight-through estimator that takes into account the saturation effect, and uses deterministic rather than stochastic sampling of the bit. Consider the sign function quantization
q = Sign(r),
and assume that an estimator $g_q$ of the gradient $\partial C/\partial q$ has been obtained (with the straight-through estimator when needed). Then, our straight-through estimator of $\partial C/\partial r$ is simply
$$g_r = g_q 1_{|r| \le 1}. \qquad (4)$$
1609.07061 | 12 | $$g_r = g_q 1_{|r| \le 1}. \qquad (4)$$
Note that this preserves the gradient information and cancels the gradient when $r$ is too large. Not cancelling the gradient when $r$ is too large significantly worsens performance. To better understand why the straight-through estimator works well, consider the stochastic binarization scheme in Eq. (2) and rewrite $\sigma(r) = (HT(r) + 1)/2$, where $HT(r)$ is the well-known "hard tanh",
$$HT(r) = \begin{cases} +1 & r > 1, \\ r & r \in [-1, 1], \\ -1 & r < -1. \end{cases} \qquad (5)$$
In this case the input to the next layer has the following form,
$$W^b h^b(r) = W^b HT(r) + n(r),$$
1609.07061 | 13 | In this case the input to the next layer has the following form,
$$W^b h^b(r) = W^b HT(r) + n(r),$$
where we use the fact that $HT(r)$ is the expectation over $h^b(r)$ (see Eqs. (2) and (5)), and define $n(r)$ as binarization noise with mean equal to zero. When the layer is wide, we expect the deterministic mean term $HT(r)$ to dominate, because the noise term $n(r)$ is a summation over many independent binarizations from all the neurons in the previous layer. Thus, we argue that the binarization noise $n(r)$ can be ignored when performing differentiation in the backward propagation stage. Therefore, we replace $\partial h^b(r)/\partial r$ (which cannot be computed) with
$$\frac{\partial HT(r)}{\partial r} = \begin{cases} 0 & r > 1, \\ 1 & r \in [-1, 1], \\ 0 & r < -1, \end{cases} \qquad (6)$$
which is exactly the straight-through estimator defined in Eq. (4). The use of this straight-through estimator is illustrated in Algorithm 1.
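The following NumPy sketch (ours) shows the straight-through estimator of Eq. (4) as a pair of forward/backward functions; `g_q` denotes the incoming gradient with respect to the quantized output.

```python
import numpy as np

def sign_forward(r):
    """Forward pass: q = Sign(r)."""
    return np.where(r > 0, 1.0, -1.0)

def sign_backward_ste(g_q, r):
    """Backward pass, Eq. (4): pass the gradient straight through,
    but cancel it where |r| > 1 (the saturation region of the hard tanh)."""
    return g_q * (np.abs(r) <= 1.0)

r = np.array([-2.0, -0.5, 0.3, 1.7])
g_q = np.array([0.1, 0.2, -0.3, 0.4])   # gradient w.r.t. the quantized output
print(sign_backward_ste(g_q, r))         # survives only where |r| <= 1: [0., 0.2, -0.3, 0.]
```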
A similar binarization process was applied for weights, in which we combine two ingredients:
1609.07061 | 14 | A similar binarization process was applied for weights, in which we combine two ingredients:
⢠Project each real-valued weight to [-1,1], i.e., clip the weights during training, as per Algorithm 1. The real-valued weights would otherwise grow very large without any impact on the binary weights.
⢠When using a weight wr, quantize it using wb = Sign(wr).
Projecting the weights to [-1, 1] is consistent with the gradient cancelling when $|w^r| > 1$, according to Eq. (4).
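A minimal sketch of these two ingredients, assuming plain SGD on the real-valued weights (the paper uses ADAM or the shift-based AdaMax); the function names are ours.

```python
import numpy as np

def binarize_weights(w_real):
    """Use w_b = Sign(w_r) whenever the weight participates in a computation."""
    return np.where(w_real > 0, 1.0, -1.0)

def sgd_step_with_clipping(w_real, grad_wb, lr=0.01):
    """Update the real-valued weights with the gradient computed through the
    binarized weights, then project them back to [-1, 1] so they cannot drift
    arbitrarily far from the values that actually influence Sign(w_r)."""
    w_real = w_real - lr * grad_wb
    return np.clip(w_real, -1.0, 1.0)

w = np.array([0.9, -0.2, 1.2, -1.5])
g = np.array([-20.0, 3.0, -1.0, 2.0])       # toy gradients w.r.t. the binary weights
w = sgd_step_with_clipping(w, g, lr=0.1)
print(w)                                     # clipped to [-1, 1]: [1., -0.5, 1., -1.]
print(binarize_weights(w))                   # [ 1. -1.  1. -1.]
```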
# 2.4 Shift-based Batch Normalization
1609.07061 | 15 | Batch Normalization (BN) (Ioffe and Szegedy, 2015) accelerates the training and reduces the overall impact of the weight scale (Courbariaux et al., 2015a). The normalization procedure may also help to regularize the model. However, at train-time, BN requires many multiplications (calculating the standard deviation and dividing by it, namely, dividing by the running variance, which is the weighted mean of the training set activation variance). Although the number of scaling calculations is the same as the number of neurons, in the case of ConvNets this number is quite large. For example, in the CIFAR-10 dataset (using our architecture), the first convolution layer, consisting of only 128 x 3 x 3 filter masks, converts an image of size 3 x 32 x 32 to size 128 x 28 x 28, which is almost two orders of magnitude larger than the number of weights (87.1 to be exact). To achieve the results that BN would obtain, we use a shift-based batch normalization (SBN) technique, presented in Algorithm 2. SBN approximates BN almost without multiplications. Define AP2(z) as the approximate power-of-2 of
1609.07061 | 17 | x · y ≈ x <<>> AP2(y). (7)
The only operation which is not a binary shift or an add is the inverse square root (see the normalization operation in Algorithm 2). From the early work of Lomont (2003) we know that the inverse square root operation could be applied with approximately the same complexity as multiplication. There are also faster methods, which involve lookup table tricks that typically obtain lower accuracy (this may not be an issue, since our procedure already adds a lot of noise). However, the number of values on which we apply the inverse square root operation is rather small, since it is done after calculating the variance, i.e., after averaging (for a more precise calculation, see the BN analysis in Lin et al. (2015b)). Furthermore, the size of the standard deviation vectors is relatively small. For example, these values make up only 0.3% of the network size (i.e., the number of learnable parameters) in the CIFAR-10 network we used in our experiments.
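A small NumPy sketch of AP2 and of the shift-style multiplication of Eq. (7) (ours); in floating-point software the "shift" is simply a multiplication by a signed power of two, and mapping an input of exactly zero to zero is our assumption.

```python
import numpy as np

def ap2(x):
    """Approximate power-of-2: sign(x) * 2**round(log2|x|) (0 maps to 0)."""
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = np.sign(x[nz]) * 2.0 ** np.round(np.log2(np.abs(x[nz])))
    return out

def shift_mul(x, y):
    """Eq. (7): x * y approximated by shifting x according to AP2(y); with floats
    the shift is just a multiplication by the signed power of two."""
    return x * ap2(y)

x = np.array([3.7, -1.2, 0.5])
y = np.array([0.3, 5.0, -2.2])
print(ap2(y))            # [ 0.25  4.   -2.  ]
print(shift_mul(x, y))   # approximates x * y within a factor of sqrt(2)
```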
In the experiments we observed no loss in accuracy when using the shift-based BN algorithm instead of the vanilla BN algorithm.
# 2.5 Shift Based AdaMax
1609.07061 | 18 | In the experiments we observed no loss in accuracy when using the shift-based BN algorithm instead of the vanilla BN algorithm.
# 2.5 Shift Based AdaMax
The ADAM learning method (Kingma and Ba, 2014b) also reduces the impact of the weight scale. Since ADAM requires many multiplications, we suggest using instead the shift-based AdaMax we outline in Algorithm 3. In the experiments we conducted we observed no loss in accuracy when using the shift-based AdaMax algorithm instead of the vanilla ADAM algorithm.
3. Hardware implementation of AP2 is as simple as extracting the index of the most significant bit from the number's binary representation.
1609.07061 | 19 | Algorithm 1 Training a BNN. C is the cost function for the minibatch, λ the learning rate decay factor, and L the number of layers. (◦) stands for element-wise multiplication. The function Binarize(·) specifies how to (stochastically or deterministically) binarize the activations and weights, and Clip(·) how to clip the weights. BatchNorm() specifies how to batch-normalize the activations, using either batch normalization (Ioffe and Szegedy, 2015) or its shift-based variant we describe in Algorithm 2. BackBatchNorm() specifies how to backpropagate through the normalization. Update() specifies how to update the parameters when their gradients are known, using either ADAM (Kingma and Ba, 2014b) or the shift-based AdaMax we describe in Algorithm 3.
Require: a minibatch of inputs and targets (a_0, a*), previous weights W, previous BatchNorm parameters θ, weight initialization coefficients γ from (Glorot and Bengio, 2010), and previous learning rate η.
1609.07061 | 20 | Norm parameters θ, weight initialization coefficients γ, and previous learning rate η.
Ensure: updated weights W^{t+1}, updated BatchNorm parameters θ^{t+1} and updated learning rate η^{t+1}.
{1. Computing the parameter gradients:}
{1.1. Forward propagation:}
for k = 1 to L do
  W^b_k ← Binarize(W_k)
  s_k ← a^b_{k-1} W^b_k
  a_k ← BatchNorm(s_k, θ_k)
  if k < L then
    a^b_k ← Binarize(a_k)
  end if
end for
{1.2. Backward propagation:}
{Note that the gradients are not binary.}
Compute g_{a_L} = ∂C/∂a_L knowing a_L and a*
for k = L to 1 do
  if k < L then
    g_{a_k} ← g_{a^b_k} ◦ 1_{|a_k| ≤ 1}
  end if
  (g_{s_k}, g_{θ_k}) ← BackBatchNorm(g_{a_k}, s_k, θ_k)
  g_{a_{k-1}} ← g_{s_k} W^b_k
  g_{W^b_k} ← g_{s_k}^T a^b_{k-1}
end for
{2. Accumulating the parameter gradients:}
for k = 1 to L do
  θ^{t+1}_k ← Update(θ_k, η, g_{θ_k})
  W^{t+1}_k ← Clip(Update(W_k, γ_k η, g_{W^b_k}), -1, 1)
  η^{t+1} ← λη
end for
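As an illustration of the data flow in Algorithm 1, here is a heavily simplified NumPy sketch (ours, not the paper's Torch7/Theano code) of one training step for a one-hidden-layer regression MLP; batch normalization is replaced by a fixed 1/sqrt(fan-in) scaling so the straight-through estimator is not saturated, and ADAM/AdaMax is replaced by plain SGD.

```python
import numpy as np

def binarize(x):
    return np.where(x > 0, 1.0, -1.0)

def ste(grad, pre):
    """Straight-through estimator, Eq. (4): cancel the gradient where |pre| > 1."""
    return grad * (np.abs(pre) <= 1.0)

def bnn_train_step(x, y, W1, W2, lr=0.1):
    """One training step in the spirit of Algorithm 1 (batch norm replaced by a
    fixed scaling, optimizer replaced by SGD)."""
    scale = 1.0 / np.sqrt(W1.shape[0])
    # 1.1 Forward propagation with binarized weights and a binarized hidden layer.
    W1b, W2b = binarize(W1), binarize(W2)
    s1 = (x @ W1b) * scale
    a1 = binarize(s1)
    out = a1 @ W2b                         # output layer kept real-valued
    # 1.2 Backward propagation; note that the gradients are not binary.
    g_out = (out - y) / x.shape[0]         # gradient of a mean squared error loss
    g_W2b = a1.T @ g_out
    g_a1 = g_out @ W2b.T
    g_s1 = ste(g_a1, s1)                   # backprop through the sign non-linearity
    g_W1b = x.T @ (g_s1 * scale)
    # 2. Accumulate into the real-valued weights, then clip them to [-1, 1].
    W1 = np.clip(W1 - lr * g_W1b, -1.0, 1.0)
    W2 = np.clip(W2 - lr * g_W2b, -1.0, 1.0)
    return W1, W2

rng = np.random.default_rng(0)
W1, W2 = rng.uniform(-1, 1, (4, 16)), rng.uniform(-1, 1, (16, 1))
x = rng.normal(size=(64, 4))
y = np.sign(x[:, :1])                      # a toy target
for _ in range(100):
    W1, W2 = bnn_train_step(x, y, W1, W2)
```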
1609.07061 | 21 | Algorithm 2 Shift-based Batch Normalizing Transform, applied to activation x over a mini-batch. AP2(x) = sign(x) × 2^round(log2|x|) is the approximate power-of-2 (see footnote 3) and <<>> stands for both left and right binary shift.
Require: Values of x over a mini-batch: B = {x_1...m}; parameters to be learned: γ, β
Ensure: {y_i = BN(x_i, γ, β)}
μ_B ← (1/m) Σ_{i=1}^{m} x_i   {mini-batch mean}
C(x_i) ← (x_i - μ_B)   {centered input}
σ²_B ← (1/m) Σ_{i=1}^{m} (C(x_i) <<>> AP2(C(x_i)))   {apx variance}
x̂_i ← C(x_i) <<>> AP2((√(σ²_B + ε))^{-1})   {normalize}
y_i ← AP2(γ) <<>> x̂_i   {scale and shift}
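A NumPy sketch of Algorithm 2 (ours); every multiplication or division is replaced by scaling with an approximate power of two, which on hardware would be a binary shift, and the additive shift β of standard BN is omitted here, following the extracted listing.

```python
import numpy as np

def ap2(x):
    """sign(x) * 2**round(log2|x|): the power-of-2 scale used in place of a multiplier."""
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = np.sign(x[nz]) * 2.0 ** np.round(np.log2(np.abs(x[nz])))
    return out

def shift_batch_norm(x, gamma, eps=1e-5):
    """Shift-based BN over a mini-batch (rows = examples), following Algorithm 2."""
    mu = x.mean(axis=0)                           # mini-batch mean
    c = x - mu                                    # centered input
    var = (c * ap2(c)).mean(axis=0)               # apx variance: C(x) <<>> AP2(C(x))
    x_hat = c * ap2(1.0 / np.sqrt(var + eps))     # normalize: C(x) <<>> AP2(1/sqrt(var+eps))
    return ap2(gamma) * x_hat                     # scale: AP2(gamma) <<>> x_hat

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=(64, 5))
y = shift_batch_norm(x, gamma=np.ones(5))
print(y.mean(axis=0))   # close to zero
print(y.std(axis=0))    # roughly one, up to the power-of-2 approximations
```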
1609.07061 | 22 | Algorithm 3 Shift-based AdaMax learning rule (Kingma and Ba, 2014b). g²_t indicates the element-wise square g_t ◦ g_t. Good default settings are α = 2^{-10}, 1 - β_1 = 2^{-3}, 1 - β_2 = 2^{-10}. All operations on vectors are element-wise. With β^t_1, β^t_2 we denote β_1 and β_2 to the power t.
Require: Previous parameters θ_{t-1}, their gradient g_t, and learning rate α.
Ensure: Updated parameters θ_t
{Biased 1st and 2nd raw moment estimates:}
m_t ← β_1 · m_{t-1} + (1 - β_1) · g_t
v_t ← max(β_2 · v_{t-1}, |g_t|)
{Updated parameters:}
θ_t ← θ_{t-1} - (α <<>> (1 - β_1^t)) · m_t <<>> v_t^{-1}
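A NumPy sketch of Algorithm 3 (ours); the two shifts are written as multiplications by the approximate powers of two AP2(1/(1 - β_1^t)) and AP2(1/v_t), and a small epsilon (not part of the algorithm) guards the division when v_t has zero entries.

```python
import numpy as np

def ap2(x):
    """sign(x) * 2**round(log2|x|)."""
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = np.sign(x[nz]) * 2.0 ** np.round(np.log2(np.abs(x[nz])))
    return out

def shift_adamax_step(theta, g, m, v, t,
                      alpha=2.0**-10, beta1=1 - 2.0**-3, beta2=1 - 2.0**-10, eps=1e-8):
    """One shift-based AdaMax step in the spirit of Algorithm 3."""
    m = beta1 * m + (1 - beta1) * g                 # biased 1st moment estimate
    v = np.maximum(beta2 * v, np.abs(g))            # infinity-norm 2nd moment estimate
    step = alpha * ap2(1.0 / (1 - beta1**t)) * m * ap2(1.0 / (v + eps))
    return theta - step, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 11):
    g = np.array([0.5, -1.0, 0.25])                 # toy gradients
    theta, m, v = shift_adamax_step(theta, g, m, v, t)
print(theta)                                         # moves opposite to the gradient sign
```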
# 2.6 First Layer
1609.07061 | 23 | # 2.6 First Layer
In a BNN, only the binarized values of the weights and activations are used in all calculations. As the output of one layer is the input of the next, the inputs of all the layers are binary, with the exception of the first layer. However, we do not believe this to be a major issue. First, in computer vision, the input representation typically has far fewer channels (e.g., red, green and blue) than internal representations (e.g., 512). Consequently, the first layer of a ConvNet is often the smallest convolution layer, both in terms of parameters and computations (Szegedy et al., 2014). Second, it is relatively easy to handle continuous-valued inputs as fixed point numbers, with m bits of precision. For example, in the common case of 8-bit fixed point inputs:
$$s = x \cdot w^b, \qquad s = \sum_{n=1}^{8} 2^{n-1} (x^n \cdot w^b), \qquad (8)$$
where $x$ is a vector of 1024 8-bit inputs, $x^8_1$ is the most significant bit of the first input, $w^b$ is a vector of 1024 1-bit weights, and $s$ is the resulting weighted sum. This method is used in Algorithm 4.
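A small NumPy sketch of Eq. (8) (ours): the 8-bit input is split into bit-planes and each binary dot product is accumulated with its power-of-two weight; the per-bit dot product, which the paper computes with its XNOR-popcount kernel, is written here as an ordinary product.

```python
import numpy as np

def first_layer_dot(x_uint8, w_binary):
    """Eq. (8): dot product of an 8-bit fixed-point input vector with a +/-1
    weight vector, computed as 8 binary dot products weighted by powers of two."""
    s = 0.0
    for n in range(1, 9):
        bit_plane = (x_uint8 >> (n - 1)) & 1          # x^n: the n-th bit of every input
        s += 2 ** (n - 1) * float(bit_plane @ w_binary)
    return s

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=1024, dtype=np.uint8)
w = rng.choice([-1.0, 1.0], size=1024)
assert np.isclose(first_layer_dot(x, w), float(x.astype(np.float64) @ w))
```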
1609.07061 | 24 | Algorithm 4 Running a BNN with L layers.
Require: 8-bit input vector a_0, binary weights W^b, and BatchNorm parameters θ.
Ensure: the MLP output a_L.
{1. First layer:}
a_1 ← 0
for n = 1 to 8 do
  a_1 ← a_1 + 2^{n-1} × XnorDotProduct(a^n_0, W^b_1)
end for
a^b_1 ← Sign(BatchNorm(a_1, θ_1))
{2. Remaining hidden layers:}
for k = 2 to L - 1 do
  a_k ← XnorDotProduct(a^b_{k-1}, W^b_k)
  a^b_k ← Sign(BatchNorm(a_k, θ_k))
end for
{3. Output layer:}
a_L ← XnorDotProduct(a^b_{L-1}, W^b_L)
a_L ← BatchNorm(a_L, θ_L)
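For completeness, a NumPy sketch (ours) of this forward pass for a fully-connected BNN; the inference-time BatchNorm is folded into a per-unit affine map gamma * a + beta, and the binary products are plain matrix multiplications rather than the XNOR-popcount kernel.

```python
import numpy as np

def binarize(x):
    return np.where(x > 0, 1.0, -1.0)

def run_bnn(x_uint8, Wb, gamma, beta):
    """Forward pass in the spirit of Algorithm 4 for an L-layer binarized MLP."""
    # 1. First layer: accumulate the 8 bit-planes of the fixed-point input.
    a = np.zeros((x_uint8.shape[0], Wb[0].shape[1]))
    for n in range(1, 9):
        bit_plane = ((x_uint8 >> (n - 1)) & 1).astype(np.float64)
        a += 2 ** (n - 1) * (bit_plane @ Wb[0])
    a = binarize(gamma[0] * a + beta[0])
    # 2. Remaining hidden layers.
    for k in range(1, len(Wb) - 1):
        a = binarize(gamma[k] * (a @ Wb[k]) + beta[k])
    # 3. Output layer: keep the real-valued, normalized pre-activations.
    return gamma[-1] * (a @ Wb[-1]) + beta[-1]

rng = np.random.default_rng(0)
sizes = [16, 32, 32, 10]
Wb = [binarize(rng.uniform(-1, 1, (m, n))) for m, n in zip(sizes[:-1], sizes[1:])]
gamma = [np.ones(n) for n in sizes[1:]]
beta = [np.zeros(n) for n in sizes[1:]]
x = rng.integers(0, 256, size=(4, 16), dtype=np.uint8)
print(run_bnn(x, Wb, gamma, beta).shape)   # (4, 10)
```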
# 3. Quantized Neural Networks - More than 1-bit
# 3. Quantized Neural Networks - More than 1-bit
Observing Eq. (8), we can see that using 2-bit activations simply doubles the number of times we need to run our XnorPopCount kernel (i.e., the cost is directly proportional to the activation bitwidth). This idea was recently proposed by Zhou et al. (2016) (DoReFa net) and Miyashita et al. (2016) (published on arXiv shortly after our preliminary technical report was published there). However, in contrast to Zhou et al., we did not find it useful to initialize the network with weights obtained by training the network with full precision weights. Moreover, the Zhou et al. network did not quantize the weights of the first convolutional layer and the last fully-connected layer, whereas we binarized both. We followed the quantization schemes suggested by Miyashita et al. (2016), namely, linear quantization:
LinearQuant(x, bitwidth) = Clip( round(x / bitwidth) × bitwidth, minV, maxV )   (9)

and logarithmic quantization:
LogQuant(x, bitwidth) = Clip( AP2(x), minV, maxV ),   (10)
where minV and maxV are the minimum and maximum of the scale range, respectively, and AP2(x) is the approximate power-of-2 of x, as described in Section 2.4. In our experiments (detailed in Section 4) we applied the above quantization schemes to the weights, activations and gradients, and tested them on the more challenging ImageNet dataset.
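As a concrete illustration, both schemes fit in a few lines of NumPy. This is a minimal sketch under our own naming (ap2, linear_quant, log_quant); the choice of clipping range in the example and the small epsilon guarding log2(0) are assumptions, not details taken from our released code.

```python
import numpy as np

def ap2(x):
    """Approximate power-of-2: round |x| to the nearest power of two, keeping the sign."""
    sign = np.sign(x)
    exponent = np.round(np.log2(np.maximum(np.abs(x), 1e-32)))
    return sign * 2.0 ** exponent

def linear_quant(x, bitwidth, min_v, max_v):
    """Eq. (9): snap x to a uniform grid of step `bitwidth`, then clip to [min_v, max_v]."""
    return np.clip(np.round(x / bitwidth) * bitwidth, min_v, max_v)

def log_quant(x, min_v, max_v):
    """Eq. (10): project x onto signed powers of two, then clip to [min_v, max_v]."""
    return np.clip(ap2(x), min_v, max_v)

# Example: quantizing a random gradient tensor.
g = np.random.randn(4, 4).astype(np.float32)
print(linear_quant(g, bitwidth=2.0 ** -6, min_v=-1.0, max_v=1.0))
print(log_quant(g, min_v=-1.0, max_v=1.0))
```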
# 4. Benchmark Results
# 4.1 Results on MNIST, SVHN, and CIFAR-10
We performed two sets of experiments, each based on a different framework, namely Torch7 and Theano. Other than the framework, the two sets of experiments are very similar:
Table 1: Classification test error rates of DNNs trained on MNIST (fully connected architecture), CIFAR-10 and SVHN (convnet). No unsupervised pre-training or data augmentation was used.
| Data set | MNIST | SVHN | CIFAR-10 |
|---|---|---|---|
| Binarized activations+weights, during training and test | | | |
| BNN (Torch7) | 1.40% | 2.53% | 10.15% |
| BNN (Theano) | 0.96% | 2.80% | 11.40% |
| Committee Machines' Array (Baldassi et al., 2015) | 1.35% | - | - |
| Binarized weights, during training and test | | | |
| BinaryConnect (Courbariaux et al., 2015a) | 1.29 ± 0.08% | 2.30% | 9.90% |
| Binarized activations+weights, during test | | | |
| EBP (Cheng et al., 2015) | 2.2 ± 0.1% | - | - |
| Bitwise DNNs (Kim and Smaragdis, 2016) | 1.33% | - | - |
| Ternary weights, binary activations, during test | | | |
| (Hwang and Sung, 2014) | 1.45% | - | - |
| No binarization (standard results) | | | |
| No reg | 1.3 ± 0.2% | 2.44% | 10.94% |
| Maxout Networks (Goodfellow et al., 2013b) | 0.94% | 2.47% | 11.68% |
| Gated pooling (Lee et al., 2015) | - | 1.69% | 7.62% |
• In both sets of experiments, we obtain near state-of-the-art results with BNNs on the MNIST, CIFAR-10 and SVHN benchmark datasets.
• In our Torch7 experiments, the activations are stochastically binarized at train-time, whereas in our Theano experiments they are deterministically binarized.
• In our Torch7 experiments, we use the shift-based BN and AdaMax variants, which are detailed in Algorithms 2 and 3, whereas in our Theano experiments, we use vanilla BN and ADAM.
Results are reported in Table 1. Implementation details are reported in Appendix A.
MNIST. MNIST is an image classification benchmark dataset (LeCun et al., 1998). It consists of a training set of 60K and a test set of 10K 28 × 28 gray-scale images representing digits ranging from 0 to 9. The Multi-Layer Perceptron (MLP) we train on MNIST consists of 3 hidden layers. In our Theano implementation we used hidden layers of size 4096, whereas in our Torch implementation we used a much smaller size of 2048. This difference explains the accuracy gap between the two implementations.
CIFAR-10. CIFAR-10 is an image classification benchmark dataset. It consists of a training set of size 50K and a test set of size 10K, where instances are 32 × 32 color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. Both implementations share the same structure, as reported in Appendix A. Since the Torch implementation uses stochastic binarization, it achieved slightly better results.
Figure 1: Training curves for different methods on the CIFAR-10 dataset. The dotted lines represent the training costs (square hinge losses) and the continuous lines the corresponding validation error rates. Although BNNs are slower to train, they are nearly as accurate as 32-bit float DNNs.
[Figure 1 plot: validation error rate (%) versus training epoch (0-500) for the Baseline, BNN (Theano) and BNN (Torch7) networks on CIFAR-10.]
# 4.2 Results on ImageNet
To test the strength of our method, we applied it to the challenging ImageNet classification task, which is probably the most important classification benchmark dataset. It consists of a training set of 1.2M samples and a test set of 50K samples. Each instance is labeled with one of 1000 categories including objects, animals, scenes, and even some abstract shapes. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-x error rate is the fraction of test images for which the correct label is not among the x labels considered most probable by the model. Considerable research has been concerned with compressing ImageNet architectures while preserving high accuracy. Previous approaches include pruning near-zero weights (Gong et al., 2014; Han et al., 2015a), using matrix factorization techniques (Zhang et al., 2015), quantizing the weights (Gupta et al., 2015), using shared weights (Chen et al., 2015) and applying Huffman codes (Han et al., 2015a), among others.
To the best of our knowledge, before the first revision of this paper was published on arXiv, no one had reported on successfully quantizing the network's activations. On the contrary, a recent work (Han et al., 2015a) showed that accuracy significantly deteriorates when trying to quantize convolutional layers' weights below 4-bit (FC layers are more robust to quantization and can operate quite well with only 2 bits). In the present work we
attempted to tackle the difficult task of binarizing both weights and activations. Employing the well-known AlexNet and GoogleNet architectures, we applied our techniques and achieved 41.8% top-1 and 67.1% top-5 accuracy using AlexNet, and 47.1% top-1 and 69.1% top-5 accuracy using GoogleNet. While these performance results leave room for improvement (relative to full precision nets), they are by far better than all previous attempts to compress ImageNet architectures using less than 4-bit precision for the weights. Moreover, this advantage is achieved while also binarizing neuron activations.
# 4.3 Relaxing "hard tanh" boundaries
We discovered that after training the network it is useful to widen the "hard tanh" boundaries and retrain the network. As explained in Section 2.3, the straight-through estimator (which can be written as a "hard tanh") cancels gradients coming from neurons with absolute values higher than 1. Hence, towards the last training iterations most of the gradient values are zero and the weight values cease to update. By relaxing the "hard tanh" boundaries we allow more gradients to flow in the back-propagation phase, which improves top-1 accuracy by 1.5% on the AlexNet topology using the vanilla BNN.
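The relaxation is a one-line change to the straight-through estimator. The following NumPy sketch is our own illustration (the names ste_sign_forward and ste_sign_backward are ours); t = 1 corresponds to the standard "hard tanh" boundary and t > 1 to the relaxed variant.

```python
import numpy as np

def ste_sign_forward(x):
    """Binarization used in the forward pass."""
    return np.where(x >= 0, 1.0, -1.0)

def ste_sign_backward(grad_out, x, t=1.0):
    """Straight-through estimator: pass the gradient where |x| <= t, cancel it elsewhere.
    t = 1.0 is the usual 'hard tanh' boundary; t > 1.0 relaxes it so that more
    gradients flow late in training."""
    return grad_out * (np.abs(x) <= t)

# Example: with t = 1.0 the second entry gets no gradient, with t = 2.0 it does.
x = np.array([0.3, -1.5, 0.9])
g = np.ones_like(x)
print(ste_sign_backward(g, x, t=1.0))   # [1. 0. 1.]
print(ste_sign_backward(g, x, t=2.0))   # [1. 1. 1.]
```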
# 4.4 2-bit activations
While training BNNs on the ImageNet dataset we noticed that we could not force the training set error rate to converge to zero. In fact, the training error rate stayed fairly close to the validation error rate. This observation led us to investigate a more relaxed activation quantization (more than 1-bit). As can be seen in Table 2, the results are quite impressive and show an approximate 5.6% drop in performance (top-1 accuracy) relative to the floating point representation, using only 1-bit weights and 2-bit activations. Following Miyashita et al. (2016), we also tried quantizing the gradients and discovered that only logarithmic quantization works; with 6-bit gradients we achieved 46.8% top-1 accuracy. These results are presently state-of-the-art, surpassing those obtained by the DoReFa net (Zhou et al., 2016). As opposed to DoReFa, we utilized a deterministic quantization process rather than a stochastic one. Moreover, it is important to note that while quantizing the gradients, DoReFa assigns each instance in a mini-batch its own scaling factor, which increases the number of MAC operations.
While AlexNet can be compressed rather easily, compressing GoogleNet is much harder due to its small number of parameters. When using vanilla BNNs, we observed a large degradation in the top-1 results. However, by using QNNs with 4-bit weights and activations, we were able to achieve 66.5% top-1 accuracy (only a 5.5% drop in performance compared to the 32-bit floating point architecture), which is the current state-of-the-art compression result over GoogleNet. Moreover, by using QNNs with 6-bit weights, activations and gradients we achieved 66.4% top-1 accuracy. Full implementation details of our experiments are reported in Appendix A.6.
# 4.5 Language Models
Recurrent neural networks (RNNs) are very demanding in memory and computational power in comparison to feed-forward networks.
Table 2: Classification test error rates of the AlexNet model trained on the ImageNet 1000 classification task. No unsupervised pre-training or data augmentation was used.
| Model | Top-1 | Top-5 |
|---|---|---|
| Binarized activations+weights, during training and test | | |
| BNN | 41.8% | 67.1% |
| Xnor-Nets⁴ (Rastegari et al., 2016) | 44.2% | 69.2% |
| Binary weights and quantized activations, during training and test | | |
| QNN 2-bit activation | 51.03% | 73.67% |
| DoReFaNet 2-bit activation⁴ (Zhou et al., 2016) | 50.7% | 72.57% |
| Quantized weights, during test | | |
| Deep Compression 4/2-bit (conv/FC layer) (Han et al., 2015a) | 55.34% | 77.67% |
| (Gysel et al., 2016) 2-bit | 0.01% | - |
| No quantization (standard results) | | |
| AlexNet (our implementation) | 56.6% | 80.2% |
Table 3: Classification test error rates of the GoogleNet model trained on the ImageNet 1000 classification task. No unsupervised pre-training or data augmentation was used.
There is a large variety of recurrent models, with the Long Short-Term Memory (LSTM) networks introduced by Hochreiter and Schmidhuber (1997) being the most popular. LSTMs are a special kind of RNN, capable of learning long-term dependencies using unique gating mechanisms. Recently, Ott et al. (2016) tried to quantize the RNN weight matrices using techniques similar to those described in Section 2. They observed that the weight binarization methods do not work with RNNs. However, by using 2 bits (i.e., -1, 0, 1), they were able to achieve similar and even higher accuracy on several datasets. Here we report on the first attempt to quantize both weights and activations, by evaluating the accuracy of quantized recurrent models trained on the Penn Treebank dataset. The Penn Treebank Corpus (Marcus et al., 1993) contains 10K unique words. We followed the same setting as in (Mikolov and Zweig, 2012), which resulted in 18.55K words for the training set, and 14.5K and 16K words in the validation
and test sets, respectively. We experimented with both vanilla RNNs and LSTMs. For our vanilla RNN model we used one hidden layer of size 2048 and ReLU as the activation function. For our LSTM model we used one hidden layer of size 300. Our RNN implementation was constructed to predict the next character, hence performance was measured using the bits-per-character (BPC) metric. In the LSTM model we tried to predict the next word, so performance was measured using the perplexity-per-word (PPW) metric. Similar to Ott et al. (2016), our preliminary results indicate that binarization of the weight matrices leads to large accuracy degradation. However, as can be seen in Table 4, with 4-bit weights and activations we can achieve accuracies similar to those of their 32-bit floating point counterparts.
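For reference, both metrics are standard transforms of the model's average cross-entropy on the test stream (the formulas below are a reminder and are not restated in the paper); with c_t denoting characters and w_i denoting words,

\mathrm{BPC} = -\frac{1}{T}\sum_{t=1}^{T} \log_2 p(c_t \mid c_{<t}), \qquad \mathrm{PPW} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N} \ln p(w_i \mid w_{<i})\right).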
Table 4: Language model results on the Penn Treebank dataset. FP stands for 32-bit floating point.
| Model | Layers | Hidden Units | bits (weights) | bits (activation) | Accuracy |
|---|---|---|---|---|---|
| RNN | 1 | 2048 | | | 1.81 BPC |
| RNN | 1 | 2048 | | | 1.67 BPC |
| RNN | 1 | 2048 | | | 1.11 BPC |
| RNN | 1 | 2048 | | | 1.05 BPC |
| RNN | 1 | 2048 | | | 1.05 BPC |
| LSTM | 1 | 300 | | | 220 PPW |
| LSTM | 1 | 300 | | | 110 PPW |
| LSTM | 1 | 300 | | | 100 PPW |
| LSTM | 1 | 300 | | | 97 PPW |
| LSTM | 1 | 300 | | | 97 PPW |
# 5. High Power Efficiency during the Forward Pass
Table 5: Energy consumption of multiply-accumulations; see Horowitz (2014)
| Operation | MUL | ADD |
|---|---|---|
| 8-bit Integer | 0.2pJ | 0.03pJ |
| 32-bit Integer | 3.1pJ | 0.1pJ |
| 16-bit Floating Point | 1.1pJ | 0.4pJ |
| 32-bit Floating Point | 3.7pJ | 0.9pJ |
Table 6: Energy consumption of memory accesses; see Horowitz (2014)
| Memory size | 64-bit Cache access |
|---|---|
| 8K | 10pJ |
| 32K | 20pJ |
| 1M | 100pJ |
| DRAM | 1.3-2.6nJ |
⁴ First and last layers were not binarized (i.e., they used 32-bit precision weights and activations).
Computer hardware, be it general-purpose or specialized, is composed of memories, arithmetic operators and control logic. During the forward pass (both at run-time and train-time), BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which might lead to vastly improved power-efficiency. Moreover, a binarized CNN can lead to binary convolution kernel repetitions, and we argue that dedicated hardware could reduce the time complexity by 60%.
Figure 2: Binary weight filters, sampled from the first convolution layer. Since we have only 2^(k^2) unique 2D filters (where k is the filter size), filter replication is very common. For instance, on our CIFAR-10 ConvNet, only 42% of the filters are unique.
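The uniqueness ratio quoted in the caption is easy to check offline. The NumPy sketch below is our own illustration (unique_filter_ratio is not a function from the paper's code): it counts how many of the k × k binary slices of a 4D weight tensor are distinct, which is exactly the quantity that dedicated hardware could exploit.

```python
import numpy as np

def unique_filter_ratio(Wb):
    """Wb: binary weight tensor of shape (M_l, M_lm1, k, k) with values in {-1, +1}.
    Returns the fraction of 2D k x k slices that are unique."""
    m_l, m_lm1, k, _ = Wb.shape
    slices = Wb.reshape(m_l * m_lm1, k * k)        # every 2D filter as a row
    n_unique = np.unique(slices, axis=0).shape[0]  # distinct binary patterns
    return n_unique / slices.shape[0]

# Example: random {-1,+1} 3x3 filters for a layer with 128 output and 128 input maps.
rng = np.random.default_rng(0)
Wb = rng.choice([-1, 1], size=(128, 128, 3, 3))
print(f"unique 2D filters: {unique_filter_ratio(Wb):.1%} (at most 2^9 = 512 patterns)")
```

Counting a filter and its sign-inverted twin as one pattern, as suggested below, would roughly halve the count further.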
Memory Size and Accesses. Improving computing performance has always been and remains a challenge. Over the last decade, power has been the main constraint on performance (Horowitz, 2014). This is why considerable research efforts have been devoted to reducing the energy consumption of neural networks. Horowitz (2014) provides rough numbers for the energy consumed by the computation (the given numbers are for 45nm technology), as summarized in Tables 5 and 6. Importantly, we can see that memory accesses typically consume more energy than arithmetic operations, and memory access cost increases with memory size. In comparison with 32-bit DNNs, BNNs require 32 times smaller memory size and 32 times fewer memory accesses. This is expected to reduce energy consumption drastically (i.e., by a factor larger than 32).
XNOR-Count. Applying a DNN mainly involves convolutions and matrix multiplications. The key arithmetic operation of deep learning is thus the multiply-accumulate operation. Artificial neurons are basically multiply-accumulators computing weighted sums of their inputs. In BNNs, both the activations and the weights are constrained to either -1 or +1. As a result, most of the 32-bit floating point multiply-accumulations are replaced by 1-bit XNOR-count operations.
When using a ConvNet architecture with binary weights, the number of unique filters is bounded by the filter size. For example, in our implementation we use filters of size 3 × 3, so the maximum number of unique 2D filters is 2^9 = 512. However, this should not prevent expanding the number of feature maps beyond this number, since the actual filter is a 3D matrix. Assuming we have M_l filters in the l-th convolutional layer, we have to store a 4D weight matrix of size M_l × M_(l-1) × k × k. Consequently, the number of unique filters is bounded by 2^(k^2) · M_(l-1). When necessary, we apply each filter on the map and perform the required multiply-accumulate (MAC) operations (in our case, using XNOR and popcount operations). Since we now have binary filters, many 2D filters of size k × k repeat themselves. By using dedicated hardware/software, we can apply only the unique 2D filters on each feature map and sum the results to obtain each 3D filter's convolutional result. Note that an inverse filter (i.e., [-1, 1, -1] is the inverse of [1, -1, 1]) can also be treated as a repetition; it is merely a multiplication of the original filter's output by -1.
QNN complexity scales up linearly with the number of bits per weight/activation, since it requires applying the XNOR kernel several times (see Section 3). As of now, QNNs still supply the best compression-to-accuracy ratio. Moreover, quantizing the gradients allows us to use the XNOR kernel for the backward pass, leading to fully fixed-point layers with low bitwidth. By accelerating the training phase, QNNs can play an important role in future power-demanding tasks.
# 6. Seven Times Faster on GPU at Run-Time
It is possible to speed up GPU implementations of QNNs by using a method sometimes called SIMD (single instruction, multiple data) within a register (SWAR). The basic idea of SWAR is to concatenate groups of 32 binary variables into 32-bit registers, and thus obtain a 32-fold speed-up on bitwise operations (e.g., XNOR). Using SWAR, it is possible to evaluate 32 connections with only 3 instructions:
a_1 += popcount(xnor(a_0^{32b}, w_1^{32b})),   (11)
where a_1 is the resulting weighted sum, and a_0^{32b} and w_1^{32b} are the concatenated inputs and weights. Those 3 instructions (accumulation, popcount, xnor) take 1 + 4 + 1 = 6 clock cycles on recent Nvidia GPUs (and if they were to become a fused instruction, it would only take a single clock cycle). Consequently, we obtain a theoretical Nvidia GPU speed-up factor of 32/6 ≈ 5.3. In practice, this speed-up is quite easy to obtain as the memory bandwidth to computation ratio is also increased 6 times.
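The same SWAR trick can be prototyped outside a GPU kernel. The following Python sketch is our own illustration, not the CUDA kernel described here: 32 binary activations and weights are packed into one 32-bit word, and Equation (11) becomes one XNOR followed by one popcount.

```python
import random

MASK32 = (1 << 32) - 1

def pack32(bits):
    """Pack 32 values from {-1,+1} into one 32-bit word (bit = 1 encodes +1)."""
    word = 0
    for i, b in enumerate(bits):
        word |= (1 if b > 0 else 0) << i
    return word

def swar_dot(a_word, w_word, n=32):
    """Equation (11): popcount(xnor(a, w)), mapped back to a +/-1 dot product."""
    xnor = ~(a_word ^ w_word) & MASK32   # 1 where the signs agree
    matches = bin(xnor).count("1")       # popcount
    return 2 * matches - n               # sum of 32 +/-1 products

# Example: the packed dot product agrees with the naive one.
a = [random.choice([-1, 1]) for _ in range(32)]
w = [random.choice([-1, 1]) for _ in range(32)]
assert swar_dot(pack32(a), pack32(w)) == sum(x * y for x, y in zip(a, w))
```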
In order to validate these theoretical results, we programmed two GPU kernels:
• An unoptimized matrix multiplication kernel that serves as our baseline.
• The XNOR kernel, which is nearly identical to the baseline, except that it uses the SWAR method, as in Equation (11).
The two GPU kernels return identical outputs when their inputs are constrained to -1 or +1 (but not otherwise). The XNOR kernel is about 23 times faster than the baseline kernel and 3.4 times faster than cuBLAS, as shown in Figure 3. Last but not least, the MLP from Section 4 runs 7 times faster with the XNOR kernel than with the baseline kernel, without suffering any loss in classification accuracy (see Figure 3). As MNIST's images are not binary, the first layer's computations are always performed by the baseline kernel. The last three columns show that the MLP accuracy does not depend on which kernel is used.
Figure 3: The first 3 columns show the time it takes to perform an 8192 × 8192 × 8192 (binary) matrix multiplication on a GTX750 Nvidia GPU, depending on which kernel is used. The next three columns show the time it takes to run the MLP from Section 3 on the full MNIST test set. The last three columns show that the MLP accuracy does not depend on the kernel used.
[Figure 3: bar chart titled "GPU kernels' execution times", comparing the baseline kernel, cuBLAS/Theano and the XNOR kernel on matrix multiplication time (s), MNIST MLP run time (s) and MLP test error (%).]
# 7. Discussion and Related Work
Until recently, the use of extremely low-precision networks (binary in the extreme case) was believed to substantially degrade the network performance (Courbariaux et al., 2014). Soudry et al. (2014) and Cheng et al. (2015) proved the contrary by showing that good performance could be achieved even if all neurons and weights are binarized to ±1. This was done using Expectation BackPropagation (EBP), a variational Bayesian approach, which infers networks with binary weights and neurons by updating the posterior distributions over the weights. These distributions are updated by differentiating their parameters (e.g., mean values) via the back propagation (BP) algorithm. Esser et al. (2015) implemented a fully binary network at run time using a very similar approach to EBP, showing significant
improvement in energy efficiency. The drawback of EBP is that the binarized parameters are only used during inference.
The probabilistic idea behind EBP was extended in the BinaryConnect algorithm of Courbariaux et al. (2015a). In BinaryConnect, the real-valued version of the weights is saved and used as a key reference for the binarization process. The binarization noise is independent between different weights, either by construction (by using stochastic quantization) or by assumption (a common simplification; see Spang and Schultheiss, 1962). The noise would have little effect on the next neuron's input because the input is a summation over many weighted neurons. Thus, the real-valued version could be updated using the back-propagated error by simply ignoring the binarization noise in the update. With this method, Courbariaux et al. (2015a) were the first to binarize weights in CNNs and achieved near state-of-the-art performance on several datasets. They also argued that noisy weights provide a form of regularization, which could help to improve generalization, as previously shown by Wan et al. (2013). This method binarized weights while still maintaining full-precision neurons.
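The following is a minimal NumPy sketch of the mechanism described above, under our own simplifications (a single linear layer, deterministic sign binarization and plain gradient descent); it is not the original BinaryConnect code, but it shows how the gradient computed with the binary weights is used to update and clip the stored real-valued weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))
y = (X @ rng.choice([-1.0, 1.0], size=32) > 0).astype(np.float64) * 2 - 1

W_real = rng.normal(scale=0.1, size=32)    # full-precision "reference" weights
lr = 0.01

for step in range(200):
    W_bin = np.sign(W_real)                # deterministic binarization to {-1, +1}
    W_bin[W_bin == 0] = 1
    out = X @ W_bin                        # forward pass uses the binary weights
    err = out - y
    grad = X.T @ err / len(X)              # gradient computed with the binary weights...
    W_real -= lr * grad                    # ...is applied to the real-valued copy
    W_real = np.clip(W_real, -1.0, 1.0)    # keep the reference weights bounded

print("final squared error:", float(np.mean((X @ np.sign(W_real) - y) ** 2)))
```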
Lin et al. (2015a) carried over the work of Courbariaux et al. (2015a) to the back-propagation process by quantizing the representations at each layer of the network, to convert some of the remaining multiplications into binary shifts by restricting the neurons' values to be power-of-two integers. Lin et al. (2015a)'s work and ours seem to share similar characteristics. However, their approach continues to use full-precision weights during the test phase. Moreover, Lin et al. (2015a) quantize the neurons only during the back-propagation process, and not during forward propagation.
Other research (Baldassi et al., 2015) showed that full binary training and testing is possible in an array of committee machines with randomized input, where only one weight layer is being adjusted. Gong et al. (2014) aimed to compress a fully trained high-precision network by using quantization or matrix factorization methods. These methods required training the network with full-precision weights and neurons, thus requiring numerous MAC operations (which the proposed QNN algorithm avoids). Hwang and Sung (2014) focused on a fixed-point neural network design and achieved performance almost identical to that of the floating-point architecture. Kim and Smaragdis (2016) retrained neural networks with binary weights and activations.
As far as we know, before the first revision of this paper was published on arXiv, no work had succeeded in binarizing weights and neurons, at the inference phase and during the entire training phase of a deep network. This was achieved in the present work. We relied on the idea that binarization can be done stochastically, or be approximated as random noise. This was previously done for the weights by Courbariaux et al. (2015a), but our BNNs extend this to the activations. Note that the binary activations are especially important for ConvNets, where there are typically many more neurons than free weights. This allows highly efficient operation of the binarized DNN at run time, and at the forward-propagation phase during training. Moreover, our training method has almost no multiplications, and therefore might be implemented efficiently in dedicated hardware. However, we have to save the value of the full-precision weights. This is a remaining computational bottleneck during training, since it is an energy-consuming operation.
Shortly after the first version of this paper was posted on arXiv, several papers tried to improve and extend it. Rastegari et al. (2016) made a small modification to our algorithm
(namely multiplying the binary weights and input by their L1 norm) and published promising results on the ImageNet dataset. Note that their method, named XNOR-Net, requires an additional multiplication by a different scaling factor for each patch in each sample (Rastegari et al., 2016, Section 3.2, Eq. 10 and Figure 2). This in itself requires many multiplications and prevents an efficient implementation of XNOR-Net on known hardware designs. Moreover, Rastegari et al. (2016) did not quantize the first and last layers, so XNOR-Nets are only partially binarized NNs. Miyashita et al. (2016) suggested a more relaxed quantization (more than 1 bit) for both the weights and the activations. Their idea was to quantize both and use shift operations as in our Eq. (4). They proposed to quantize the parameters in their non-uniform, base-2 logarithmic representation. This idea
was inspired by the fact that the weights and activations in a trained network naturally have non-uniform distributions. They moreover showed that they can quantize the gradients as well to 6 bits without significant losses in performance (on the CIFAR-10 dataset). Zhou et al. (2016) applied similar ideas to the ImageNet dataset and showed that by using 1-bit weights, 2-bit activations and 6-bit gradients they can achieve 46.1% top-1 accuracy with the AlexNet architecture. They named this method DoReFa-Net. Here we outperform DoReFa-Net and achieve 46.8% using a 1-2-6 bit quantization scheme (weights-activations-gradients) and 51% using a 1-2-32 quantization scheme. These results confirm that we can achieve comparable results even on a large dataset by applying the XNOR kernel several times. Merolla et al. (2016) showed that DNNs can be robust to more than just weight binarization. They applied several
different distortions to the weights, including additive and multiplicative noise, and a class of non-linear projections. This was shown to improve robustness to other distortions and even to boost results. Zheng and Tang tried to apply our binarization scheme to recurrent neural networks for language modeling and achieved comparable results as well. Andri et al. (2016) even created a hardware implementation to speed up BNNs.
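To illustrate the kinds of quantizers discussed in this section, here is a small NumPy sketch (our own illustrative code, not the exact functions used in any of the cited papers) of a uniform k-bit quantizer and a power-of-two, base-2 logarithmic quantizer; with the latter, multiplications reduce to sign flips and shifts.

```python
import numpy as np

def quantize_uniform(x, k, lo=-1.0, hi=1.0):
    """Uniform k-bit quantization of x onto 2**k evenly spaced levels in [lo, hi]."""
    levels = 2 ** k - 1
    x = np.clip(x, lo, hi)
    q = np.round((x - lo) / (hi - lo) * levels)
    return lo + q * (hi - lo) / levels

def quantize_log2(x, k):
    """Power-of-two quantization: keep the sign and round |x| to the nearest 2**e,
    with the exponent restricted to 2**k values, so multiplication becomes a shift."""
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), 1e-12)
    e = np.clip(np.round(np.log2(mag)), -(2 ** (k - 1)), 2 ** (k - 1) - 1)
    return sign * np.exp2(e)

x = np.random.default_rng(0).normal(scale=0.5, size=5)
print(quantize_uniform(x, k=2))   # 2-bit uniform levels
print(quantize_log2(x, k=4))      # signed powers of two
```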
# 8. Conclusion
We have introduced BNNs, which binarize deep neural networks and can lead to dramatic improvements in both power consumption and computation speed. During the forward pass (both at run-time and train-time), BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. Our estimates indicate that power efficiency can be improved by more than one order of magnitude (see Section 5). In terms of speed, we programmed a binary matrix multiplication GPU kernel that enabled running an MLP over the MNIST dataset 7 times faster (than with an unoptimized GPU kernel) without any loss of accuracy (see Section 6).
We have shown that BNNs can handle MNIST, CIFAR-10 and SVHN while achieving nearly state-of-the-art accuracy. While our results for the challenging ImageNet are not on par with the best results achievable with full-precision networks, they significantly improve all previous attempts to compress ImageNet-capable architectures. Moreover, by quantizing the weights and activations to more than 1 bit (i.e., QNNs), we have been able to achieve results comparable to the 32-bit floating-point architectures (see Section 4.4 and the supplementary material, Appendix B). A major open research avenue would be to further improve our results on ImageNet. Substantial progress in this direction might go a long way towards facilitating DNN usability in low-power instruments such as mobile phones.
# Acknowledgments
We would like to express our appreciation to Elad Hoffer for his technical assistance and constructive comments. We thank our fellow MILA lab members who took the time to read the article and give us feedback. We thank the developers of Torch (Collobert et al., 2011), a Lua-based environment, and Theano (Bergstra et al., 2010; Bastien et al., 2012), a Python library that allowed us to easily develop fast and optimized code for GPU. We also thank the developers of Pylearn2 (Goodfellow et al., 2013a) and Lasagne (Dieleman et al., 2015), two deep learning libraries built on top of Theano. We thank Yuxin Wu for helping us compare our GPU kernels with cuBLAS. We are grateful for funding from NSERC, the Canada Research Chairs, Compute Canada, CIFAR, IBM and Samsung. This research was also supported by the Israel Science Foundation (grant No. 1890/14).
# Appendix A. Implementation Details
In this section we give full implementation details for our MNIST, SVHN, CIFAR-10 and ImageNet experiments.
# A.1 MLP on MNIST (Theano)
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 63 | # A.2 MLP on MNIST (Torch7)
We use a similar architecture as in our Theano experiments, without dropout, and with 2048 binary units per layer instead of 4096. Additionally, we use the shift base AdaMax and BN (with a minibatch of size 100) instead of the vanilla implementations, to reduce the number of multiplications. Likewise, we decay the learning rate by using a 1-bit right shift every 10 epochs.
# A.3 ConvNet on CIFAR-10 (Theano) | 1609.07061#63 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
# A.3 ConvNet on CIFAR-10 (Theano)
CIFAR-10 is an image classification benchmark dataset. It consists of a training set of size 50K and a test set of size 10K, where instances are 32 × 32 color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. We do not use data augmentation (which can really be a game changer for this dataset; see Graham 2014). The architecture of our ConvNet is identical to that used by Courbariaux et al. (2015b) except for the binarization of the activations. The Courbariaux et al. (2015a) architecture is itself mainly inspired by VGG (Simonyan and Zisserman, 2015). The square hinge loss is minimized with ADAM. We use an exponentially decaying learning rate, as we did for MNIST. We scale the learning rates of the weights with their initialization coefficients from Glorot and Bengio (2010). We use Batch Normalization with a minibatch of size 50 to speed up the training. We use the last 5000 samples of the training set as a validation set. We report the test error rate associated with the best validation error rate after 500 training epochs (we do not retrain on the validation set).
Table 7: Architecture of our CIFAR-10 ConvNet. We only use "same" convolutions, as in VGG (Simonyan and Zisserman, 2015).
CIFAR-10 ConvNet architecture
Input: 32 × 32 RGB image
3 × 3 - 128 convolution layer
BatchNorm and Binarization layers
3 × 3 - 128 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 512 convolution layer
BatchNorm and Binarization layers
3 × 3 - 512 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
1024 fully connected layer
BatchNorm and Binarization layers
1024 fully connected layer
BatchNorm and Binarization layers
10 fully connected layer
BatchNorm layer (no binarization)
Cost: Mean square hinge loss
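For readers who prefer code to tables, the following is a rough PyTorch sketch of the layer stack in Table 7; it is not the Theano/Torch7 implementation used in the experiments, and the `BinActive` module is only a placeholder for the activation binarization (a real BNN also binarizes the weights and uses a straight-through estimator in the backward pass).

```python
import torch.nn as nn

class BinActive(nn.Module):
    # Placeholder for activation binarization; a real BNN uses sign() in the forward
    # pass together with a straight-through estimator in the backward pass.
    def forward(self, x):
        return x.sign()

def conv_block(in_ch, out_ch, pool=False):
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)]  # "same" convolution
    if pool:
        layers.append(nn.MaxPool2d(2))
    layers += [nn.BatchNorm2d(out_ch), BinActive()]
    return layers

cifar10_convnet = nn.Sequential(
    *conv_block(3, 128),
    *conv_block(128, 128, pool=True),
    *conv_block(128, 256),
    *conv_block(256, 256, pool=True),
    *conv_block(256, 512),
    *conv_block(512, 512, pool=True),
    nn.Flatten(),                                   # 512 channels at 4 x 4 after 3 poolings
    nn.Linear(512 * 4 * 4, 1024), nn.BatchNorm1d(1024), BinActive(),
    nn.Linear(1024, 1024), nn.BatchNorm1d(1024), BinActive(),
    nn.Linear(1024, 10), nn.BatchNorm1d(10),        # no binarization on the output layer
)
```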
# A.4 ConvNet on CIFAR-10 (Torch7)
We use the same architecture as in our Theano experiments. We apply shift-based AdaMax and BN (with a minibatch of size 200) instead of the vanilla implementations to reduce the number of multiplications. Likewise, we decay the learning rate by using a 1-bit right shift every 50 epochs.
# A.5 ConvNet on SVHN
SVHN is also an image classiï¬cation benchmark dataset. It consists of a training set of size 604K examples and a test set of size 26K, where instances are 32 à 32 color images representing digits ranging from 0 to 9. In both sets of experiments, we follow the same procedure used for the CIFAR-10 experiments, with a few notable exceptions: we use half the number of units in the convolution layers, and we train for 200 epochs instead of 500 (because SVHN is a much larger dataset than CIFAR-10).
# A.6 ConvNet on ImageNet
The ImageNet classification task consists of a training set of 1.2M samples and a test set of 50K samples. Each instance is labeled with one of 1000 categories including objects, animals, scenes, and even some abstract shapes.
AlexNet: Our AlexNet implementation consists of 5 convolution layers followed by 3 fully connected layers (see Section 8). Additionally, we use Adam as our optimization method and batch-normalization layers (with a minibatch of size 512). Likewise, we decay the learning rate by 0.1 every 20 epochs.
GoogleNet: Our GoogleNet implementation consists of 2 convolution layers followed by 10 inception layers, spatial average pooling and a fully connected classifier. We also used the 2 auxiliary classifiers. Additionally, we use Adam (Kingma and Ba, 2014a) as our optimization method and batch-normalization layers (with a minibatch of size 64). Likewise, we decay the learning rate by 0.1 every 10 epochs.
Table 8: Our AlexNet architecture.
AlexNet ConvNet architecture
Input: 32 × 32 RGB image
11 × 11 - 64 convolution layer and 3 × 3 max-pooling layers
BatchNorm and Binarization layers
5 × 5 - 192 convolution layer and 3 × 3 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 384 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
4096 fully connected layer
BatchNorm and Binarization layers
4096 fully connected layer
BatchNorm and Binarization layers
1000 fully connected layer
BatchNorm layer (no binarization)
SoftMax layer (no binarization)
Cost: Negative log likelihood
# References
Renzo Andri, Lukas Cavigelli, Davide Rossi, and Luca Benini. Yodann: An ultra-low power convolutional neural network accelerator based on binary weights. arXiv preprint arXiv:1606.05487, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR'2015, arXiv:1409.0473, 2015.
Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses. Physical Review Letters, 115(12):1-5, 2015. ISSN 10797114. doi: 10.1103/PhysRevLett.115.128101.
Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
Michael J Beauchamp, Scott Hauck, Keith D Underwood, and K Scott Hemmert. Embedded floating-point units in FPGAs. In Proceedings of the 2006 ACM/SIGDA 14th International Symposium on Field Programmable Gate Arrays, pages 12-20. ACM, 2006.
Yoshua Bengio. Estimating or propagating gradients through stochastic neurons. Technical Report arXiv:1305.2982, Universite de Montreal, 2013. | 1609.07061#69 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.
Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems, pages 269-284. ACM, 2014a.
Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.
Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, et al. Dadiannao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pages 609-622. IEEE, 2014b.
Zhiyong Cheng, Daniel Soudry, Zexi Mao, and Zhenzhong Lan. Training binary multilayer neural networks for image classiï¬cation using expectation backpropgation. arXiv preprint arXiv:1503.03562, 2015.
Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Ng Andrew. Deep learning with COTS HPC systems. In Proceedings of the 30th international conference on machine learning, pages 1337â1345, 2013.
Ronan Collobert, Koray Kavukcuoglu, and Cl´ement Farabet. Torch7: A matlab-like envi- ronment for machine learning. In BigLearn, NIPS Workshop, 2011. | 1609.07061#71 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 72 | Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Training deep neural networks with low precision multiplications. ArXiv e-prints, abs/1412.7024, December 2014.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. ArXiv e-prints, abs/1511.00363, November 2015a.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training Deep Neural Networks with binary weights during propagations. NIPS, pages 1–9, 2015b. URL http://arxiv.org/abs/1511.00363.
Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. Fast and robust neural network joint models for statistical machine translation. In Proc. ACL'2014, 2014.
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 73 | Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, Eric Battenberg, Jack Kelly, Jeffrey De Fauw, Michael Heilman, diogo149, Brian McFee, Hendrik Weideman, takacsg84, peterderivaz, Jon, instagibbs, Dr. Kashif Rasul, CongLiu, Britefury, and Jonas Degrave. Lasagne: First release., August 2015. URL http://dx.doi.org/10.5281/zenodo.27878.
Steve K Esser, Rathinakumar Appuswamy, Paul Merolla, John V Arthur, and Dharmendra S Modha. Backpropagation for energy-efficient neuromorphic computing. In Advances in Neural Information Processing Systems, pages 1117–1125, 2015.
Clément Farabet, Yann LeCun, Koray Kavukcuoglu, Eugenio Culurciello, Berin Martini, Polina Akselrod, and Selcuk Talay. Large-scale FPGA-based convolutional networks. Machine Learning on Very Large Data Sets, 1, 2011a.
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
Clément Farabet, Berin Martini, Benoit Corda, Polina Akselrod, Eugenio Culurciello, and Yann LeCun. NeuFlow: A runtime reconfigurable dataflow processor for vision. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2011 IEEE Computer Society Conference on, pages 109–116. IEEE, 2011b.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS'2010, 2010.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Frédéric Bastien, and Yoshua Bengio. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013a.
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout Networks. arXiv preprint, pages 1319–1327, 2013b. URL http://arxiv.org/abs/1302.4389.
Gokul Govindu, Ling Zhuo, Seonil Choi, and Viktor Prasanna. Analysis of high-performance floating-point arithmetic on FPGAs. In Parallel and Distributed Processing Symposium, 2004. Proceedings. 18th International, page 149. IEEE, 2004.
Benjamin Graham. Spatially-sparse convolutional neural networks. arXiv preprint arXiv:1409.6070, 2014.
Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 2348–2356, 2011.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 392, 2015.
| 1609.07061#75 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 76 | 25
Hubara, Courbariaux, Soudry, El-Yaniv and Bengio
Philipp Gysel, Mohammad Motamedi, and Soheil Ghiasi. Hardware-oriented approximation of convolutional neural networks. arXiv preprint arXiv:1604.03168, 2016.
Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. arXiv preprint, pages 1–11, 2015. URL http://arxiv.org/abs/1510.00149.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015b.
Geoffrey Hinton. Neural networks for machine learning. Coursera, video lectures, 2012.
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
Geoffrey Hinton. Neural networks for machine learning. Coursera, video lectures, 2012.
Geoffrey Hinton, Li Deng, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29(6):82–97, Nov. 2012.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
IEEE International Solid State Circuits Conference, pages 10–14, 2014. ISSN 0018-9200. doi: 10.1109/JSSC.2014.2361354.
Kyuyeon Hwang and Wonyong Sung. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pages 1–6. IEEE, 2014.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015.
M. Kim and P. Smaragdis. Bitwise Neural Networks. ArXiv e-prints, January 2016. | 1609.07061#77 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 78 | M. Kim and P. Smaragdis. Bitwise Neural Networks. ArXiv e-prints, January 2016.
Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], pages 1–13, 2014a. URL http://arxiv.org/abs/1412.6980.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014b.
A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS'2012. 2012.
Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply- supervised nets. arXiv preprint arXiv:1409.5185, 2014. | 1609.07061#78 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 79 | Chen-Yu Lee, Patrick W Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. arXiv preprint arXiv:1509.08985, 2015.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. ArXiv e-prints, abs/1510.03009, October 2015a.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural Networks with Few Multiplications. ICLR, pages 1–8, 2015b. URL http://arxiv.org/abs/1510.03009.
Chris Lomont. Fast inverse square root. Technical Report, page 32, 2003.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics, 19(2):313–330, 1993.
Paul Merolla, Rathinakumar Appuswamy, John Arthur, Steve K Esser, and Dharmendra Modha. Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981, 2016. | 1609.07061#79 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 80 | Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, pages 234–239, 2012.
Daisuke Miyashita, Edward H Lee, and Boris Murmann. Convolutional neural networks using logarithmic data representation. arXiv preprint arXiv:1603.01025, 2016.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.
Alexander Mordvintsev, Christopher Olah, and Mike Tyka. Inceptionism: Going deeper into neural networks, 2015. URL http://googleresearch.blogspot.co.uk/2015/06/ inceptionism-going-deeper-into-neural.html. Accessed: 2015-06-30. | 1609.07061#80 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 81 | Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, and Yoshua Bengio. Recurrent neural networks with limited numerical precision. arXiv preprint arXiv:1608.06902, 2016.
Phi-Hung Pham, Darko Jelaca, Clement Farabet, Berin Martini, Yann LeCun, and Eugenio Culurciello. NeuFlow: Dataflow vision processing system-on-a-chip. In Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, pages 1044–1047. IEEE, 2012.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014. | 1609.07061#81 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 82 | Tara Sainath, Abdel rahman Mohamed, Brian Kingsbury, and Bhuvana Ramabhadran. Deep convolutional neural networks for LVCSR. In ICASSP 2013, 2013.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, Jan 2016. ISSN 0028-0836. URL http://dx.doi.org/10.1038/nature16961. Article.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In NIPSâ2014, 2014. | 1609.07061#82 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 83 | H Spang and P Schultheiss. Reduction of quantizing noise by use of feedback. IRE Transactions on Communications Systems, 10(4):373–380, 1962.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS'2014, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. Technical report, arXiv:1409.4842, 2014.
Yichuan Tang. Deep learning using linear support vector machines. Workshop on Challenges in Representation Learning, ICML, 2013. | 1609.07061#83 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 84 | Yichuan Tang. Deep learning using linear support vector machines. Workshop on Challenges in Representation Learning, ICML, 2013.
Naoya Torii, Hirotaka Kokubo, Dai Yamamoto, Kouichi Itoh, Masahiko Takenaka, and Tsutomu Matsumoto. ASIC implementation of random number generators using SR latches and its evaluation. EURASIP Journal on Information Security, 2016(1):1–12, 2016.
Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using dropconnect. In ICML'2013, 2013.
Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. pages 1984–1992, 2015.
Weiyi Zheng and Yina Tang. Binarized neural networks for language modeling. | 1609.07061#84 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.06038 | 0 |
# Enhanced LSTM for Natural Language Inference
# Qian Chen University of Science and Technology of China [email protected]
# Xiaodan Zhu National Research Council Canada [email protected]
Zhenhua Ling University of Science and Technology of China [email protected]
Si Wei iFLYTEK Research [email protected]
# Hui Jiang York University [email protected]
# Diana Inkpen University of Ottawa [email protected]
# Abstract | 1609.06038#0 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426 | [
{
"id": "1703.04617"
}
] |
1609.06038 | 1 | # Hui Jiang York University [email protected]
# Diana Inkpen University of Ottawa [email protected]
# Abstract
Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have shown to be very effective. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.6% on the Stanford Natural Language Inference Dataset. Unlike the previous top models that use very complicated network architectures, we first demonstrate that carefully designing sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result---it further improves the performance even when added to the already very strong model.
# Introduction | 1609.06038#1 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426 | [
{
"id": "1703.04617"
}
] |
1609.06038 | 2 | # Introduction
condition for true natural language understanding is a mastery of open-domain natural language inference." The previous work has included extensive research on recognizing textual entailment.
Specifically, natural language inference (NLI) is concerned with determining whether a natural-language hypothesis h can be inferred from a premise p, as depicted in the following example from MacCartney (2009), where the hypothesis is regarded to be entailed from the premise.
p: Several airlines polled saw costs grow more than expected, even after adjusting for inflation.
h: Some of the companies in the poll reported cost increases.
The most recent years have seen advances in modeling natural language inference. An impor- tant contribution is the creation of a much larger annotated dataset, the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015). The corpus has 570,000 human-written English sentence pairs manually labeled by multiple human subjects. This makes it feasible to train more com- plex inference models. Neural network models, which often need relatively large annotated data to estimate their parameters, have shown to achieve the state of the art on SNLI (Bowman et al., 2015, 2016; Munkhdalai and Yu, 2016b; Parikh et al., 2016; Sha et al., 2016; Paria et al., 2016). | 1609.06038#2 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426 | [
{
"id": "1703.04617"
}
] |
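As a concrete illustration of the task format described in the chunk above, one SNLI-style instance is just a labeled premise-hypothesis pair. The sketch below uses the example quoted earlier; the field names are illustrative assumptions, not the dataset's exact schema.

```python
# Toy NLI instance; SNLI uses a three-way label scheme.
example = {
    "premise": ("Several airlines polled saw costs grow more than expected, "
                "even after adjusting for inflation."),
    "hypothesis": "Some of the companies in the poll reported cost increases.",
    "label": "entailment",   # one of {entailment, contradiction, neutral}
}
print(example["label"])
```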
1609.06038 | 3 | Reasoning and inference are central to both human and artificial intelligence. Modeling inference in human language is notoriously challenging but is a basic problem towards true natural language understanding, as pointed out by MacCartney and Manning (2008), "a necessary (if not sufficient)
While some previous top-performing models use rather complicated network architectures to achieve the state-of-the-art results (Munkhdalai and Yu, 2016b), we demonstrate in this paper that enhancing sequential inference models based on chain
models can outperform all previous results, sug- gesting that the potentials of such sequential in- ference approaches have not been fully exploited yet. More speciï¬cally, we show that our sequential inference model achieves an accuracy of 88.0% on the SNLI benchmark. | 1609.06038#3 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426 | [
{
"id": "1703.04617"
}
] |
1609.06038 | 4 | Exploring syntax for NLI is very attractive to us. In many problems, syntax and semantics interact closely, including in semantic composition (Partee, 1995), among others. Complicated tasks such as natural language inference could well involve both, which has been discussed in the context of recognizing textual entailment (RTE) (Mehdad et al., 2010; Ferrone and Zanzotto, 2014). In this paper, we are interested in exploring this within the neural network frameworks, with the presence of relatively large training data. We show that by explicitly encoding parsing information with recursive networks in both local inference modeling and inference composition and by incorporating it into our framework, we achieve additional improvement, increasing the performance to a new state of the art with an 88.6% accuracy.
# 2 Related Work | 1609.06038#4 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426 | [
{
"id": "1703.04617"
}
] |
1609.06038 | 5 | # 2 Related Work
Early work on natural language inference has been performed on rather small datasets with more conventional methods (refer to MacCartney (2009) for a good literature survey), which includes a large bulk of work on recognizing textual entailment, such as (Dagan et al., 2005; Iftene and Balahur-Dobrescu, 2007), among others. More recently, Bowman et al. (2015) made available the SNLI dataset with 570,000 human annotated sentence pairs. They also experimented with simple classification models as well as simple neural networks that encode the premise and hypothesis independently. Rocktäschel et al. (2015) proposed neural attention-based models for NLI, which captured the attention information. In general, attention based models have been shown to be effective in a wide range of tasks, including machine translation (Bahdanau et al., 2014), speech recognition (Chorowski et al., 2015; Chan et al., 2016), image caption (Xu et al., 2015), and text summarization (Rush et al., 2015; Chen et al., 2016), among others. For NLI, the idea allows neural models to pay attention to specific areas of the sentences. | 1609.06038#5 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426 | [
{
"id": "1703.04617"
}
] |
1609.06038 | 6 | A variety of more advanced networks have been developed since then (Bowman et al., 2016; Vendrov et al., 2015; Mou et al., 2016; Liu et al., 2016;
Munkhdalai and Yu, 2016a; Rocktäschel et al., 2015; Wang and Jiang, 2016; Cheng et al., 2016; Parikh et al., 2016; Munkhdalai and Yu, 2016b; Sha et al., 2016; Paria et al., 2016). Among them, more relevant to ours are the approaches proposed by Parikh et al. (2016) and Munkhdalai and Yu (2016b), which are among the best performing models.
Parikh et al. (2016) propose a relatively simple but very effective decomposable model. The model decomposes the NLI problem into subproblems that can be solved separately. On the other hand, Munkhdalai and Yu (2016b) propose much more complicated networks that consider sequential LSTM-based encoding, recursive networks, and complicated combinations of attention models, which provide about 0.5% gain over the results reported by Parikh et al. (2016). | 1609.06038#6 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426 | [
{
"id": "1703.04617"
}
] |
1609.06038 | 7 | It is, however, not very clear if the potential of the sequential inference networks has been well exploited for NLI. In this paper, we first revisit this problem and show that enhancing sequential inference models based on chain networks can actually outperform all previous results. We further show that explicitly considering recursive architectures to encode syntactic parsing information for NLI could further improve the performance.
# 3 Hybrid Neural Inference Models
We present here our natural language inference networks which are composed of the following major components: input encoding, local inference modeling, and inference composition. Figure 1 shows a high-level view of the architecture. Vertically, the figure depicts the three major components, and horizontally, the left side of the figure represents our sequential NLI model named ESIM, and the right side represents networks that incorporate syntactic parsing information in tree LSTMs.
In our notation, we have two sentences $a = (a_1, \ldots, a_{\ell_a})$ and $b = (b_1, \ldots, b_{\ell_b})$, where $a$ is a premise and $b$ a hypothesis. The $a_i$ or $b_j \in \mathbb{R}^l$ is an embedding of an $l$-dimensional vector, which can be initialized with some pre-trained word embeddings and organized with parse trees. The goal is to predict a label $y$ that indicates the logic relationship between $a$ and $b$.
# 3.1 Input Encoding | 1609.06038#7 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Figure 1: A high-level view of our hybrid neural inference networks.

We employ bidirectional LSTM (BiLSTM) as one of our basic building blocks for NLI. We first use it to encode the input premise and hypothesis (Equations (1) and (2)). Here BiLSTM learns to represent a word (e.g., $a_i$) and its context. Later we will also use BiLSTM to perform inference composition to construct the final prediction, where BiLSTM encodes local inference information and its interaction. To bookkeep the notations for later use, we write as $\bar{a}_i$ the hidden (output) state generated by the BiLSTM at time $i$ over the input sequence $a$. The same is applied to $\bar{b}_j$:

$$\bar{a}_i = \text{BiLSTM}(a, i), \quad i \in [1, \ldots, \ell_a], \qquad (1)$$
$$\bar{b}_j = \text{BiLSTM}(b, j), \quad j \in [1, \ldots, \ell_b]. \qquad (2)$$
Due to the space limit, we will skip the description of the basic chain LSTM and readers can refer to Hochreiter and Schmidhuber (1997) for details. Briefly, when modeling a sequence, an LSTM employs a set of soft gates together with a memory cell to control message flows, resulting in an effective modeling of tracking long-distance information/dependencies in a sequence.
A bidirectional LSTM runs a forward and backward LSTM on a sequence starting from the left and the right end, respectively. The hidden states generated by these two LSTMs at each time step are concatenated to represent that time step and its context. Note that we used LSTM memory blocks in our models. We examined other recurrent memory blocks such as GRUs (Gated Recurrent Units) (Cho et al., 2014) and they are inferior to LSTMs on the heldout set for our NLI task.
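As a concrete illustration of the encoding in Equations (1) and (2), below is a minimal PyTorch sketch. The vocabulary size, the toy token ids, and the use of a single shared encoder for premise and hypothesis are illustrative assumptions, not details stated in the text; the dimensions follow the training setup reported later (300).

```python
# A minimal sketch of the input-encoding layer (Equations (1) and (2)),
# assuming PyTorch. All concrete values below are illustrative.
import torch
import torch.nn as nn

VOCAB, EMB_DIM, HIDDEN = 10000, 300, 300

embed = nn.Embedding(VOCAB, EMB_DIM)                      # word embeddings
bilstm = nn.LSTM(EMB_DIM, HIDDEN, batch_first=True,
                 bidirectional=True)                       # forward + backward LSTM

premise = torch.randint(0, VOCAB, (1, 8))                  # (batch=1, len_a) token ids
hypothesis = torch.randint(0, VOCAB, (1, 6))               # (batch=1, len_b) token ids

a_bar, _ = bilstm(embed(premise))                          # (1, len_a, 2*HIDDEN): concatenated states
b_bar, _ = bilstm(embed(hypothesis))                       # (1, len_b, 2*HIDDEN)
```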
As discussed above, it is intriguing to explore the effectiveness of syntax for natural language inference; for example, whether it is useful even when incorporated into the best-performing models. To this end, we will also encode syntactic parse trees of a premise and hypothesis through tree-LSTM (Zhu et al., 2015; Tai et al., 2015; Le and Zuidema, 2015), which extends the chain LSTM to a recursive network (Socher et al., 2011).
Specifically, given the parse of a premise or hypothesis, a tree node is deployed with a tree-LSTM memory block depicted as in Figure 2 and computed with Equations (3)-(10). In short, at each node, an input vector $x_t$ and the hidden vectors of its two children (the left child $h^L_{t-1}$ and the right child $h^R_{t-1}$) are taken in as the input to calculate the current node's hidden vector $h_t$.
Figure 2: A tree-LSTM memory block.
We describe the updating of a node at a high level with Equation (3) to facilitate references later in the paper, and the detailed computation is described in (4)-(10). Specifically, the input of a node is used to configure four gates: the input gate $i_t$, output gate $o_t$, and the two forget gates $f^L_t$ and $f^R_t$. The memory cell $c_t$ considers each child's cell vector, $c^L_{t-1}$ and $c^R_{t-1}$, which are gated by the left forget gate $f^L_t$ and right forget gate $f^R_t$, respectively.

$$h_t = \text{TrLSTM}(x_t, h^L_{t-1}, h^R_{t-1}), \qquad (3)$$
$$h_t = o_t \odot \tanh(c_t), \qquad (4)$$
$$o_t = \sigma(W_o x_t + U^L_o h^L_{t-1} + U^R_o h^R_{t-1}), \qquad (5)$$
$$c_t = f^L_t \odot c^L_{t-1} + f^R_t \odot c^R_{t-1} + i_t \odot u_t, \qquad (6)$$
$$f^L_t = \sigma(W_f x_t + U^{LL}_f h^L_{t-1} + U^{LR}_f h^R_{t-1}), \qquad (7)$$
$$f^R_t = \sigma(W_f x_t + U^{RL}_f h^L_{t-1} + U^{RR}_f h^R_{t-1}), \qquad (8)$$
$$i_t = \sigma(W_i x_t + U^L_i h^L_{t-1} + U^R_i h^R_{t-1}), \qquad (9)$$
$$u_t = \tanh(W_u x_t + U^L_u h^L_{t-1} + U^R_u h^R_{t-1}). \qquad (10)$$
where $\sigma$ is the sigmoid function, $\odot$ is the element-wise multiplication of two vectors, and all $W \in \mathbb{R}^{d \times l}$, $U \in \mathbb{R}^{d \times d}$ are weight matrices to be learned. In the current input encoding layer, $x_t$ is used to encode a word embedding for a leaf node. Since a non-leaf node does not correspond to a specific word, we use a special vector $x_{\emptyset}$ as its input, which is like an unknown word. However, in the inference composition layer that we discuss later, the goal of using tree-LSTM is very different; the input $x_t$ will be very different as well: it will encode local inference information and will have values at all tree nodes.
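To make the node update concrete, below is a minimal NumPy sketch of Equations (3)-(10). The weight shapes, random initialization, and the two-letter naming of the forget-gate matrices follow the reconstruction above and are otherwise illustrative.

```python
# A minimal NumPy sketch of one tree-LSTM node update (Equations (3)-(10)).
import numpy as np

d, l = 4, 4                      # hidden size d, input size l (tiny, for illustration)
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one W_* in R^{d x l} per gate, and U_* matrices in R^{d x d}
W = {g: rng.normal(size=(d, l)) for g in ("o", "f", "i", "u")}
U = {g: rng.normal(size=(d, d)) for g in
     ("oL", "oR", "fLL", "fLR", "fRL", "fRR", "iL", "iR", "uL", "uR")}

def tr_lstm(x, hL, cL, hR, cR):
    """Update one node from its input x and its two children's (h, c) states."""
    o = sigmoid(W["o"] @ x + U["oL"] @ hL + U["oR"] @ hR)          # Eq. (5)
    fL = sigmoid(W["f"] @ x + U["fLL"] @ hL + U["fLR"] @ hR)       # Eq. (7)
    fR = sigmoid(W["f"] @ x + U["fRL"] @ hL + U["fRR"] @ hR)       # Eq. (8)
    i = sigmoid(W["i"] @ x + U["iL"] @ hL + U["iR"] @ hR)          # Eq. (9)
    u = np.tanh(W["u"] @ x + U["uL"] @ hL + U["uR"] @ hR)          # Eq. (10)
    c = fL * cL + fR * cR + i * u                                  # Eq. (6)
    h = o * np.tanh(c)                                             # Eq. (4)
    return h, c

x = rng.normal(size=l)
hL = cL = hR = cR = np.zeros(d)       # leaf children: zero states
h, c = tr_lstm(x, hL, cL, hR, cR)     # Eq. (3): h_t = TrLSTM(x_t, h^L_{t-1}, h^R_{t-1})
```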
# 3.2 Local Inference Modeling
Modeling local subsentential inference between a premise and hypothesis is the basic component for determining the overall inference between these two statements. To closely examine local inference, we explore both the sequential and syntactic tree models that have been discussed above. The former helps collect local inference for words and their context, and the tree LSTM helps collect local information between (linguistic) phrases and clauses.
Locality of inference Modeling local inference needs to employ some forms of hard or soft alignment to associate the relevant subcomponents between a premise and a hypothesis. This includes early methods motivated from the alignment in conventional automatic machine translation (MacCartney, 2009). In neural network models, this is often achieved with soft attention.
Parikh et al. (2016) decomposed this process: the word sequence of the premise (or hypothesis) is regarded as a bag-of-word embedding vector and inter-sentence "alignment" (or attention) is computed individually to softly align each word
to the content of hypothesis (or premise, respectively). While their basic framework is very effective, achieving one of the previous best results, using a pre-trained word embedding by itself does not automatically consider the context around a word in NLI. Parikh et al. (2016) did take into account the word order and context information through an optional distance-sensitive intra-sentence attention. In this paper, we argue for leveraging attention over the bidirectional sequential encoding of the input, as discussed above. We will show that this plays an important role in achieving our best results, and the intra-sentence attention used by Parikh et al. (2016) actually does not further improve over our model, while the overall framework they proposed is very effective.
Our soft alignment layer computes the attention weights as the similarity of a hidden state tuple $<\bar{a}_i, \bar{b}_j>$ between a premise and a hypothesis with Equation (11). We did study more complicated relationships between $\bar{a}_i$ and $\bar{b}_j$ with multilayer perceptrons, but observed no further improvement on the heldout data.
$$e_{ij} = \bar{a}_i^T \bar{b}_j. \qquad (11)$$
In the formula, $\bar{a}_i$ and $\bar{b}_j$ are computed earlier in Equations (1) and (2), or with Equation (3) when tree-LSTM is used. Again, as discussed above, we will use bidirectional LSTM and tree-LSTM to encode the premise and hypothesis, respectively. In our sequential inference model, unlike Parikh et al. (2016), which proposed to use a function $F(\bar{a}_i)$, i.e., a feedforward neural network, to map the original word representation for calculating $e_{ij}$, we instead advocate using BiLSTM, which encodes the information in the premise and hypothesis very well and achieves better performance, as shown in the experiment section. We tried to apply the $F(\cdot)$ function on our hidden states before computing $e_{ij}$ and it did not further help our models.
Local inference collected over sequences Local inference is determined by the attention weight $e_{ij}$ computed above, which is used to obtain the local relevance between a premise and hypothesis. For the hidden state of a word in a premise, i.e., $\bar{a}_i$ (already encoding the word itself and its context), the relevant semantics in the hypothesis is identified and composed using $e_{ij}$, more specifically with Equation (12).
$$\tilde{a}_i = \sum_{j=1}^{\ell_b} \frac{\exp(e_{ij})}{\sum_{k=1}^{\ell_b} \exp(e_{ik})} \bar{b}_j, \quad \forall i \in [1, \ldots, \ell_a], \qquad (12)$$
$$\tilde{b}_j = \sum_{i=1}^{\ell_a} \frac{\exp(e_{ij})}{\sum_{k=1}^{\ell_a} \exp(e_{kj})} \bar{a}_i, \quad \forall j \in [1, \ldots, \ell_b]. \qquad (13)$$
where $\tilde{a}_i$ is a weighted summation of $\{\bar{b}_j\}_{j=1}^{\ell_b}$. Intuitively, the content in $\{\bar{b}_j\}_{j=1}^{\ell_b}$ that is relevant to $\bar{a}_i$ will be selected and represented as $\tilde{a}_i$. The same is performed for each word in the hypothesis with Equation (13).
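A minimal NumPy sketch of Equations (11)-(13) is given below; the toy lengths, dimensions, and random inputs are illustrative. The two softmax normalizations (over rows and over columns of the same score matrix) implement the two alignments.

```python
# Soft alignment of Equations (11)-(13) in NumPy, given encoded sequences.
import numpy as np

rng = np.random.default_rng(0)
len_a, len_b, d = 5, 7, 8
a_bar = rng.normal(size=(len_a, d))       # encoded premise states
b_bar = rng.normal(size=(len_b, d))       # encoded hypothesis states

e = a_bar @ b_bar.T                        # Eq. (11): e_ij = a_bar_i^T b_bar_j

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    z = np.exp(z)
    return z / z.sum(axis=axis, keepdims=True)

a_tilde = softmax(e, axis=1) @ b_bar       # Eq. (12): align each premise word to the hypothesis
b_tilde = softmax(e, axis=0).T @ a_bar     # Eq. (13): align each hypothesis word to the premise

assert a_tilde.shape == (len_a, d) and b_tilde.shape == (len_b, d)
```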
Local inference collected over parse trees We use tree models to help collect local inference information over linguistic phrases and clauses in this layer. The tree structures of the premise and hypothesis are produced by a constituency parser. Once the hidden states of a tree are all computed with Equation (3), we treat all tree nodes equally as we do not have further heuristics to discriminate them, but leave the attention weights to figure out their relationship. So, we use Equation (11) to compute the attention weights for all node pairs between a premise and hypothesis. This connects all words, constituent phrases, and clauses between the premise and hypothesis. We then collect the information between all the pairs with Equations (12) and (13) and feed them into the next layer.
Enhancement of local inference information In our models, we further enhance the local inference information collected. We compute the difference and the element-wise product for the tuple $<\bar{a}, \tilde{a}>$ as well as for $<\bar{b}, \tilde{b}>$. We expect that such operations could help sharpen local inference information between elements in the tuples and capture inference relationships such as contradiction. The difference and element-wise product are then concatenated with the original vectors, $\bar{a}$ and $\tilde{a}$, or $\bar{b}$ and $\tilde{b}$, respectively (Mou et al., 2016; Zhang et al., 2017). The enhancement is performed for both the sequential and the tree models.
$$m_a = [\bar{a}; \tilde{a}; \bar{a} - \tilde{a}; \bar{a} \odot \tilde{a}], \qquad (14)$$
$$m_b = [\bar{b}; \tilde{b}; \bar{b} - \tilde{b}; \bar{b} \odot \tilde{b}]. \qquad (15)$$
This process could be regarded as a special case of modeling some high-order interaction between the tuple elements. Along this direction, we have also further modeled the interaction by feeding the tuples into feedforward neural networks and added the top layer hidden states to the above concatenation. We found that it does not further help the inference accuracy on the heldout dataset.
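A minimal NumPy sketch of the enhancement in Equations (14) and (15); the shapes and random inputs are illustrative.

```python
# Enhancement of local inference information (Equations (14)-(15)):
# concatenate each encoded vector with its aligned counterpart, their
# difference, and their element-wise product.
import numpy as np

def enhance(x_bar, x_tilde):
    return np.concatenate([x_bar, x_tilde, x_bar - x_tilde, x_bar * x_tilde], axis=-1)

rng = np.random.default_rng(0)
a_bar, a_tilde = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
m_a = enhance(a_bar, a_tilde)      # Eq. (14): shape (len_a, 4 * d)
```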
# 3.3 Inference Composition
To determine the overall inference relationship between a premise and hypothesis, we explore a composition layer to compose the enhanced local inference information $m_a$ and $m_b$. We perform the composition sequentially or in its parse context using BiLSTM and tree-LSTM, respectively.
The composition layer In our sequential inference model, we keep using BiLSTM to compose local inference information sequentially. The formulas for BiLSTM are similar to those in Equations (1) and (2) in their forms so we skip the details, but the aim is very different here: they are used to capture local inference information $m_a$ and $m_b$ and their context for inference composition.
In the tree composition, the high-level formulas of how a tree node is updated to compose local inference are as follows:
$$v_{a,t} = \text{TrLSTM}(F(m_{a,t}), h^L_{t-1}, h^R_{t-1}), \qquad (16)$$
$$v_{b,t} = \text{TrLSTM}(F(m_{b,t}), h^L_{t-1}, h^R_{t-1}). \qquad (17)$$
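To make Equations (16) and (17) concrete, below is a minimal NumPy sketch of the mapping $F$; as described in the next paragraph, $F$ is a one-layer feedforward network with ReLU activation. The weight shapes and the projection size here are illustrative assumptions.

```python
# A sketch of the mapping F applied before composition (Equations (16)-(17)).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 4 * 8, 8                          # e.g., 4*d enhanced features -> d (illustrative)
W_F, b_F = rng.normal(size=(d_out, d_in)), np.zeros(d_out)

def F(m_t):
    return np.maximum(0.0, W_F @ m_t + b_F)     # one-layer feedforward + ReLU

# The composed node state would then be, e.g., v_a_t = tr_lstm(F(m_a_t), ...)
# in the tree case, or a BiLSTM run over [F(m_a_1), ..., F(m_a_la)] in the
# sequential case.
```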
We propose to control model complexity in this layer, since the concatenation we described above to compute $m_a$ and $m_b$ can significantly increase the overall parameter size to potentially overfit the models. We propose to use a mapping $F$ as in Equations (16) and (17). More specifically, we use a 1-layer feedforward neural network with the ReLU activation. This function is also applied to BiLSTM in our sequential inference composition.
Pooling Our inference model converts the resulting vectors obtained above to a fixed-length vector with pooling and feeds it to the final classifier to determine the overall inference relationship.
We consider that summation (Parikh et al., 2016) could be sensitive to the sequence length and hence less robust. We instead suggest the following strategy: compute both average and max pooling, and concatenate all these vectors to form the final fixed-length vector $v$. Our experiments show that this leads to significantly better results than summation. The final fixed-length vector $v$ is calculated as follows:
$$v_{a,\text{ave}} = \sum_{i=1}^{\ell_a} \frac{v_{a,i}}{\ell_a}, \quad v_{a,\max} = \max_{i=1}^{\ell_a} v_{a,i}, \qquad (18)$$
$$v_{b,\text{ave}} = \sum_{j=1}^{\ell_b} \frac{v_{b,j}}{\ell_b}, \quad v_{b,\max} = \max_{j=1}^{\ell_b} v_{b,j}, \qquad (19)$$
$$v = [v_{a,\text{ave}}; v_{a,\max}; v_{b,\text{ave}}; v_{b,\max}]. \qquad (20)$$
Note that for tree composition, Equation (20) is slightly different from that in sequential composition. Our tree composition will concatenate also the hidden states computed for the roots with Equations (16) and (17), which are not shown here. We then put $v$ into a final multilayer perceptron (MLP) classifier. The MLP has a hidden layer with tanh activation and softmax output layer in our experiments. The entire model (all three components described above) is trained end-to-end. For training, we use multi-class cross-entropy loss.
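As an illustration of Equations (18)-(20) and the final classifier, here is a minimal NumPy sketch; the composed vectors, the MLP hidden size, and the random weights are illustrative stand-ins, not values from the released code.

```python
# Pooling and final MLP classifier (Equations (18)-(20)), sketched in NumPy.
import numpy as np

rng = np.random.default_rng(0)
len_a, len_b, d = 5, 7, 8
v_a = rng.normal(size=(len_a, d))      # composed premise vectors
v_b = rng.normal(size=(len_b, d))      # composed hypothesis vectors

v = np.concatenate([v_a.mean(axis=0), v_a.max(axis=0),     # Eq. (18)
                    v_b.mean(axis=0), v_b.max(axis=0)])    # Eqs. (19)-(20)

# Final MLP: one tanh hidden layer, softmax over the three NLI labels.
W1, b1 = rng.normal(size=(16, v.size)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)
hidden = np.tanh(W1 @ v + b1)
logits = W2 @ hidden + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                    # class probabilities for {entailment, contradiction, neutral}
```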
Overall inference models Our model can be based only on the sequential networks by removing all tree components and we call it Enhanced Sequential Inference Model (ESIM) (see the left part of Figure 1). We will show that ESIM outperforms all previous results. We will also encode parse information with tree LSTMs in multiple layers as described (see the right side of Figure 1). We train this model and incorporate it into ESIM by averaging the predicted probabilities to get the final label for a premise-hypothesis pair. We will show that parsing information complements very well with ESIM and further improves the performance, and we call the final model Hybrid Inference Model (HIM).
# 4 Experimental Setup
Data The Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) focuses on three basic relationships between a premise and a potential hypothesis: the premise entails the hypothesis (entailment), they contradict each other (contradiction), or they are not related (neutral). The original SNLI corpus contains also "the other" category, which includes the sentence pairs lacking consensus among multiple human annotators. As in the related work, we remove this category. We used the same split as in Bowman et al. (2015) and other previous work.
The parse trees used in this paper are produced by the Stanford PCFG Parser 3.5.3 (Klein and Manning, 2003) and they are delivered as part of the SNLI corpus. We use classification accuracy as the evaluation metric, as in related work.
Training We use the development set to select models for testing. To help replicate our results, we publish our code[1]. Below, we list our training details. We use the Adam method (Kingma and Ba, 2014) for optimization. The first momentum is set to be 0.9 and the second 0.999. The initial learning rate is 0.0004 and the batch size is 32. All hidden states of LSTMs, tree-LSTMs, and word embeddings have 300 dimensions.
We use dropout with a rate of 0.5, which is applied to all feedforward connections. We use pre-trained 300-D Glove 840B vectors (Pennington et al., 2014) to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. All vectors including word embedding are updated during training.
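For reference, the training details above can be collected into a single configuration mapping. This is only a restatement of the listed hyperparameters; the key names are illustrative rather than taken from the released code.

```python
# Training hyperparameters as stated in the text (key names are illustrative).
train_config = {
    "optimizer": "Adam",
    "beta1": 0.9,
    "beta2": 0.999,
    "learning_rate": 4e-4,
    "batch_size": 32,
    "hidden_size": 300,          # LSTM, tree-LSTM, and word-embedding dimensions
    "dropout": 0.5,              # applied to all feedforward connections
    "embeddings": "GloVe 840B, 300-D; OOV words initialized from a Gaussian",
    "update_embeddings": True,
}
```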
# 5 Results
The next group of models (2)-(7) are based on sentence encoding. The model of Bowman et al. (2016) encodes the premise and hypothesis with two different LSTMs. The model in Vendrov et al. (2015) uses unsupervised "skip-thoughts" pre-training in GRU encoders. The approach proposed by Mou et al. (2016) considers tree-based CNN to capture sentence-level semantics, while the model of Bowman et al. (2016) introduces a stack-augmented parser-interpreter neural network (SPINN) which combines parsing and interpretation within a single tree-sequence hybrid model. The work by Liu et al. (2016) uses BiLSTM to generate sentence representations, and then replaces average pooling with intra-attention. The approach proposed by Munkhdalai and Yu (2016a) presents a memory augmented neural network, neural semantic encoders (NSE), to encode sentences.
[1] https://github.com/lukecq1231/nli
| Model | #Para. | Train | Test |
|---|---|---|---|
| (1) Handcrafted features (Bowman et al., 2015) | - | 99.7 | 78.2 |
| (2) 300D LSTM encoders (Bowman et al., 2016) | 3.0M | 83.9 | 80.6 |
| (3) 1024D pretrained GRU encoders (Vendrov et al., 2015) | 15M | 98.8 | 81.4 |
| (4) 300D tree-based CNN encoders (Mou et al., 2016) | 3.5M | 83.3 | 82.1 |
| (5) 300D SPINN-PI encoders (Bowman et al., 2016) | 3.7M | 89.2 | 83.2 |
| (6) 600D BiLSTM intra-attention encoders (Liu et al., 2016) | 2.8M | 84.5 | 84.2 |
| (7) 300D NSE encoders (Munkhdalai and Yu, 2016a) | 3.0M | 86.2 | 84.6 |
| (8) 100D LSTM with attention (Rocktäschel et al., 2015) | 250K | 85.3 | 83.5 |
| (9) 300D mLSTM (Wang and Jiang, 2016) | 1.9M | 92.0 | 86.1 |
| (10) 450D LSTMN with deep attention fusion (Cheng et al., 2016) | 3.4M | 88.5 | 86.3 |
| (11) 200D decomposable attention model (Parikh et al., 2016) | 380K | 89.5 | 86.3 |
| (12) Intra-sentence attention + (11) (Parikh et al., 2016) | 580K | 90.5 | 86.8 |
| (13) 300D NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016b) | 3.2M | 88.5 | 87.3 |
| (14) 300D re-read LSTM (Sha et al., 2016) | 2.0M | 90.7 | 87.5 |
| (15) 300D btree-LSTM encoders (Paria et al., 2016) | 2.0M | 88.6 | 87.6 |
| (16) 600D ESIM | 4.3M | 92.6 | 88.0 |
| (17) HIM (600D ESIM + 300D Syntactic tree-LSTM) | 7.7M | 93.5 | 88.6 |
The next group of methods in the table, models (8)-(15), are inter-sentence attention-based models. The model marked with Rocktäschel et al. (2015) is LSTMs enforcing the so-called word-by-word attention. The model of Wang and Jiang (2016) extends this idea to explicitly enforce word-by-word matching between the hypothesis and the premise. Long short-term memory-networks (LSTMN) with deep attention fusion (Cheng et al., 2016) link the current word to previous words stored in memory. Parikh et al. (2016) proposed a decomposable attention model without relying on any word-order information. In general, adding intra-sentence attention yields further improvement, which is not very surprising as it could help align the relevant text spans between premise and hypothesis. The model of Munkhdalai and Yu (2016b) extends the framework of Wang and Jiang (2016) to a full n-ary tree model and achieves further improvement. Sha et al. (2016) propose a special LSTM variant which considers the attention vector of another sentence as an inner state of LSTM. Paria et al. (2016) use a neural architecture with complete binary tree-LSTM encoders without syntactic information.