_id | text |
---|---|
c_62797 | Training deep neural networks on large datasets containing high-dimensional
data requires a large amount of computation. A solution to this problem is
data-parallel distributed training, where a model is replicated across several
computational nodes that have access to different chunks of the data. This
approach, however, entails high communication cost and latency because the
computed gradients need to be shared among nodes at every iteration. The
problem becomes more pronounced when the nodes communicate wirelessly, due to
the limited network bandwidth. To
address this problem, various compression methods have been proposed including
sparsification, quantization, and entropy encoding of the gradients. Existing
methods leverage the intra-node information redundancy, that is, they compress
gradients at each node independently. In contrast, we advocate that the
gradients across the nodes are correlated and propose methods to leverage this
inter-node redundancy to improve compression efficiency. Depending on the node
communication protocol (parameter server or ring-allreduce), we propose two
instances of our approach, which we coin Learned Gradient Compression (LGC).
Our methods exploit an autoencoder, trained during the first stages of the
distributed training, to capture the common information that exists in the
gradients of the distributed nodes. We have tested our LGC methods on the image
classification and semantic segmentation tasks using different convolutional
neural networks (ResNet50, ResNet101, PSPNet) and multiple datasets (ImageNet,
Cifar10, CamVid). The ResNet101 model trained for image classification on
Cifar10 achieved an accuracy of 93.57%, only 0.18% lower than the baseline
distributed training with uncompressed gradients. |
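The abstract above does not detail the autoencoder itself; the following is a minimal sketch of the general idea of autoencoder-based gradient compression over fixed-size chunks. All layer sizes, the chunk size, and the class name are illustrative assumptions, not LGC's actual design.

```python
import torch
import torch.nn as nn

class GradientAutoencoder(nn.Module):
    """Toy autoencoder that maps fixed-size gradient chunks to a small code.
    The chunk and code sizes are illustrative choices, not LGC's."""
    def __init__(self, chunk=1024, code=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(chunk, 256), nn.ReLU(), nn.Linear(256, code))
        self.decoder = nn.Sequential(nn.Linear(code, 256), nn.ReLU(), nn.Linear(256, chunk))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def compress(grad_vec, ae, chunk=1024):
    """Encode a flat gradient vector chunk-by-chunk; a node would send the codes."""
    n = grad_vec.numel()
    pad = (-n) % chunk
    g = torch.cat([grad_vec, grad_vec.new_zeros(pad)]).view(-1, chunk)
    return ae.encoder(g), n

def decompress(codes, n, ae):
    """Reconstruct the flat gradient vector from the received codes."""
    return ae.decoder(codes).reshape(-1)[:n]

# Usage sketch: the autoencoder would be trained on gradients collected during
# the first stages of distributed training (that training loop is not shown here).
ae = GradientAutoencoder()
grad = torch.randn(5000)          # stand-in for one node's gradient
codes, n = compress(grad, ae)
recon = decompress(codes, n, ae)
print(codes.shape, recon.shape)   # compressed codes vs. reconstructed gradient
```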
c_273946 | A large portion of data mining and analytic services use modern machine
learning techniques, such as deep learning. The state-of-the-art results by
deep learning come at the price of an intensive use of computing resources. The
leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end
servers in datacenters. On the other end, there is a proliferation of personal
devices with possibly free CPU cycles; this can enable services to run in
users' homes, embedding machine learning operations. In this paper, we ask the
following question: Is distributed deep learning computation on WAN connected
devices feasible, in spite of the traffic caused by learning tasks? We show
that such a setup raises some important challenges, most notably the ingress
traffic that the servers hosting the up-to-date model have to sustain.
In order to reduce this stress, we propose adaComp, a novel algorithm for
compressing worker updates to the model on the server. Applicable to stochastic
gradient descent based approaches, it combines efficient gradient selection and
learning rate modulation. We then experiment and measure the impact of
compression, device heterogeneity and reliability on the accuracy of learned
models, with an emulator platform that embeds TensorFlow into Linux containers.
We report a reduction of the total amount of data sent by workers to the server
by two orders of magnitude (e.g., a 191-fold reduction for a convolutional network
on the MNIST dataset), when compared to a standard asynchronous stochastic
gradient descent, while preserving model accuracy. |
c_68364 | Although distributed machine learning has opened up many new and exciting
research frontiers, fragmentation of models and data across different machines,
nodes, and sites still results in considerable communication overhead, impeding
reliable training in real-world contexts.
The focus on gradients as the primary shared statistic during training has
spawned a number of intuitive algorithms for distributed deep learning;
however, gradient-centric training of large deep neural networks (DNNs) tends
to be communication-heavy, often requiring additional adaptations such as
sparsity constraints, compression, quantization, and more, to curtail
bandwidth.
We introduce an innovative, communication-friendly approach for training
distributed DNNs, which capitalizes on the outer-product structure of the
gradient as revealed by the mechanics of auto-differentiation. The exposed
structure of the gradient evokes a new class of distributed learning algorithm,
which is naturally more communication-efficient than full gradient sharing. Our
approach, called distributed auto-differentiation (dAD), builds off a marriage
of rank-based compression and the innate structure of the gradient as an
outer-product. We demonstrate that dAD trains more efficiently than other
state-of-the-art distributed methods on modern architectures, such as transformers,
when applied to large-scale text and imaging datasets. The future of
distributed learning, we determine, need not be dominated by gradient-centric
algorithms. |
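The outer-product structure referred to above is the fact that, for a fully connected layer, auto-differentiation produces the weight gradient as a product of the layer inputs and the backpropagated errors, so the two much smaller factors can be communicated instead of the full gradient. A minimal NumPy illustration of that observation only; dAD's rank-based compression on top of it is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d_in, d_out = 32, 1024, 1024
x = rng.standard_normal((B, d_in))       # layer inputs for a minibatch
delta = rng.standard_normal((B, d_out))  # backpropagated errors for that layer

# The full gradient of a linear layer y = x @ W has outer-product structure:
full_grad = x.T @ delta                  # d_in * d_out values

# Communicating the two factors instead costs only B*(d_in + d_out) values,
# and the receiver reconstructs the gradient exactly:
sent_factored = x.size + delta.size
assert np.allclose(x.T @ delta, full_grad)
print(sent_factored, full_grad.size)     # 65,536 vs 1,048,576 values
```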
c_29841 | Distributed training is an effective way to accelerate the training process
of large-scale deep learning models. However, the parameter exchange and
synchronization of distributed stochastic gradient descent introduce a large
amount of communication overhead. Gradient compression is an effective method
to reduce communication overhead. For synchronous SGD, many Top-$k$
sparsification based gradient compression methods have been proposed to reduce
the communication. However, the centralized approach based on parameter servers
suffers from a single point of failure and limited scalability, while the
decentralized approach with global parameter exchange may reduce the
convergence rate of training. In contrast with Top-$k$ based methods, we
propose a gradient compression method based on global gradient vector
sketching, named global-sketching SGD (gs-SGD), which uses the Count-Sketch
structure to store the gradients and thereby reduce the loss of accuracy during
training. gs-SGD has better convergence efficiency on deep learning
models and a communication complexity of $O(\log d \cdot \log P)$, where $d$ is the
number of model parameters and $P$ is the number of workers. We conducted
experiments on GPU clusters to verify that our method has better convergence
efficiency than global Top-$k$ and Sketching-based methods. In addition, gs-SGD
achieves 1.3-3.1x higher throughput compared with gTop-$k$, and 1.1-1.2x higher
throughput compared with the original Sketched-SGD. |
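A Count-Sketch stores a long vector in a small table of counters via hashing and random signs, and individual coordinates are later estimated from the table. Below is a minimal sketch of the data structure itself; the row/column counts and the median estimator are standard choices, not necessarily gs-SGD's exact configuration.

```python
import numpy as np

class CountSketch:
    """Minimal Count-Sketch: r rows of c counters, each row with its own
    hash bucket and random sign for every coordinate."""
    def __init__(self, d, rows=5, cols=256, seed=0):
        rng = np.random.default_rng(seed)
        self.buckets = rng.integers(0, cols, size=(rows, d))    # h_j(i)
        self.signs = rng.choice([-1.0, 1.0], size=(rows, d))    # s_j(i)
        self.table = np.zeros((rows, cols))

    def insert(self, g):
        self.table[:] = 0.0
        for j in range(self.table.shape[0]):
            np.add.at(self.table[j], self.buckets[j], self.signs[j] * g)

    def estimate(self, i):
        # Median of the r unbiased single-row estimates of coordinate i.
        return np.median(self.table[np.arange(self.table.shape[0]),
                                    self.buckets[:, i]] * self.signs[:, i])

d = 10_000
g = np.zeros(d)
g[[3, 42, 999]] = [5.0, -3.0, 8.0]          # a sparse gradient vector
cs = CountSketch(d)
cs.insert(g)                                 # only the small table is communicated
print([round(cs.estimate(i), 2) for i in (3, 42, 999, 7)])
```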
c_54489 | Large-scale distributed training of Deep Neural Networks (DNNs) on
state-of-the-art platforms is expected to be severely communication
constrained. To overcome this limitation, numerous gradient compression
techniques have been proposed and have demonstrated high compression ratios.
However, most existing methods do not scale well to large scale distributed
systems (due to gradient build-up) and/or fail to evaluate model fidelity (test
accuracy) on large datasets. To mitigate these issues, we propose a new
compression technique, Scalable Sparsified Gradient Compression (ScaleCom),
that leverages similarity in the gradient distribution amongst learners to
provide significantly improved scalability. Using theoretical analysis, we show
that ScaleCom provides favorable convergence guarantees and is compatible with
gradient all-reduce techniques. Furthermore, we experimentally demonstrate that
ScaleCom has small overheads, directly reduces gradient traffic and provides
high compression rates (65-400X) and excellent scalability (up to 64 learners
and 8-12X larger batch sizes over standard training) across a wide range of
applications (image, language, and speech) without significant accuracy loss. |
c_168762 | Large amount of data is often required to train and deploy useful machine
learning models in industry. Smaller enterprises do not have the luxury of
accessing enough data for machine learning. For privacy-sensitive fields such
as banking, insurance, and healthcare, aggregating data to a data warehouse
poses a challenge of data security and limited computational resources. These
challenges are critical when developing machine learning algorithms in
industry. Several attempts have been made to address the above challenges by
using distributed learning techniques such as federated learning over disparate
data stores in order to circumvent the need for centralised data aggregation.
This paper proposes an improved algorithm to securely train deep neural
networks over several data sources in a distributed way, in order to eliminate
the need to centrally aggregate the data and the need to share the data thus
preserving privacy. The proposed method allows training of deep neural networks
using data from multiple de-linked nodes in a distributed environment and to
secure the representation shared during training. Only a representation of the
trained models (network architecture and weights) is shared. The algorithm was
evaluated on existing healthcare patient data, and the performance of this
implementation was compared to that of a regular deep neural network trained on
a single centralised architecture. This algorithm will pave the way for
distributed training of neural networks on privacy sensitive applications where
raw data may not be shared directly or centrally aggregating this data in a
data warehouse is not feasible. |
c_4421 | With the rapid increase of big data, distributed Machine Learning (ML) has
been widely applied in training large-scale models. Stochastic Gradient Descent
(SGD) is arguably the workhorse algorithm of ML. Distributed ML models trained
by SGD involve large amounts of gradient communication, which limits the
scalability of distributed ML. Thus, it is important to compress the gradients
for reducing communication. In this paper, we propose FastSGD, a Fast
compressed SGD framework for distributed ML. To achieve a high compression
ratio at a low cost, FastSGD represents the gradients as key-value pairs, and
compresses both the gradient keys and values in linear time complexity. For the
gradient value compression, FastSGD first uses a reciprocal mapper to transform
original values into reciprocal values, and then, it utilizes a logarithm
quantization to further reduce reciprocal values to small integers. Finally,
FastSGD filters reduced gradient integers by a given threshold. For the
gradient key compression, FastSGD provides an adaptive fine-grained delta
encoding method to store gradient keys with fewer bits. Extensive experiments
on practical ML models and datasets demonstrate that FastSGD achieves a
compression ratio of up to 4 orders of magnitude and accelerates convergence by
up to 8x, compared with state-of-the-art methods. |
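A rough sketch of the value pipeline described above: a reciprocal mapping, logarithmic quantization to small integers, threshold filtering, and delta encoding of the surviving keys. The sign handling, epsilon, base, and threshold below are assumptions for illustration, not FastSGD's published constants.

```python
import numpy as np

def fastsgd_like_compress(grad, base=2.0, eps=1e-8, max_level=8):
    """Illustrative reciprocal + logarithmic quantization pipeline.
    Constants and sign handling are assumptions, not FastSGD's exact choices."""
    sign = np.sign(grad)
    recip = 1.0 / (np.abs(grad) + eps)              # reciprocal mapper: large grads -> small values
    level = np.floor(np.log(recip) / np.log(base)).astype(np.int64)  # log quantization -> integers
    keep = level <= max_level                       # threshold filter keeps large-magnitude grads
    keys = np.flatnonzero(keep)
    deltas = np.diff(keys, prepend=0)               # fine-grained delta encoding of the keys
    return deltas, level[keep], sign[keep]

def fastsgd_like_decompress(deltas, levels, signs, d, base=2.0):
    keys = np.cumsum(deltas)
    values = signs / np.power(base, levels.astype(np.float64))  # invert log quantization + reciprocal
    out = np.zeros(d)
    out[keys] = values
    return out

g = np.random.default_rng(0).standard_normal(1000) * 0.01
deltas, levels, signs = fastsgd_like_compress(g)
g_hat = fastsgd_like_decompress(deltas, levels, signs, g.size)
print(len(levels), "of", g.size, "values kept")
```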
c_248216 | Large-scale distributed training requires significant communication bandwidth
for gradient exchange that limits the scalability of multi-node training, and
requires expensive high-bandwidth network infrastructure. The situation gets
even worse with distributed training on mobile devices (federated learning),
which suffers from higher latency, lower throughput, and intermittent poor
connections. In this paper, we find 99.9% of the gradient exchange in
distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to
greatly reduce the communication bandwidth. To preserve accuracy during
compression, DGC employs four methods: momentum correction, local gradient
clipping, momentum factor masking, and warm-up training. We have applied Deep
Gradient Compression to image classification, speech recognition, and language
modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and
Librispeech Corpus. In these scenarios, Deep Gradient Compression achieves a
gradient compression ratio from 270x to 600x without losing accuracy, cutting
the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from
488MB to 0.74MB. Deep gradient compression enables large-scale distributed
training on inexpensive commodity 1Gbps Ethernet and facilitates distributed
training on mobile. Code is available at:
https://github.com/synxlin/deep-gradient-compression. |
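At the core of DGC is top-k gradient sparsification with local accumulation of the untransmitted residual. A minimal sketch of that core step only; the momentum correction, local gradient clipping, momentum factor masking, and warm-up training from the abstract are omitted.

```python
import numpy as np

class TopKCompressor:
    """Send only the k largest-magnitude gradient entries per step and
    accumulate everything else locally, so small gradients are transmitted
    once they have grown large enough (a simplified DGC-style sparsifier)."""
    def __init__(self, d, ratio=0.001):
        self.residual = np.zeros(d)
        self.k = max(1, int(d * ratio))

    def compress(self, grad):
        acc = self.residual + grad
        idx = np.argpartition(np.abs(acc), -self.k)[-self.k:]  # top-k by magnitude
        values = acc[idx]
        self.residual = acc.copy()
        self.residual[idx] = 0.0          # sent entries are cleared locally
        return idx, values

def decompress(idx, values, d):
    out = np.zeros(d)
    out[idx] = values
    return out

d = 100_000
comp = TopKCompressor(d)
g = np.random.default_rng(0).standard_normal(d)
idx, vals = comp.compress(g)
print(len(idx), "of", d, "entries communicated")
```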
c_49115 | Communication overhead severely hinders the scalability of distributed
machine learning systems. Recently, there has been a growing interest in using
gradient compression to reduce the communication overhead of the distributed
training. However, there is little understanding of applying gradient
compression to adaptive gradient methods. Moreover, its performance benefits
are often limited by the non-negligible compression overhead. In this paper, we
first introduce a novel adaptive gradient method with gradient compression. We
show that the proposed method has a convergence rate of
$\mathcal{O}(1/\sqrt{T})$ for non-convex problems. In addition, we develop a
scalable system called BytePS-Compress for two-way compression, where the
gradients are compressed in both directions between workers and parameter
servers. BytePS-Compress pipelines the compression and decompression on CPUs
and achieves a high degree of parallelism. Empirical evaluations show that we
improve the training time of ResNet50, VGG16, and BERT-base by 5.0%, 58.1%,
and 23.3%, respectively, without any accuracy loss with 25 Gb/s networking.
Furthermore, for training the BERT models, we achieve a compression rate of
333x compared to the mixed-precision training. |
c_184406 | Communication overhead is a major bottleneck hampering the scalability of
distributed machine learning systems. Recently, there has been a surge of
interest in using gradient compression to improve the communication efficiency
of distributed neural network training. Using 1-bit quantization, signSGD with
majority vote achieves a 32x reduction on communication cost. However, its
convergence is based on unrealistic assumptions and can diverge in practice. In
this paper, we propose a general distributed compressed SGD with Nesterov's
momentum. We consider two-way compression, which compresses the gradients both
to and from workers. Convergence analysis on nonconvex problems for general
gradient compressors is provided. By partitioning the gradient into blocks, a
blockwise compressor is introduced such that each gradient block is compressed
and transmitted in 1-bit format with a scaling factor, leading to a nearly 32x
reduction on communication. Experimental results show that the proposed method
converges as fast as full-precision distributed momentum SGD and achieves the
same testing accuracy. In particular, on distributed ResNet training with 7
workers on ImageNet, the proposed algorithm achieves the same testing
accuracy as momentum SGD using full-precision gradients, but with $46\%$ less
wall clock time. |
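The blockwise compressor described above can be sketched as: split the gradient into blocks and, for each block, transmit the sign pattern plus one scaling factor. The mean-absolute-value scale below is a common choice assumed for illustration, not necessarily the paper's exact one.

```python
import numpy as np

def blockwise_sign_compress(grad, block=256):
    """1-bit-per-entry compression: per block, send sign bits plus one float scale.
    The per-block mean absolute value is an assumed scaling choice."""
    pad = (-grad.size) % block
    g = np.concatenate([grad, np.zeros(pad)]).reshape(-1, block)
    scales = np.abs(g).mean(axis=1)                  # one float per block
    signs = np.sign(g)                                # +/-1 per entry (1 bit each)
    return signs, scales, grad.size

def blockwise_sign_decompress(signs, scales, n):
    return (signs * scales[:, None]).reshape(-1)[:n]

g = np.random.default_rng(0).standard_normal(1000)
signs, scales, n = blockwise_sign_compress(g)
g_hat = blockwise_sign_decompress(signs, scales, n)
print(signs.shape, scales.shape)   # roughly 1 bit per entry plus one float per block
```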
c_32413 | Gradient quantization is an emerging technique in reducing communication
costs in distributed learning. Existing gradient quantization algorithms often
rely on engineering heuristics or empirical observations, lacking a systematic
approach to dynamically quantize gradients. This paper addresses this issue by
proposing a novel dynamically quantized SGD (DQ-SGD) framework, enabling us to
dynamically adjust the quantization scheme for each gradient descent step by
exploring the trade-off between communication cost and convergence error. We
derive an upper bound, tight in some cases, of the convergence error for a
restricted family of quantization schemes and loss functions. We design our
DQ-SGD algorithm via minimizing the communication cost under the convergence
error constraints. Finally, through extensive experiments on large-scale
natural language processing and computer vision tasks on AG-News, CIFAR-10, and
CIFAR-100 datasets, we demonstrate that our quantization scheme achieves better
tradeoffs between the communication cost and learning performance than other
state-of-the-art gradient quantization methods. |
c_80557 | We consider machine learning applications that train a model by leveraging
data distributed over a trusted network, where communication constraints can
create a performance bottleneck. A number of recent approaches propose to
overcome this bottleneck through compression of gradient updates. However, as
models become larger, so does the size of the gradient updates. In this paper,
we propose an alternate approach to learn from distributed data that quantizes
data instead of gradients, and can support learning over applications where the
size of gradient updates is prohibitive. Our approach leverages the dependency
of the computed gradient on the data samples, which lie in a much smaller space,
in order to perform the quantization in the lower-dimensional data space. At the
cost of an extra gradient computation, the gradient estimate can be refined by
conveying the difference between the gradient at the quantized data point and
the original gradient using a small number of bits. Lastly, in order to save
communication, our approach adds a layer that decides whether to transmit a
quantized data sample or not based on its importance for learning. We analyze
the convergence of the proposed approach for smooth convex and non-convex
objective functions and show that we can achieve order optimal convergence
rates with communication that mostly depends on the data rather than the model
(gradient) dimension. We use our proposed algorithm to train ResNet models on
the CIFAR-10 and ImageNet datasets, and show that we can achieve an order of
magnitude savings over gradient compression methods. These communication
savings come at the cost of increasing computation at the learning agent, and
thus our approach is beneficial in scenarios where communication load is the
main problem. |
c_147242 | Federated learning (FL) is an emerging technique for training machine
learning models using geographically dispersed data collected by local
entities. It includes local computation and synchronization steps. To reduce
the communication overhead and improve the overall efficiency of FL, gradient
sparsification (GS) can be applied, where instead of the full gradient, only a
small subset of important elements of the gradient is communicated. Existing
work on GS uses a fixed degree of gradient sparsity for i.i.d.-distributed data
within a datacenter. In this paper, we consider adaptive degree of sparsity and
non-i.i.d. local datasets. We first present a fairness-aware GS method which
ensures that different clients provide a similar amount of updates. Then, with
the goal of minimizing the overall training time, we propose a novel online
learning formulation and algorithm for automatically determining the
near-optimal communication and computation trade-off that is controlled by the
degree of gradient sparsity. The online learning algorithm uses an estimated
sign of the derivative of the objective function, which gives a regret bound
that is asymptotically equal to the case where exact derivative is available.
Experiments with real datasets confirm the benefits of our proposed approaches,
showing up to $40\%$ improvement in model accuracy for a finite training time. |
c_209233 | The performance and efficiency of distributed training of Deep Neural
Networks highly depend on the performance of gradient averaging among all
participating nodes, which is bounded by the communication between nodes. There
are two major strategies to reduce communication overhead: one is to hide
communication by overlapping it with computation, and the other is to reduce
message sizes. The first solution works well for linear neural architectures,
but the latest networks such as ResNet and Inception offer limited opportunity for
this overlapping. Therefore, researchers have paid more attention to minimizing
communication. In this paper, we present a novel gradient compression framework
derived from insights into real gradient distributions, which strikes a
balance between compression ratio, accuracy, and computational overhead. Our
framework has two major novel components: sparsification of gradients in the
frequency domain, and a range-based floating point representation to quantize
and further compress gradient frequencies. Both components are dynamic, with
tunable parameters that achieve different compression ratios based on the
accuracy requirement and system platform, and achieve very high throughput
on GPUs. We prove that our techniques guarantee the convergence with a
diminishing compression ratio. Our experiments show that the proposed
compression framework effectively improves the scalability of most popular
neural networks on a 32-GPU cluster compared to the baseline of no compression, without
compromising the accuracy and convergence speed. |
c_155507 | Compressed communication, in the form of sparsification or quantization of
stochastic gradients, is employed to reduce communication costs in distributed
data-parallel training of deep neural networks. However, there exists a
discrepancy between theory and practice: while theoretical analysis of most
existing compression methods assumes compression is applied to the gradients of
the entire model, many practical implementations operate individually on the
gradients of each layer of the model. In this paper, we prove that layer-wise
compression is, in theory, better, because the convergence rate is upper
bounded by that of entire-model compression for a wide range of biased and
unbiased compression methods. However, despite the theoretical bound, our
experimental study of six well-known methods shows that convergence, in
practice, may or may not be better, depending on the actual trained model and
compression ratio. Our findings suggest that it would be advantageous for deep
learning frameworks to include support for both layer-wise and entire-model
compression. |
c_186247 | A standard approach in large scale machine learning is distributed stochastic
gradient training, which requires the computation of aggregated stochastic
gradients over multiple nodes on a network. Communication is a major bottleneck
in such applications, and in recent years, compressed stochastic gradient
methods such as QSGD (quantized SGD) and sparse SGD have been proposed to
reduce communication. It was also shown that error compensation can be combined
with compression to achieve better convergence in a scheme in which each node
compresses its local stochastic gradient and broadcasts the result to all other
nodes over the network in a single pass. However, such a single pass broadcast
approach is not realistic in many practical implementations. For example, under
the popular parameter server model for distributed learning, the worker nodes
need to send the compressed local gradients to the parameter server, which
performs the aggregation. The parameter server has to compress the aggregated
stochastic gradient again before sending it back to the worker nodes. In this
work, we provide a detailed analysis on this two-pass communication model and
its asynchronous parallel variant, with error-compensated compression both on
the worker nodes and on the parameter server. We show that the
error-compensated stochastic gradient algorithm admits three very nice
properties: 1) it is compatible with an \emph{arbitrary} compression technique;
2) it admits an improved convergence rate over non-error-compensated
stochastic gradient methods such as QSGD and sparse SGD; 3) it admits linear
speedup with respect to the number of workers. An empirical study is also
conducted to validate our theoretical results. |
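The two-pass communication pattern analyzed above can be sketched as follows: each worker compresses its local gradient with an error memory, the server aggregates, then compresses the aggregate with its own error memory before broadcasting. The top-k compressor below is only a placeholder for the arbitrary compressor that the analysis allows.

```python
import numpy as np

def top_k(v, k):
    """Placeholder compressor; the analysis allows an arbitrary compressor."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ec_compress(v, memory, k):
    """Error-compensated compression: compress (error memory + input) and
    keep what was not transmitted in the memory for the next round."""
    corrected = memory + v
    sent = top_k(corrected, k)
    memory[:] = corrected - sent
    return sent

d, workers, k = 1000, 4, 50
rng = np.random.default_rng(0)
worker_mem = [np.zeros(d) for _ in range(workers)]
server_mem = np.zeros(d)

# One iteration of the two-pass scheme.
local_grads = [rng.standard_normal(d) for _ in range(workers)]
uplink = [ec_compress(g, worker_mem[i], k) for i, g in enumerate(local_grads)]  # workers -> server
aggregate = np.mean(uplink, axis=0)
downlink = ec_compress(aggregate, server_mem, k)                                 # server -> workers
print(np.count_nonzero(downlink), "entries broadcast back to the workers")
```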
c_18871 | Distributed stochastic gradient descent (SGD) approach has been widely used
in large-scale deep learning, and the gradient collective method is vital to
ensure the training scalability of the distributed deep learning system.
Collective communication such as AllReduce has been widely adopted for the
distributed SGD process to reduce the communication time. However, AllReduce
consumes a large amount of bandwidth, even though gradients are sparse in many
cases: many gradient values are zeros and could be efficiently compressed to
save bandwidth. To reduce the sparse gradient communication overhead, we
propose Sparse-Sketch Reducer (S2 Reducer), a novel sketch-based sparse
gradient aggregation method with convergence guarantees. S2 Reducer reduces the
communication cost by only compressing the non-zero gradients with count-sketch
and bitmap, and enables the efficient AllReduce operators for parallel SGD
training. We perform extensive evaluation against four state-of-the-art methods
over five training models. Our results show that S2 Reducer converges to the
same accuracy, reduces sparse communication overhead by 81\%, and achieves a
1.8$\times$ speedup compared to state-of-the-art approaches. |
c_242402 | Training large neural networks requires distributing learning across multiple
workers, where the cost of communicating gradients can be a significant
bottleneck. signSGD alleviates this problem by transmitting just the sign of
each minibatch stochastic gradient. We prove that it can get the best of both
worlds: compressed gradients and SGD-level convergence rate. The relative
$\ell_1/\ell_2$ geometry of gradients, noise and curvature informs whether
signSGD or SGD is theoretically better suited to a particular problem. On the
practical side we find that the momentum counterpart of signSGD is able to
match the accuracy and convergence speed of Adam on deep Imagenet models. We
extend our theory to the distributed setting, where the parameter server uses
majority vote to aggregate gradient signs from each worker enabling 1-bit
compression of worker-server communication in both directions. Using a theorem
by Gauss we prove that majority vote can achieve the same reduction in variance
as full precision distributed SGD. Thus, there is great promise for sign-based
optimisation schemes to achieve fast communication and fast convergence. Code
to reproduce experiments is to be found at https://github.com/jxbz/signSGD . |
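A minimal sketch of the sign-and-majority-vote aggregation described above: each worker sends only the sign of its stochastic gradient, and the server returns the elementwise majority sign, so both directions use 1 bit per coordinate. The momentum variant and learning-rate schedule from the paper are omitted.

```python
import numpy as np

def worker_message(grad):
    """Each worker transmits 1 bit per coordinate: the sign of its gradient."""
    return np.sign(grad)

def majority_vote(sign_messages):
    """Server aggregates by elementwise majority vote and broadcasts the result."""
    return np.sign(np.sum(sign_messages, axis=0))

rng = np.random.default_rng(0)
d, workers, lr = 1000, 7, 1e-3
true_grad = rng.standard_normal(d)
messages = [worker_message(true_grad + rng.standard_normal(d)) for _ in range(workers)]
update_direction = majority_vote(messages)

params = np.zeros(d)
params -= lr * update_direction   # signSGD step using the voted sign
print(float(np.mean(update_direction == np.sign(true_grad))))  # agreement with the true sign
```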
c_241906 | Due to the substantial computational cost, training state-of-the-art deep
neural networks for large-scale datasets often requires distributed training
using multiple computation workers. However, by nature, workers need to
frequently communicate gradients, causing severe bottlenecks, especially on
lower bandwidth connections. A few methods have been proposed to compress
gradients for efficient communication, but they either suffer from a low compression
ratio or significantly harm the resulting model accuracy, particularly when
applied to convolutional neural networks. To address these issues, we propose a
method to reduce the communication overhead of distributed deep learning. Our
key observation is that gradient updates can be delayed until an unambiguous
(high amplitude, low variance) gradient has been calculated. We also present an
efficient algorithm to compute the variance with negligible additional cost. We
experimentally show that our method can achieve very high compression ratio
while maintaining the resulting model accuracy. We also analyze the efficiency
using computation and communication cost models and provide evidence that
this method enables distributed deep learning for many scenarios with commodity
environments. |
c_6469 | In this paper, we present a communication-efficient federated learning
framework inspired by quantized compressed sensing. The presented framework
consists of gradient compression for wireless devices and gradient
reconstruction for a parameter server (PS). Our strategy for gradient
compression is to sequentially perform block sparsification, dimensional
reduction, and quantization. Thanks to gradient sparsification and
quantization, our strategy can achieve a higher compression ratio than one-bit
gradient compression. For accurate aggregation of the local gradients from the
compressed signals at the PS, we put forth an approximate minimum mean square
error (MMSE) approach for gradient reconstruction using the
expectation-maximization generalized-approximate-message-passing (EM-GAMP)
algorithm. Assuming a Bernoulli Gaussian-mixture prior, this algorithm
iteratively updates the posterior mean and variance of local gradients from the
compressed signals. We also present a low-complexity approach for the gradient
reconstruction. In this approach, we use the Bussgang theorem to aggregate
local gradients from the compressed signals, then compute an approximate MMSE
estimate of the aggregated gradient using the EM-GAMP algorithm. We also
provide a convergence rate analysis of the presented framework. Using the MNIST
dataset, we demonstrate that the presented framework achieves almost identical
performance to the case without compression, while significantly
reducing communication overhead for federated learning. |
c_283126 | Parallel implementations of stochastic gradient descent (SGD) have received
significant research attention, thanks to excellent scalability properties of
this algorithm, and to its efficiency in the context of training deep neural
networks. A fundamental barrier for parallelizing large-scale SGD is the fact
that the cost of communicating the gradient updates between nodes can be very
large. Consequently, lossy compression heuristics have been proposed, by which
nodes only communicate quantized gradients. Although effective in practice,
these heuristics do not always provably converge, and it is not clear whether
they are optimal.
In this paper, we propose Quantized SGD (QSGD), a family of compression
schemes which allow the compression of gradient updates at each node, while
guaranteeing convergence under standard assumptions. QSGD allows the user to
trade off compression and convergence time: it can communicate a sublinear
number of bits per iteration in the model dimension, and can achieve
asymptotically optimal communication cost. We complement our theoretical
results with empirical data, showing that QSGD can significantly reduce
communication cost, while being competitive with standard uncompressed
techniques on a variety of real tasks.
In particular, experiments show that gradient quantization applied to
training of deep neural networks for image classification and automated speech
recognition can lead to significant reductions in communication cost, and
end-to-end training time. For instance, on 16 GPUs, we are able to train a
ResNet-152 network on ImageNet 1.8x faster to full accuracy. Of note, we show
that there exist generic parameter settings under which all known network
architectures preserve or slightly improve their full accuracy when using
quantization. |
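QSGD's basic quantizer maps each coordinate to one of s levels of the vector's l2 norm using stochastic rounding, which keeps the quantized gradient unbiased. A minimal sketch of that quantizer follows; the lossless (Elias) coding that the full scheme applies afterwards is omitted.

```python
import numpy as np

def qsgd_quantize(v, s=16, rng=None):
    """Unbiased stochastic quantization to s levels of the l2 norm."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0:
        return norm, np.sign(v), np.zeros(v.size, dtype=np.int64)
    scaled = np.abs(v) / norm * s          # position in [0, s]
    lower = np.floor(scaled)
    prob_up = scaled - lower               # round up with this probability -> unbiased
    levels = (lower + (rng.random(v.size) < prob_up)).astype(np.int64)
    return norm, np.sign(v), levels        # what a node would actually transmit

def qsgd_dequantize(norm, signs, levels, s=16):
    return norm * signs * levels / s

rng = np.random.default_rng(0)
g = rng.standard_normal(10_000)
norm, signs, levels = qsgd_quantize(g, s=16, rng=rng)
g_hat = qsgd_dequantize(norm, signs, levels)
print(float(np.abs(g_hat.mean() - g.mean())), int(levels.max()))  # unbiased on average, small integer levels
```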
c_201483 | Training large machine learning models requires a distributed computing
approach, with communication of the model updates being the bottleneck. For
this reason, several methods based on the compression (e.g., sparsification
and/or quantization) of updates were recently proposed, including QSGD
(Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et
al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods
are able to learn the gradients, which renders them incapable of converging to
the true optimum in the batch mode, incompatible with non-smooth regularizers,
and slower to converge. In this work we propose a new distributed
learning method --- DIANA --- which resolves these issues via compression of
gradient differences. We perform a theoretical analysis in the strongly convex
and nonconvex settings and show that our rates are superior to existing rates.
Our analysis of block-quantization and differences between $\ell_2$ and
$\ell_\infty$ quantization closes the gaps in theory and practice. Finally, by
applying our analysis technique to TernGrad, we establish the first convergence
rate for this method. |
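DIANA's central mechanic is compressing the difference between the current gradient and a locally maintained reference, then shifting the reference by the compressed difference, so the compressed quantity shrinks as gradients stabilise. A minimal single-worker sketch, with random-k sparsification standing in for the paper's quantizers; the step size alpha follows the usual rule alpha <= 1/(omega+1) for that compressor.

```python
import numpy as np

def rand_k(v, k, rng):
    """Unbiased stand-in compressor: keep k random coordinates, rescaled by d/k."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

class DianaWorker:
    def __init__(self, d, k):
        self.h = np.zeros(d)          # local reference for gradient differences
        self.alpha = k / d            # <= 1/(omega+1) with omega = d/k - 1 for rand-k

    def message(self, grad, k, rng):
        delta_hat = rand_k(grad - self.h, k, rng)   # compress the *difference*
        self.h += self.alpha * delta_hat            # shift the reference
        return delta_hat                            # this is all that is transmitted

d, k = 1000, 100
rng = np.random.default_rng(0)
worker = DianaWorker(d, k)
for step in range(5):
    grad = np.full(d, 0.1) + 0.01 * rng.standard_normal(d)  # slowly varying gradient
    worker.message(grad, k, rng)
    # The reference tracks the gradient, so the compressed differences shrink:
    print(step, round(float(np.linalg.norm(grad - worker.h)), 3))
```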
c_174972 | We study distributed algorithms for expected loss minimization where the
datasets are large and have to be stored on different machines. Often we deal
with minimizing the average of a set of convex functions where each function is
the empirical risk of the corresponding part of the data. In the distributed
setting where the individual data instances can be accessed only on the local
machines, there would be a series of rounds of local computations followed by
some communication among the machines. Since the cost of the communication is
usually higher than the local machine computations, it is important to reduce
it as much as possible. However, we should not allow this to make the
computation too expensive to become a burden in practice. Using second-order
methods could make the algorithms converge faster and decrease the amount of
communication needed. There are some successful attempts in developing
distributed second-order methods. Although these methods have shown fast
convergence, their local computation is expensive and leaves room for
improvement in practical use. In this study we modify an existing approach,
DANE (Distributed Approximate NEwton), in order to improve the computational
cost while maintaining the accuracy. We tackle this problem by using iterative
methods for solving the local subproblems approximately instead of providing
exact solutions for each round of communication. We study how using different
iterative methods affects the behavior of the algorithm and try to provide an
appropriate tradeoff between the amount of local computation and the required
amount of communication. We demonstrate the practicality of our algorithm and
compare it to the existing distributed gradient based methods such as SGD. |
c_21392 | Deep Neural Networks have gained significant traction due to their wide
applicability in different domains. DNN sizes and training samples are
constantly growing, making training of such workloads more challenging.
Distributed training is a solution to reduce the training time.
High-performance distributed training platforms should leverage
multi-dimensional hierarchical networks, which interconnect accelerators
through different levels of the network, to dramatically reduce expensive NICs
required for the scale-out network. However, it comes at the expense of
communication overhead between distributed accelerators to exchange gradients
or input/output activation. In order to allow for further scaling of the
workloads, communication overhead needs to be minimized. In this paper, we
motivate the fact that in training platforms, adding more intermediate network
dimensions is beneficial for efficiently mitigating the excessive use of
expensive NIC resources. Further, we address different challenges of the DNN
training on hierarchical networks. We discuss how to distribute network
bandwidth resources across different dimensions when designing the interconnect,
in order to (i) maximize BW utilization of all dimensions, and (ii) minimize the
overall training time for the target workload. We then implement a framework
that, for a given workload, determines the best network configuration that
maximizes performance, or performance-per-cost. |
c_2965 | We consider large scale distributed optimization over a set of edge devices
connected to a central server, where the limited communication bandwidth
between the server and edge devices imposes a significant bottleneck for the
optimization procedure. Inspired by recent advances in federated learning, we
propose a distributed stochastic gradient descent (SGD) type algorithm that
exploits the sparsity of the gradient, when possible, to reduce communication
burden. At the heart of the algorithm is to use compressed sensing techniques
for the compression of the local stochastic gradients at the device side; and
at the server side, a sparse approximation of the global stochastic gradient is
recovered from the noisy aggregated compressed local gradients. We conduct
theoretical analysis on the convergence of our algorithm in the presence of
noise perturbation incurred by the communication channels, and also conduct
numerical experiments to corroborate its effectiveness. |
c_154231 | Generative models are becoming increasingly popular in the literature, with
Generative Adversarial Networks (GANs) being the most successful variant yet.
With this increasing demand and popularity, it is becoming equally difficult
and challenging to implement and consume GAN models. A qualitative user survey
conducted across 47 practitioners shows that expert-level skill is required to
use a GAN model for a given task, despite the presence of various open source
libraries. In this research, we propose a novel system called AuthorGAN, aiming
to achieve true democratization of GAN authoring. A highly modularized library
agnostic representation of GAN model is defined to enable interoperability of
GAN architecture across different libraries such as Keras, Tensorflow, and
PyTorch. An intuitive drag-and-drop based visual designer is built using
node-red platform to enable custom architecture designing without the need for
writing any code. Five different GAN models are implemented as a part of this
framework and the performance of the different GAN models are shown using the
benchmark MNIST dataset. |
c_18196 | While the availability of large datasets is perceived to be a key requirement
for training deep neural networks, it is possible to train such models with
relatively little data. However, compensating for the absence of large datasets
demands a series of actions to enhance the quality of the existing samples and
to generate new ones. This paper summarizes our winning submission to the
"Data-Centric AI" competition. We discuss some of the challenges that arise
while training with a small dataset, offer a principled approach for systematic
data quality enhancement, and propose a GAN-based solution for synthesizing new
data points. Our evaluations indicate that the dataset generated by the
proposed pipeline offers 5% accuracy improvement while being significantly
smaller than the baseline. |
c_123785 | Generative adversarial networks (GANs) have been recently adopted for
super-resolution, an application closely related to what is referred to as
"downscaling" in the atmospheric sciences: improving the spatial resolution of
low-resolution images. The ability of conditional GANs to generate an ensemble
of solutions for a given input lends itself naturally to stochastic
downscaling, but the stochastic nature of GANs is not usually considered in
super-resolution applications. Here, we introduce a recurrent, stochastic
super-resolution GAN that can generate ensembles of time-evolving
high-resolution atmospheric fields for an input consisting of a low-resolution
sequence of images of the same field. We test the GAN using two datasets, one
consisting of radar-measured precipitation from Switzerland, the other of cloud
optical thickness derived from the Geostationary Earth Observing Satellite 16
(GOES-16). We find that the GAN can generate realistic, temporally consistent
super-resolution sequences for both datasets. The statistical properties of the
generated ensemble are analyzed using rank statistics, a method adapted from
ensemble weather forecasting; these analyses indicate that the GAN produces
close to the correct amount of variability in its outputs. As the GAN generator
is fully convolutional, it can be applied after training to input images larger
than the images used to train it. It is also able to generate time series much
longer than the training sequences, as demonstrated by applying the generator
to a three-month dataset of the precipitation radar data. The source code to
our GAN is available at https://github.com/jleinonen/downscaling-rnn-gan. |
c_192405 | Generative Adversarial Networks (GANs) have received a great deal of
attention due in part to recent success in generating original, high-quality
samples from visual domains. However, most current methods only allow for users
to guide this image generation process through limited interactions. In this
work we develop a novel GAN framework that allows humans to be "in-the-loop" of
the image generation process. Our technique iteratively accepts relative
constraints of the form "Generate an image more like image A than image B".
After each constraint is given, the user is presented with new outputs from the
GAN, informing the next round of feedback. This feedback is used to constrain
the output of the GAN with respect to an underlying semantic space that can be
designed to model a variety of different notions of similarity (e.g. classes,
attributes, object relationships, color, etc.). In our experiments, we show
that our GAN framework is able to generate images that are of comparable
quality to equivalent unsupervised GANs while satisfying a large number of the
constraints provided by users, effectively changing a GAN into one that allows
users interactive control over image generation without sacrificing image
quality. |
c_36476 | One of the major prerequisites for any deep learning approach is the
availability of large-scale training data. When dealing with scanned document
images in real world scenarios, the principal information of its content is
stored in the layout itself. In this work, we have proposed an automated deep
generative model using Graph Neural Networks (GNNs) to generate synthetic data
with highly variable and plausible document layouts that can be used to train
document interpretation systems, especially in digital mailroom
applications. It is also the first graph-based approach to the document layout
generation task to be evaluated on administrative document images, in this case
invoices. |
c_128190 | In this paper, we present a distributed variant of adaptive stochastic
gradient method for training deep neural networks in the parameter-server
model. To reduce the communication cost among the workers and server, we
incorporate two types of quantization schemes, i.e., gradient quantization and
weight quantization, into the proposed distributed Adam. Besides, to reduce the
bias introduced by quantization operations, we propose an error-feedback
technique to compensate for the quantized gradient. Theoretically, in the
stochastic nonconvex setting, we show that the distributed adaptive gradient
method with gradient quantization and error-feedback converges to the
first-order stationary point, and that the distributed adaptive gradient method
with weight quantization and error-feedback converges to the point related to
the quantized level under both the single-worker and multi-worker modes.
Finally, we apply the proposed distributed adaptive gradient methods to train deep
neural networks. Experimental results demonstrate the efficacy of our methods. |
c_107776 | Generative Adversarial Networks (GAN) have many potential medical imaging
applications, including data augmentation, domain adaptation, and model
explanation. Due to the limited memory of Graphical Processing Units (GPUs),
most current 3D GAN models are trained on low-resolution medical images; these
models either cannot scale to high-resolution or are prone to patchy artifacts.
In this work, we propose a novel end-to-end GAN architecture that can generate
high-resolution 3D images. We achieve this goal by using different
configurations between training and inference. During training, we adopt a
hierarchical structure that simultaneously generates a low-resolution version
of the image and a randomly selected sub-volume of the high-resolution image.
The hierarchical design has two advantages: First, the memory demand for
training on high-resolution images is amortized among sub-volumes. Furthermore,
anchoring the high-resolution sub-volumes to a single low-resolution image
ensures anatomical consistency between sub-volumes. During inference, our model
can directly generate full high-resolution images. We also incorporate an
encoder with a similar hierarchical structure into the model to extract
features from the images. Experiments on 3D thorax CT and brain MRI demonstrate
that our approach outperforms state of the art in image generation. We also
demonstrate clinical applications of the proposed model in data augmentation
and clinical-relevant feature extraction. |
c_57663 | Recent years have witnessed the rapid progress of generative adversarial
networks (GANs). However, the success of the GAN models hinges on a large
amount of training data. This work proposes a regularization approach for
training robust GAN models on limited data. We theoretically show a connection
between the regularized loss and an f-divergence called LeCam-divergence, which
we find is more robust under limited training data. Extensive experiments on
several benchmark datasets demonstrate that the proposed regularization scheme
1) improves the generalization performance and stabilizes the learning dynamics
of GAN models under limited training data, and 2) complements the recent data
augmentation methods. These properties facilitate training GAN models to
achieve state-of-the-art performance when only limited training data of the
ImageNet benchmark is available. |
c_198503 | Sentiment analysis is a task that may suffer from a lack of data in certain
cases, as the datasets are often generated and annotated by humans. In cases
where data is inadequate for training discriminative models, generative models
may aid training via data augmentation. Generative Adversarial Networks (GANs)
are one such model that has advanced the state of the art in several tasks,
including image and text generation. In this paper, I train GAN models on
low resource datasets, then use them for the purpose of data augmentation
towards improving sentiment classifier generalization. Given the constraints of
limited data, I explore various techniques to train the GAN models. I also
present an analysis of the quality of generated GAN data as more training data
for the GAN is made available. In this analysis, the generated data is
evaluated as a test set (against a model trained on real data points) as well
as a training set to train classification models. Finally, I also conduct a
visual analysis by projecting the generated and the real data into a
two-dimensional space using the t-Distributed Stochastic Neighbor Embedding
(t-SNE) method. |
c_252312 | Modern large scale machine learning applications require stochastic
optimization algorithms to be implemented on distributed computational
architectures. A key bottleneck is the communication overhead for exchanging
information such as stochastic gradients among different workers. In this
paper, to reduce the communication cost we propose a convex optimization
formulation to minimize the coding length of stochastic gradients. To solve the
optimal sparsification efficiently, several simple and fast algorithms are
proposed for approximate solutions, with theoretical guarantees on sparseness.
Experiments on $\ell_2$ regularized logistic regression, support vector
machines, and convolutional neural networks validate our sparsification
approaches. |
c_114599 | In federated learning, communication cost is often a critical bottleneck to
scale up distributed optimization algorithms to collaboratively learn a model
from millions of devices with potentially unreliable or limited communication
and heterogeneous data distributions. Two notable trends to deal with the
communication overhead of federated algorithms are gradient compression and
local computation with periodic communication. Despite many attempts,
characterizing the relationship between these two approaches has proven
elusive. We address this by proposing a set of algorithms with periodical
compressed (quantized or sparsified) communication and analyze their
convergence properties in both homogeneous and heterogeneous local data
distribution settings. For the homogeneous setting, our analysis improves
existing bounds by providing tighter convergence rates for both strongly convex
and non-convex objective functions. To mitigate data heterogeneity, we
introduce a local gradient tracking scheme and obtain sharp convergence rates
that match the best-known communication complexities without compression for
convex, strongly convex, and nonconvex settings. We complement our theoretical
results and demonstrate the effectiveness of our proposed methods by several
experiments on real-world datasets. |
c_166455 | The present paper develops a novel aggregated gradient approach for
distributed machine learning that adaptively compresses the gradient
communication. The key idea is to first quantize the computed gradients, and
then skip less informative quantized gradient communications by reusing
outdated gradients. Quantizing and skipping result in `lazy' worker-server
communications, which justifies the term Lazily Aggregated Quantized gradient
that is henceforth abbreviated as LAQ. Our LAQ can provably attain the same
linear convergence rate as the gradient descent in the strongly convex case,
while effecting major savings in the communication overhead both in transmitted
bits as well as in communication rounds. Empirically, experiments with real
data corroborate a significant communication reduction compared to existing
gradient- and stochastic gradient-based algorithms. |
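The lazy aggregation above can be sketched as: quantize the gradient, and transmit it only when it differs enough from the last transmitted one; otherwise the server reuses the outdated copy. The uniform quantizer and the fixed relative threshold below are simplifications; LAQ's actual skipping rule uses a weighted history of past differences.

```python
import numpy as np

def quantize(v, bits=4):
    """Simple uniform quantizer used only for illustration."""
    scale = np.max(np.abs(v)) + 1e-12
    levels = 2 ** (bits - 1) - 1
    return np.round(v / scale * levels) / levels * scale

class LazyQuantizedWorker:
    def __init__(self, d, threshold=0.05):
        self.last_sent = np.zeros(d)   # what the server currently holds for us
        self.threshold = threshold

    def step(self, grad):
        q = quantize(grad)
        change = np.linalg.norm(q - self.last_sent) ** 2
        if change < self.threshold * np.linalg.norm(q) ** 2:
            return None                # skip: the server reuses the outdated gradient
        self.last_sent = q
        return q                       # communicate the new quantized gradient

rng = np.random.default_rng(0)
worker = LazyQuantizedWorker(d=1000)
sent = 0
for t in range(20):
    grad = np.ones(1000) * 0.5 + 0.01 * rng.standard_normal(1000)  # slowly varying gradient
    msg = worker.step(grad)
    sent += msg is not None
print(f"communicated {sent} of 20 rounds")
```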
c_154688 | Machine learning systems based on deep neural networks (DNNs) produce
state-of-the-art results in many applications. Considering the large amount of
training data and know-how required to generate the network, it is more
practical to use third-party DNN intellectual property (IP) cores for many
designs. No doubt to say, it is essential for DNN IP vendors to provide test
cases for functional validation without leaking their parameters to IP users.
To satisfy this requirement, we propose to effectively generate test cases that
activate parameters as many as possible and propagate their perturbations to
outputs. Then the functionality of DNN IPs can be validated by only checking
their outputs. However, it is difficult considering large numbers of parameters
and highly non-linearity of DNNs. In this paper, we tackle this problem by
judiciously selecting samples from the DNN training set and applying a
gradient-based method to generate new test cases. Experimental results
demonstrate the efficacy of our proposed solution. |
c_201282 | Sign-based algorithms (e.g. signSGD) have been proposed as a biased gradient
compression technique to alleviate the communication bottleneck in training
large neural networks across multiple workers. We show simple convex
counter-examples where signSGD does not converge to the optimum. Further, even
when it does converge, signSGD may generalize poorly when compared with SGD.
These issues arise because of the biased nature of the sign compression
operator. We then show that using error-feedback, i.e. incorporating the error
made by the compression operator into the next step, overcomes these issues. We
prove that our algorithm EF-SGD with arbitrary compression operator achieves
the same rate of convergence as SGD without any additional assumptions. Thus
EF-SGD achieves gradient compression for free. Our experiments thoroughly
substantiate the theory and show that error-feedback improves both convergence
and generalization. Code can be found at
\url{https://github.com/epfml/error-feedback-SGD}. |
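Error feedback itself is compact: compress (accumulated error + step), apply the compressed result, and carry the leftover forward. A minimal single-worker sketch of an EF-SGD step with a scaled-sign compressor on a toy quadratic; see the linked repository for the authors' implementation, this is only an illustration.

```python
import numpy as np

def scaled_sign(v):
    """Biased compressor: sign of each entry, scaled by the mean magnitude."""
    return np.mean(np.abs(v)) * np.sign(v)

def ef_sgd_step(params, grad, error, lr=0.1):
    """One EF-SGD step: compress the error-corrected gradient, apply it,
    and store what the compressor dropped for the next step."""
    corrected = lr * grad + error
    update = scaled_sign(corrected)
    new_error = corrected - update      # feedback of the compression error
    return params - update, new_error

rng = np.random.default_rng(0)
d = 50
params, error = rng.standard_normal(d), np.zeros(d)
for t in range(200):
    grad = params                        # gradient of f(x) = 0.5 * ||x||^2
    params, error = ef_sgd_step(params, grad, error, lr=0.05)
print(float(np.linalg.norm(params)))     # should approach the optimum at 0
```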
c_67795 | Leveraging powerful deep learning techniques, the end-to-end (E2E) learning
of communication systems is able to outperform classical communication
systems. Unfortunately, such a communication system cannot be trained by deep
learning without a known channel. To deal with this problem, a generative
adversarial network (GAN) based training scheme has been recently proposed to
imitate the real channel. However, the gradient vanishing and overfitting
problems of GAN will result in serious performance degradation of the E2E
learning of the communication system. To mitigate these two problems, we propose a
residual aided GAN (RA-GAN) based training scheme in this paper. Particularly,
inspired by the idea of residual learning, we propose a residual generator to
mitigate the gradient vanishing problem by realizing a more robust gradient
backpropagation. Moreover, to cope with the overfitting problem, we reconstruct
the loss function for training by adding a regularizer, which limits the
representation ability of RA-GAN. Simulation results show that the trained
residual generator has better generation performance than the conventional
generator, and the proposed RA-GAN based training scheme can achieve the
near-optimal block error rate (BLER) performance with a negligible
computational complexity increase in both the theoretical channel model and the
ray-tracing based channel dataset. |
c_120045 | Generative models have gained considerable attention in the field of
unsupervised learning via a new and practical framework called Generative
Adversarial Networks (GAN) due to its outstanding data generation capability.
Many GAN models have been proposed, and several practical applications have emerged in
various domains of computer vision and machine learning. Despite GAN's
excellent success, there are still obstacles to stable training. The problems
are due to Nash-equilibrium, internal covariate shift, mode collapse, vanishing
gradient, and lack of proper evaluation metrics. Therefore, stable training is
a crucial issue in different applications for the success of GAN. Herein, we
survey several training solutions proposed by different researchers to
stabilize GAN training. We survey (I) the original GAN model and its modified
classical versions, (II) a detailed analysis of various GAN applications in
different domains, and (III) a detailed study of the various GAN training obstacles
as well as training solutions. Finally, we discuss several new issues as well
as research outlines to the topic. |
c_227271 | Large-scale distributed optimization is of great importance in various
applications. For data-parallel based distributed learning, the inter-node
gradient communication often becomes the performance bottleneck. In this paper,
we propose the error compensated quantized stochastic gradient descent
algorithm to improve the training efficiency. Local gradients are quantized to
reduce the communication overhead, and accumulated quantization error is
utilized to speed up the convergence. Furthermore, we present theoretical
analysis on the convergence behaviour, and demonstrate its advantage over
competitors. Extensive experiments indicate that our algorithm can compress
gradients by up to two orders of magnitude without performance degradation. |
c_82513 | Deep neural networks (DNNs) have been extremely successful in solving many
challenging AI tasks in natural language processing, speech recognition, and
computer vision nowadays. However, DNNs are typically computation intensive,
memory demanding, and power hungry, which significantly limits their usage on
platforms with constrained resources. Therefore, a variety of compression
techniques (e.g. quantization, pruning, and knowledge distillation) have been
proposed to reduce the size and power consumption of DNNs. Blockwise knowledge
distillation is one of the compression techniques that can effectively reduce
the size of a highly complex DNN. However, it is not widely adopted due to its
long training time. In this paper, we propose a novel parallel blockwise
distillation algorithm to accelerate the distillation process of sophisticated
DNNs. Our algorithm leverages local information to conduct independent
blockwise distillation, utilizes depthwise separable layers as the efficient
replacement block architecture, and properly addresses limiting factors (e.g.
dependency, synchronization, and load balancing) that affect parallelism. The
experimental results running on an AMD server with four Geforce RTX 2080Ti GPUs
show that our algorithm can achieve 3x speedup plus 19% energy savings on VGG
distillation, and 3.5x speedup plus 29% energy savings on ResNet distillation,
both with negligible accuracy loss. The speedup of ResNet distillation can be
further improved to 3.87x when using four RTX6000 GPUs in a distributed cluster. |
c_117881 | Recently there has been a surge of research on improving the communication
efficiency of distributed training. However, little work has been done to
systematically understand whether the network is the bottleneck and to what
extent.
In this paper, we take a first-principles approach to measure and analyze the
network performance of distributed training. As expected, our measurement
confirms that communication is the component that blocks distributed training
from linear scale-out. However, contrary to the common belief, we find that the
network is running at low utilization and that if the network can be fully
utilized, distributed training can achieve a scaling factor of close to one.
Moreover, while many recent proposals on gradient compression advocate over
100x compression ratio, we show that under full network utilization, there is
no need for gradient compression in a 100 Gbps network. On the other hand, a
lower-speed network like 10 Gbps requires a gradient compression ratio of only
2x--5x to achieve almost linear scale-out. Compared to application-level
techniques like gradient compression, network-level optimizations do not
require changes to applications and do not hurt the performance of trained
models. As such, we advocate that the real challenge of distributed training is
for the network community to develop high-performance network transport to
fully utilize the network capacity and achieve linear scale-out. |
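A back-of-envelope version of the argument in c_117881 above, assuming a 25M-parameter ResNet-50-like model with fp32 gradients and an assumed 50 ms of compute per iteration (both numbers are illustrative assumptions, not measurements from the paper): the compression ratio needed to hide communication behind compute is small at 10 Gbps and unnecessary at 100 Gbps.

```python
# Illustrative only: 25M parameters is the usual ResNet-50 size; the 50 ms
# per-iteration compute time is an assumption, not a figure from the paper.
params = 25_000_000
grad_bytes = params * 4            # fp32 gradients, ~100 MB per iteration
compute_s = 0.050                  # assumed per-iteration compute time

for gbps in (10, 100):
    comm_s = grad_bytes / (gbps * 1e9 / 8)   # time to ship the gradients once
    ratio = comm_s / compute_s               # compression needed so comm <= compute
    print(f"{gbps:>3} Gbps: comm {comm_s * 1e3:6.1f} ms, "
          f"compression needed ~{max(ratio, 1.0):.1f}x")
```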
c_172698 | As deep learning is showing unprecedented success in medical image analysis
tasks, the lack of sufficient medical data is emerging as a critical problem.
While recent attempts to solve the limited data problem using Generative
Adversarial Networks (GAN) have been successful in generating realistic images
with diversity, most of them are based on image-to-image translation and thus
require extensive datasets from different domains. Here, we propose a novel
model that can successfully generate 3D brain MRI data from random vectors by
learning the data distribution. Our 3D GAN model solves both image blurriness
and mode collapse problems by leveraging alpha-GAN that combines the advantages
of Variational Auto-Encoder (VAE) and GAN with an additional code discriminator
network. We also use the Wasserstein GAN with Gradient Penalty (WGAN-GP) loss
to reduce training instability. To demonstrate the effectiveness of our
model, we generate new images of normal brain MRI and show that our model
outperforms baseline models in both quantitative and qualitative measurements.
We also train the model to synthesize brain disorder MRI data to demonstrate
the wide applicability of our model. Our results suggest that the proposed
model can successfully generate various types and modalities of 3D whole brain
volumes from a small set of training data. |
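The WGAN-GP loss mentioned in c_172698 above penalizes the critic's gradient norm on interpolates between real and generated samples. A standard PyTorch sketch is below; the critic network, the penalty weight, and the tensor shapes (5D for 3D MRI volumes) are assumptions left to the caller.

```python
import torch

def gradient_penalty(critic, real, fake):
    # WGAN-GP term: (||grad_x D(x_hat)||_2 - 1)^2 on interpolates of real and fake.
    eps_shape = (real.size(0),) + (1,) * (real.dim() - 1)
    eps = torch.rand(eps_shape, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    grads = torch.autograd.grad(d_hat, x_hat, grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

# Usage inside the critic update (lambda_gp = 10 is the commonly used weight):
# loss_d = fake_score.mean() - real_score.mean() + 10 * gradient_penalty(critic, real, fake)
```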
c_149198 | Learning predictive models from interaction with the world allows an agent,
such as a robot, to learn about how the world works, and then use this learned
model to plan coordinated sequences of actions to bring about desired outcomes.
However, learning a model that captures the dynamics of complex skills
represents a major challenge: if the agent needs a good model to perform these
skills, it might never be able to collect the experience on its own that is
required to learn these delicate and complex behaviors. Instead, we can imagine
augmenting the training set with observational data of other agents, such as
humans. Such data is likely more plentiful, but represents a different
embodiment. For example, videos of humans might show a robot how to use a tool,
but (i) are not annotated with suitable robot actions, and (ii) contain a
systematic distributional shift due to the embodiment differences between
humans and robots. We address the first challenge by formulating the
corresponding graphical model and treating the action as an observed variable
for the interaction data and an unobserved variable for the observation data,
and the second challenge by using a domain-dependent prior. In addition to
interaction data, our method is able to leverage videos of passive observations
in a driving dataset and a dataset of robotic manipulation videos. A robotic
planning agent equipped with our method can learn to use tools in a tabletop
robotic manipulation setting by observing humans without ever seeing a robotic
video of tool use. |
c_251632 | Deep Neural Networks (DNNs) have achieved impressive accuracy in many
application domains including image classification. Training of DNNs is an
extremely compute-intensive process and is solved using variants of the
stochastic gradient descent (SGD) algorithm. A lot of recent research has
focussed on improving the performance of DNN training. In this paper, we
present optimization techniques to improve the performance of the data parallel
synchronous SGD algorithm using the Torch framework: (i) we maintain data
in-memory to avoid file I/O overheads, (ii) we present a multi-color based MPI
Allreduce algorithm to minimize communication overheads, and (iii) we propose
optimizations to the Torch data parallel table framework that handles
multi-threading. We evaluate the performance of our optimizations on a Power 8
Minsky cluster with 32 nodes and 128 NVIDIA Pascal P100 GPUs. With our
optimizations, we are able to train 90 epochs of the ResNet-50 model on the
ImageNet-1k dataset using 256 GPUs in just 48 minutes. This significantly
improves on the previously best known performance of training 90 epochs of the
ResNet-50 model on the same dataset using 256 GPUs in 65 minutes. To the best
of our knowledge, this is the best known training performance demonstrated for
the ImageNet-1k dataset. |
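The synchronous data-parallel step that c_251632 above optimizes boils down to an allreduce over worker gradients. A minimal mpi4py sketch of that step is below; it uses a plain Allreduce rather than the paper's multi-color algorithm, and the flat numpy buffers stand in for real framework parameters.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world = comm.Get_rank(), comm.Get_size()

# Hypothetical flat parameter and gradient buffers; in practice these would be
# views into the deep-learning framework's tensors.
params = np.zeros(1000, dtype=np.float32)
local_grad = np.random.default_rng(rank).normal(size=1000).astype(np.float32)

# Sum the gradients of all workers, then average and apply one SGD step.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= world
params -= 0.1 * global_grad

if rank == 0:
    print("synchronous step done, |g| =", float(np.linalg.norm(global_grad)))
# Run with, e.g.:  mpirun -np 4 python step.py
```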
c_46329 | While pre-trained language models (e.g., BERT) have achieved impressive
results on different natural language processing tasks, they have large numbers
of parameters and suffer from big computational and memory costs, which make
them difficult for real-world deployment. Therefore, model compression is
necessary to reduce the computation and memory cost of pre-trained models. In
this work, we aim to compress BERT and address the following two challenging
practical issues: (1) The compression algorithm should be able to output
multiple compressed models with different sizes and latencies, in order to
support devices with different memory and latency limitations; (2) The
algorithm should be downstream task agnostic, so that the compressed models are
generally applicable for different downstream tasks. We leverage techniques in
neural architecture search (NAS) and propose NAS-BERT, an efficient method for
BERT compression. NAS-BERT trains a big supernet on a search space containing a
variety of architectures and outputs multiple compressed models with adaptive
sizes and latency. Furthermore, the training of NAS-BERT is conducted on
standard self-supervised pre-training tasks (e.g., masked language model) and
does not depend on specific downstream tasks. Thus, the compressed models can
be used across various downstream tasks. The technical challenge of NAS-BERT is
that training a big supernet on the pre-training task is extremely costly. We
employ several techniques including block-wise search, search space pruning,
and performance approximation to improve search efficiency and accuracy.
Extensive experiments on GLUE and SQuAD benchmark datasets demonstrate that
NAS-BERT can find lightweight models with better accuracy than previous
approaches, and can be directly applied to different downstream tasks with
adaptive model sizes for different requirements of memory or latency. |
c_192875 | Generative Adversarial Networks (GANs) have become a dominant class of
generative models. In recent years, GAN variants have yielded especially
impressive results in the synthesis of a variety of forms of data. Examples
include compelling natural and artistic images, textures, musical sequences,
and 3D object files. However, one obvious synthesis candidate is missing. In
this work, we answer one of deep learning's most pressing questions: GAN you do
the GAN GAN? That is, is it possible to train a GAN to model a distribution of
GANs? We release the full source code for this project under the MIT license. |
c_19396 | BERT based ranking models have achieved superior performance on various
information retrieval tasks. However, the large number of parameters and
complex self-attention operation come at a significant latency overhead. To
remedy this, recent works propose late-interaction architectures, which allow
pre-computation of intermediate document representations, thus reducing the
runtime latency. Nonetheless, having solved the immediate latency issue, these
methods now introduce storage costs and network fetching latency, which limits
their adoption in real-life production systems.
In this work, we propose the Succinct Document Representation (SDR) scheme
that computes highly compressed intermediate document representations,
mitigating the storage/network issue. Our approach first reduces the dimension
of token representations by encoding them using a novel autoencoder
architecture that uses the document's textual content in both the encoding and
decoding phases. After this token encoding step, we further reduce the size of
entire document representations using a modern quantization technique.
Extensive evaluations on passage re-ranking on the MSMARCO dataset show
that compared to existing approaches using compressed document representations,
our method is highly efficient, achieving 4x-11.6x better compression rates for
the same ranking quality. |
c_28133 | This article is in the context of gradient compression. Gradient compression
is a popular technique for mitigating the communication bottleneck observed
when training large machine learning models in a distributed manner using
gradient-based methods such as stochastic gradient descent. In this article,
assuming a Gaussian distribution for the gradient components, we find the
rate-distortion trade-off of gradient quantization schemes such as Scaled-sign
and Top-K, and compare it with the Shannon rate-distortion limit. A similar
comparison with vector quantizers is also presented. |
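The two quantizers analysed in c_28133 above are easy to state concretely. The sketch below implements Top-K and Scaled-sign in numpy and measures their empirical per-coordinate distortion on an i.i.d. Gaussian gradient, matching the abstract's Gaussian model; the sparsity level is an arbitrary choice.

```python
import numpy as np

def top_k(g, k):
    # Keep the k largest-magnitude coordinates, zero out the rest.
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def scaled_sign(g):
    # One-bit sign quantizer scaled by the mean absolute value.
    return np.mean(np.abs(g)) * np.sign(g)

rng = np.random.default_rng(0)
g = rng.normal(size=10_000)                       # Gaussian gradient components
for name, q in [("top-k (1%)", top_k(g, 100)), ("scaled-sign", scaled_sign(g))]:
    print(f"{name:12s} per-coordinate MSE: {np.mean((g - q) ** 2):.4f}")
```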
c_106196 | Although the distributed machine learning methods can speed up the training
of large deep neural networks, the communication cost has become the
non-negligible bottleneck to constrain the performance. To address this
challenge, the gradient compression based communication-efficient distributed
learning methods were designed to reduce the communication cost, and more
recently the local error feedback was incorporated to compensate for the
corresponding performance loss. However, in this paper, we will show that a new
"gradient mismatch" problem is raised by the local error feedback in
centralized distributed training and can lead to degraded performance compared
with full-precision training. To solve this critical problem, we propose two
novel techniques, 1) step ahead and 2) error averaging, with rigorous
theoretical analysis. Both our theoretical and empirical results show that our
new methods can handle the "gradient mismatch" problem. The experimental
results show that, in terms of training epochs, our methods with common
gradient compression schemes can even train faster than both full-precision
training and local error feedback, without performance loss. |
c_135921 | Conditional Generative Adversarial Networks (cGANs) have enabled controllable
image synthesis for many vision and graphics applications. However, recent
cGANs are 1-2 orders of magnitude more compute-intensive than modern
recognition CNNs. For example, GauGAN consumes 281G MACs per image, compared to
0.44G MACs for MobileNet-v3, making it difficult for interactive deployment. In
this work, we propose a general-purpose compression framework for reducing the
inference time and model size of the generator in cGANs. Directly applying
existing compression methods yields poor performance due to the difficulty of
GAN training and the differences in generator architectures. We address these
challenges in two ways. First, to stabilize GAN training, we transfer knowledge
of multiple intermediate representations of the original model to its
compressed model and unify unpaired and paired learning. Second, instead of
reusing existing CNN designs, our method finds efficient architectures via
neural architecture search. To accelerate the search process, we decouple the
model training and search via weight sharing. Experiments demonstrate the
effectiveness of our method across different supervision settings, network
architectures, and learning methods. Without losing image quality, we reduce
the computation of CycleGAN by 21x, Pix2pix by 12x, MUNIT by 29x, and GauGAN by
9x, paving the way for interactive image synthesis. |
c_189260 | The Gradient Boosted Tree (GBT) algorithm is one of the most popular machine
learning algorithms used in production, for tasks that include Click-Through
Rate (CTR) prediction and learning-to-rank. To deal with the massive datasets
available today, many distributed GBT methods have been proposed. However, they
all assume a row-distributed dataset, addressing scalability only with respect
to the number of data points and not the number of features, and increasing
communication cost for high-dimensional data. In order to allow for scalability
across both the data point and feature dimensions, and reduce communication
cost, we propose block-distributed GBTs. We achieve communication efficiency by
making full use of the data sparsity and adapting the Quickscorer algorithm to
the block-distributed setting. We evaluate our approach using datasets with
millions of features, and demonstrate that we are able to achieve multiple
orders of magnitude reduction in communication cost for sparse data, with no
loss in accuracy, while providing a more scalable design. As a result, we are
able to reduce the training time for high-dimensional data, and allow more
cost-effective scale-out without the need for expensive network communication. |
c_316961 | This paper presents a new state-of-the-art for document image classification
and retrieval, using features learned by deep convolutional neural networks
(CNNs). In object and scene analysis, deep neural nets are capable of learning
a hierarchical chain of abstraction from pixel inputs to concise and
descriptive representations. The current work explores this capacity in the
realm of document analysis, and confirms that this representation strategy is
superior to a variety of popular hand-crafted alternatives. Experiments also
show that (i) features extracted from CNNs are robust to compression, (ii) CNNs
trained on non-document images transfer well to document analysis tasks, and
(iii) enforcing region-specific feature-learning is unnecessary given
sufficient training data. This work also makes available a new labelled subset
of the IIT-CDIP collection, containing 400,000 document images across 16
categories, useful for training new CNNs for document analysis. |
c_103483 | The proliferation of big data has brought an urgent demand for
privacy-preserving data publishing. Traditional solutions to this demand have
limitations on effectively balancing the tradeoff between privacy and utility
of the released data. Thus, the database community and machine learning
community have recently studied a new problem of relational data synthesis
using generative adversarial networks (GAN) and proposed various algorithms.
However, these algorithms are not compared under the same framework and thus it
is hard for practitioners to understand GAN's benefits and limitations. To
bridge the gaps, we conduct the most comprehensive experimental study to date
that investigates applying GAN to relational data synthesis. We introduce a
unified GAN-based framework and define a space of design solutions for each
component in the framework, including neural network architectures and training
strategies. We conduct extensive experiments to explore the design space and
compare with traditional data synthesis approaches. Through extensive
experiments, we find that GAN is very promising for relational data synthesis,
and provide guidance for selecting appropriate design solutions. We also point
out limitations of GAN and identify future research directions. |
c_202383 | Document classification is a challenging task with important applications.
The deep learning approaches to the problem have gained much attention
recently. Despite the progress, the proposed models do not efficiently
incorporate knowledge of the document structure into the architecture and do
not take into account the contextual importance of words and sentences. In this
paper, we propose a new approach based on a combination of convolutional neural
networks, gated recurrent units, and attention mechanisms for document
classification tasks. The main contribution of this work is the use of
convolution layers to extract more meaningful, generalizable and abstract
features by the hierarchical representation. The proposed method in this paper
improves the results of the current attention-based approaches for document
classification. |
c_31484 | Communication cost is one major bottleneck for the scalability of
distributed learning. One approach to reduce the communication cost is to
compress the gradient during communication. However, directly compressing the
gradient slows down convergence, and the resulting algorithm may
diverge under biased compression. Recent work addressed this problem for
stochastic gradient descent by adding back the compression error from the
previous step. This idea was further extended to one class of variance reduced
algorithms, where the variance of the stochastic gradient is reduced by taking
a moving average over all history gradients. However, our analysis shows that
just adding the previous step's compression error, as done in existing work,
does not fully compensate the compression error. So, we propose
ErrorCompensatedX, which uses the compression error from the previous two
steps. We show that ErrorCompensatedX can achieve the same asymptotic
convergence rate with the training without compression. Moreover, we provide a
unified theoretical analysis framework for this class of variance reduced
algorithms, with or without error compensation. |
c_158215 | Humans can naturally learn to execute a new task by seeing it performed by
other individuals once, and then reproduce it in a variety of configurations.
Endowing robots with this ability of imitating humans from third person is a
very immediate and natural way of teaching new tasks. Only recently, through
meta-learning, there have been successful attempts to one-shot imitation
learning from humans; however, these approaches require a lot of human
resources to collect the data in the real world to train the robot. But is
there a way to remove the need for real world human demonstrations during
training? We show that with Task-Embedded Control Networks, we can infer
control policies by embedding human demonstrations that can condition a control
policy and achieve one-shot imitation learning. Importantly, we do not use a
real human arm to supply demonstrations during training, but instead leverage
domain randomisation in an application that has not been seen before:
sim-to-real transfer on humans. Upon evaluating our approach on pushing and
placing tasks in both simulation and in the real world, we show that in
comparison to a system that was trained on real-world data we are able to
achieve similar results by utilising only simulation data. |
c_230928 | GANs are powerful generative models that are able to model the manifold of
natural images. We leverage this property to perform manifold regularization by
approximating the Laplacian norm using a Monte Carlo approximation that is
easily computed with the GAN. When incorporated into the feature-matching GAN
of Improved GAN, we achieve state-of-the-art results for GAN-based
semi-supervised learning on the CIFAR-10 dataset, with a method that is
significantly easier to implement than competing methods. |
c_9234 | Due to limited communication resources at the client and a massive number of
model parameters, large-scale distributed learning tasks suffer from
communication bottleneck. Gradient compression is an effective method to reduce
communication load by transmitting compressed gradients. Motivated by the fact
that, in stochastic gradient descent, the gradients of adjacent rounds may be
highly correlated since they aim to learn the same model, this paper proposes
a practical gradient compression scheme for
federated learning, which uses historical gradients to compress gradients and
is based on Wyner-Ziv coding but without any probabilistic assumption. We also
implement our gradient quantization method on a real dataset, and the
performance of our method is better than that of previous schemes. |
c_121899 | Due to the increasing amount of data on the internet, finding a
highly-informative, low-dimensional representation for text is one of the main
challenges for efficient natural language processing tasks including text
classification. This representation should capture the semantic information of
the text while retaining their relevance level for document classification.
This approach maps the documents with similar topics to a similar space in
vector space representation. To obtain representation for large text, we
propose the utilization of deep Siamese neural networks. To embed document
relevance in topics in the distributed representation, we use a Siamese neural
network to jointly learn document representations. Our Siamese network consists
of two multi-layer perceptron sub-networks. We examine our representation for
the text categorization task on BBC news dataset. The results show that the
proposed representations outperform the conventional and state-of-the-art
representations in the text classification task on this dataset. |
c_180352 | Sufficient supervised information is crucial for any machine learning models
to boost performance. However, labeling data is expensive and sometimes
difficult to obtain. Active learning is an approach to acquire annotations for
data from a human oracle by selecting informative samples with a high
probability to enhance performance. In recent emerging studies, a generative
adversarial network (GAN) has been integrated with active learning to generate
good candidates to be presented to the oracle. In this paper, we propose a
novel model that is able to obtain labels for data in a cheaper manner without
the need to query an oracle. In the model, a novel reward for each sample is
devised to measure the degree of uncertainty, which is obtained from a
classifier trained with existing labeled data. This reward is used to guide a
conditional GAN to generate informative samples with a higher probability for a
certain label. With extensive evaluations, we have confirmed the effectiveness
of the model, showing that the generated samples are capable of improving the
classification performance in popular image classification tasks. |
c_36072 | Generative adversarial networks (GANs) are very popular to generate realistic
images, but they often suffer from the training instability issues and the
phenomenon of mode loss. In order to attain greater diversity in GAN
synthesized data, it is critical to solve the problem of mode loss. Our work
explores probabilistic approaches to GAN modelling that could allow us to
tackle these issues. We present Prb-GANs, a new variation that uses dropout to
create a distribution over the network parameters with the posterior learnt
using variational inference. We describe theoretically and validate
experimentally using simple and complex datasets the benefits of such an
approach. We look into further improvements using the concept of uncertainty
measures. Through a set of further modifications to the loss functions for each
network of the GAN, we are able to get results that show the improvement of GAN
performance. Our methods are extremely simple and require very little
modification to existing GAN architecture. |
c_214766 | Neural networks have proven their capabilities by outperforming many other
approaches on regression or classification tasks on various kinds of data.
Other astonishing results have been achieved using neural nets as data
generators, especially in settings of generative adversarial networks (GANs).
One special application is the field of image domain translations. Here, the
goal is to take an image with a certain style (e.g. a photography) and
transform it into another one (e.g. a painting). If such a task is performed
for unpaired training examples, the corresponding GAN setting is complex, the
neural networks are large, and this leads to a high peak memory consumption
during both the training and evaluation phases. This sets a limit on the highest
processable image size. We address this issue by the idea of not processing the
whole image at once, but to train and evaluate the domain translation on the
level of overlapping image subsamples. This new approach not only enables us to
translate high-resolution images that otherwise cannot be processed by the
neural network at once, but also allows us to work with comparably small neural
networks and with limited hardware resources. Additionally, the number of
images required for the training process is significantly reduced. We present
high-quality results on images with a total resolution of up to over 50
megapixels and demonstrate that our method helps to preserve local image details
while it also keeps global consistency. |
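A generic version of the overlapping-subsample idea in c_214766 above: split a large image into overlapping tiles, translate each tile independently, and average the overlaps when stitching the result back together. This is only a tiling and blending sketch under assumed patch and stride sizes; the paper's training-level subsampling is not reproduced.

```python
import numpy as np

def split_overlapping(img, patch=256, stride=192):
    # Yield (y, x, tile) covering the image with an overlap of (patch - stride) pixels.
    h, w = img.shape[:2]
    ys = list(range(0, max(h - patch, 0) + 1, stride))
    xs = list(range(0, max(w - patch, 0) + 1, stride))
    if ys[-1] + patch < h: ys.append(h - patch)   # make sure the borders are covered
    if xs[-1] + patch < w: xs.append(w - patch)
    for y in ys:
        for x in xs:
            yield y, x, img[y:y + patch, x:x + patch]

def stitch(shape, tiles, patch=256):
    # Average the (translated) overlapping tiles back into a full-size image.
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape[:2], dtype=np.float64)
    for y, x, t in tiles:
        out[y:y + patch, x:x + patch] += t
        weight[y:y + patch, x:x + patch] += 1.0
    return out / weight[..., None]

img = np.random.rand(600, 800, 3)
tiles = [(y, x, t) for y, x, t in split_overlapping(img)]   # t would be G(t) in practice
print(np.allclose(stitch(img.shape, tiles), img))           # True for an identity "translator"
```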
c_40731 | Communication overhead is the key challenge for distributed training.
Gradient compression is a widely used approach to reduce communication traffic.
When combined with parallel communication mechanisms such as pipelining,
gradient compression can greatly alleviate the impact of communication
overhead. However, there are two problems with gradient compression that remain
to be solved. Firstly, gradient compression introduces extra computation cost,
which delays the next training iteration. Secondly, gradient compression
usually leads to a decrease in convergence accuracy. |
c_207742 | Data sets are growing in complexity thanks to the increasing facilities we
have nowadays to both generate and store data. This poses many challenges to
machine learning that are leading to the proposal of new methods and paradigms,
in order to be able to deal with what is nowadays referred to as Big Data. In
this paper we propose a method for the aggregation of different Bayesian
network structures that have been learned from separate data sets, as a first
step towards mining data sets that need to be partitioned in a horizontal way,
i.e. with respect to the instances, in order to be processed. Considerations
that should be taken into account when dealing with this situation are
discussed. Scalable learning of Bayesian networks is slowly emerging, and our
method constitutes one of the first insights into Gaussian Bayesian network
aggregation from different sources. Tested on synthetic data it obtains good
results that surpass those from individual learning. Future research will be
focused on expanding the method and testing more diverse data sets. |
c_357781 | The performance of neural decoders can degrade over time due to
nonstationarities in the relationship between neuronal activity and behavior.
In this case, brain-machine interfaces (BMI) require adaptation of their
decoders to maintain high performance across time. One way to achieve this is
by the use of periodic calibration phases, during which the BMI system (or an
external human demonstrator) instructs the user to perform certain movements or
behaviors. This approach has two disadvantages: (i) calibration phases
interrupt the autonomous operation of the BMI and (ii) between two calibration
phases the BMI performance might not be stable but continuously decrease. A
better alternative would be that the BMI decoder is able to continuously adapt
in an unsupervised manner during autonomous BMI operation, i.e. without knowing
the movement intentions of the user.
In the present article, we present an efficient method for such unsupervised
training of BMI systems for continuous movement control. The proposed method
utilizes a cost function derived from neuronal recordings, which guides a
learning algorithm to evaluate the decoding parameters. We verify the
performance of our adaptive method by simulating a BMI user with an optimal
feedback control model and its interaction with our adaptive BMI decoder. The
simulation results show that the cost function and the algorithm yield fast and
precise trajectories towards targets at random orientations on a 2-dimensional
computer screen. For initially unknown and non-stationary tuning parameters,
our unsupervised method is still able to generate precise trajectories and to
keep its performance stable in the long term. The algorithm can optionally work
also with neuronal error signals instead or in conjunction with the proposed
unsupervised adaptation. |
c_10799 | Many engineering problems require the prediction of
realization-to-realization variability or a refined description of modeled
quantities. In that case, it is necessary to sample elements from unknown
high-dimensional spaces with possibly millions of degrees of freedom. While
there exist methods able to sample elements from probability density functions
(PDF) with known shapes, several approximations need to be made when the
distribution is unknown. In this paper the sampling method, as well as the
inference of the underlying distribution, are both handled with a data-driven
method known as generative adversarial networks (GAN), which trains two
competing neural networks to produce a network that can effectively generate
samples from the training set distribution. In practice, it is often necessary
to draw samples from conditional distributions. When the conditional variables
are continuous, only one (if any) data point corresponding to a particular
value of a conditioning variable may be available, which is not sufficient to
estimate the conditional distribution. This work handles this problem using an
a priori estimation of the conditional moments of a PDF. Two approaches,
stochastic estimation and an external neural network, are compared here for
computing these moments; however, any preferred method can be used. The
algorithm is demonstrated in the case of the deconvolution of a filtered
turbulent flow field. It is shown that all the versions of the proposed
algorithm effectively sample the target conditional distribution with minimal
impact on the quality of the samples compared to state-of-the-art methods.
Additionally, the procedure can be used as a metric for the diversity of
samples generated by a conditional GAN (cGAN) conditioned with continuous
variables. |
c_127603 | Extracting information from full documents is an important problem in many
domains, but most previous work focus on identifying relationships within a
sentence or a paragraph. It is challenging to create a large-scale information
extraction (IE) dataset at the document level since it requires an
understanding of the whole document to annotate entities and their
document-level relationships that usually span beyond sentences or even
sections. In this paper, we introduce SciREX, a document level IE dataset that
encompasses multiple IE tasks, including salient entity identification and
document level $N$-ary relation identification from scientific articles. We
annotate our dataset by integrating automatic and human annotations, leveraging
existing scientific knowledge resources. We develop a neural model as a strong
baseline that extends previous state-of-the-art IE models to document-level IE.
Analyzing the model performance shows a significant gap between human
performance and current baselines, inviting the community to use our dataset as
a challenge to develop document-level IE models. Our data and code are publicly
available at https://github.com/allenai/SciREX |
c_319696 | With the increase of information, document classification, as one of the
methods of text mining, plays a vital role in managing and organizing
information. Document classification is the process of assigning a document to
one or more predefined category labels. Document classification includes
different parts such as text processing, term selection, term weighting and
final classification. The accuracy of document classification is very
important. Thus improvement in each part of classification should lead to
better results and higher precision. Term weighting has a great impact on the
accuracy of the classification. Most of the existing weighting methods exploit
the statistical information of terms in documents and do not consider semantic
relations between words. In this paper, an automated document classification
system is presented that uses a novel term weighting method based on semantic
relations between terms. To evaluate the proposed method, three standard
Persian corpora are used. Experimental results show a 2 to 4 percent improvement
in classification accuracy compared with the best previous designed system for
Persian documents. |
c_236298 | We present a learned image compression system based on GANs, operating at
extremely low bitrates. Our proposed framework combines an encoder,
decoder/generator and a multi-scale discriminator, which we train jointly for a
generative learned compression objective. The model synthesizes details it
cannot afford to store, obtaining visually pleasing results at bitrates where
previous methods fail and show strong artifacts. Furthermore, if a semantic
label map of the original image is available, our method can fully synthesize
unimportant regions in the decoded image such as streets and trees from the
label map, proportionally reducing the storage cost. A user study confirms that
for low bitrates, our approach is preferred to state-of-the-art methods, even
when they use more than double the bits. |
c_160526 | How to leverage cross-document interactions to improve ranking performance is
an important topic in information retrieval (IR) research. However, this topic
has not been well-studied in the learning-to-rank setting and most of the
existing work still treats each document independently while scoring. The
recent development of deep learning shows strength in modeling complex
relationships across sequences and sets. It thus motivates us to study how to
leverage cross-document interactions for learning-to-rank in the deep learning
framework. In this paper, we formally define the permutation-equivariance
requirement for a scoring function that captures cross-document interactions.
We then propose a self-attention based document interaction network and show
that it satisfies the permutation-equivariant requirement, and can generate
scores for document sets of varying sizes. Our proposed methods can
automatically learn to capture document interactions without any auxiliary
information, and can scale across large document sets. We conduct experiments
on three ranking datasets: the benchmark Web30k, a Gmail search, and a Google
Drive Quick Access dataset. Experimental results show that our proposed methods
are both more effective and efficient than baselines. |
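A minimal illustration of the permutation-equivariance property that c_160526 above requires of a cross-document scoring function: self-attention over a set of document vectors (with no positional encoding) produces scores that permute exactly with the input. The dimensions and the single attention layer are assumptions for the sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DocInteractionScorer(nn.Module):
    # Scores a set of documents jointly via self-attention over their embeddings.
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, docs):                    # docs: (batch, n_docs, dim)
        mixed, _ = self.attn(docs, docs, docs)  # cross-document interactions
        return self.score(mixed).squeeze(-1)    # (batch, n_docs) relevance scores

model = DocInteractionScorer().eval()
x = torch.randn(2, 5, 64)
perm = torch.randperm(5)
with torch.no_grad():
    same = torch.allclose(model(x)[:, perm], model(x[:, perm]), atol=1e-5)
print("permutation-equivariant:", bool(same))   # True: scores permute with the documents
```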
c_193311 | In information retrieval (IR) and related tasks, term weighting approaches
typically consider the frequency of the term in the document and in the
collection in order to compute a score reflecting the importance of the term
for the document. In tasks characterized by the presence of training data (such
as text classification) it seems logical that the term weighting function
should take into account the distribution (as estimated from training data) of
the term across the classes of interest. Although `supervised term weighting'
approaches that use this intuition have been described before, they have failed
to show consistent improvements. In this article we analyse the possible
reasons for this failure, and call consolidated assumptions into question.
Following this criticism we propose a novel supervised term weighting approach
that, instead of relying on any predefined formula, learns a term weighting
function optimised on the training set of interest; we dub this approach
\emph{Learning to Weight} (LTW). The experiments that we run on several
well-known benchmarks, and using different learning methods, show that our
method outperforms previous term weighting approaches in text classification. |
c_111989 | One major drawback of state of the art Neural Networks (NN)-based approaches
for document classification purposes is the large number of training samples
required to obtain an efficient classification. The minimum required number is
around one thousand annotated documents for each class. In many cases it is
very difficult, if not impossible, to gather this number of samples in real
industrial processes. In this paper, we analyse the efficiency of NN-based
document classification systems in a sub-optimal training case, based on the
situation of a company document stream. We evaluated three different
approaches, one based on image content and two on textual content. The
evaluation was divided into four parts: a reference case, to assess the
performance of the system in the lab; two cases that each simulate a specific
difficulty linked to document stream processing; and a realistic case that
combined all of these difficulties. The realistic case highlighted the fact
that there is a significant drop in the efficiency of NN-Based document
classification systems. Although they remain efficient for well represented
classes (with an over-fitting of the system for those classes), it is
impossible for them to appropriately handle less well represented classes.
NN-Based document classification systems need to be adapted to resolve these
two problems before they can be considered for use in a company document
stream. |
c_97694 | Recent approaches in literature have exploited the multi-modal information in
documents (text, layout, image) to serve specific downstream document tasks.
However, they are limited by their (i) inability to learn cross-modal
representations across text, layout and image dimensions for documents and (ii)
inability to process multi-page documents. Pre-training techniques have been
shown in Natural Language Processing (NLP) domain to learn generic textual
representations from large unlabelled datasets, applicable to various
downstream NLP tasks. In this paper, we propose a multi-task learning-based
framework that utilizes a combination of self-supervised and supervised
pre-training tasks to learn a generic document representation applicable to
various downstream document tasks. Specifically, we introduce Document Topic
Modelling and Document Shuffle Prediction as novel pre-training tasks to learn
rich image representations along with the text and layout representations for
documents. We utilize the Longformer network architecture as the backbone to
encode the multi-modal information from multi-page documents in an end-to-end
fashion. We showcase the applicability of our pre-training framework on a
variety of different real-world document tasks such as document classification,
document information extraction, and document retrieval. We evaluate our
framework on different standard document datasets and conduct exhaustive
experiments to compare performance against various ablations of our framework
and state-of-the-art baselines. |
c_250302 | Generative Adversarial Networks (GAN) have received wide attention in the
machine learning field for their potential to learn high-dimensional, complex
real data distribution. Specifically, they do not rely on any assumptions about
the distribution and can generate real-like samples from latent space in a
simple manner. This powerful property leads GAN to be applied to various
applications such as image synthesis, image attribute editing, image
translation, domain adaptation and other academic fields. In this paper, we aim
to discuss the details of GAN for those readers who are familiar with, but do
not comprehend GAN deeply or who wish to view GAN from various perspectives. In
addition, we explain how GAN operates and the fundamental meaning of various
objective functions that have been suggested recently. We then focus on how the
GAN can be combined with an autoencoder framework. Finally, we enumerate the
GAN variants that are applied to various tasks and other fields for those who
are interested in exploiting GAN for their research. |
c_7178 | This is a tutorial and survey paper on Generative Adversarial Network (GAN),
adversarial autoencoders, and their variants. We start with explaining
adversarial learning and the vanilla GAN. Then, we explain the conditional GAN
and DCGAN. The mode collapse problem is introduced and various methods,
including minibatch GAN, unrolled GAN, BourGAN, mixture GAN, D2GAN, and
Wasserstein GAN, are introduced for resolving this problem. Then, maximum
likelihood estimation in GAN are explained along with f-GAN, adversarial
variational Bayes, and Bayesian GAN. Then, we cover feature matching in GAN,
InfoGAN, GRAN, LSGAN, energy-based GAN, CatGAN, MMD GAN, LapGAN, progressive
GAN, triple GAN, LAG, GMAN, AdaGAN, CoGAN, inverse GAN, BiGAN, ALI, SAGAN,
Few-shot GAN, SinGAN, and interpolation and evaluation of GAN. Then, we
introduce some applications of GAN such as image-to-image translation
(including PatchGAN, CycleGAN, DeepFaceDrawing, simulated GAN, interactive
GAN), text-to-image translation (including StackGAN), and mixing image
characteristics (including FineGAN and MixNMatch). Finally, we explain the
autoencoders based on adversarial learning including adversarial autoencoder,
PixelGAN, and implicit autoencoder. |
c_28716 | This paper studies a distributed multi-agent convex optimization problem. The
system comprises multiple agents in this problem, each with a set of local data
points and an associated local cost function. The agents are connected to a
server, and there is no inter-agent communication. The agents' goal is to learn
a parameter vector that optimizes the aggregate of their local costs without
revealing their local data points. In principle, the agents can solve this
problem by collaborating with the server using the traditional distributed
gradient-descent method. However, when the aggregate cost is ill-conditioned,
the gradient-descent method (i) requires a large number of iterations to
converge, and (ii) is highly unstable against process noise. We propose an
iterative pre-conditioning technique to mitigate the deleterious effects of the
cost function's conditioning on the convergence rate of distributed
gradient-descent. Unlike the conventional pre-conditioning techniques, the
pre-conditioner matrix in our proposed technique updates iteratively to
facilitate implementation on the distributed network. In the distributed
setting, we provably show that the proposed algorithm converges linearly with
an improved rate of convergence compared to the traditional and adaptive
gradient-descent methods. Additionally, for the special case when the minimizer
of the aggregate cost is unique, our algorithm converges superlinearly. We
demonstrate our algorithm's superior performance compared to prominent
distributed algorithms for solving real logistic regression problems and
emulating neural network training via a noisy quadratic model, thereby
signifying the proposed algorithm's efficiency for distributively solving
non-convex optimization. Moreover, we empirically show that the proposed
algorithm results in faster training without compromising the generalization
performance. |
c_291946 | The Electromyography (EMG) signal is the electrical activity produced by
cells of skeletal muscles in order to provide a movement. The non-invasive
prosthetic hand works with several electrodes, placed on the stump of an
amputee, that record this signal. In order to facilitate the control of the prosthesis,
the EMG signal is analyzed with algorithms based on machine learning theory to
decide the movement that the subject is going to do. In order to obtain a
significant control of the prosthesis and avoid mismatch between desired and
performed movements, a long training period is needed when we use the
traditional algorithm of machine learning (i.e. Support Vector Machines). An
actual challenge in this field concerns the reduction of the time necessary for
an amputee to learn how to use the prosthesis. Recently, several algorithms
that exploit a form of prior knowledge have been proposed. In general, we refer
to prior knowledge as a past experience available in the form of models. In our
case an amputee, that attempts to perform some movements with the prosthesis,
could use experience from different subjects that are already able to perform
those movements. The aim of this work is to verify, with a computational
investigation, if for an amputee this kind of previous experience is useful in
order to reduce the training time and boost the prosthetic control.
Furthermore, we want to understand if and how the final results change when the
previous knowledge of intact or amputated subjects is used for a new amputee.
Our experiments indicate that: (1) the use of experience, from other subjects
already trained to perform a task, allows us to reduce the training time by
about an order of magnitude; (2) it seems that an amputee who tries to learn
to use the prosthesis does not obtain different results whether he/she exploits
the previous experience of amputated or intact subjects. |
c_248431 | We propose a Label Propagation based algorithm for weakly supervised text
classification. We construct a graph where each document is represented by a
node and edge weights represent similarities among the documents. Additionally,
we discover underlying topics using Latent Dirichlet Allocation (LDA) and
enrich the document graph by including the topics in the form of additional
nodes. The edge weights between a topic and a text document represent level of
"affinity" between them. Our approach does not require document level
labelling, instead it expects manual labels only for topic nodes. This
significantly minimizes the level of supervision needed as only a few topics
are observed to be enough for achieving sufficiently high accuracy. The Label
Propagation Algorithm is employed on this enriched graph to propagate labels
among the nodes. Our approach combines the advantages of Label Propagation
(through document-document similarities) and Topic Modelling (for minimal but
smart supervision). We demonstrate the effectiveness of our approach on various
datasets and compare with state-of-the-art weakly supervised text
classification approaches. |
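A small numpy sketch of the propagation step used in c_248431 above: documents and LDA topic nodes share one affinity graph, only the topic nodes carry seed labels, and Zhou-style label propagation spreads them to the documents. The toy graph, the normalisation, and the alpha value are assumptions for illustration.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=100):
    # Zhou-style propagation: F <- alpha * S @ F + (1 - alpha) * Y,
    # with S the symmetrically normalised affinity matrix D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)

# Toy graph: 4 document nodes followed by 2 topic nodes; edges encode similarity
# (document-document) or affinity (document-topic). Only the topic nodes are labelled.
W = np.array([[0, 1, 0, 0, 1, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[4, 0] = 1.0    # topic node 4 -> class 0
Y[5, 1] = 1.0    # topic node 5 -> class 1
print(propagate_labels(W, Y))   # documents 0,1 inherit class 0; documents 2,3 inherit class 1
```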
c_226506 | Generative models are known to be difficult to assess. Recent works,
especially on generative adversarial networks (GANs), produce good visual
samples of varied categories of images. However, the validation of their
quality is still difficult to define and there is no existing agreement on the
best evaluation process. This paper aims at making a step toward an objective
evaluation process for generative models. It presents a new method to assess a
trained generative model by evaluating the test accuracy of a classifier
trained with generated data. The test set is composed of real images.
Therefore, the classifier accuracy is used as a proxy to evaluate whether the
generative model fits the true data distribution. By comparing results with
different generated datasets we are able to classify and compare generative
models. The motivation of this approach is also to evaluate if generative
models can help discriminative neural networks to learn, i.e., measure if
training on generated data is able to make a model successful at testing on
real settings. Our experiments compare different generators from the
Variational Auto-Encoders (VAE) and Generative Adversarial Network (GAN)
frameworks on MNIST and fashion MNIST datasets. Our results show that none of
the generative models is able to completely replace true data for training a
discriminative model. But they also show that the initial GAN and WGAN are the
best choices to generate on MNIST database (Modified National Institute of
Standards and Technology database) and fashion MNIST database. |
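The evaluation protocol in c_226506 above is straightforward to sketch: train a classifier on samples drawn from the generative model and report its accuracy on held-out real data. The logistic-regression classifier and the toy Gaussian "generator" below are placeholders for the CNN classifiers and GAN/VAE generators used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def generative_model_score(sample_fn, X_real_test, y_real_test, n_per_class=500):
    # Train on generated (x, y) pairs, test on real data: the accuracy is the proxy score.
    Xs, ys = [], []
    for label in np.unique(y_real_test):
        Xs.append(sample_fn(label, n_per_class))
        ys.append(np.full(n_per_class, label))
    clf = LogisticRegression(max_iter=1000).fit(np.vstack(Xs), np.concatenate(ys))
    return accuracy_score(y_real_test, clf.predict(X_real_test))

# Toy demo: the "generator" samples Gaussians around per-class means.
rng = np.random.default_rng(0)
means = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 3.0])}
sample_fn = lambda label, n: rng.normal(means[label], 1.0, size=(n, 2))
X_test = np.vstack([rng.normal(means[c], 1.0, size=(200, 2)) for c in (0, 1)])
y_test = np.repeat([0, 1], 200)
print("proxy accuracy:", round(generative_model_score(sample_fn, X_test, y_test), 3))
```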
c_244011 | In this work, a region-based Deep Convolutional Neural Network framework is
proposed for document structure learning. The contribution of this work
involves efficient training of region based classifiers and effective
ensembling for document image classification. A primary level of `inter-domain'
transfer learning is used by exporting weights from a pre-trained VGG16
architecture on the ImageNet dataset to train a document classifier on whole
document images. Exploiting the nature of region based influence modelling, a
secondary level of `intra-domain' transfer learning is used for rapid training
of deep learning models for image segments. Finally, stacked generalization
based ensembling is utilized for combining the predictions of the base deep
neural network models. The proposed method achieves state-of-the-art accuracy
of 92.2% on the popular RVL-CDIP document image dataset, exceeding benchmarks
set by existing algorithms. |
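The first, "inter-domain" stage in c_244011 above amounts to standard transfer learning from ImageNet-pretrained VGG16 to the 16 RVL-CDIP classes. A minimal torchvision sketch of that stage is below; the frozen trunk, learning rate, and random input are illustrative choices, and the region-level "intra-domain" training and stacked ensembling are not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 16                                   # RVL-CDIP document categories
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # downloads ImageNet weights
for p in vgg.features.parameters():
    p.requires_grad = False                        # freeze the pretrained convolutional trunk
vgg.classifier[6] = nn.Linear(4096, num_classes)   # new head for document classes

optimizer = torch.optim.SGD((p for p in vgg.parameters() if p.requires_grad),
                            lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(2, 3, 224, 224)                    # placeholder document images
loss = criterion(vgg(x), torch.tensor([0, 7]))
loss.backward()
optimizer.step()
print("one fine-tuning step done, loss =", round(float(loss), 3))
```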
c_351738 | We consider a request processing system composed of organizations and their
servers connected by the Internet.
The latency a user observes is a sum of communication delays and the time
needed to handle the request on a server. The handling time depends on the
server congestion, i.e. the total number of requests a server must handle. We
analyze the problem of balancing the load in a network of servers in order to
minimize the total observed latency. We consider both cooperative and selfish
organizations (each organization aiming to minimize the latency of the
locally-produced requests). The problem can be generalized to the task
scheduling in a distributed cloud; or to content delivery in an
organizationally-distributed CDNs.
In a cooperative network, we show that the problem is polynomially solvable.
We also present a distributed algorithm iteratively balancing the load. We show
how to estimate the distance between the current solution and the optimum based
on the amount of load exchanged by the algorithm. During the experimental
evaluation, we show that the distributed algorithm is efficient, therefore it
can be used in networks with dynamically changing loads.
In a network of selfish organizations, we prove that the price of anarchy
(the worst-case loss of performance due to selfishness) is low when the network
is homogeneous and the servers are loaded (the request handling time is high
compared to the communication delay). After relaxing these assumptions, we
assess the loss of performance caused by the selfishness experimentally,
showing that it remains low.
Our results indicate that a network of servers handling requests can be
efficiently managed by a distributed algorithm. Additionally, even if the
network is organizationally distributed, with individual organizations
optimizing performance of their requests, the network remains efficient. |
c_216063 | Adam is shown not being able to converge to the optimal solution in certain
cases. Researchers recently propose several algorithms to avoid the issue of
non-convergence of Adam, but their efficiency turns out to be unsatisfactory in
practice. In this paper, we provide new insight into the non-convergence issue
of Adam as well as other adaptive learning rate methods. We argue that there
exists an inappropriate correlation between gradient $g_t$ and the
second-moment term $v_t$ in Adam ($t$ is the timestep), which results in a
large gradient being likely to have a small step size while a small gradient
may have a large step size. We demonstrate that such biased step sizes are the
fundamental cause of non-convergence of Adam, and we further prove that
decorrelating $v_t$ and $g_t$ will lead to unbiased step size for each
gradient, thus solving the non-convergence problem of Adam. Finally, we propose
AdaShift, a novel adaptive learning rate method that decorrelates $v_t$ and
$g_t$ by temporal shifting, i.e., using temporally shifted gradient $g_{t-n}$
to calculate $v_t$. The experiment results demonstrate that AdaShift is able to
address the non-convergence issue of Adam, while still maintaining a
competitive performance with Adam in terms of both training speed and
generalization. |
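A simplified sketch of the temporal-shift idea described in c_216063 above: the second-moment estimate v_t is updated with the gradient from n steps earlier, so the step size applied to the current gradient g_t is decorrelated from it. The scalar v, the SGD warm-up, and the toy quadratic are simplifications for illustration; the published AdaShift includes further block-wise reductions not shown here.

```python
import numpy as np
from collections import deque

def adashift_like(grad_fn, x0, lr=0.01, beta2=0.999, n=10, steps=1000, eps=1e-8):
    x = np.array(x0, dtype=float)
    v, buffer = 0.0, deque(maxlen=n)          # buffer holds the most recent n gradients
    for _ in range(steps):
        g = grad_fn(x)
        if len(buffer) == n:                  # adapt only once g_{t-n} is available
            g_shifted = buffer[0]             # the n-steps-old gradient
            v = beta2 * v + (1 - beta2) * np.mean(g_shifted ** 2)
            x -= lr * g / (np.sqrt(v) + eps)
        else:
            x -= lr * g                       # plain SGD warm-up before the buffer fills
        buffer.append(g)
    return x

# Toy quadratic f(x) = ||x||^2: the iterate converges toward the origin.
x_final = adashift_like(lambda x: 2 * x, x0=[5.0, -3.0])
print("distance to optimum:", float(np.linalg.norm(x_final)))
```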
c_237218 | Nowadays, the Internet represents a vast informational space, growing
exponentially and the problem of search for relevant data becomes essential as
never before. The algorithm proposed in the article allows to perform natural
language queries on content of the document and get comprehensive meaningful
answers. The problem is partially solved for English as SQuAD contains enough
data to learn from, but there is no such dataset for Russian, so currently used
methods are not applicable to Russian. The Brain2 framework allows one to
cope with the problem - it stands out for its ability to be applied on small
datasets and does not require impressive computing power. The algorithm is
illustrated on Sberbank of Russia Strategy's text and assumes the use of a
neuromodel consisting of 65 million synapses. The trained model is able to
construct word-by-word answers to questions based on a given text. The existing
limitations are its current inability to identify synonyms, pronoun relations
and allegories. Nevertheless, the results of conducted experiments showed high
capacity and generalisation ability of the suggested approach. |
c_156881 | Complex deep learning models now achieve state of the art performance for
many document retrieval tasks. The best models process the query or claim
jointly with the document. However for fast scalable search it is desirable to
have document embeddings which are independent of the claim. In this paper we
show that knowledge distillation can be used to encourage a model that
generates claim independent document encodings to mimic the behavior of a more
complex model which generates claim dependent encodings. We explore this
approach in document retrieval for a fact extraction and verification task. We
show that by using the soft labels from a complex cross attention teacher
model, the performance of claim independent student LSTM or CNN models is
improved across all the ranking metrics. The student models we use are 12x
faster in runtime and 20x smaller in the number of parameters than the teacher. |
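One common way to realise the soft-label distillation described in c_156881 above is a listwise KL loss: the student's scores over a claim's candidate documents are pushed toward the teacher's softened score distribution. The temperature and the random scores below are illustrative; the abstract does not spell out the exact loss, so this is a generic sketch, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def listwise_distillation_loss(student_scores, teacher_scores, T=2.0):
    # Match the teacher's softened score distribution over each candidate list.
    p_teacher = F.softmax(teacher_scores / T, dim=-1)
    log_p_student = F.log_softmax(student_scores / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Toy usage: a batch of 4 claims, each with 10 candidate documents.
teacher = torch.randn(4, 10)                        # scores from the cross-attention teacher
student = torch.randn(4, 10, requires_grad=True)    # scores from the claim-independent student
loss = listwise_distillation_loss(student, teacher)
loss.backward()
print("distillation loss:", round(float(loss), 3))
```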
c_219618 | Neural network-based methods for abstractive summarization produce outputs
that are more fluent than other techniques, but which can be poor at content
selection. This work proposes a simple technique for addressing this issue: use
a data-efficient content selector to over-determine phrases in a source
document that should be part of the summary. We use this selector as a
bottom-up attention step to constrain the model to likely phrases. We show that
this approach improves the ability to compress text, while still generating
fluent summaries. This two-step process is both simpler and higher performing
than other end-to-end content selection models, leading to significant
improvements on ROUGE for both the CNN-DM and NYT corpus. Furthermore, the
content selector can be trained with as little as 1,000 sentences, making it
easy to transfer a trained summarizer to a new domain. |
c_233468 | The concept of a decentralized ledger usually implies that each node of a
blockchain network stores the entire blockchain. However, in the case of
popular blockchains, which each weigh several hundred GB, the large amount
of data to be stored can incite new or low-capacity nodes to run lightweight
clients. Such nodes do not participate to the global storage effort and can
result in a centralization of the blockchain by very few nodes, which is
contrary to the basic concepts of a blockchain.
To avoid this problem, we propose new low storage nodes that store a reduced
amount of data generated from the blockchain by using erasure codes. The
properties of this technique ensure that any block of the chain can be easily
rebuilt from a small number of such nodes. This system should encourage low
storage nodes to contribute to the storage of the blockchain and to maintain
decentralization despite the globally increasing size of the blockchain. This
system paves the way to new types of blockchains which would only be managed by
low capacity nodes. |
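A toy k-of-n erasure code illustrating the storage scheme in c_233468 above: the k symbols of a block define a polynomial over a prime field, each low storage node keeps one evaluation, and any k stored evaluations rebuild the block by interpolation. Real systems would use an optimised Reed-Solomon code over GF(2^8) rather than this pure-Python sketch.

```python
P = 257  # prime field large enough to hold byte-valued symbols

def _lagrange_eval(points, x):
    # Evaluate, at x, the unique polynomial through the given (xi, yi) points mod P.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(symbols, n):
    # Data symbols sit at x = 0..k-1; the n node shares are evaluations at x = k..k+n-1.
    data_pts = list(enumerate(symbols))
    k = len(symbols)
    return [(x, _lagrange_eval(data_pts, x)) for x in range(k, k + n)]

def decode(any_k_points, k):
    # Rebuild the original k data symbols from any k stored evaluations.
    return [_lagrange_eval(any_k_points, x) for x in range(k)]

block = list(b"hello")                   # k = 5 data symbols
shares = encode(block, n=8)              # 8 shares, one per low storage node
print(bytes(decode(shares[2:7], k=5)))   # any 5 of the 8 shares rebuild b'hello'
```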
c_68225 | In this paper, we proposed a new technique, {\em variance controlled
stochastic gradient} (VCSG), to improve the performance of the stochastic
variance reduced gradient (SVRG) algorithm. To avoid over-reducing the variance
of gradient by SVRG, a hyper-parameter $\lambda$ is introduced in VCSG that is
able to control the reduced variance of SVRG. Theory shows that the
optimization method can converge by using an unbiased gradient estimator, but
in practice, biased gradient estimation can allow more efficient convergence to
the vicinity since an unbiased approach is computationally more expensive.
$\lambda$ also has the effect of balancing the trade-off between unbiased and
biased estimations. Secondly, to minimize the number of full gradient
calculations in SVRG, a variance-bounded batch is introduced to reduce the
number of gradient calculations required in each iteration. For smooth
non-convex functions, the proposed algorithm converges to an approximate
first-order stationary point (i.e.
$\mathbb{E}\|\nabla{f}(x)\|^{2}\leq\epsilon$) within
$\mathcal{O}(\min\{1/\epsilon^{3/2},n^{1/4}/\epsilon\})$ stochastic gradient
evaluations, which improves upon the leading gradient complexity
$\mathcal{O}(\min\{1/\epsilon^{5/3},n^{2/3}/\epsilon\})$ of the stochastic
gradient-based method SCS. It is shown
theoretically and experimentally that VCSG can be deployed to improve
convergence. |
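A sketch, under stated assumptions, of an SVRG-style inner loop in which a hyper-parameter $\lambda$ scales the variance-reduction correction, to illustrate the role $\lambda$ plays; the exact VCSG estimator and the variance-bounded batch rule are not reproduced here.

```python
# SVRG-style inner loop with a hyper-parameter lambda scaling the
# variance-reduction correction, to illustrate the role of lambda. The estimator
# form, batching rule, and all constants are illustrative assumptions, not the
# paper's exact VCSG algorithm.
import numpy as np

def svrg_with_lambda(grad_i, x0, n, lam=0.5, step=0.05, epochs=10, inner=50, rng=None):
    """
    grad_i(x, i): gradient of the i-th component function at x
    n:            number of component functions
    lam:          0 -> plain SGD estimator, 1 -> standard SVRG estimator
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            g = grad_i(x, i) - lam * (grad_i(snapshot, i) - full_grad)
            x -= step * g
    return x

# Toy least-squares problem: f_i(x) = 0.5 * (a_i . x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = svrg_with_lambda(grad_i, np.zeros(5), n=100)
print(np.linalg.norm(A @ x_hat - b))
```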
c_156911 | Novel contexts often arise in complex querying scenarios such as
evidence-based medicine (EBM) involving biomedical literature, where queries
may not explicitly refer to entities or canonical concept forms occurring in
any fact- or rule-based knowledge source, such as an ontology like the UMLS.
Moreover, hidden associations between candidate concepts that are meaningful in
the current context may not exist within a single document, but only across the
collection, via alternate lexical forms. Therefore, inspired by the recent success of
sequence-to-sequence neural models in delivering the state-of-the-art in a wide
range of NLP tasks, we develop a novel sequence-to-set framework with neural
attention for learning document representations that can effect term transfer
within the corpus, for semantically tagging a large collection of documents. We
demonstrate that our proposed method can be effective in both a supervised
multi-label classification setup for text categorization, as well as in a
unique unsupervised setting with no human-annotated document labels that uses
no external knowledge resources and only corpus-derived term statistics to
drive the training. Further, we show that semi-supervised training using our
architecture on large amounts of unlabeled data can augment performance on the
text categorization task when limited labeled data is available. Our approach
of generating document encodings with our sequence-to-set models for inference
of semantic tags gives, to the best of our knowledge, state-of-the-art results
both on the unsupervised query expansion task for the TREC CDS 2016 challenge
dataset, when evaluated on an Okapi BM25-based document retrieval system, and
over the MLTM baseline (Soleimani et al., 2016) on the supervised and
semi-supervised multi-label prediction tasks on the del.icio.us and Ohsumed
datasets. We will make our code and data publicly available. |
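A minimal sketch of a sequence-to-set style tagger along the lines described above: an attention-pooled recurrent encoder maps the token sequence to a document vector, and a sigmoid output layer predicts an unordered set of semantic tags as a multi-label problem. The sizes and pooling scheme are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a sequence-to-set style tagger: an attention-pooled encoder
# maps a token sequence to a document vector, and a sigmoid layer predicts an
# unordered set of tags (multi-label). Sizes and pooling are assumptions.
import torch
import torch.nn as nn

class Seq2SetTagger(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hid_dim=128, num_tags=500):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid_dim, 1)      # token-level attention scores
        self.out = nn.Linear(2 * hid_dim, num_tags)

    def forward(self, tokens):                     # (batch, seq_len)
        h, _ = self.gru(self.emb(tokens))          # (batch, seq_len, 2*hid)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=-1)
        doc_vec = (weights.unsqueeze(-1) * h).sum(dim=1)
        return self.out(doc_vec)                   # tag logits; sigmoid -> set

model = Seq2SetTagger()
tokens = torch.randint(0, 30000, (4, 60))
tag_targets = torch.zeros(4, 500); tag_targets[:, :3] = 1.0   # toy multi-label target
loss = nn.functional.binary_cross_entropy_with_logits(model(tokens), tag_targets)
loss.backward()
```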
c_52486 | Graph Neural Networks (GNNs) have received significant attention due to their
state-of-the-art performance on various graph representation learning tasks.
However, recent studies reveal that GNNs are vulnerable to adversarial attacks,
i.e. an attacker is able to fool the GNNs by perturbing the graph structure or
node features deliberately. While being able to successfully decrease the
performance of GNNs, most existing attacking algorithms require access to
either the model parameters or the training data, which is not practical in the
real world.
In this paper, we develop deeper insights into the Mettack algorithm, which
is a representative grey-box attacking method, and then we propose a
gradient-based black-box attacking algorithm. Firstly, we show that the Mettack
algorithm will perturb the edges unevenly, thus the attack will be highly
dependent on a specific training set. As a result, a simple yet useful strategy
to defend against Mettack is to train the GNN with the validation set.
Secondly, to overcome these drawbacks, we propose the Black-Box Gradient Attack
(BBGA) algorithm. Extensive experiments demonstrate that our proposed method is
able to achieve stable attack performance without accessing the training sets
of the GNNs. Further results show that our proposed method is also applicable
when attacking models protected by various defense methods. |
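For context, a generic sketch of gradient-guided edge flipping on a small surrogate GCN with a dense adjacency matrix; it illustrates the family of gradient-based structure attacks this work builds on, and is explicitly not the paper's BBGA algorithm, which avoids access to the victim's parameters and training data.

```python
# Generic sketch of gradient-guided edge flipping on a dense-adjacency surrogate
# GCN. This illustrates the family of gradient-based structure attacks, NOT the
# paper's BBGA algorithm (which avoids access to model parameters/training data).
import torch
import torch.nn.functional as F

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN with symmetric normalization of A + I."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
    return A_norm @ F.relu(A_norm @ X @ W1) @ W2

def pick_edge_flip(A, X, W1, W2, labels, idx):
    """Return the (i, j) flip whose adjacency gradient most increases the loss."""
    A = A.clone().requires_grad_(True)
    logits = gcn_forward(A, X, W1, W2)
    loss = F.cross_entropy(logits[idx], labels[idx])
    grad = torch.autograd.grad(loss, A)[0]
    # Flipping 0->1 helps when the gradient is positive, 1->0 when it is negative.
    score = grad * (1 - 2 * A.detach())
    score.fill_diagonal_(-float("inf"))
    return divmod(int(score.argmax()), A.size(1))

# Toy graph: 6 nodes, random features/weights, first 3 nodes used as labeled set.
torch.manual_seed(0)
A = (torch.rand(6, 6) > 0.6).float(); A = torch.triu(A, 1); A = A + A.T
X, W1, W2 = torch.randn(6, 8), torch.randn(8, 16), torch.randn(16, 2)
labels, idx = torch.tensor([0, 1, 0, 1, 0, 1]), torch.arange(3)
print(pick_edge_flip(A, X, W1, W2, labels, idx))
```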
c_213344 | Document clustering is a text mining technique used to provide better
document search and browsing in digital libraries or online corpora. A lot of
research has been done on biomedical document clustering based on existing
ontologies. However, associations and co-occurrences of medical concepts are
not well represented by an ontology alone. In this research, a vector
representation of disease concepts and a similarity measure between concepts
are proposed; together they identify the closest disease concepts in the
context of a corpus. Each document is represented by using the vector space
model. A weight scheme is proposed to consider both local content and
associations between concepts. A Self-Organizing Map (SOM) is used as the
document clustering algorithm. The vector projection and visualization features
of the SOM enable visualization and analysis of the clusters' distributions and
relationships in the two-dimensional space. The experimental results show that
the proposed document clustering framework generates meaningful clusters and
facilitates visualization of the clusters based on the disease concepts. |
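A tiny NumPy self-organizing map, as a sketch of the clustering stage: document (or concept-weighted) vectors are mapped onto a 2-D grid whose cells act as clusters. The grid size, learning-rate decay, and Gaussian neighborhood are illustrative choices, not the paper's settings.

```python
# Tiny NumPy self-organizing map for clustering document vectors on a 2-D grid.
# Grid size, learning-rate decay, and the Gaussian neighborhood are illustrative.
import numpy as np

def train_som(docs, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, docs.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * np.exp(-3 * frac)
        x = docs[rng.integers(len(docs))]
        # Best-matching unit: grid cell whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Pull the BMU and its grid neighbors toward x.
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        influence = np.exp(-grid_d2 / (2 * sigma ** 2))
        weights += lr * influence[..., None] * (x - weights)
    return weights

def map_documents(docs, weights):
    """Assign each document to its best-matching grid cell (its cluster)."""
    d = np.linalg.norm(weights[None] - docs[:, None, None, :], axis=-1)
    return [np.unravel_index(d[i].argmin(), d[i].shape) for i in range(len(docs))]

docs = np.random.default_rng(1).normal(size=(200, 50))   # e.g., concept-weighted vectors
som = train_som(docs)
print(map_documents(docs[:5], som))
```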
c_315258 | Traditional Relational Topic Models provide a way to discover the hidden
topics from a document network. Many theoretical and practical tasks, such as
dimensionality reduction, document clustering, and link prediction, benefit
from this revealed knowledge. However, existing relational topic models are based on an
assumption that the number of hidden topics is known in advance, and this is
impractical in many real-world applications. Therefore, in order to relax this
assumption, we propose a nonparametric relational topic model in this paper.
Instead of using fixed-dimensional probability distributions in its generative
model, we use stochastic processes. Specifically, a gamma process is assigned
to each document, which represents the topic interest of this document.
Although this method provides an elegant solution, it brings additional
challenges when mathematically modeling the inherent network structure of a
typical document network, i.e., that two documents closer in the network tend
to have more similar topics. Furthermore, we require that the topics are shared by all
the documents. In order to resolve these challenges, we use a subsampling
strategy to assign each document a different gamma process from the global
gamma process, and the subsampling probabilities of documents are assigned with
a Markov Random Field constraint that inherits the document network structure.
Through the designed posterior inference algorithm, we can discover the hidden
topics and their number simultaneously. Experimental results on both synthetic
and real-world network datasets demonstrate the model's ability to learn the
hidden topics and, more importantly, the number of topics. |
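A hedged LaTeX sketch of the subsampling construction described above, with illustrative notation: a global gamma process supplies shared topic atoms, each document keeps an atom according to a document-specific inclusion probability, and those probabilities are coupled across linked documents through an MRF term.

```latex
% Hedged sketch of the subsampling construction (notation is illustrative):
% shared topic atoms from a global gamma process, per-document subsampling,
% and an MRF coupling of the subsampling probabilities along document links.
\begin{align*}
G_0 &= \sum_{k=1}^{\infty} \pi_k \,\delta_{\theta_k} \sim \Gamma\mathrm{P}(c, H)
      && \text{(global gamma process over topics)} \\
b_{d,k} &\sim \mathrm{Bernoulli}(q_d)
      && \text{(document-specific subsampling of atoms)} \\
G_d &= \sum_{k=1}^{\infty} b_{d,k}\, \pi_k \,\delta_{\theta_k}
      && \text{(topic interest of document } d\text{)} \\
p(q_1,\dots,q_D) &\propto \exp\Big(-\lambda \sum_{(d,d') \in E} (q_d - q_{d'})^2\Big)
      && \text{(MRF coupling along document links)}
\end{align*}
```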
c_270743 | The rapid increase of digitized documents gives rise to a high demand for
document image retrieval. While conventional document image retrieval
approaches depend on complex OCR-based text recognition and text similarity
detection, this paper proposes a new content-based approach, in which more
attention is paid to feature extraction and fusion. In the proposed approach, multiple features of
document images are extracted by different CNN models. After that, the
extracted CNN features are reduced and fused into a weighted average feature.
Finally, the document images are ranked based on feature similarity to a
provided query image. Experiments are performed on a group of document images
converted from academic papers, containing both English and Chinese documents;
the results show that the proposed approach has a good ability to retrieve
document images with similar text content, and that the fusion of CNN features
can effectively improve the retrieval accuracy. |
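A sketch of the fusion-and-ranking stage under the assumption that features from several CNNs have already been extracted: each feature set is L2-normalized and PCA-reduced, the reduced features are combined as a weighted average, and documents are ranked by cosine similarity to the query. The fusion weights and PCA dimension are placeholders.

```python
# Sketch of the fusion-and-ranking stage: per-model CNN features (assumed to be
# precomputed) are normalized, PCA-reduced, averaged with weights, and documents
# are ranked by cosine similarity to the query. Weights/dimensions are placeholders.
import numpy as np
from sklearn.decomposition import PCA

def fuse_features(feature_sets, weights, dim=128):
    """feature_sets: list of (n_images, d_k) arrays, one per CNN model."""
    reduced = []
    for feats in feature_sets:
        feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        n_comp = min(dim, *feats.shape)
        reduced.append(PCA(n_components=n_comp).fit_transform(feats))
    width = min(r.shape[1] for r in reduced)
    fused = sum(w * r[:, :width] for w, r in zip(weights, reduced))
    return fused / np.linalg.norm(fused, axis=1, keepdims=True)

def rank_by_similarity(fused, query_index):
    scores = fused @ fused[query_index]
    order = np.argsort(-scores)
    return [i for i in order if i != query_index]

# Toy example with random "CNN features" from two models.
rng = np.random.default_rng(0)
feats_a, feats_b = rng.normal(size=(50, 512)), rng.normal(size=(50, 2048))
fused = fuse_features([feats_a, feats_b], weights=[0.6, 0.4])
print(rank_by_similarity(fused, query_index=0)[:5])
```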
c_357922 | Latent topic models have been successfully applied as an unsupervised topic
discovery technique in large document collections. With the proliferation of
hypertext document collections such as the Internet, there has also been great
interest in extending these approaches to hypertext [6, 9]. These approaches
typically model links in an analogous fashion to how they model words - the
document-link co-occurrence matrix is modeled in the same way that the
document-word co-occurrence matrix is modeled in standard topic models. In this
paper we present a probabilistic generative model for hypertext document
collections that explicitly models the generation of links. Specifically, links
from a word w to a document d depend directly on how frequent the topic of w is
in d, in addition to the in-degree of d. We show how to perform EM learning on
this model efficiently. By not modeling links as analogous to words, we end up
using far fewer free parameters and obtain better link prediction results. |
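A hedged LaTeX sketch of the link-generation idea stated above; the exact parameterization is an assumption, but it captures the dependence on both the topic usage of the target document and its in-degree.

```latex
% Hedged sketch of the link-generation idea (parameterization is illustrative):
% a link from word w (with topic z_w) points to target document d with
% probability growing in d's usage of topic z_w and in d's in-degree.
\[
  p(d \mid w) \;\propto\; \theta_{d, z_w} \cdot \mathrm{indeg}(d),
  \qquad
  \theta_{d, z} = \text{proportion of topic } z \text{ in document } d .
\]
```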
c_328305 | In both the fields of computer science and medicine there is very strong
interest in developing personalized treatment policies for patients who have
variable responses to treatments. In particular, I aim to find an optimal
personalized treatment policy which is a non-deterministic function of the
patient specific covariate data that maximizes the expected survival time or
clinical outcome. I developed an algorithmic framework to solve multistage
decision problem with a varying number of stages that are subject to censoring
in which the "rewards" are expected survival times. Specifically, I developed a
novel Q-learning algorithm that dynamically adjusts for these parameters.
Furthermore, I found finite upper bounds on the generalized error of the
treatment paths constructed by this algorithm. I have also shown that when the
optimal Q-function is an element of the approximation space, the anticipated
survival times for the treatment regime constructed by the algorithm will
converge to those of the optimal treatment path. I demonstrated the performance of the
proposed algorithmic framework via simulation studies and through the analysis
of chronic depression data and a hypothetical clinical trial. The censored
Q-learning algorithm I developed is more effective than state-of-the-art
clinical decision support systems and is able to operate in environments where
many covariate parameters may be unobtainable or censored. |
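For orientation, a plain two-stage fitted-Q backward induction with off-the-shelf regressors. It deliberately omits the censoring adjustment that is central to this work (e.g., inverse-probability-of-censoring weighting), and the simulated data, variable names, and reward structure are illustrative assumptions.

```python
# Plain two-stage fitted-Q backward induction with off-the-shelf regressors.
# The censoring adjustment central to the paper is omitted; data, names, and
# reward structure are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
s1 = rng.normal(size=(n, 3))                  # stage-1 covariates
a1 = rng.integers(0, 2, size=n)               # stage-1 treatment (0/1)
s2 = rng.normal(size=(n, 3))                  # stage-2 covariates
a2 = rng.integers(0, 2, size=n)               # stage-2 treatment (0/1)
survival = np.exp(0.3 * s2[:, 0] * a2 + 0.2 * s1[:, 0] * a1 + rng.normal(size=n))

# Stage 2: regress observed survival on (state, action), then maximize over actions.
X2 = np.column_stack([s2, a2])
q2 = RandomForestRegressor(n_estimators=100, random_state=0).fit(X2, survival)
v2 = np.maximum(q2.predict(np.column_stack([s2, np.zeros(n)])),
                q2.predict(np.column_stack([s2, np.ones(n)])))

# Stage 1: pseudo-outcome is the stage-2 value; fit Q1 and read off the policy.
X1 = np.column_stack([s1, a1])
q1 = RandomForestRegressor(n_estimators=100, random_state=0).fit(X1, v2)
policy1 = (q1.predict(np.column_stack([s1, np.ones(n)])) >
           q1.predict(np.column_stack([s1, np.zeros(n)]))).astype(int)
print("share treated at stage 1:", policy1.mean())
```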
c_275272 | Document categorization is a technique where the category of a document is
determined. In this paper, three well-known supervised learning techniques,
Support Vector Machine (SVM), Na\"ive Bayes (NB), and Stochastic Gradient
Descent (SGD), are compared for Bengali document categorization. Besides the
classifier, classification also depends on how features are selected from the
dataset. To analyze the performance of these classifiers in predicting a
document against twelve categories, several feature selection techniques are
also applied in this article, namely the chi-square statistic and normalized
TF-IDF (term frequency-inverse document frequency) with a word analyzer. We
thus attempt to explore the efficiency of these three classification algorithms
using two different feature selection techniques. |
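A sketch of the compared setups using scikit-learn: TF-IDF features with a chi-square filter feeding SVM, Naive Bayes, and SGD classifiers. The toy documents, the value of k, and all hyper-parameters are placeholders rather than the article's configuration.

```python
# Sketch of the compared setups: TF-IDF features + chi-square selection feeding
# SVM, Naive Bayes, and SGD classifiers. Toy data, k, and hyper-parameters are
# placeholders, not the article's configuration.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier

docs = ["khela dekhte valo lage", "rajniti niye alochona", "notun cinema mukti pelo"]
labels = ["sports", "politics", "entertainment"]     # placeholder toy data

classifiers = {
    "SVM": LinearSVC(),
    "NB": MultinomialNB(),
    "SGD": SGDClassifier(loss="hinge"),
}
for name, clf in classifiers.items():
    pipe = Pipeline([
        ("tfidf", TfidfVectorizer(analyzer="word")),   # word analyzer, normalized TF-IDF
        ("chi2", SelectKBest(chi2, k=5)),              # chi-square feature selection
        ("clf", clf),
    ])
    pipe.fit(docs, labels)
    print(name, pipe.predict(["notun khela"]))
```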
c_182677 | Modern entity linking systems rely on large collections of documents
specifically annotated for the task (e.g., AIDA CoNLL). In contrast, we propose
an approach which exploits only naturally occurring information: unlabeled
documents and Wikipedia. Our approach consists of two stages. First, we
construct a high recall list of candidate entities for each mention in an
unlabeled document. Second, we use the candidate lists as weak supervision to
constrain our document-level entity linking model. The model treats entities as
latent variables and, when estimated on a collection of unlabeled texts,
learns to choose entities relying both on local context of each mention and on
coherence with other entities in the document. The resulting approach rivals
fully-supervised state-of-the-art systems on standard test sets. It also
approaches their performance in the very challenging setting: when tested on a
test set sampled from the data used to estimate the supervised systems. By
comparing to Wikipedia-only training of our model, we demonstrate that modeling
unlabeled documents is beneficial. |
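A sketch of the weak-supervision objective described above: the entity behind a mention is treated as latent, and training maximizes the marginal (log-sum-exp) probability assigned to the mention's candidate list. The scoring model and candidate generation are illustrative assumptions, not the paper's model.

```python
# Sketch of the weak-supervision idea: treat the entity behind a mention as a
# latent variable and maximize the marginal probability assigned to the
# mention's candidate list. Scoring model and candidates are illustrative.
import torch
import torch.nn as nn

class MentionEntityScorer(nn.Module):
    def __init__(self, num_entities=10000, ctx_dim=64, ent_dim=64):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, ent_dim)
        self.proj = nn.Linear(ctx_dim, ent_dim)

    def forward(self, mention_ctx, entity_ids):
        """mention_ctx: (batch, ctx_dim); entity_ids: (batch, n_candidates)"""
        q = self.proj(mention_ctx).unsqueeze(1)          # (batch, 1, ent_dim)
        e = self.entity_emb(entity_ids)                  # (batch, n_cand, ent_dim)
        return (q * e).sum(-1)                           # candidate scores

def weak_supervision_loss(scores_all, candidate_mask):
    """Push probability mass onto the candidate set (the entity stays latent)."""
    log_probs = torch.log_softmax(scores_all, dim=-1)
    cand_logprob = torch.logsumexp(log_probs.masked_fill(~candidate_mask, -1e9), dim=-1)
    return -cand_logprob.mean()

# Toy usage: score 20 entities per mention, of which the first 4 are candidates.
model = MentionEntityScorer()
ctx = torch.randn(8, 64)
entity_ids = torch.randint(0, 10000, (8, 20))
scores = model(ctx, entity_ids)
mask = torch.zeros(8, 20, dtype=torch.bool); mask[:, :4] = True
loss = weak_supervision_loss(scores, mask)
loss.backward()
```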
c_166518 | Information from different fields collected by users requires appropriate
management and organization so that it can be structured in a standard way and
retrieved quickly and easily. Document classification is a conventional method
for separating texts by subject among scientific articles, web pages, and
digital libraries. Different methods and techniques have been proposed for
document classification, each with its own advantages and deficiencies. In this paper,
several unsupervised and supervised document classification methods are studied
and compared. |