(Column types: ID — int64, 1 to 21k; TITLE — string, 7 to 239 chars; ABSTRACT — string, 7 to 2.76k chars; each subject column — int64, 0 or 1.)

ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance
---|---|---|---|---|---|---|---|---|
17,301 | Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection | We introduce Deep-HiTS, a rotation invariant convolutional neural network
(CNN) model for classifying images of transient candidates into artifacts or
real sources for the High cadence Transient Survey (HiTS). CNNs have the
advantage of learning the features automatically from the data while achieving
high performance. We compare our CNN model against a feature engineering
approach using random forests (RF). We show that our CNN significantly
outperforms the RF model, reducing the error by almost half. Furthermore, for a
fixed number of approximately 2,000 allowed false transient candidates per
night, we are able to reduce the misclassified real transients by
approximately 1/5. To the best of our knowledge, this is the first time CNNs
have been used to detect astronomical transient events. Our approach will be
very useful when processing images from next generation instruments such as the
Large Synoptic Survey Telescope (LSST). We have made all our code and data
available to the community for the sake of allowing further developments and
comparisons at this https URL.
| 1 | 1 | 0 | 0 | 0 | 0 |
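The rotation-invariance idea in entry 17,301 can be approximated, outside the paper's own architecture, by averaging a classifier's real/bogus score over the four 90-degree rotations of a candidate stamp. The sketch below illustrates only that general idea and is not the Deep-HiTS code; `predict_proba` is a hypothetical single-image classifier supplied by the caller.

```python
import numpy as np

def rotation_averaged_score(stamp, predict_proba):
    """Average a classifier's real/bogus score over the four 90-degree
    rotations of a candidate stamp (illustrative sketch only)."""
    scores = [predict_proba(np.rot90(stamp, k)) for k in range(4)]
    return float(np.mean(scores))

# Toy usage with a stand-in "classifier" that just scores central brightness.
rng = np.random.default_rng(0)
stamp = rng.normal(size=(21, 21))
fake_classifier = lambda img: 1.0 / (1.0 + np.exp(-img[10, 10]))
print(rotation_averaged_score(stamp, fake_classifier))
```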
17,302 | On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL | In various approaches to learning, notably in domain adaptation, active
learning, learning under covariate shift, semi-supervised learning, learning
with concept drift, and the like, one often wants to compare a baseline
classifier to one or more advanced (or at least different) strategies. In this
chapter, we argue that if such classifiers, in their respective
training phases, optimize a so-called surrogate loss, then it may also be
valuable to compare the behavior of this loss on the test set, next to the
regular classification error rate. It can provide us with an additional view on
the classifiers' relative performances that error rates cannot capture. As an
example, limited but convincing empirical results demonstrate that we may be
able to find semi-supervised learning strategies that can guarantee performance
improvements with increasing amounts of unlabeled data in terms of
log-likelihood. In contrast, the latter may be impossible to guarantee for the
classification error rate.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,303 | Do Reichenbachian Common Cause Systems of Arbitrary Finite Size Exist? | The principle of common cause asserts that positive correlations between
causally unrelated events ought to be explained through the action of some
shared causal factors. Reichenbachian common cause systems are probabilistic
structures aimed at accounting for cases where correlations of the aforesaid
sort cannot be explained through the action of a single common cause. The
existence of Reichenbachian common cause systems of arbitrary finite size for
each pair of non-causally correlated events was allegedly demonstrated by
Hofer-Szabó and Rédei in 2006. This paper shows that their proof is
logically deficient, and we propose an improved proof.
| 1 | 1 | 0 | 1 | 0 | 0 |
17,304 | Co-evolution of nodes and links: diversity driven coexistence in cyclic competition of three species | When three species compete cyclically in a well-mixed, stochastic system of
$N$ individuals, extinction is known to typically occur at times scaling as the
system size $N$. This happens, for example, in rock-paper-scissors games or
conserved Lotka-Volterra models in which every pair of individuals can interact
on a complete graph. Here we show that if the competing individuals also have a
"social temperament" to be either introverted or extroverted, leading them to
cut or add links respectively, then long-lived states in which all species
coexist can occur when both introverts and extroverts are present. These states
are non-equilibrium quasi-steady states, maintained by a subtle balance between
species competition and network dynamics. Remarkably, much of this phenomenology is
captured by a mean-field description. However, an intuitive understanding of
why diversity stabilizes the co-evolving node and link dynamics remains an open
issue.
| 0 | 0 | 0 | 0 | 1 | 0 |
17,305 | Online Learning with an Almost Perfect Expert | We study the multiclass online learning problem where a forecaster makes a
sequence of predictions using the advice of $n$ experts. Our main contribution
is to analyze the regime where the best expert makes at most $b$ mistakes and
to show that when $b = o(\log_4{n})$, the expected number of mistakes made by
the optimal forecaster is at most $\log_4{n} + o(\log_4{n})$. We also describe
an adversary strategy showing that this bound is tight and that the worst case
is attained for binary prediction.
| 0 | 0 | 0 | 1 | 0 | 0 |
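For readers unfamiliar with the prediction-with-expert-advice setting analysed in entry 17,305, the classic deterministic weighted majority forecaster is a convenient baseline. The sketch below implements that standard baseline only; it is not the paper's optimal forecaster and makes no claim about its $\log_4 n$ bound. The penalty parameter `eta` is arbitrary.

```python
import numpy as np

def weighted_majority(expert_preds, outcomes, eta=0.5):
    """Classic weighted majority over binary experts.

    expert_preds: (T, n) array of 0/1 expert predictions.
    outcomes:     (T,) array of 0/1 true labels.
    Returns the number of mistakes made by the forecaster.
    """
    T, n = expert_preds.shape
    w = np.ones(n)
    mistakes = 0
    for t in range(T):
        vote_one = w[expert_preds[t] == 1].sum()   # weighted vote for 1
        vote_zero = w[expert_preds[t] == 0].sum()  # weighted vote for 0
        pred = 1 if vote_one >= vote_zero else 0
        if pred != outcomes[t]:
            mistakes += 1
        w[expert_preds[t] != outcomes[t]] *= (1.0 - eta)  # penalise erring experts
    return mistakes

# Toy run with one (almost) perfect expert among random ones.
rng = np.random.default_rng(1)
outcomes = rng.integers(0, 2, size=200)
experts = rng.integers(0, 2, size=(200, 8))
experts[:, 0] = outcomes
print(weighted_majority(experts, outcomes))
```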
17,306 | Actively Learning what makes a Discrete Sequence Valid | Deep learning techniques have been hugely successful for traditional
supervised and unsupervised machine learning problems. In large part, these
techniques solve continuous optimization problems. Recently however, discrete
generative deep learning models have been successfully used to efficiently
search high-dimensional discrete spaces. These methods work by representing
discrete objects as sequences, for which powerful sequence-based deep models
can be employed. Unfortunately, these techniques are significantly hindered by
the fact that these generative models often produce invalid sequences. As a
step towards solving this problem, we propose to learn a deep recurrent
validator model. Given a partial sequence, our model learns the probability of
that sequence occurring as the beginning of a full valid sequence. Thus this
identifies valid versus invalid sequences and crucially it also provides
insight about how individual sequence elements influence the validity of
discrete objects. To learn this model we propose an approach inspired by
seminal work in Bayesian active learning. On a synthetic dataset, we
demonstrate the ability of our model to distinguish valid and invalid
sequences. We believe this is a key step toward learning generative models that
faithfully produce valid discrete objects.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,307 | Symmetries and conservation laws of Hamiltonian systems | In this paper we study the infinitesimal symmetries, Newtonoid vector fields,
infinitesimal Noether symmetries and conservation laws of Hamiltonian systems.
Using the dynamical covariant derivative and Jacobi endomorphism on the
cotangent bundle we find the invariant equations of infinitesimal symmetries
and Newtonoid vector fields and prove that the canonical nonlinear connection
induced by a regular Hamiltonian can be determined by these symmetries.
Finally, an example from optimal control theory is given.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,308 | Fractional differential and fractional integral modified-Bloch equations for PFG anomalous diffusion and their general solutions | The study of anomalous diffusion by the pulsed field gradient (PFG) diffusion
technique still faces challenges. Two different research groups have proposed
modified Bloch equations for anomalous diffusion. However, these equations have
different forms and, therefore, yield inconsistent results. The discrepancy in
these reported modified Bloch equations may arise from different ways of
combining the fractional diffusion equation with the precession equation where
the time derivatives have different derivative orders and forms. Moreover, to
the best of my knowledge, the general PFG signal attenuation expression
including finite gradient pulse width (FGPW) effect for time-space fractional
diffusion based on the fractional derivative has yet to be reported by other
methods. Here, based on a different combination strategy, two new modified Bloch
equations are proposed, which belong to two significantly different types: a
differential type based on the fractal derivative and an integral type based on
the fractional derivative. The merit of the integral type modified Bloch
equation is that the original properties of the contributions from linear or
nonlinear processes remain unchanged at the instant of the combination. The
general solutions including the FGPW effect were derived from these two
equations as well as from two other methods: a method observing the signal
intensity at the origin and the recently reported effective phase shift
diffusion equation method. The relaxation effect was also considered. It is
found that the relaxation behavior influenced by fractional diffusion based on
the fractional derivative deviates from that of normal diffusion. The general
solution agrees perfectly with continuous-time random walk (CTRW) simulations
as well as with reported literature results. The new modified Bloch equations are a
valuable tool for describing PFG anomalous diffusion in NMR and MRI.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,309 | Change of the vortex core structure in two-band superconductors at impurity-scattering-driven $s_\pm/s_{++}$ crossover | We report a nontrivial transition in the core structure of vortices in
two-band superconductors as a function of interband impurity scattering. We
demonstrate that, in addition to singular zeros of the order parameter, the
vortices there can acquire a circular nodal line around the singular point in
one of the superconducting components. This results in the formation of a
peculiar "moat"-like profile in one of the superconducting gaps. The moat-core
vortices occur generically in the vicinity of the impurity-induced crossover
between $s_{\pm}$ and $s_{++}$ states.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,310 | Nearly Optimal Adaptive Procedure with Change Detection for Piecewise-Stationary Bandit | Multi-armed bandit (MAB) is a class of online learning problems where a
learning agent aims to maximize its expected cumulative reward while repeatedly
selecting to pull arms with unknown reward distributions. We consider a
scenario where the reward distributions may change in a piecewise-stationary
fashion at unknown time steps. We show that by incorporating a simple
change-detection component with classic UCB algorithms to detect and adapt to
changes, our so-called M-UCB algorithm can achieve a nearly optimal regret bound
on the order of $O(\sqrt{MKT\log T})$, where $T$ is the number of time steps,
$K$ is the number of arms, and $M$ is the number of stationary segments.
Comparison with the best available lower bound shows that our M-UCB is nearly
optimal in $T$ up to a logarithmic factor. We also compare M-UCB with the
state-of-the-art algorithms in numerical experiments using a public Yahoo!
dataset to demonstrate its superior performance.
| 0 | 0 | 0 | 1 | 0 | 0 |
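As a rough illustration of the idea in entry 17,310 of pairing UCB with a change detector, the sketch below runs standard UCB1 and restarts all arm statistics when the means of the two halves of a sliding window differ by more than a fixed threshold. The exact M-UCB test statistic, window length and thresholds in the paper differ; `w`, `threshold` and the reward model here are placeholders.

```python
import numpy as np

def ucb_with_change_detection(pull, K, T, w=100, threshold=0.5):
    """UCB1 with a naive sliding-window change detector (illustrative sketch).

    pull(t, a) -> reward in [0, 1] of arm a at time t.
    """
    counts, sums = np.zeros(K), np.zeros(K)
    windows = [[] for _ in range(K)]
    total = 0.0
    for t in range(T):
        if np.any(counts == 0):
            arm = t % K                                     # (re-)exploration
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(ucb))
        r = pull(t, arm)
        total += r
        counts[arm] += 1
        sums[arm] += r
        win = windows[arm]
        win.append(r)
        if len(win) > w:
            win.pop(0)
        if len(win) == w:
            first, second = np.mean(win[: w // 2]), np.mean(win[w // 2:])
            if abs(first - second) > threshold:             # change detected: restart
                counts[:] = 0
                sums[:] = 0
                windows = [[] for _ in range(K)]
    return total

# Toy piecewise-stationary Bernoulli bandit with one change at t = 2500.
rng = np.random.default_rng(0)
means = np.array([[0.2, 0.8], [0.9, 0.1]])
pull = lambda t, a: float(rng.random() < means[int(t >= 2500), a])
print(ucb_with_change_detection(pull, K=2, T=5000))
```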
17,311 | An initial-boundary value problem of the general three-component nonlinear Schrodinger equation with a 4x4 Lax pair on a finite interval | We investigate the initial-boundary value problem for the general
three-component nonlinear Schrodinger (gtc-NLS) equation with a 4x4 Lax pair on
a finite interval by extending the Fokas unified approach. The solutions of the
gtc-NLS equation can be expressed in terms of the solutions of a 4x4 matrix
Riemann-Hilbert (RH) problem formulated in the complex k-plane. Moreover, the
relevant jump matrices of the RH problem can be explicitly found via the three
spectral functions arising from the initial data and the Dirichlet-Neumann
boundary data. The global relation is also established to deduce two distinct
but equivalent types of representations (i.e., one using the large-k
asymptotics of the eigenfunctions and another one in terms of the
Gelfand-Levitan-Marchenko (GLM) method) for the Dirichlet and Neumann boundary
value problems. Moreover, the relevant formulae for boundary value problems on
the finite interval can reduce to ones on the half-line as the length of the
interval approaches infinity. Finally, we also give the linearizable
boundary conditions for the GLM representation.
| 0 | 1 | 1 | 0 | 0 | 0 |
17,312 | Deep Learning Microscopy | We demonstrate that a deep neural network can significantly improve optical
microscopy, enhancing its spatial resolution over a large field-of-view and
depth-of-field. After its training, the only input to this network is an image
acquired using a regular optical microscope, without any changes to its design.
We blindly tested this deep learning approach using various tissue samples that
are imaged with low-resolution and wide-field systems, where the network
rapidly outputs an image with remarkably better resolution, matching the
performance of higher numerical aperture lenses, while also significantly surpassing
their limited field-of-view and depth-of-field. These results are
transformative for various fields that use microscopy tools, including e.g.,
life sciences, where optical microscopy is considered one of the most widely
used and deployed techniques. Beyond such applications, our presented approach
is broadly applicable to other imaging modalities, also spanning different
parts of the electromagnetic spectrum, and can be used to design computational
imagers that get better and better as they continue to image specimens and
establish new transformations among different modes of imaging.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,313 | Effects of pressure impulse and peak pressure of a shock wave on microjet velocity and the onset of cavitation in a microchannel | The development of needle-free injection systems utilizing high-speed
microjets is of great importance to world healthcare. It is thus crucial to
control the microjets, which are often induced by underwater shock waves. In
this contribution, from a fluid-mechanics point of view, we experimentally
investigate the effect of a shock wave on the velocity of a free surface
(microjet) and underwater cavitation onset in a microchannel, focusing on the
pressure impulse and peak pressure of the shock wave. The shock wave used had a
non-spherically-symmetric peak pressure distribution and a spherically
symmetric pressure impulse distribution [Tagawa et al., J. Fluid Mech., 2016,
808, 5-18]. First, we investigate the effect of the shock wave on the jet
velocity by installing a narrow tube and a hydrophone in different
configurations in a large water tank, and measuring the shock wave pressure and
the jet velocity simultaneously. The results suggest that the jet velocity
depends only on the pressure impulse of the shock wave. We then investigate the
effect of the shock wave on the cavitation onset by taking measurements in an
L-shaped microchannel. The results suggest that the probability of cavitation
onset depends only on the peak pressure of the shock wave. In addition, the jet
velocity varies according to the presence or absence of cavitation. The above
findings provide new insights for advancing a control method for high-speed
microjets.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,314 | Clustering with Noisy Queries | In this paper, we initiate a rigorous theoretical study of clustering with
noisy queries (or a faulty oracle). Given a set of $n$ elements, our goal is to
recover the true clustering by asking a minimum number of pairwise queries to an
oracle. The oracle can answer queries of the form: "do elements $u$ and $v$ belong
to the same cluster?" -- the queries can be asked interactively (adaptive
queries) or non-adaptively up-front, but the oracle's answers can be erroneous with
probability $p$. In this paper, we provide the first information theoretic
lower bound on the number of queries for clustering with noisy oracle in both
situations. We design novel algorithms that closely match this query complexity
lower bound, even when the number of clusters is unknown. Moreover, we design
computationally efficient algorithms both for the adaptive and non-adaptive
settings. The problem captures/generalizes multiple application scenarios. It
is directly motivated by the growing body of work that use crowdsourcing for
{\em entity resolution}, a fundamental and challenging data mining task aimed
at identifying all records in a database that refer to the same entity. Here the crowd
represents the noisy oracle, and the number of queries directly relates to the
cost of crowdsourcing. Another application comes from the problem of {\em sign
edge prediction} in social networks, where social interactions can be both
positive and negative, and one must identify the sign of all pair-wise
interactions by querying a few pairs. Furthermore, clustering with a noisy oracle
is intimately connected to correlation clustering, leading to improvements
therein. Finally, it introduces a new direction of study in the popular {\em
stochastic block model} where one has an incomplete stochastic block model
matrix to recover the clusters.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,315 | Divide-and-Conquer Checkpointing for Arbitrary Programs with No User Annotation | Classical reverse-mode automatic differentiation (AD) imposes only a small
constant-factor overhead in operation count over the original computation, but
has storage requirements that grow, in the worst case, in proportion to the
time consumed by the original computation. This storage blowup can be
ameliorated by checkpointing, a process that reorders application of classical
reverse-mode AD over an execution interval to trade off space versus time.
Application of checkpointing in a divide-and-conquer fashion to strategically
chosen nested execution intervals can break classical reverse-mode AD into
stages which can reduce the worst-case growth in storage from linear to
sublinear. Doing this has been fully automated only for computations of
particularly simple form, with checkpoints spanning execution intervals
resulting from a limited set of program constructs. Here we show how the
technique can be automated for arbitrary computations. The essential innovation
is to apply the technique at the level of the language implementation itself,
thus allowing checkpoints to span any execution interval.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,316 | Bow Ties in the Sky II: Searching for Gamma-ray Halos in the Fermi Sky Using Anisotropy | Many-degree-scale gamma-ray halos are expected to surround extragalactic
high-energy gamma ray sources. These arise from the inverse Compton emission of
an intergalactic population of relativistic electron/positron pairs generated
by the annihilation of >100 GeV gamma rays on the extragalactic background
light. These are typically anisotropic due to the jetted structure from which
they originate or the presence of intergalactic magnetic fields. Here we
propose a novel method for detecting these inverse-Compton gamma-ray halos
based upon this anisotropic structure. Specifically, we show that by stacking
suitably defined angular power spectra instead of images it is possible to
robustly detect gamma-ray halos with existing Fermi Large Area Telescope (LAT)
observations for a broad class of intergalactic magnetic fields. Importantly,
these are largely insensitive to systematic uncertainties within the LAT
instrumental response or associated with contaminating astronomical sources.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,317 | Gain-loss-driven travelling waves in PT-symmetric nonlinear metamaterials | In this work we investigate a one-dimensional parity-time (PT)-symmetric
magnetic metamaterial consisting of split-ring dimers having gain or loss.
Employing a Melnikov analysis we study the existence of localized travelling
waves, i.e. homoclinic or heteroclinic solutions. We find conditions under
which the homoclinic or heteroclinic orbits persist. Our analytical results are
found to be in good agreement with direct numerical computations. For the
particular nonlinearity admitting travelling kinks, numerically we observe
homoclinic snaking in the bifurcation diagram. The Melnikov analysis yields a
good approximation to one of the boundaries of the snaking profile.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,318 | CapsuleGAN: Generative Adversarial Capsule Network | We present Generative Adversarial Capsule Network (CapsuleGAN), a framework
that uses capsule networks (CapsNets) instead of the standard convolutional
neural networks (CNNs) as discriminators within the generative adversarial
network (GAN) setting, while modeling image data. We provide guidelines for
designing CapsNet discriminators and the updated GAN objective function, which
incorporates the CapsNet margin loss, for training CapsuleGAN models. We show
that CapsuleGAN outperforms convolutional-GAN at modeling image data
distribution on MNIST and CIFAR-10 datasets, evaluated on the generative
adversarial metric and at semi-supervised image classification.
| 0 | 0 | 0 | 1 | 0 | 0 |
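Entry 17,318 mentions incorporating the CapsNet margin loss into the GAN objective. The snippet below is only the standard capsule margin loss of Sabour et al. (2017), written in PyTorch as a hedged illustration; how exactly CapsuleGAN plugs it into the discriminator objective should be checked against the paper itself.

```python
import torch

def capsule_margin_loss(lengths, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Standard CapsNet margin loss.

    lengths: (batch, n_classes) capsule output lengths ||v_c||.
    targets: (batch, n_classes) one-hot class labels (float tensor).
    """
    pos = targets * torch.clamp(m_pos - lengths, min=0.0) ** 2
    neg = lam * (1.0 - targets) * torch.clamp(lengths - m_neg, min=0.0) ** 2
    return (pos + neg).sum(dim=1).mean()

# Toy usage on random capsule lengths.
lengths = torch.rand(4, 10)
targets = torch.eye(10)[torch.tensor([1, 3, 5, 7])]
print(capsule_margin_loss(lengths, targets))
```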
17,319 | sourceR: Classification and Source Attribution of Infectious Agents among Heterogeneous Populations | Zoonotic diseases are a major cause of morbidity and productivity losses in
both human and animal populations. Identifying the source of food-borne
zoonoses (e.g. an animal reservoir or food product) is crucial for the
identification and prioritisation of food safety interventions. For many
zoonotic diseases it is difficult to attribute human cases to sources of
infection because there is little epidemiological information on the cases.
However, microbial strain typing allows zoonotic pathogens to be categorised,
and the relative frequencies of the strain types among the sources and in human
cases allows inference on the likely source of each infection. We introduce
sourceR, an R package for quantitative source attribution, aimed at food-borne
diseases. It implements a fully joint Bayesian model using strain-typed
surveillance data from both human cases and source samples, capable of
identifying important sources of infection. The model measures the force of
infection from each source, allowing for varying survivability, pathogenicity
and virulence of pathogen strains, and varying abilities of the sources to act
as vehicles of infection. A Bayesian non-parametric (Dirichlet process)
approach is used to cluster pathogen strain types by epidemiological behaviour,
avoiding model overfitting and allowing detection of strain types associated
with potentially high 'virulence'.
sourceR is demonstrated using Campylobacter jejuni isolate data collected in
New Zealand between 2005 and 2008. It enables straightforward attribution of
cases of zoonotic infection to putative sources of infection by epidemiologists
and public health decision makers. As sourceR develops, we intend it to become
an important and flexible resource for food-borne disease attribution studies.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,320 | Low resistive edge contacts to CVD-grown graphene using a CMOS compatible metal | The exploitation of the excellent intrinsic electronic properties of graphene
for device applications is hampered by a large contact resistance between the
metal and graphene. The formation of edge contacts rather than top contacts is
one of the most promising solutions for realizing low ohmic contacts. In this
paper the fabrication and characterization of edge contacts to large area
CVD-grown monolayer graphene by means of optical lithography using CMOS
compatible metals, i.e. nickel and aluminum, is reported. Extraction of the
contact resistance by Transfer Line Method (TLM) as well as the direct
measurement using Kelvin Probe Force Microscopy demonstrates a very low width
specific contact resistance.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,321 | Uniqueness of planar vortex patch in incompressible steady flow | We investigate a steady planar flow of an ideal fluid in a bounded simply
connected domain and focus on the vortex patch problem with prescribed
vorticity strength. There are two methods to deal with the existence of
solutions for this problem: the vorticity method and the stream function
method. A long standing open problem is whether these two entirely different
methods result in the same solution. In this paper, we will give a positive
answer to this problem by studying the local uniqueness of the solutions.
Another result obtained in this paper is that if the domain is convex, then the
vortex patch problem has a unique solution.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,322 | An Equivalence of Fully Connected Layer and Convolutional Layer | This article demonstrates that the convolutional operation can be converted to
a matrix multiplication, which is computed in the same way as a fully connected
layer. The article is intended to help beginners in neural networks understand
how the fully connected layer and the convolutional layer work under the hood.
To be concise and to make the article more readable, we only consider the
linear case. It can be extended to the non-linear case easily by plugging a
non-linear function such as $\sigma(x)$, denoted as $x^{\prime}$, into the values.
| 1 | 0 | 0 | 1 | 0 | 0 |
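The equivalence claimed in entry 17,322 is easy to check numerically in the linear case: a 2-D convolution (strictly, cross-correlation) can be rewritten as a matrix product between an "im2col" matrix of input patches and the flattened kernel, which is exactly how a fully connected layer computes its output. The following is one standard way to do this, not the article's own code.

```python
import numpy as np

def conv2d_naive(x, k):
    """Valid 2-D cross-correlation via explicit loops."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_as_matmul(x, k):
    """Same operation written as a single matrix multiplication (im2col)."""
    H, W = x.shape
    kh, kw = k.shape
    rows = []
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            rows.append(x[i:i + kh, j:j + kw].ravel())   # one patch per row
    patches = np.stack(rows)                             # (n_out, kh*kw)
    out = patches @ k.ravel()                            # fully-connected style
    return out.reshape(H - kh + 1, W - kw + 1)

x = np.random.default_rng(0).normal(size=(6, 6))
k = np.random.default_rng(1).normal(size=(3, 3))
assert np.allclose(conv2d_naive(x, k), conv2d_as_matmul(x, k))
```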
17,323 | Critical Points of Neural Networks: Analytical Forms and Landscape Properties | Due to the success of deep learning to solving a variety of challenging
machine learning tasks, there is a rising interest in understanding loss
functions for training neural networks from a theoretical aspect. Particularly,
the properties of critical points and the landscape around them are of
importance to determine the convergence performance of optimization algorithms.
In this paper, we provide full (necessary and sufficient) characterization of
the analytical forms for the critical points (as well as global minimizers) of
the square loss functions for various neural networks. We show that the
analytical forms of the critical points characterize the values of the
corresponding loss functions as well as the necessary and sufficient conditions
to achieve global minimum. Furthermore, we exploit the analytical forms of the
critical points to characterize the landscape properties for the loss functions
of these neural networks. One particular conclusion is that: The loss function
of linear networks has no spurious local minimum, while the loss function of
one-hidden-layer nonlinear networks with ReLU activation function does have
local minima that are not global minima.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,324 | When the Annihilator Graph of a Commutative Ring Is Planar or Toroidal? | Let $R$ be a commutative ring with identity, and let $Z(R)$ be the set of
zero-divisors of $R$. The annihilator graph of $R$ is defined as the undirected
graph $AG(R)$ with the vertex set $Z(R)^*=Z(R)\setminus\{0\}$, and two distinct
vertices $x$ and $y$ are adjacent if and only if $ann_R(xy)\neq ann_R(x)\cup
ann_R(y)$. In this paper, all rings whose annihilator graphs can be embedded in
the plane or on the torus are classified.
| 0 | 0 | 1 | 0 | 0 | 0 |
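For small finite rings such as $\mathbb{Z}_n$, the definition in entry 17,324 can be explored directly by computer. The sketch below builds $AG(\mathbb{Z}_n)$ and checks planarity with networkx; toroidal embeddings are not checked, and this is only an experimental aid, not part of the paper's classification.

```python
import networkx as nx

def annihilator_graph_Zn(n):
    """Annihilator graph AG(Z_n): vertices are the nonzero zero-divisors,
    x ~ y iff ann(xy) != ann(x) union ann(y)."""
    ann = lambda x: {r for r in range(n) if (r * x) % n == 0}
    zero_divisors = [x for x in range(1, n)
                     if any((x * y) % n == 0 for y in range(1, n))]
    G = nx.Graph()
    G.add_nodes_from(zero_divisors)
    for i, x in enumerate(zero_divisors):
        for y in zero_divisors[i + 1:]:
            if ann((x * y) % n) != ann(x) | ann(y):
                G.add_edge(x, y)
    return G

for n in (8, 12, 30):
    G = annihilator_graph_Zn(n)
    planar, _ = nx.check_planarity(G)
    print(n, G.number_of_nodes(), G.number_of_edges(), "planar:", planar)
```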
17,325 | Econometric modelling and forecasting of intraday electricity prices | In the following paper we analyse the ID$_3$-Price on the German Intraday
Continuous Electricity Market using an econometric time series model. A
multivariate approach is conducted for hourly and quarter-hourly products
separately. We estimate the model using lasso and elastic net techniques and
perform an out-of-sample very short-term forecasting study. The model's
performance is compared with benchmark models and is discussed in detail.
Forecasting results provide new insights into the German Intraday Continuous
Electricity Market regarding its efficiency and into the ID$_3$-Price behaviour.
The supplementary materials are available online.
| 0 | 0 | 0 | 0 | 0 | 1 |
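The model in entry 17,325 is considerably richer, but the basic lasso/elastic-net estimation step it relies on looks roughly like the following sklearn sketch on a lagged design matrix built from a univariate price series. All variable names, the lag structure and the synthetic series here are placeholders, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

def lagged_design(y, n_lags):
    """Build a (T - n_lags, n_lags) matrix of lagged values and the target."""
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=1000))        # placeholder price series

X, target = lagged_design(price, n_lags=24)
split = int(0.8 * len(target))                  # out-of-sample evaluation
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5).fit(X[:split], target[:split])
forecast = model.predict(X[split:])
print("out-of-sample MAE:", np.mean(np.abs(forecast - target[split:])))
```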
17,326 | Matrix-Based Characterization of the Motion and Wrench Uncertainties in Robotic Manipulators | Characterization of the uncertainty in robotic manipulators is the focus of
this paper. Based on the random matrix theory (RMT), we propose uncertainty
characterization schemes in which the uncertainty is modeled at the macro
(system) level. This is different from the traditional approaches that model
the uncertainty in the parametric space at the micro (state) level. We show that
perturbing the system matrices rather than the state of the system provides
unique advantages especially for robotic manipulators. First, it requires only
limited statistical information, which is effective when dealing with
complex systems where detailed information on their variability is not
available. Second, the RMT-based models are aware of the system state and
configuration that are significant factors affecting the level of uncertainty
in system behavior. In this study, in addition to the motion uncertainty
analysis that was first proposed in our earlier work, we also develop an
RMT-based model for the quantification of the static wrench uncertainty in
multi-agent cooperative systems. This model is aimed to be an alternative to
the elaborate parametric formulation when only rough bounds are available on
the system parameters. We discuss how the RMT-based model becomes advantageous
when the complexity of the system increases. We perform experimental studies on
a KUKA youBot arm to demonstrate the superiority of the RMT-based motion
uncertainty models. We show how these models outperform the traditional
models built upon Gaussianity assumption in capturing real-system uncertainty
and providing accurate bounds on the state estimation errors. In addition, to
experimentally support our wrench uncertainty quantification model, we study
the behavior of a cooperative system of mobile robots. It is shown that one can
rely on the less demanding RMT-based formulation and yet meet acceptable
accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,327 | Good Similar Patches for Image Denoising | Patch-based denoising algorithms like BM3D have achieved outstanding
performance. An important idea for the success of these methods is to exploit
the recurrence of similar patches in an input image to estimate the underlying
image structures. However, in these algorithms, the similar patches used for
denoising are obtained via Nearest Neighbour Search (NNS) and are sometimes not
optimal. First, due to the existence of noise, NNS can select patches whose
noise patterns are similar to that of the reference patch. Second, the unreliable
noisy pixels in digital images can bring a bias to the patch searching process
and result in a loss of color fidelity in the final denoising result. We
observe that given a set of good similar patches, their distribution is not
necessarily centered at the noisy reference patch and can be approximated by a
Gaussian component. Based on this observation, we present a patch searching
method that clusters similar patch candidates into patch groups using Gaussian
Mixture Model-based clustering, and selects the patch group that contains the
reference patch as the final patches for denoising. We also use an unreliable
pixel estimation algorithm to pre-process the input noisy images to further
improve the patch searching. Our experiments show that our approach can better
capture the underlying patch structures and can consistently enable the
state-of-the-art patch-based denoising algorithms, such as BM3D, LPCA and PLOW,
to better denoise images by providing them with patches found by our approach,
without modifying these algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
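A minimal version of the patch-grouping step described in entry 17,327 — cluster the nearest-neighbour candidates with a Gaussian mixture and keep the component to which the reference patch belongs — can be sketched as follows. This covers only the selection step with arbitrary parameters; the paper's full pipeline, including the unreliable-pixel pre-processing, is not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_good_patches(ref_patch, candidates, n_components=3, seed=0):
    """Cluster candidate patches with a GMM and return those falling in the
    same component as the reference patch."""
    X = np.vstack([ref_patch.ravel()[None, :],
                   np.array([c.ravel() for c in candidates])])
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed).fit(X)
    labels = gmm.predict(X)
    ref_label = labels[0]
    return [c for c, lab in zip(candidates, labels[1:]) if lab == ref_label]

# Toy usage: noisy copies of two different clean patches.
rng = np.random.default_rng(0)
clean_a, clean_b = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
candidates = ([clean_a + 0.1 * rng.normal(size=(8, 8)) for _ in range(20)]
              + [clean_b + 0.1 * rng.normal(size=(8, 8)) for _ in range(20)])
ref = clean_a + 0.1 * rng.normal(size=(8, 8))
print(len(select_good_patches(ref, candidates, n_components=2)))
```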
17,328 | Ginzburg-Landau expansion in strongly disordered attractive Anderson-Hubbard model | We have studied disordering effects on the coefficients of the Ginzburg-Landau
expansion in powers of the superconducting order parameter in the attractive
Anderson-Hubbard model within the generalized $DMFT+\Sigma$ approximation. We consider
a wide region of attractive potentials $U$, from the weak coupling region,
where superconductivity is described by the BCS model, to the strong coupling
region, where the superconducting transition is related to Bose-Einstein
condensation (BEC) of compact Cooper pairs formed at temperatures essentially
larger than the temperature of the superconducting transition, and a wide range
of disorder, from weak to strong, where the system is in the vicinity of the
Anderson transition. In the case of a semi-elliptic bare density of states, the
disorder influence upon the coefficients $A$ and $B$ before the square and the fourth
power of the order parameter is universal for any value of the electron
correlation and is related only to the general disorder widening of the bare
band (generalized Anderson theorem). Such universality is absent for the
gradient term expansion coefficient $C$. In the usual theory of "dirty"
superconductors the $C$ coefficient drops with the growth of disorder. In the
limit of strong disorder in the BCS limit, the coefficient $C$ is very sensitive to
the effects of Anderson localization, which lead to its further drop with
disorder growth up to the region of the Anderson insulator. In the region of the
BCS-BEC crossover and in the BEC limit, the coefficient $C$ and all related physical
properties are only weakly dependent on disorder. In particular, this leads to
a relatively weak disorder dependence of both the penetration depth and coherence
lengths, as well as of the related slope of the upper critical magnetic field at
the superconducting transition, in the region of very strong coupling.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,329 | Reallocating and Resampling: A Comparison for Inference | Simulation-based inference plays a major role in modern statistics, and often
employs either reallocating (as in a randomization test) or resampling (as in
bootstrapping). Reallocating mimics random allocation to treatment groups,
while resampling mimics random sampling from a larger population; does it
matter whether the simulation method matches the data collection method?
Moreover, do the results differ for testing versus estimation? Here we answer
these questions in a simple setting by exploring the distribution of a sample
difference in means under a basic two group design and four different
scenarios: true random allocation, true random sampling, reallocating, and
resampling. For testing a sharp null hypothesis, reallocating is superior in
small samples, but reallocating and resampling are asymptotically equivalent.
For estimation, resampling is generally superior, unless the effect is truly
additive. Moreover, these results hold regardless of whether the data were
collected by random sampling or random allocation.
| 0 | 0 | 1 | 1 | 0 | 0 |
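The distinction drawn in entry 17,329 can be mimicked numerically. The sketch below implements only the two simulation methods — reallocating (a permutation test) and resampling (a bootstrap) — for a difference in means on a toy two-group data set, as a hedged illustration of the contrast rather than the paper's full study.

```python
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.5, scale=1.0, size=30)
observed = group_b.mean() - group_a.mean()

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
B = 10_000

# Reallocating: shuffle group labels (randomization-test null distribution).
realloc = np.empty(B)
for b in range(B):
    perm = rng.permutation(pooled)
    realloc[b] = perm[n_a:].mean() - perm[:n_a].mean()
p_value = np.mean(np.abs(realloc) >= abs(observed))

# Resampling: bootstrap within each group (sampling distribution of the estimate).
boot = np.empty(B)
for b in range(B):
    boot[b] = (rng.choice(group_b, len(group_b)).mean()
               - rng.choice(group_a, n_a).mean())
ci = np.percentile(boot, [2.5, 97.5])

print("observed diff:", observed, "permutation p:", p_value, "bootstrap 95% CI:", ci)
```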
17,330 | An Efficient Algorithm for Bayesian Nearest Neighbours | K-Nearest Neighbours (k-NN) is a popular classification and regression
algorithm, yet one of its main limitations is the difficulty in choosing the
number of neighbours. We present a Bayesian algorithm to compute the posterior
probability distribution for k given a target point within a data-set,
efficiently and without the use of Markov Chain Monte Carlo (MCMC) methods or
simulation - alongside an exact solution for distributions within the
exponential family. The central idea is that data points around our target are
generated by the same probability distribution, extending outwards over the
appropriate, though unknown, number of neighbours. Once the data is projected
onto a distance metric of choice, we can transform the choice of k into a
change-point detection problem, for which there is an efficient solution: we
recursively compute the probability of the last change-point as we move towards
our target, and thus de facto compute the posterior probability distribution
over k. Applying this approach to both a classification and a regression UCI
data-set, we compare favourably and, most importantly, by removing the need
for simulation, we are able to compute the posterior probability of k exactly
and rapidly. As an example, the computational time for the Ripley data-set is a
few milliseconds compared to a few hours when using an MCMC approach.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,331 | In search of a new economic model determined by logistic growth | In this paper we extend the work by Ryuzo Sato devoted to the development of
economic growth models within the framework of the Lie group theory. We propose
a new growth model based on the assumption of logistic growth in factors. It is
employed to derive new production functions and introduce a new notion of wage
share. In the process it is shown that the new functions compare reasonably
well against relevant economic data. The corresponding problem of maximization
of profit under conditions of perfect competition is solved with the aid of one
of these functions. In addition, it is explained in reasonably rigorous
mathematical terms why Bowley's law no longer holds true in post-1960 data.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,332 | Limits on light WIMPs with a 1 kg-scale germanium detector at 160 eVee physics threshold at the China Jinping Underground Laboratory | We report results of a search for light weakly interacting massive particle
(WIMP) dark matter from the CDEX-1 experiment at the China Jinping Underground
Laboratory (CJPL). Constraints on WIMP-nucleon spin-independent (SI) and
spin-dependent (SD) couplings are derived with a physics threshold of 160 eVee,
from an exposure of 737.1 kg-days. The SI and SD limits extend the lower reach
of light WIMPs to 2 GeV and improve over our earlier bounds at WIMP mass less
than 6 GeV.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,333 | A stellar census of the nearby, young 32 Orionis group | The 32 Orionis group was discovered almost a decade ago and despite the fact
that it represents the first northern, young (age ~ 25 Myr) stellar aggregate
within 100 pc of the Sun ($d \simeq 93$ pc), a comprehensive survey for members
and detailed characterisation of the group has yet to be performed. We present
the first large-scale spectroscopic survey for new (predominantly M-type)
members of the group after combining kinematic and photometric data to select
candidates with Galactic space motion and positions in colour-magnitude space
consistent with membership. We identify 30 new members, increasing the number
of known 32 Ori group members by a factor of three and bringing the total
number of identified members to 46, spanning spectral types B5 to L1. We also
identify the lithium depletion boundary (LDB) of the group, i.e. the luminosity
at which lithium remains unburnt in a coeval population. We estimate the age of
the 32 Ori group independently using both isochronal fitting and LDB analyses
and find it is essentially coeval with the {\beta} Pictoris moving group, with
an age of $24\pm4$ Myr. Finally, we have also searched for circumstellar disc
hosts utilising the AllWISE catalogue. Although we find no evidence for warm,
dusty discs, we identify several stars with excess emission in the WISE W4-band
at 22 {\mu}m. Based on the limited number of W4 detections we estimate a debris
disc fraction of $32^{+12}_{-8}$ per cent for the 32 Ori group.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,334 | A High-Level Rule-based Language for Software Defined Network Programming based on OpenFlow | This paper proposes XML-Defined Network policies (XDNP), a new high-level
language based on XML notation, to describe network control rules in Software
Defined Network environments. We rely on existing OpenFlow controllers
specifically Floodlight but the novelty of this project is to separate
complicated language- and framework-specific APIs from policy descriptions.
This separation makes it possible to extend the current work as a northbound
higher-level abstraction that can support a wide range of controllers that are
based on different programming languages. With this approach, we believe that
network administrators can develop and deploy network control policies more easily
and quickly.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,335 | Domain Objects and Microservices for Systems Development: a roadmap | This paper discusses a roadmap to investigate whether Domain Objects are an
adequate formalism to capture the peculiarities of microservice architecture, and
to support software development from the early stages. It provides a survey of
both Microservices and Domain Objects, and it discusses plans and reflections
on how to investigate whether a modeling approach suited to adaptable
service-based components can also be applied with success to the microservice
scenario.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,336 | Stabilization of prethermal Floquet steady states in a periodically driven dissipative Bose-Hubbard model | We discuss the effect of dissipation on the heating which occurs in periodically
driven quantum many-body systems. We especially focus on a periodically driven
Bose-Hubbard model coupled to an energy and particle reservoir. Without
dissipation, this model is known to undergo parametric instabilities which can
be considered as an initial stage of heating. By taking the weak on-site
interaction limit as well as the weak system-reservoir coupling limit, we find
that parametric instabilities are suppressed if the dissipation is stronger
than the on-site interaction strength and stable steady states appear. Our
results demonstrate that periodically driven systems can release the energy
absorbed from the external driving to the reservoir, so that they can avoid
heating.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,337 | Compressed Sensing using Generative Models | The goal of compressed sensing is to estimate a vector from an
underdetermined system of noisy linear measurements, by making use of prior
knowledge on the structure of vectors in the relevant domain. For almost all
results in this literature, the structure is represented by sparsity in a
well-chosen basis. We show how to achieve guarantees similar to standard
compressed sensing but without employing sparsity at all. Instead, we suppose
that vectors lie near the range of a generative model $G: \mathbb{R}^k \to
\mathbb{R}^n$. Our main theorem is that, if $G$ is $L$-Lipschitz, then roughly
$O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$
recovery guarantee. We demonstrate our results using generative models from
published variational autoencoders and generative adversarial networks. Our
method can use $5$-$10$x fewer measurements than Lasso for the same accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
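The recovery procedure implied by entry 17,337 — find a latent code $z$ whose generated vector $G(z)$ best explains the measurements $y = Ax$ — can be sketched with plain gradient descent. The generator below is a random one-layer tanh network used purely as a stand-in (the paper uses trained VAE/GAN generators), and the step size, iteration count and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 5, 30                        # ambient dim, latent dim, measurements

W = rng.normal(size=(n, k)) / np.sqrt(n)    # stand-in generator G(z) = tanh(W z)
G = lambda z: np.tanh(W @ z)

z_true = rng.normal(size=k)
x_true = G(z_true)
A = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian measurement matrix
y = A @ x_true                              # noiseless measurements

# Gradient descent on f(z) = ||A G(z) - y||^2.
z = np.zeros(k)
for _ in range(2000):
    g = G(z)
    residual = A @ g - y
    grad_z = W.T @ ((1.0 - g ** 2) * (2.0 * A.T @ residual))
    z -= 0.1 * grad_z

print("relative recovery error:",
      np.linalg.norm(G(z) - x_true) / np.linalg.norm(x_true))
```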
17,338 | Two-part models with stochastic processes for modelling longitudinal semicontinuous data: computationally efficient inference and modelling the overall marginal mean | Several researchers have described two-part models with patient-specific
stochastic processes for analysing longitudinal semicontinuous data. In theory,
such models can offer greater flexibility than the standard two-part model with
patient-specific random effects. However, in practice the high dimensional
integrations involved in the marginal likelihood (i.e. integrated over the
stochastic processes) significantly complicate model fitting. Thus
non-standard computationally intensive procedures based on simulating the
marginal likelihood have so far only been proposed. In this paper, we describe
an efficient method of implementation by demonstrating how the high dimensional
integrations involved in the marginal likelihood can be computed efficiently.
Specifically, by using a property of the multivariate normal distribution and
the standard marginal cumulative distribution function identity, we transform
the marginal likelihood so that the high dimensional integrations are contained
in the cumulative distribution function of a multivariate normal distribution,
which can then be efficiently evaluated. Hence maximum likelihood estimation
can be used to obtain parameter estimates and asymptotic standard errors (from
the observed information matrix) of model parameters. We describe our proposed
efficient implementation procedure for the standard two-part model
parameterisation and when it is of interest to directly model the overall
marginal mean. The methodology is applied on a psoriatic arthritis data set
concerning functional disability.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,339 | Progressive Image Deraining Networks: A Better and Simpler Baseline | Along with the deraining performance improvement of deep networks, their
structures and learning become more and more complicated and diverse, making it
difficult to analyze the contribution of various network modules when
developing new deraining networks. To handle this issue, this paper provides a
better and simpler baseline deraining network by considering network
architecture, input and output, and loss functions. Specifically, by repeatedly
unfolding a shallow ResNet, progressive ResNet (PRN) is proposed to take
advantage of recursive computation. A recurrent layer is further introduced to
exploit the dependencies of deep features across stages, forming our
progressive recurrent network (PReNet). Furthermore, intra-stage recursive
computation of ResNet can be adopted in PRN and PReNet to notably reduce
network parameters with graceful degradation in deraining performance. For
network input and output, we take both stage-wise result and original rainy
image as input to each ResNet and finally output the prediction of the residual
image. As for loss functions, a single MSE or negative SSIM loss is
sufficient to train PRN and PReNet. Experiments show that PRN and PReNet
perform favorably on both synthetic and real rainy images. Considering its
simplicity, efficiency and effectiveness, our models are expected to serve as a
suitable baseline in future deraining research. The source codes are available
at this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
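A stripped-down PyTorch sketch of the progressive idea in entry 17,339 — repeatedly unfold a shallow residual block, feeding it the rainy image concatenated with the current estimate — is given below. It omits the recurrent (ConvLSTM) layer, the loss functions and all training details of PReNet, so treat it as a structural illustration only, with arbitrary channel and stage counts.

```python
import torch
import torch.nn as nn

class ProgressiveDerainSketch(nn.Module):
    """Simplified PRN-style network: T unfolded stages sharing one shallow
    residual block; each stage sees cat(rainy, current estimate)."""
    def __init__(self, channels=32, stages=6):
        super().__init__()
        self.stages = stages
        self.head = nn.Sequential(nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(True))
        self.res = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, rainy):
        x = rainy
        for _ in range(self.stages):
            f = self.head(torch.cat([rainy, x], dim=1))
            f = f + self.res(f)              # shallow residual block
            x = rainy - self.tail(f)         # subtract the predicted rain layer
        return x

# Toy forward pass on a random image batch.
net = ProgressiveDerainSketch()
print(net(torch.randn(1, 3, 64, 64)).shape)
```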
17,340 | Optimal Nonparametric Inference under Quantization | Statistical inference based on lossy or incomplete samples is of fundamental
importance in research areas such as signal/image processing, medical image
storage, remote sensing, and signal transmission. In this paper, we propose a
nonparametric testing procedure based on quantized samples. In contrast to the
classic nonparametric approach, our method lives on a coarse grid of sample
information and is simple to use. Under mild technical conditions, we
establish the asymptotic properties of the proposed procedures including
asymptotic null distribution of the quantization test statistic as well as its
minimax power optimality. Concrete quantizers are constructed for achieving the
minimax optimality in practical use. Simulation results and a real data
analysis are provided to demonstrate the validity and effectiveness of the
proposed test. Our work bridges the classical nonparametric inference to modern
lossy data setting.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,341 | Nearest neighbor imputation for general parameter estimation in survey sampling | Nearest neighbor imputation is popular for handling item nonresponse in
survey sampling. In this article, we study the asymptotic properties of the
nearest neighbor imputation estimator for general population parameters,
including population means, proportions and quantiles. For variance estimation,
the conventional bootstrap inference for matching estimators with a fixed number
of matches has been shown to be invalid due to the non-smooth nature of the
matching estimator. We propose asymptotically valid replication variance
estimation. The key strategy is to construct replicates of the estimator
directly based on linear terms, instead of individual records of variables. A
simulation study confirms that the new procedure provides valid variance
estimation.
| 0 | 0 | 0 | 1 | 0 | 0 |
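The basic nearest neighbour imputation estimator studied in entry 17,341 (before the variance-estimation refinements) can be written in a few lines; the replication variance estimator based on linear terms is not reproduced here, and the single auxiliary variable and response model are illustrative choices.

```python
import numpy as np

def nn_imputation_mean(x, y, respond):
    """Nearest neighbour imputation estimator of the population mean of y.

    x:       (n,) auxiliary variable observed for everyone.
    y:       (n,) study variable, only meaningful where respond is True.
    respond: (n,) boolean response indicator.
    """
    donors_x, donors_y = x[respond], y[respond]
    y_imp = y.astype(float).copy()
    for i in np.where(~respond)[0]:
        j = np.argmin(np.abs(donors_x - x[i]))   # closest respondent donates
        y_imp[i] = donors_y[j]
    return y_imp.mean()

# Toy usage with response depending on x (missing at random given x).
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)
respond = rng.random(500) < 1 / (1 + np.exp(-x))
print(nn_imputation_mean(x, y, respond), y.mean())
```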
17,342 | Time-delay signature suppression in a chaotic semiconductor laser by fiber random grating induced distributed feedback | We demonstrate that a semiconductor laser perturbed by the distributed
feedback from a fiber random grating can emit light chaotically without the
time delay signature. A theoretical model is developed based on the
Lang-Kobayashi model in order to numerically explore the chaotic dynamics of
the laser diode subjected to the random distributed feedback. It is predicted
that the random distributed feedback is superior to the single reflection
feedback in suppressing the time-delay signature. In experiments, a massive
number of feedbacks with randomly varied time delays induced by a fiber random
grating introduce large numbers of external cavity modes into the semiconductor
laser, leading to the high dimension of chaotic dynamics and thus the
concealment of the time delay signature. The obtained time delay signature with
the maximum suppression is 0.0088, which is the smallest to date.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,343 | SAFS: A Deep Feature Selection Approach for Precision Medicine | In this paper, we propose a new deep feature selection method based on deep
architecture. Our method uses stacked auto-encoders for feature representation
at a higher level of abstraction. We developed and applied a novel feature learning
approach to a specific precision medicine problem, which focuses on assessing
and prioritizing risk factors for hypertension (HTN) in a vulnerable
demographic subgroup (African-American). Our approach is to use deep learning
to identify significant risk factors affecting left ventricular mass indexed to
body surface area (LVMI) as an indicator of heart damage risk. The results show
that our feature learning and representation approach leads to better results
in comparison with others.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,344 | Deep Reasoning with Multi-scale Context for Salient Object Detection | To detect and segment salient objects accurately, existing methods are
usually devoted to designing complex network architectures to fuse powerful
features from the backbone networks. However, they put much less effort into the
saliency inference module and only use a few fully convolutional layers to
perform saliency reasoning from the fused features. But should feature
fusion strategies receive so much attention while saliency reasoning is largely
ignored? In this paper, we find that the weakness of the saliency reasoning unit limits
salient object detection performance, and claim that saliency reasoning after
multi-scale convolutional features fusion is critical. To verify our findings,
we first extract multi-scale features with a fully convolutional network, and
then directly reason from these comprehensive features using a deep yet
light-weighted network, modified from ShuffleNet, to fast and precisely predict
salient objects. Such simple design is shown to be capable of reasoning from
multi-scale saliency features as well as giving superior saliency detection
performance with less computation cost. Experimental results show that our
simple framework outperforms the best existing method with 2.3\% and 3.6\%
improvements in F-measure scores and a 2.8\% reduction in MAE score on the PASCAL-S,
DUT-OMRON and SOD datasets respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,345 | On Estimation of $L_{r}$-Norms in Gaussian White Noise Models | We provide a complete picture of asymptotically minimax estimation of
$L_r$-norms (for any $r\ge 1$) of the mean in Gaussian white noise model over
Nikolskii-Besov spaces. In this regard, we complement the work of Lepski,
Nemirovski and Spokoiny (1999), who considered the cases of $r=1$ (with
poly-logarithmic gap between upper and lower bounds) and $r$ even (with
asymptotically sharp upper and lower bounds) over Hölder spaces. We
additionally consider the case of asymptotically adaptive minimax estimation
and demonstrate a difference between even and non-even $r$ in terms of an
investigator's ability to produce asymptotically adaptive minimax estimators
without paying a penalty.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,346 | Secure communications with cooperative jamming: Optimal power allocation and secrecy outage analysis | This paper studies the secrecy rate maximization problem of a secure wireless
communication system, in the presence of multiple eavesdroppers. The security
of the communication link is enhanced through cooperative jamming, with the
help of multiple jammers. First, a feasibility condition is derived to achieve
a positive secrecy rate at the destination. Then, we solve the original secrecy
rate maximization problem, which is not convex in terms of power allocation at
the jammers. To circumvent this non-convexity, the achievable secrecy rate is
approximated for a given power allocation at the jammers and the approximated
problem is formulated into a geometric programming one. Based on this
approximation, an iterative algorithm has been developed to obtain the optimal
power allocation at the jammers. Next, we provide a bisection approach, based
on one-dimensional search, to validate the optimality of the proposed
algorithm. In addition, by assuming Rayleigh fading, the secrecy outage
probability (SOP) of the proposed cooperative jamming scheme is analyzed. More
specifically, a single-integral form expression for SOP is derived for the most
general case as well as a closed-form expression for the special case of two
cooperative jammers and one eavesdropper. Simulation results have been provided
to validate the convergence and the optimality of the proposed algorithm as
well as the theoretical derivations of the presented SOP analysis.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,347 | Stochastic Calculus with respect to Gaussian Processes: Part I | Stochastic integration with respect to Gaussian processes has raised strong
interest in recent years, motivated in particular by its applications in
Internet traffic modeling, biomedicine and finance. The aim of this work is to
define and develop a White Noise Theory-based anticipative stochastic calculus
with respect to all Gaussian processes that have an integral representation
over a real (maybe infinite) interval. This very rich class of Gaussian
processes contains, among many others, Volterra processes (and thus fractional
Brownian motion) as well as processes whose regularity varies over time
(such as multifractional Brownian motion). A systematic comparison of the
stochastic calculus (including the Itô formula) we provide here to the ones
given by Malliavin calculus in
\cite{nualart,MV05,NuTa06,KRT07,KrRu10,LN12,SoVi14,LN12}, and by Itô
stochastic calculus, is also made. Not only does our stochastic calculus fully
generalize and extend the ones originally proposed in \cite{MV05} and in
\cite{NuTa06} for Gaussian processes, but it also generalizes the ones proposed in
\cite{ell,bosw,ben1} for fractional Brownian motion (\textit{resp.} in
\cite{JLJLV1,JL13,LLVH} for multifractional Brownian motion).
| 0 | 0 | 1 | 0 | 0 | 0 |
17,348 | Path-like integrals of length on surfaces of constant curvature | We naturally associate a measurable space of paths to a pair of orthogonal
vector fields over a surface and we integrate the length function over it. This
integral is interpreted as a natural continuous generalization of indirect
influences on finite graphs and can be thought of as a tool to capture geometric
information of the surface. As a byproduct we calculate volumes in different
examples of spaces of paths.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,349 | Automated Synthesis of Divide and Conquer Parallelism | This paper focuses on automated synthesis of divide-and-conquer parallelism,
which is a common parallel programming skeleton supported by many
cross-platform multithreaded libraries. The challenges of producing (manually
or automatically) a correct divide-and-conquer parallel program from a given
sequential code are two-fold: (1) assuming that individual worker threads
execute a code identical to the sequential code, the programmer has to provide
the extra code for dividing the tasks and combining the computation results,
and (2) sometimes, the sequential code may not be usable as is, and may need to
be modified by the programmer. We address both challenges in this paper. We
present an automated synthesis technique for the case where no modifications to
the sequential code are required, and we propose an algorithm for modifying the
sequential code to make it suitable for parallelization when some modification
is necessary. The paper presents theoretical results for when this {\em
modification} is efficiently possible, and experimental evaluation of the
technique and the quality of the produced parallel programs.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,350 | Nikol'skiĭ, Jackson and Ul'yanov type inequalities with Muckenhoupt weights | In the present work we prove a Nikol'skiĭ inequality for trigonometric
polynomials and Ul'yanov type inequalities for functions in Lebesgue spaces
with Muckenhoupt weights. A realization result and Jackson inequalities are
obtained. Simultaneous approximation by polynomials is considered. Some uniform
norm inequalities are transferred to weighted Lebesgue spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,351 | CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks | Inferring model parameters from experimental data is a grand challenge in
many sciences, including cosmology. This often relies critically on high
fidelity numerical simulations, which are prohibitively computationally
expensive. The application of deep learning techniques to generative modeling
is renewing interest in using high dimensional density estimators as
computationally inexpensive emulators of fully-fledged simulations. These
generative models have the potential to make a dramatic shift in the field of
scientific simulations, but for that shift to happen we need to study the
performance of such generators in the precision regime needed for science
applications. To this end, in this work we apply Generative Adversarial
Networks to the problem of generating weak lensing convergence maps. We show
that our generator network produces maps that are described by, with high
statistical confidence, the same summary statistics as the fully simulated
maps.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,352 | Gaussian approximation of maxima of Wiener functionals and its application to high-frequency data | This paper establishes an upper bound for the Kolmogorov distance between the
maximum of a high-dimensional vector of smooth Wiener functionals and the
maximum of a Gaussian random vector. As a special case, we show that the
maximum of multiple Wiener-Itô integrals with common orders is
well-approximated by its Gaussian analog in terms of the Kolmogorov distance if
their covariance matrices are close to each other and the maximum of the fourth
cumulants of the multiple Wiener-Itô integrals is close to zero. This may be
viewed as a new kind of fourth moment phenomenon, which has attracted
considerable attention in the recent studies of probability. This type of
Gaussian approximation result has many potential applications to statistics. To
illustrate this point, we present two statistical applications in
high-frequency financial econometrics: One is the hypothesis testing problem
for the absence of lead-lag effects and the other is the construction of
uniform confidence bands for spot volatility.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,353 | A Kronecker-type identity and the representations of a number as a sum of three squares | By considering a limiting case of a Kronecker-type identity, we obtain an
identity found by both Andrews and Crandall. We then use the Andrews-Crandall
identity to give a new proof of a formula of Gauss for the representations of a
number as a sum of three squares. From the Kronecker-type identity, we also
deduce Gauss's theorem that every positive integer is representable as a sum of
three triangular numbers.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,354 | DeepTrend: A Deep Hierarchical Neural Network for Traffic Flow Prediction | In this paper, we consider the temporal pattern in traffic flow time series,
and implement a deep learning model for traffic flow prediction.
Detrending-based methods decompose the original flow series into a trend series
and a residual series, where the trend describes the fixed temporal pattern in
traffic flow and the residual series is used for prediction. Inspired by the
detrending method, we propose DeepTrend, a deep hierarchical neural network for
traffic flow prediction that considers and extracts the time-variant trend.
DeepTrend has two stacked layers: an extraction layer and a prediction layer.
The extraction layer, a fully connected layer, extracts the time-variant trend
in traffic flow from the original flow series concatenated with the
corresponding simple-average trend series. The prediction layer, an LSTM layer,
makes the flow prediction from the trend obtained at the output of the
extraction layer and the calculated residual series. To make the model more
effective, DeepTrend first needs to be pre-trained layer-by-layer and then
fine-tuned over the entire network.
Experiments show that DeepTrend can noticeably boost the prediction performance
compared with some traditional prediction models and LSTM with detrending based
methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,355 | A new approach to Kaluza-Klein Theory | We propose in this paper a new approach to the Kaluza-Klein idea of a five
dimensional space-time unifying gravitation and electromagnetism, and its
extension to higher-dimensional space-times. By considering a natural geometric definition
of a matter fluid and abandoning the usual requirement of a Ricci-flat five
dimensional space-time, we show that a unified geometrical frame can be set for
gravitation and electromagnetism, giving, by projection on the classical
4-dimensional space-time, the known Einstein-Maxwell-Lorentz equations for
charged fluids. Thus, although not introducing new physics, we get a very
aesthetic presentation of classical physics in the spirit of general
relativity. The usual physical concepts, such as mass, energy, charge,
trajectory, Maxwell-Lorentz law, are shown to be only various aspects of the
geometry, for example curvature, of space-time considered as a Lorentzian
manifold; that is, no physical objects are introduced into space-time, no laws
are given; everything is only geometry.
We then extend these ideas to more than 5 dimensions by considering space-time
as a generalization of an $(S^1\times W)$-fiber bundle, which we call a
multi-fiber bundle, where $S^1$ is the circle and $W$ a compact manifold. We
use this geometric structure as a possible way to model or encode deviations
from standard 4-dimensional General Relativity, such as "dark" effects like
dark matter or dark energy.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,356 | Density of orbits of dominant regular self-maps of semiabelian varieties | We prove a conjecture of Medvedev and Scanlon in the case of regular
morphisms of semiabelian varieties. That is, if $G$ is a semiabelian variety
defined over an algebraically closed field $K$ of characteristic $0$, and
$\varphi\colon G\to G$ is a dominant regular self-map of $G$ which is not
necessarily a group homomorphism, we prove that one of the following holds:
either there exists a non-constant rational fibration preserved by $\varphi$,
or there exists a point $x\in G(K)$ whose $\varphi$-orbit is Zariski dense in
$G$.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,357 | Asymptotic coverage probabilities of bootstrap percentile confidence intervals for constrained parameters | The asymptotic behaviour of the commonly used bootstrap percentile confidence
interval is investigated when the parameters are subject to linear inequality
constraints. We concentrate on the important one- and two-sample problems with
data generated from general parametric distributions in the natural exponential
family. The focus of this paper is on quantifying the coverage probabilities of
the parametric bootstrap percentile confidence intervals, in particular their
limiting behaviour near boundaries. We propose a local asymptotic framework to
study this subtle coverage behaviour. Under this framework, we discover that
when the true parameters are on, or close to, the restriction boundary, the
asymptotic coverage probabilities can always exceed the nominal level in the
one-sample case; however, they can be, remarkably, both under and over the
nominal level in the two-sample case. Using illustrative examples, we show that
the results provide theoretical justification and guidance on applying the
bootstrap percentile method to constrained inference problems.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,358 | Correlations and enlarged superconducting phase of $t$-$J_\perp$ chains of ultracold molecules on optical lattices | We compute physical properties across the phase diagram of the $t$-$J_\perp$
chain with long-range dipolar interactions, which describe ultracold polar
molecules on optical lattices. Our results obtained by the density-matrix
renormalization group (DMRG) indicate that superconductivity is enhanced when
the Ising component $J_z$ of the spin-spin interaction and the charge component
$V$ are tuned to zero, and even further by the long-range dipolar interactions.
At low densities, a substantially larger spin gap is obtained. We provide
evidence that long-range interactions lead to algebraically decaying
correlation functions despite the presence of a gap. Although this has recently
been observed in other long-range interacting spin and fermion models, the
correlations in our case have the peculiar property of having a small and
continuously varying exponent. We construct simple analytic models and
arguments to understand the most salient features.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,359 | MinimalRNN: Toward More Interpretable and Trainable Recurrent Neural Networks | We introduce MinimalRNN, a new recurrent neural network architecture that
achieves performance comparable to that of popular gated RNNs with a
simplified structure. It employs minimal updates within the RNN, which not only
leads to efficient learning and testing but, more importantly, to better
interpretability and
trainability. We demonstrate that by endorsing the more restrictive update
rule, MinimalRNN learns disentangled RNN states. We further examine the
learning dynamics of different RNN structures using input-output Jacobians, and
show that MinimalRNN is able to capture longer range dependencies than existing
RNN architectures.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,360 | Boolean quadric polytopes are faces of linear ordering polytopes | Let $BQP(n)$ be a boolean quadric polytope, $LOP(m)$ be a linear ordering
polytope. It is shown that $BQP(n)$ is linearly isomorphic to a face of
$LOP(2n)$.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,361 | Sparse Matrix Code Dependence Analysis Simplification at Compile Time | Analyzing array-based computations to determine data dependences is useful
for many applications including automatic parallelization, race detection,
computation and communication overlap, verification, and shape analysis. For
sparse matrix codes, array data dependence analysis is made more difficult by
the use of index arrays that make it possible to store only the nonzero entries
of the matrix (e.g., in A[B[i]], B is an index array). Here, dependence
analysis is often stymied by such indirect array accesses due to the values of
the index array not being available at compile time. Consequently, many
dependences cannot be proven unsatisfiable or determined until runtime.
Nonetheless, index arrays in sparse matrix codes often have properties such as
monotonicity of index array elements that can be exploited to reduce the amount
of runtime analysis needed. In this paper, we contribute a formulation of array
data dependence analysis that includes encoding index array properties as
universally quantified constraints. This makes it possible to leverage existing
SMT solvers to determine whether such dependences are unsatisfiable and
significantly reduces the number of dependences that require runtime analysis
in a set of eight sparse matrix kernels. Another contribution is an algorithm
for simplifying the remaining satisfiable data dependences by discovering
equalities and/or subset relationships. These simplifications are essential to
make a runtime-inspection-based approach feasible.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,362 | ICA based on the data asymmetry | Independent Component Analysis (ICA) - one of the basic tools in data
analysis - aims to find a coordinate system in which the components of the data
are independent. Most existing methods are based on the minimization of a
function of the fourth-order moment (kurtosis). Skewness (the third-order
moment) has
received much less attention.
In this paper we present a competitive approach to ICA based on the Split
Gaussian distribution, which is well adapted to asymmetric data. Consequently,
we obtain a method which works better than the classical approaches, especially
in the case when the underlying density is not symmetric, which is a typical
situation in the color distribution in images.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,363 | Solid hulls of weighted Banach spaces of analytic functions on the unit disc with exponential weights | We study weighted $H^\infty$ spaces of analytic functions on the open unit
disc in the case of non-doubling weights, which decrease rapidly with respect
to the boundary distance. We characterize the solid hulls of such spaces and
give quite explicit representations of them in the case of the most natural
exponentially decreasing weights.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,364 | Line bundles defined by the Schwarz function | Cauchy and exponential transforms are characterized, and constructed, as
canonical holomorphic sections of certain line bundles on the Riemann sphere
defined in terms of the Schwarz function. A well known natural connection
between Schwarz reflection and line bundles defined on the Schottky double of a
planar domain is briefly discussed in the same context.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,365 | Collisional excitation of NH3 by atomic and molecular hydrogen | We report extensive theoretical calculations on the rotation-inversion
excitation of interstellar ammonia (NH3) due to collisions with atomic and
molecular hydrogen (both para- and ortho-H2). Close-coupling calculations are
performed for total energies in the range 1-2000 cm-1 and rotational cross
sections are obtained for all transitions among the lowest 17 and 34
rotation-inversion levels of ortho- and para-NH3, respectively. Rate
coefficients are deduced for kinetic temperatures up to 200 K. Propensity rules
for the three colliding partners are discussed and we also compare the new
results to previous calculations for the spherically symmetrical He and para-H2
projectiles. Significant differences are found between the different sets of
calculations. Finally, we test the impact of the new rate coefficients on the
calibration of the ammonia thermometer. We find that the calibration curve is
only weakly sensitive to the colliding partner and we confirm that the ammonia
thermometer is robust.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,366 | Deterministic and Probabilistic Conditions for Finite Completability of Low-rank Multi-View Data | We consider the multi-view data completion problem, i.e., to complete a
matrix $\mathbf{U}=[\mathbf{U}_1|\mathbf{U}_2]$ where the ranks of
$\mathbf{U},\mathbf{U}_1$, and $\mathbf{U}_2$ are given. In particular, we
investigate the fundamental conditions on the sampling pattern, i.e., the
locations of the sampled entries, for finite completability of such multi-view
data
given the corresponding rank constraints. In contrast with the existing
analysis on Grassmannian manifold for a single-view matrix, i.e., conventional
matrix completion, we propose a geometric analysis on the manifold structure
for multi-view data to incorporate more than one rank constraint. We provide a
deterministic necessary and sufficient condition on the sampling pattern for
finite completability. We also give a probabilistic condition in terms of the
number of samples per column that guarantees finite completability with high
probability. Finally, using the developed tools, we derive the deterministic
and probabilistic guarantees for unique completability.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,367 | Grid-forming Control for Power Converters based on Matching of Synchronous Machines | We consider the problem of grid-forming control of power converters in
low-inertia power systems. Starting from an average-switch three-phase inverter
model, we draw parallels to a synchronous machine (SM) model and propose a
novel grid-forming converter control strategy which dwells upon the main
characteristic of a SM: the presence of an internal rotating magnetic field. In
particular, we augment the converter system with a virtual oscillator whose
frequency is driven by the DC-side voltage measurement and which sets the
converter pulse-width-modulation signal, thereby achieving exact matching
between the converter in closed-loop and the SM dynamics. We then provide a
sufficient condition assuring existence, uniqueness, and global asymptotic
stability of equilibria in a coordinate frame attached to the virtual
oscillator angle. By actuating the DC-side input of the converter we are able
to enforce this sufficient condition. In the same setting, we highlight strict
incremental passivity, droop, and power-sharing properties of the proposed
framework, which are compatible with conventional requirements of power system
operation. We subsequently adopt disturbance decoupling techniques to design
additional control loops that regulate the DC-side voltage, as well as AC-side
frequency and amplitude, while in the end validating them with numerical
experiments.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,368 | Characterizing Dust Attenuation in Local Star-Forming Galaxies: Near-Infrared Reddening and Normalization | We characterize the near-infrared (NIR) dust attenuation for a sample of
~5500 local (z<0.1) star-forming galaxies and obtain an estimate of their
average total-to-selective attenuation $k(\lambda)$. We utilize data from the
United Kingdom Infrared Telescope (UKIRT) and the Two Micron All-Sky Survey
(2MASS), which is combined with previously measured UV-optical data for these
galaxies. The average attenuation curve is slightly lower in the far-UV than
local starburst galaxies, by roughly 15%, but appears similar at longer
wavelengths with a total-to-selective normalization at V-band of
$R_V=3.67\substack{+0.44 \\ -0.35}$. Under the assumption of energy balance,
the total attenuated energy inferred from this curve is found to be broadly
consistent with the observed infrared dust emission ($L_{\rm{TIR}}$) in a small
sample of local galaxies for which far-IR measurements are available. However,
the significant scatter in this quantity among the sample may reflect large
variations in the attenuation properties of individual galaxies. We also derive
the attenuation curve for sub-populations of the main sample, separated
according to mean stellar population age (via $D_n4000$), specific star
formation rate, stellar mass, and metallicity, and find that they show only
tentative trends with low significance, at least over the range which is probed
by our sample. These results indicate that a single curve is reasonable for
applications seeking to broadly characterize large samples of galaxies in the
local Universe, while applications to individual galaxies would yield large
uncertainties and are not recommended.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,369 | Sequential Checking: Reallocation-Free Data-Distribution Algorithm for Scale-out Storage | Using tape or optical devices for scale-out storage is one option for storing
a vast amount of data. However, it is impossible or almost impossible to
rewrite data with such devices. Thus, scale-out storage using such devices
cannot use standard data-distribution algorithms, because those algorithms
rewrite data to move it between the servers constituting the scale-out storage
when the server configuration is changed. Even when rewritable devices are used
for scale-out storage, rewriting data is very hard when the server capacity is
huge and the server configuration is changed. In this paper, a
data-distribution algorithm called Sequential Checking is proposed, which can
be used for scale-out storage composed of devices that can hardly rewrite data.
Sequential Checking 1) does not need to move data between servers when the
server configuration is changed, 2) distributes data in amounts that depend on
each server's volume, 3) selects a unique server when a datum is written, and
4) selects candidate servers when a datum is read (there are few such servers
in most cases) and identifies among them the unique server that stores the
newest datum. These basic characteristics were confirmed through proofs and
simulations. Data can be read by accessing 1.98 servers on average in a storage
system comprising 256 servers under a realistic condition, and evaluations in a
real environment confirm that the access time is acceptable. Sequential
Checking makes scale-out storage using tape or optical devices, or
huge-capacity servers, a realistic option.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,370 | Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels | Noisy PN learning is the problem of binary classification when training
examples may be mislabeled (flipped) uniformly with noise rate rho1 for
positive examples and rho0 for negative examples. We propose Rank Pruning (RP)
to solve noisy PN learning and the open problem of estimating the noise rates,
i.e. the fraction of wrong positive and negative labels. Unlike prior
solutions, RP is time-efficient and general, requiring O(T) for any
unrestricted choice of probabilistic classifier with T fitting time. We prove
RP has consistent noise estimation and equivalent expected risk as learning
with uncorrupted labels in ideal conditions, and derive closed-form solutions
when conditions are non-ideal. RP achieves state-of-the-art noise estimation
and F1, error, and AUC-PR for both MNIST and CIFAR datasets, regardless of the
amount of noise and performs similarly impressively when a large portion of
training examples are noise drawn from a third distribution. To highlight, RP
with a CNN classifier can predict if an MNIST digit is a "one" or "not" with
only 0.25% error, and 0.46% error across all digits, even when 50% of positive
examples are mislabeled and 50% of observed positive labels are mislabeled
negative examples.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,371 | code2vec: Learning Distributed Representations of Code | We present a neural model for representing snippets of code as continuous
distributed vectors ("code embeddings"). The main idea is to represent a code
snippet as a single fixed-length $\textit{code vector}$, which can be used to
predict semantic properties of the snippet. This is performed by decomposing
code to a collection of paths in its abstract syntax tree, and learning the
atomic representation of each path $\textit{simultaneously}$ with learning how
to aggregate a set of them. We demonstrate the effectiveness of our approach by
using it to predict a method's name from the vector representation of its body.
We evaluate our approach by training a model on a dataset of 14M methods. We
show that code vectors trained on this dataset can predict method names from
files that were completely unobserved during training. Furthermore, we show
that our model learns useful method name vectors that capture semantic
similarities, combinations, and analogies. Compared to previous techniques
over the same dataset, our approach obtains a relative improvement of over 75%,
being the first to successfully predict method names based on a large,
cross-project corpus. Our trained model, visualizations and vector
similarities are available as an interactive online demo at
this http URL. The code, data, and trained models are available at
this https URL.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,372 | Learning a Local Feature Descriptor for 3D LiDAR Scans | Robust data association is necessary for virtually every SLAM system and
finding corresponding points is typically a preprocessing step for scan
alignment algorithms. Traditionally, handcrafted feature descriptors were used
for these problems but recently learned descriptors have been shown to perform
more robustly. In this work, we propose a local feature descriptor for 3D LiDAR
scans. The descriptor is learned using a Convolutional Neural Network (CNN).
Our proposed architecture consists of a Siamese network for learning a feature
descriptor and a metric learning network for matching the descriptors. We also
present a method for estimating local surface patches and obtaining
ground-truth correspondences. In extensive experiments, we compare our learned
feature descriptor with existing 3D local descriptors and report highly
competitive results for multiple experiments in terms of matching accuracy and
computation time.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,373 | Dynamical tides in exoplanetary systems containing Hot Jupiters: confronting theory and observations | We study the effect of dynamical tides associated with the excitation of
gravity waves in an interior radiative region of the central star on orbital
evolution in observed systems containing Hot Jupiters. We consider WASP-43,
Ogle-tr-113, WASP-12, and WASP-18 which contain stars on the main sequence
(MS). For these systems there are observational estimates regarding the rate of
change of the orbital period. We also investigate Kepler-91 which contains an
evolved giant star. We adopt the formalism of Ivanov et al. for calculating the
orbital evolution.
For the MS stars we determine expected rates of orbital evolution under
different assumptions about the amount of dissipation acting on the tides,
estimate the effect of stellar rotation for the two most rapidly rotating stars
and compare results with observations. All cases apart from possibly WASP-43
are consistent with a regime in which gravity waves are damped during their
propagation over the star. However, at present this is not definitive as
observational errors are large. We find that although it is expected to apply
to Kepler-91, linear radiative damping cannot explain this dissipation regime
applying to MS stars. Thus, a nonlinear mechanism may be needed.
Kepler-91 is found to be such that the time scale for evolution of the star
is comparable to that for the orbit. This implies that significant orbital
circularisation may have occurred through tides acting on the star.
Quasi-static tides, stellar winds, hydrodynamic drag and tides acting on the
planet have likely played a minor role.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,374 | Metastability versus collapse following a quench in attractive Bose-Einstein condensates | We consider a Bose-Einstein condensate (BEC) with attractive two-body
interactions in a cigar-shaped trap, initially prepared in its ground state for
a given negative scattering length, which is quenched to a larger absolute
value of the scattering length. Using the mean-field approximation, we compute
numerically, for an experimentally relevant range of aspect ratios and initial
strengths of the coupling, two critical values of quench: one corresponds to
the weakest attraction strength the quench to which causes the system to
collapse before completing even a single return from the narrow configuration
("perihelion") in its breathing cycle. The other is a similar critical point
for the occurrence of collapse before completing two returns. In the latter
case, we also compute the limiting value, as we keep increasing the strength of
the post-quench attraction towards its critical value, of the time interval
between the first two perihelia. We also use a Gaussian variational model to
estimate the critical quenched attraction strength below which the system is
stable against the collapse for long times. These time intervals and critical
attraction strengths---apart from being fundamental properties of nonlinear
dynamics of self-attractive BECs---may provide clues to the design of upcoming
experiments that are trying to create robust BEC breathers.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,375 | A similarity criterion for sequential programs using truth-preserving partial functions | The execution of sequential programs allows them to be represented using
mathematical functions formed by the composition of statements following one
after the other. Each such statement is in itself a partial function, which
allows only inputs satisfying a particular Boolean condition to carry forward
the execution and hence, the composition of such functions (as a result of
sequential execution of the statements) strengthens the valid set of input
state variables for the program to complete its execution and halt successfully.
With this thought in mind, this paper tries to study a particular class of
partial functions, which tend to preserve the truth of two given Boolean
conditions whenever the state variables satisfying one are mapped through such
functions into a domain of state variables satisfying the other. The existence
of such maps allows us to study isomorphism between different programs, based
not only on their structural characteristics (e.g. the kind of programming
constructs used and the overall input-output transformation), but also the
nature of computation performed on seemingly different inputs. Consequently, we
can now relate programs which perform a given type of computation, like a loop
counting down indefinitely, without caring about the input sets they work on
individually or the set of statements each program contains.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,376 | Subsampling large graphs and invariance in networks | Specify a randomized algorithm that, given a very large graph or network,
extracts a random subgraph. What can we learn about the input graph from a
single subsample? We derive laws of large numbers for the sampler output, by
relating randomized subsampling to distributional invariance: Assuming an
invariance holds is tantamount to assuming the sample has been generated by a
specific algorithm. That in turn yields a notion of ergodicity. Sampling
algorithms induce model classes---graphon models, sparse generalizations of
exchangeable graphs, and random multigraphs with exchangeable edges can all be
obtained in this manner, and we specialize our results to a number of examples.
One class of sampling algorithms emerges as special: Roughly speaking, those
defined as limits of random transformations drawn uniformly from certain
sequences of groups. Some known pathologies of network models based on graphons
are explained as a form of selection bias.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,377 | Taylor coefficients of non-holomorphic Jacobi forms and applications | In this paper, we prove modularity results of Taylor coefficients of certain
non-holomorphic Jacobi forms. It is well-known that Taylor coefficients of
holomorphic Jacobi forms are quasimodular forms. However, recently there has
been wide interest in Taylor coefficients of non-holomorphic Jacobi forms, for
example those arising in combinatorics. In this paper, we show that such
coefficients
still inherit modular properties. We then work out the precise spaces in which
these coefficients lie for two examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,378 | Beamspace SU-MIMO for Future Millimeter Wave Wireless Communications | For future networks (i.e., the fifth generation (5G) wireless networks and
beyond), millimeter-wave (mmWave) communication with large available unlicensed
spectrum is a promising technology that enables gigabit multimedia
applications. Thanks to the short wavelength of mmWave radio, massive antenna
arrays can be packed into the limited dimensions of mmWave transceivers.
Therefore, with directional beamforming (BF), both mmWave transmitters (MTXs)
and mmWave receivers (MRXs) are capable of supporting multiple beams in 5G
networks. However, for the transmission between an MTX and an MRX, most works
have only considered a single beam, which means that they do not exploit the
full potential of mmWave. Furthermore, the connectivity of single-beam
transmission can easily be blocked. In this context, we propose a single-user
multi-beam concurrent transmission scheme for future mmWave networks with
multiple reflected paths. Based on spatial spectrum reuse, the scheme can be
described as a multiple-input multiple-output (MIMO) technique in beamspace
(i.e., in the beam-number domain). Moreover, this study investigates the
challenges and potential solutions for implementing this scheme, including
multibeam selection, cooperative beam tracking, multi-beam power allocation and
synchronization. The theoretical and numerical results show that the proposed
beamspace SU-MIMO can largely improve the achievable rate of the transmission
between an MTX and an MRX and, meanwhile, can maintain the connectivity.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,379 | Learning Robust Visual-Semantic Embeddings | Many of the existing methods for learning a joint embedding of images and text
use only supervised information from paired images and their textual
attributes.
Taking advantage of the recent success of unsupervised learning in deep neural
networks, we propose an end-to-end learning framework that is able to extract
more robust multi-modal representations across domains. The proposed method
combines representation learning models (i.e., auto-encoders) together with
cross-domain learning criteria (i.e., Maximum Mean Discrepancy loss) to learn
joint embeddings for semantic and visual features. A novel technique of
unsupervised-data adaptation inference is introduced to construct more
comprehensive embeddings for both labeled and unlabeled data. We evaluate our
method on Animals with Attributes and Caltech-UCSD Birds 200-2011 dataset with
a wide range of applications, including zero and few-shot image recognition and
retrieval, from inductive to transductive settings. Empirically, we show that
our framework improves over the current state of the art on many of the
considered tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,380 | Quantitative estimates of the surface habitability of Kepler-452b | Kepler-452b is currently the best example of an Earth-size planet in the
habitable zone of a sun-like star, a type of planet whose number of detections
is expected to increase in the future. Searching for biosignatures in the
supposedly thin atmospheres of these planets is a challenging goal that
requires a careful selection of the targets. Under the assumption of a
rocky-dominated nature for Kepler-452b, we considered it as a test case to
calculate a temperature-dependent habitability index, $h_{050}$, designed to
maximize the potential presence of biosignature-producing activity (Silva et
al.\ 2016). The surface temperature has been computed for a broad range of
climate factors using a climate model designed for terrestrial-type exoplanets
(Vladilo et al.\ 2015). After fixing the planetary data according to the
experimental results (Jenkins et al.\ 2015), we changed the surface gravity,
CO$_2$ abundance, surface pressure, orbital eccentricity, rotation period, axis
obliquity and ocean fraction within the range of validity of our model. For
most choices of parameters we find habitable solutions with $h_{050}>0.2$ only
for CO$_2$ partial pressure $p_\mathrm{CO_2} \lesssim 0.04$\,bar. At this
limiting value of CO$_2$ abundance the planet is still habitable if the total
pressure is $p \lesssim 2$\,bar. In all cases the habitability drops for
eccentricity $e \gtrsim 0.3$. Changes of rotation period and obliquity affect
the habitability through their impact on the equator-pole temperature
difference rather than on the mean global temperature. We calculated the
variation of $h_{050}$ resulting from the luminosity evolution of the host star
for a wide range of input parameters. Only a small combination of parameters
yield habitability-weighted lifetimes $\gtrsim 2$\,Gyr, sufficiently long to
develop atmospheric biosignatures still detectable at the present time.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,381 | Design and implementation of dynamic logic gates and R-S flip-flop using quasiperiodically driven Murali-Lakshmanan-Chua circuit | We report the propagation of a square wave signal in a quasi-periodically
driven Murali-Lakshmanan-Chua (QPDMLC) circuit system. It is observed that
signal propagation is possible only above a certain threshold strength of the
square wave or digital signal and all the values above the threshold amplitude
are termed as 'region of signal propagation'. Then, we extend this region of
signal propagation to perform various logical operations like AND/NAND/OR/NOR
and hence it is also designated as the 'region of logical operation'. Based on
this region, we propose implementing the dynamic logic gates, namely
AND/NAND/OR/NOR, which can be decided by the asymmetrical input square waves
without altering the system parameters. Further, we show that a single QPDMLC
system will produce simultaneously two outputs which are complementary to each
other. As a result, a single QPDMLC system yields either AND as well as NAND or
OR as well as NOR gates simultaneously. Then we combine the corresponding two
QPDMLC systems in a cross-coupled way and report that its dynamics mimics that
of fundamental R-S flip-flop circuit. All these phenomena have been explained
with analytical solutions of the circuit equations characterizing the system
and finally the results are compared with the corresponding numerical and
experimental analysis.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,382 | Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects | We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep
generative model for videos of moving objects. It can reliably discover and
track objects throughout the sequence of frames, and can also generate future
frames conditioning on the current frame, thereby simulating expected motion of
objects. This is achieved by explicitly encoding object presence, locations and
appearances in the latent variables of the model. SQAIR retains all strengths
of its predecessor, Attend, Infer, Repeat (AIR, Eslami et. al., 2016),
including learning in an unsupervised manner, and addresses its shortcomings.
We use a moving multi-MNIST dataset to show limitations of AIR in detecting
overlapping or partially occluded objects, and show how SQAIR overcomes them by
leveraging temporal consistency of objects. Finally, we also apply SQAIR to
real-world pedestrian CCTV data, where it learns to reliably detect, track and
generate walking pedestrians with no supervision.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,383 | Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks | We present a new local descriptor for 3D shapes, directly applicable to a
wide range of shape analysis problems such as point correspondences, semantic
segmentation, affordance prediction, and shape-to-scan matching. The descriptor
is produced by a convolutional network that is trained to embed geometrically
and semantically similar points close to one another in descriptor space. The
network processes surface neighborhoods around points on a shape that are
captured at multiple scales by a succession of progressively zoomed out views,
taken from carefully selected camera positions. We leverage two extremely large
sources of data to train our network. First, since our network processes
rendered views in the form of 2D images, we repurpose architectures pre-trained
on massive image datasets. Second, we automatically generate a synthetic dense
point correspondence dataset by non-rigid alignment of corresponding shape
parts in a large collection of segmented 3D models. As a result of these design
choices, our network effectively encodes multi-scale local context and
fine-grained surface detail. Our network can be trained to produce either
category-specific descriptors or more generic descriptors by learning from
multiple shape categories. Once trained, at test time, the network extracts
local descriptors for shapes without requiring any part segmentation as input.
Our method can produce effective local descriptors even for shapes whose
category is unknown or different from the ones used while training. We
demonstrate through several experiments that our learned local descriptors are
more discriminative compared to state of the art alternatives, and are
effective in a variety of shape analysis applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,384 | Alternating minimization for dictionary learning with random initialization | We present theoretical guarantees for an alternating minimization algorithm
for the dictionary learning/sparse coding problem. The dictionary learning
problem is to factorize vector samples $y^{1},y^{2},\ldots, y^{n}$ into an
appropriate basis (dictionary) $A^*$ and sparse vectors $x^{1*},\ldots,x^{n*}$.
Our algorithm is a simple alternating minimization procedure that switches
between $\ell_1$ minimization and gradient descent in alternate steps.
Dictionary learning and specifically alternating minimization algorithms for
dictionary learning are well studied both theoretically and empirically.
However, in contrast to previous theoretical analyses for this problem, we
replace the condition on the operator norm (that is, the largest magnitude
singular value) of the true underlying dictionary $A^*$ with a condition on the
matrix infinity norm (that is, the largest magnitude term). This not only
allows us to get convergence rates for the error of the estimated dictionary
measured in the matrix infinity norm, but also ensures that a random
initialization will provably converge to the global optimum. Our guarantees are
under a reasonable generative model that allows for dictionaries with growing
operator norms, and can handle an arbitrary level of overcompleteness, while
having sparsity that is information theoretically optimal. We also establish
upper bounds on the sample complexity of our algorithm.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,385 | Optimal Transmission Line Switching under Geomagnetic Disturbances | In recent years, there have been increasing concerns about how geomagnetic
disturbances (GMDs) impact electrical power systems. Geomagnetically-induced
currents (GICs) can saturate transformers, induce hot spot heating and increase
reactive power losses. These effects can potentially cause catastrophic damage
to transformers and severely impact the ability of a power system to deliver
power. To address this problem, we develop a model of GIC impacts to power
systems that includes 1) GIC thermal capacity of transformers as a function of
normal Alternating Current (AC) and 2) reactive power losses as a function of
GIC. We use this model to derive an optimization problem that protects power
systems from GIC impacts through line switching, generator redispatch, and load
shedding. We employ state-of-the-art convex relaxations of AC power flow
equations to lower bound the objective. We demonstrate the approach on a
modified RTS96 system and the UIUC 150-bus system and show that line switching
is an effective means to mitigate GIC impacts. We also provide a sensitivity
analysis of optimal switching decisions with respect to GMD direction.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,386 | Image Forgery Localization Based on Multi-Scale Convolutional Neural Networks | In this paper, we propose to utilize Convolutional Neural Networks (CNNs) and
the segmentation-based multi-scale analysis to locate tampered areas in digital
images. First, to deal with color input sliding windows of different scales, a
unified CNN architecture is designed. Then, we elaborately design the training
procedures of CNNs on sampled training patches. With a set of robust
multi-scale tampering detectors based on CNNs, complementary tampering
possibility maps can be generated. Last but not least, a segmentation-based
method is proposed to fuse the maps and generate the final decision map. By
exploiting the benefits of both the small-scale and large-scale analyses, the
segmentation-based multi-scale analysis can lead to a performance leap in
forgery localization of CNNs. Numerous experiments are conducted to demonstrate
the effectiveness and efficiency of our method.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,387 | The QKP limit of the quantum Euler-Poisson equation | In this paper, we consider the derivation of the Kadomtsev-Petviashvili (KP)
equation for cold ion-acoustic wave in the long wavelength limit of the
two-dimensional quantum Euler-Poisson system, under different scalings for
varying directions in the Gardner-Morikawa transform. It is shown that the
types of the KP equation depend on the scaled quantum parameter $H>0$. The
QKP-I is derived for $H>2$, QKP-II for $0<H<2$ and the dispersive-less KP (dKP)
equation for the critical case $H=2$. The rigorous proof for these limits is
given in the well-prepared initial data case, and the norm that is chosen to
close the proof is anisotropic in the two directions, in accordance with the
anisotropic structure of the KP equation as well as the Gardner-Morikawa
transform. The results can be generalized in several directions.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,388 | Variational Implicit Processes | This paper introduces the variational implicit processes (VIPs), a Bayesian
nonparametric method based on a class of highly flexible priors over functions.
Similar to Gaussian processes (GPs), in implicit processes (IPs), an implicit
multivariate prior (data simulators, Bayesian neural networks, etc.) is placed
over any finite collections of random variables. A novel and efficient
variational inference algorithm for IPs is derived using wake-sleep updates,
which gives analytic solutions and allows scalable hyper-parameter learning
with stochastic optimization. Experiments on real-world regression datasets
demonstrate that VIPs return better uncertainty estimates and superior
performance over existing inference methods for GPs and Bayesian neural
networks. With a Bayesian LSTM as the implicit prior, the proposed approach
achieves state-of-the-art results on predicting power conversion efficiency of
molecules based on raw chemical formulas.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,389 | Silicon Micromachined High-contrast Artificial Dielectrics for Millimeter-wave Transformation Optics Antennas | Transformation optics methods and gradient index electromagnetic structures
rely upon spatially varied arbitrary permittivity. This, along with recent
interest in millimeter-wave lens-based antennas demands high spatial resolution
dielectric variation. Perforated media have been used to fabricate gradient
index structures from microwaves to THz but are often limited in contrast. We
show that by employing regular polygon unit-cells (hexagon, square, and
triangle) on matched lattices we can realize very high contrast permittivity
ranging from 0.1-1.0 of the background permittivity. Silicon micromachining
(Bosch process) is performed on high resistivity Silicon wafers to achieve a
minimum permittivity of 1.25 (10% of Silicon) in the WR28 waveguide band,
specifically targeting the proposed 39 GHz 5G communications band. The method
is valid into the THz band.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,390 | Pseudogap and Fermi surface in the presence of spin-vortex checkerboard for 1/8-doped lanthanum cuprates | Lanthanum family of high-temperature cuprate superconductors is known to
exhibit both spin and charge electronic modulations around doping level 1/8. We
assume that these modulations have the character of two-dimensional spin-vortex
checkerboard and investigate whether this assumption is consistent with the
Fermi surface and the pseudogap measured by angle-resolved photo-emission
spectroscopy. We also explore the possibility of observing quantum oscillations
of transport coefficients in such a background. These investigations are based
on a model of non-interacting spin-1/2 fermions hopping on a square lattice and
coupled through spins to a magnetic field imitating spin-vortex checkerboard.
The main results of this article include (i) calculation of Fermi surface
containing Fermi arcs at the positions in the Brillouin zone largely consistent
with experiments; (ii) identification of factors complicating the observations
of quantum oscillations in the presence of spin modulations; and (iii)
investigation of the symmetries of the resulting electronic energy bands,
which, in particular, indicates that each band is double-degenerate and, in
addition, has at least one conical point, where it touches another
double-degenerate band. We discuss possible implications these cones may have
for the transport properties and the pseudogap.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,391 | Revealing the cluster of slow transients behind a large slow slip event | Capable of reaching similar magnitudes to large megathrust earthquakes
($M_w>7$), slow slip events play a major role in accommodating tectonic motion
on plate boundaries. These slip transients are the slow release of built-up
tectonic stress that are geodetically imaged as a predominantly aseismic
rupture, which is smooth in both time and space. We demonstrate here that large
slow slip events are in fact a cluster of short-duration slow transients. Using
a dense catalog of low-frequency earthquakes as a guide, we investigate the
$M_w7.5$ slow slip event that occurred in 2006 along the subduction interface
40 km beneath Guerrero, Mexico. We show that while the long-period surface
displacement as recorded by GPS suggests a six-month duration, motion in the
direction of tectonic release occurs only sporadically over 55 days and its
surface signature is attenuated by rapid relocking of the plate interface. These
results demonstrate that our current conceptual model of slow and continuous
rupture is an artifact of low-resolution geodetic observations of a
superposition of small, clustered slip events. Our proposed description of slow
slip as a cluster of slow transients implies that we systematically
overestimate the duration $T$ and underestimate the moment magnitude $M$ of
large slow slip events.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,392 | General $N$-solitons and their dynamics in several nonlocal nonlinear Schrödinger equations | General $N$-solitons in three recently-proposed nonlocal nonlinear
Schrödinger equations are presented. These nonlocal equations include the
reverse-space, reverse-time, and reverse-space-time nonlinear Schrödinger
equations, which are nonlocal reductions of the Ablowitz-Kaup-Newell-Segur
(AKNS) hierarchy. It is shown that general $N$-solitons in these different
equations can be derived from the same Riemann-Hilbert solutions of the AKNS
hierarchy, except that symmetry relations on the scattering data are different
for these equations. This Riemann-Hilbert framework allows us to identify new
types of solitons with novel eigenvalue configurations in the spectral plane.
Dynamics of $N$-solitons in these equations is also explored. In all the three
nonlocal equations, a generic feature of their solutions is repeated
collapsing. In addition, multi-solitons can behave very differently from
fundamental solitons and may not correspond to a nonlinear superposition of
fundamental solitons.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,393 | Revisiting wireless network jamming by SIR-based considerations and Multiband Robust Optimization | We revisit the mathematical models for wireless network jamming introduced by
Commander et al.: we first point out the strong connections with classical
wireless network design and then we propose a new model based on the explicit
use of signal-to-interference quantities. Moreover, to address the intrinsic
uncertain nature of the jamming problem and tackle the peculiar right-hand-side
(RHS) uncertainty of the problem, we propose an original robust cutting-plane
algorithm drawing inspiration from Multiband Robust Optimization. Finally, we
assess the performance of the proposed cutting plane algorithm by experiments
on realistic network instances.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,394 | New models for symbolic data analysis | Symbolic data analysis (SDA) is an emerging area of statistics based on
aggregating individual level data into group-based distributional summaries
(symbols), and then developing statistical methods to analyse them. It is ideal
for analysing large and complex datasets, and has immense potential to become a
standard inferential technique in the near future. However, existing SDA
techniques are either non-inferential, do not easily permit meaningful
statistical models, are unable to distinguish between competing models, or are
based on simplifying assumptions that are known to be false. Further, the
procedure for constructing symbols from the underlying data is erroneously not
considered relevant to the resulting statistical analysis. In this paper we
introduce a new general method for constructing likelihood functions for
symbolic data based on a desired probability model for the underlying classical
data, while only observing the distributional summaries. This approach resolves
many of the conceptual and practical issues with current SDA methods, opens the
door for new classes of symbol design and construction, in addition to
developing SDA as a viable tool to enable and improve upon classical data
analyses, particularly for very large and complex datasets. This work creates a
new direction for SDA research, which we illustrate through several real and
simulated data analyses.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,395 | Soft Methodology for Cost-and-error Sensitive Classification | Many real-world data mining applications need varying cost for different
types of classification errors and thus call for cost-sensitive classification
algorithms. Existing algorithms for cost-sensitive classification are
successful in terms of minimizing the cost, but can result in a high error rate
as the trade-off. The high error rate holds back the practical use of those
algorithms. In this paper, we propose a novel cost-sensitive classification
methodology that takes both the cost and the error rate into account. The
methodology, called soft cost-sensitive classification, is established from a
multicriteria optimization problem of the cost and the error rate, and can be
viewed as regularizing cost-sensitive classification with the error rate. The
simple methodology allows immediate improvements of existing cost-sensitive
classification algorithms. Experiments on the benchmark and the real-world data
sets show that our proposed methodology indeed achieves lower test error rates
and similar (sometimes lower) test costs than existing cost-sensitive
classification algorithms. We also demonstrate that the methodology can be
extended for considering the weighted error rate instead of the original error
rate. This extension is useful for tackling unbalanced classification problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,396 | Raman LIDARs and atmospheric calibration for the Cherenkov Telescope Array | The Cherenkov Telescope Array (CTA) is the next generation of Imaging
Atmospheric Cherenkov Telescopes. It will reach a sensitivity and energy
resolution never obtained until now by any other high energy gamma-ray
experiment. Understanding the systematic uncertainties in general will be a
crucial issue for the performance of CTA. It is well known that atmospheric
conditions contribute particularly to this aspect. Within the CTA consortium,
several groups are currently building Raman LIDARs to be installed on the two
sites. Raman LIDARs are devices composed of a powerful laser that shoots into
the atmosphere, a collector that gathers the backscattered light from molecules
and aerosols, a photo-sensor, an optical module that spectrally selects
wavelengths of interest, and a read-out system. Unlike currently used elastic
LIDARs, they can help reduce the systematic uncertainties of the molecular and
aerosol components of the atmosphere to <5%, so that CTA can achieve its energy
resolution requirement of <10% uncertainty at 1 TeV. All the Raman LIDARs in
this work have design features that make them different from typical Raman
LIDARs used in atmospheric science and are characterized by large collecting
mirrors (2.5 m$^2$) and reduced acquisition time. They provide both multiple
elastic and Raman read-out channels and a custom-made optics design. In this
paper, the motivation for Raman LIDARs, their design and the status of
advancement of these technologies are described.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,397 | Generalized notions of sparsity and restricted isometry property. Part II: Applications | The restricted isometry property (RIP) is a universal tool for data recovery.
We explore the implication of the RIP in the framework of generalized sparsity
and group measurements introduced in the Part I paper. It turns out that for a
given measurement instrument the number of measurements for RIP can be improved
by optimizing over families of Banach spaces. Second, we investigate the
preservation of difference of two sparse vectors, which is not trivial in
generalized models. Third, we extend the RIP of partial Fourier measurements at
optimal scaling of number of measurements with random sign to far more general
group structured measurements. Lastly, we also obtain RIP in infinite dimension
in the context of Fourier measurement concepts with sparsity naturally replaced
by smoothness assumptions.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,398 | Mellin-Meijer-kernel density estimation on $\mathbb{R}^+$ | Nonparametric kernel density estimation is a very natural procedure which
simply makes use of the smoothing power of the convolution operation. Yet, it
performs poorly when the density of a positive variable is to be estimated
(boundary issues, spurious bumps in the tail). So various extensions of the
basic kernel estimator allegedly suitable for $\mathbb{R}^+$-supported
densities, such as those using Gamma or other asymmetric kernels, abound in the
literature. Those, however, are not based on any valid smoothing operation
analogous to the convolution, which typically leads to inconsistencies. By
contrast, in this paper a kernel estimator for $\mathbb{R}^+$-supported
densities is defined by making use of the Mellin convolution, the natural
analogue of the usual convolution on $\mathbb{R}^+$. From there, a very
transparent theory flows and leads to new type of asymmetric kernels strongly
related to Meijer's $G$-functions. The numerous pleasant properties of this
`Mellin-Meijer-kernel density estimator' are demonstrated in the paper. Its
pointwise and $L_2$-consistency (with optimal rate of convergence) is
established for a large class of densities, including densities unbounded at 0
and showing power-law decay in their right tail. Its practical behaviour is
investigated further through simulations and some real data analyses.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,399 | Gene Ontology (GO) Prediction using Machine Learning Methods | We applied machine learning to predict whether a gene is involved in axon
regeneration. We extracted 31 features from different databases and trained
five machine learning models. Our optimal model, a Random Forest Classifier
with 50 submodels, yielded a test score of 85.71%, which is 4.1% higher than
the baseline score. We concluded that our models have some predictive
capability. Similar methodology and features could be applied to predict other
Gene Ontology (GO) terms.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,400 | Dimension Spectra of Lines | This paper investigates the algorithmic dimension spectra of lines in the
Euclidean plane. Given any line L with slope a and vertical intercept b, the
dimension spectrum sp(L) is the set of all effective Hausdorff dimensions of
individual points on L. We draw on Kolmogorov complexity and geometrical
arguments to show that if the effective Hausdorff dimension dim(a, b) is equal
to the effective packing dimension Dim(a, b), then sp(L) contains a unit
interval. We also show that, if the dimension dim(a, b) is at least one, then
sp(L) is infinite. Together with previous work, this implies that the dimension
spectrum of any line is infinite.
| 1 | 0 | 0 | 0 | 0 | 0 |