title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Local asymptotic equivalence of pure quantum states ensembles and quantum Gaussian white noise | Quantum technology is increasingly relying on specialised statistical
inference methods for analysing quantum measurement data. This motivates the
development of "quantum statistics", a field that is shaping up at the overlap
of quantum physics and "classical" statistics. One of the less investigated
topics to date is that of statistical inference for infinite dimensional
quantum systems, which can be seen as a quantum counterpart of non-parametric
statistics. In this paper we analyse the asymptotic theory of quantum
statistical models consisting of ensembles of quantum systems which are
identically prepared in a pure state. In the limit of large ensembles we
establish the local asymptotic equivalence (LAE) of this i.i.d. model to a
quantum Gaussian white noise model. We use the LAE result in order to establish
minimax rates for the estimation of pure states belonging to Hermite-Sobolev
classes of wave functions. Moreover, for quadratic functional estimation of the
same states we note an elbow effect in the rates, whereas for testing a pure
state a sharp parametric rate is attained over the nonparametric
Hermite-Sobolev class.
| 0 | 0 | 1 | 1 | 0 | 0 |
Training DNNs with Hybrid Block Floating Point | The wide adoption of DNNs has given birth to unrelenting computing
requirements, forcing datacenter operators to adopt domain-specific
accelerators to train them. These accelerators typically employ densely packed
full precision floating-point arithmetic to maximize performance per area.
Ongoing research efforts seek to further increase that performance density by
replacing floating-point with fixed-point arithmetic. However, a significant
roadblock for these attempts has been fixed point's narrow dynamic range, which
is insufficient for DNN training convergence. We identify block floating point
(BFP) as a promising alternative representation since it exhibits wide dynamic
range and enables the majority of DNN operations to be performed with
fixed-point logic. Unfortunately, BFP alone introduces several limitations that
preclude its direct applicability. In this work, we introduce HBFP, a hybrid
BFP-FP approach, which performs all dot products in BFP and other operations in
floating point. HBFP delivers the best of both worlds: the high accuracy of
floating point at the superior hardware density of fixed point. For a wide
variety of models, we show that HBFP matches floating point's accuracy while
enabling hardware implementations that deliver up to 8.5x higher throughput.
| 1 | 0 | 0 | 1 | 0 | 0 |
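The BFP idea in the abstract above can be illustrated with a short sketch: every block of values shares one exponent, so a dot product reduces to integer multiply-accumulates plus a single floating-point rescale. This is an illustrative reconstruction, not HBFP's actual format; the function names, the 8-bit mantissa width, and the exponent/rounding choices are assumptions.

```python
import numpy as np

def to_bfp(block, mantissa_bits=8):
    # One shared exponent for the whole block, chosen from its largest
    # magnitude; per-element fixed-point mantissas. (Illustrative sketch:
    # HBFP's exact exponent selection and rounding may differ.)
    shared_exp = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-30)))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(block / scale), lo, hi).astype(np.int64)
    return mantissas, scale

def bfp_dot(a, b, mantissa_bits=8):
    # With both operands in BFP, the multiply-accumulate is pure integer
    # arithmetic; floating point is needed only for the final rescale.
    ma, sa = to_bfp(a, mantissa_bits)
    mb, sb = to_bfp(b, mantissa_bits)
    return int(np.dot(ma, mb)) * sa * sb
```

For example, `bfp_dot(np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.25, 1.0]))` lands within about one percent of the exact value 4.0.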
Unified Spectral Clustering with Optimal Graph | Spectral clustering has found extensive use in many areas. Most traditional
spectral clustering algorithms work in three separate steps: similarity graph
construction; continuous label learning; and discretizing the learned labels by
k-means clustering. Such common practice has two potential flaws, which may
lead to severe information loss and performance degradation. First, predefined
similarity graph might not be optimal for subsequent clustering. It is
well-accepted that similarity graph highly affects the clustering results. To
this end, we propose to automatically learn similarity information from data
and simultaneously consider the constraint that the similarity matrix has exact
c connected components if there are c clusters. Second, the discrete solution
may deviate from the spectral solution since the k-means method is well known to be
sensitive to the initialization of cluster centers. In this work, we transform
the candidate solution into a new one that better approximates the discrete
one. Finally, those three subtasks are integrated into a unified framework,
with each subtask iteratively boosted by using the results of the others
towards an overall optimal solution. It is known that the performance of a
kernel method is largely determined by the choice of kernels. To tackle this
practical problem of how to select the most suitable kernel for a particular
data set, we further extend our model to incorporate multiple kernel learning
ability. Extensive experiments demonstrate the superiority of our proposed
method as compared to existing clustering approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
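The three decoupled steps the abstract criticizes can be made concrete with a minimal sketch (unnormalized Laplacian, farthest-point-initialized k-means); the function and all of its details are illustrative assumptions, not the paper's unified method:

```python
import numpy as np

def spectral_clustering(W, c, iters=50):
    # Step 1 (similarity graph) is taken as given: W is a predefined
    # symmetric affinity matrix -- exactly the decoupling the paper
    # criticizes. Steps 2 and 3 follow below.
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    F = vecs[:, :c]                           # continuous cluster indicators
    # Step 3: discretize by k-means (farthest-point init for determinism).
    centers = [F[0]]
    for _ in range(c - 1):
        dist = np.min([((F - ctr) ** 2).sum(-1) for ctr in centers], axis=0)
        centers.append(F[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([F[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(c)])
    return labels
```

On a graph with two disconnected components the two null eigenvectors are constant within each component, so the discretization step recovers the components exactly.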
The Effect of Mixing on the Observed Metallicity of the Smith Cloud | Measurements of high-velocity clouds' metallicities provide important clues
about their origins, and hence about whether they play a role in fueling ongoing
star formation in the Galaxy. However, accurate interpretation of these
measurements requires compensating for the galactic material that has been
mixed into the clouds. In order to determine how much the metallicity changes
as a result of this mixing, we have carried out three-dimensional
wind-tunnel-like hydrodynamical simulations of an example cloud. Our model
cloud is patterned after the Smith Cloud, a particularly well-studied cloud of
mass $\sim 5 \times 10^6~M_\odot$. We calculated the fraction of the
high-velocity material that had originated in the galactic halo,
$F_\mathrm{h}$, for various sight lines passing through our model cloud. We
find that $F_\mathrm{h}$ generally increases with distance from the head of the
cloud, reaching $\sim$0.5 in the tail of the cloud. Models in which the
metallicities (relative to solar) of the original cloud, $Z_\mathrm{cl}$, and
of the halo, $Z_\mathrm{h}$, are in the approximate ranges $0.1 \lesssim
Z_\mathrm{cl} \lesssim 0.3$ and $0.7 \lesssim Z_\mathrm{h} \lesssim 1.0$,
respectively, are in rough agreement with the observations. Models with
$Z_\mathrm{h} \sim 0.1$ and $Z_\mathrm{cl} \gtrsim 0.5$ are also in rough
agreement with the observations, but such a low halo metallicity is
inconsistent with recent independent measurements. We conclude that the Smith
Cloud's observed metallicity may not be a true reflection of its original
metallicity and that the cloud's ultimate origin remains uncertain.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semi-Supervised Generation with Cluster-aware Generative Models | Deep generative models trained with large amounts of unlabelled data have
proven to be powerful within the domain of unsupervised learning. Many real
life data sets contain a small number of labelled data points that are
typically disregarded when training generative models. We propose the
Cluster-aware Generative Model, that uses unlabelled information to infer a
latent representation that models the natural clustering of the data, and
additional labelled data points to refine this clustering. The generative
performances of the model significantly improve when labelled information is
exploited, obtaining a log-likelihood of -79.38 nats on permutation invariant
MNIST, while also achieving competitive semi-supervised classification
accuracies. The model can also be trained fully unsupervised, and still improve
the log-likelihood performance with respect to related methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks | We present a generalization bound for feedforward neural networks in terms of
the product of the spectral norm of the layers and the Frobenius norm of the
weights. The generalization bound is derived using a PAC-Bayes analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tidal viscosity of Enceladus | In the preceding paper (Efroimsky 2017), we derived an expression for the
tidal dissipation rate in a homogeneous near-spherical Maxwell body librating
in longitude. Now, by equating this expression to the outgoing energy flux due
to the vapour plumes, we estimate the mean tidal viscosity of Enceladus, under
the assumption that the Enceladean mantle behaves as a Maxwell material. This method
yields a value of $\,0.24\times 10^{14}\;\mbox{Pa~s}\,$ for the mean tidal
viscosity, which is very close to the viscosity of ice near the melting point.
| 0 | 1 | 0 | 0 | 0 | 0 |
MOROCO: The Moldavian and Romanian Dialectal Corpus | In this work, we introduce the MOldavian and ROmanian Dialectal COrpus
(MOROCO), which is freely available for download at
this https URL. The corpus contains 33564 samples of
text (with over 10 million tokens) collected from the news domain. The samples
belong to one of the following six topics: culture, finance, politics, science,
sports and tech. The data set is divided into 21719 samples for training, 5921
samples for validation and another 5924 samples for testing. For each sample,
we provide corresponding dialectal and category labels. This allows us to
perform empirical studies on several classification tasks such as (i) binary
discrimination of Moldavian versus Romanian text samples, (ii) intra-dialect
multi-class categorization by topic and (iii) cross-dialect multi-class
categorization by topic. We perform experiments using a shallow approach based
on string kernels, as well as a novel deep approach based on character-level
convolutional neural networks containing Squeeze-and-Excitation blocks. We also
present and analyze the most discriminative features of our best performing
model, before and after named entity removal.
| 1 | 0 | 0 | 0 | 0 | 0 |
Linearized Einstein's field equations | From the Einstein field equations, in a weak-field approximation and for
speeds small compared to the speed of light in vacuum, the following system is
obtained
\begin{align*}
\nabla \times \overrightarrow{E_g} &= -\frac{1}{c} \frac{\partial \overrightarrow{B_g}}{\partial t}, \\
\nabla \cdot \overrightarrow{E_g} &\approx -4\pi G\rho_g, \\
\nabla \times \overrightarrow{B_g} &\approx -\frac{4\pi G}{c^{2}}\overrightarrow{J_g} + \frac{1}{c}\frac{\partial \overrightarrow{E_g}}{\partial t}, \\
\nabla \cdot \overrightarrow{B_g} &= 0,
\end{align*}
where
$\overrightarrow{E_g}$ is the gravitoelectric field, $\overrightarrow{B_g}$ is
the gravitomagnetic field, $\overrightarrow{J_g}$ is the space-time-mass
current density and $\rho_g$ is the space-time-mass density. This last
gravitoelectromagnetic field system is similar to the Maxwell equations, thus
showing an analogy between the electromagnetic theory and gravitation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Treelogy: A Novel Tree Classifier Utilizing Deep and Hand-crafted Representations | We propose a novel tree classification system called Treelogy, that fuses
deep representations with hand-crafted features obtained from leaf images to
perform leaf-based plant classification. Key to this system are segmentation of
the leaf from an untextured background, using convolutional neural networks
(CNNs) for learning deep representations, extracting hand-crafted features with
a number of image processing techniques, training a linear SVM with feature
vectors, merging SVM and CNN results, and identifying the species from a
dataset of 57 trees. Our classification results show that fusion of deep
representations with hand-crafted features leads to the highest accuracy. The
proposed algorithm is embedded in a smart-phone application, which is publicly
available. Furthermore, our novel dataset, comprising 5408 leaf images, is also
made public for use by other researchers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simulation and stability analysis of oblique shock wave/boundary layer interactions at Mach 5.92 | We investigate flow instability created by an oblique shock wave impinging on
a Mach 5.92 laminar boundary layer at a transitional Reynolds number. The
adverse pressure gradient of the oblique shock causes the boundary layer to
separate from the wall, resulting in the formation of a recirculation bubble.
For sufficiently large oblique shock angles, the recirculation bubble is
unstable to three-dimensional perturbations and the flow bifurcates from its
original laminar state. We utilize Direct Numerical Simulation (DNS) and Global
Stability Analysis (GSA) to show that this first occurs at a critical shock
angle of $\theta = 12.9^\circ$. At bifurcation, the least stable global mode is
non-oscillatory and occurs at a spanwise wavenumber $\beta=0.25$, in
good agreement with DNS results. Examination of the critical global mode
reveals that it originates from an interaction between small spanwise
corrugations at the base of the incident shock, streamwise vortices inside the
recirculation bubble, and spanwise modulation of the bubble strength. The
global mode drives the formation of long streamwise streaks downstream of the
bubble. While the streaks may be amplified by either the lift-up effect or by
Görtler instability, we show that centrifugal instability plays no role in
the upstream self-sustaining mechanism of the global mode. We employ an adjoint
solver to corroborate our physical interpretation by showing that the critical
global mode is most sensitive to base flow modifications that are entirely
contained inside the recirculation bubble.
| 0 | 1 | 0 | 0 | 0 | 0 |
Notes on rate equations in nonlinear continuum mechanics | The paper gives an introduction to rate equations in nonlinear continuum
mechanics which should obey specific transformation rules. Emphasis is placed
on the geometrical nature of the operations involved in order to clarify the
different concepts. The paper is particularly concerned with common classes of
constitutive equations based on corotational stress rates and their proper
implementation in time for solving initial boundary value problems. Hypoelastic
simple shear is considered as an example application for the derived theory and
algorithms.
| 1 | 1 | 0 | 0 | 0 | 0 |
Streaming PCA and Subspace Tracking: The Missing Data Case | For many modern applications in science and engineering, data are collected
in a streaming fashion carrying time-varying information, and practitioners
need to process them with a limited amount of memory and computational
resources in a timely manner for decision making. This often is coupled with
the missing data problem, such that only a small fraction of data attributes
are observed. These complications impose significant, and unconventional,
constraints on the problem of streaming Principal Component Analysis (PCA) and
subspace tracking, which is an essential building block for many inference
tasks in signal processing and machine learning. This survey article reviews a
variety of classical and recent algorithms for solving this problem with low
computational and memory complexities, particularly those applicable in the big
data regime with missing data. We illustrate that streaming PCA and subspace
tracking algorithms can be understood through algebraic and geometric
perspectives, and they need to be adjusted carefully to handle missing data.
Both asymptotic and non-asymptotic convergence guarantees are reviewed.
Finally, we benchmark the performance of several competitive algorithms in the
presence of missing data for both well-conditioned and ill-conditioned systems.
| 0 | 0 | 0 | 1 | 0 | 0 |
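As one concrete instance of the problem setting surveyed above, a zero-filling Oja-style update sketches how a streaming PCA iterate might handle unobserved attributes. This is a simple heuristic for illustration, not any specific algorithm reviewed in the survey; the function name and step size are assumptions.

```python
import numpy as np

def oja_missing(X, M, k, eta=0.01, seed=0):
    # Oja-style streaming update of a d x k orthonormal basis U, using only
    # the observed coordinates of each sample (unobserved entries are
    # zero-filled). X holds samples row-wise; M is a boolean observation
    # mask of the same shape.
    n, d = X.shape
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.standard_normal((d, k)))[0]
    for x, m in zip(X, M):
        xz = np.where(m, x, 0.0)            # zero-fill missing attributes
        U += eta * np.outer(xz, xz @ U)     # rank-one Oja update
        U = np.linalg.qr(U)[0]              # re-orthonormalize, O(d k^2)
    return U
```

Each sample is touched once and then discarded, so the memory footprint is the O(dk) basis itself, which is the constraint the survey emphasizes.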
Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data | In this study, we address the question of whether, and to what extent,
altmetrics are related to the scientific quality of papers (as
measured by peer assessments). Only a few studies have previously investigated
the relationship between altmetrics and assessments by peers. In the first
step, we analyse the underlying dimensions of measurement for traditional
metrics (citation counts) and altmetrics - by using principal component
analysis (PCA) and factor analysis (FA). In the second step, we test the
relationship between the dimensions and quality of papers (as measured by the
post-publication peer-review system of F1000Prime assessments) - using
regression analysis. The results of the PCA and FA show that altmetrics operate
along different dimensions: Mendeley counts are related to citation counts,
while tweets form a separate dimension. The results of the regression analysis
indicate that citation-based metrics and readership counts are significantly
more strongly related to quality than tweets are. This result on the one hand
questions the use of Twitter counts for research evaluation purposes and on the
other hand indicates potential use of Mendeley reader counts.
| 1 | 0 | 0 | 0 | 0 | 0 |
Introduction to the declination function for gerrymanders | The declination is a quantitative method for identifying possible partisan
gerrymanders by analyzing vote distributions. In this expository note we
explain and motivate the definition of the declination. The minimal computer
code required for computing the declination is included. We end by computing
its value on several recent elections.
| 0 | 0 | 0 | 1 | 0 | 0 |
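The declination described above can indeed be computed in a few lines. The sketch below follows the standard construction (districts sorted by vote share and plotted at x = (j - 1/2)/n, with angles taken at the pivot point (l/n, 1/2)); the function name and the exact sign convention are assumptions, and ties at exactly 0.5 are simply ignored here.

```python
import math

def declination(dem_shares):
    # Sort districts by Democratic two-party vote share; district j is
    # plotted at x = (j - 1/2)/n, y = its vote share.
    shares = sorted(dem_shares)
    n = len(shares)
    rep_won = [s for s in shares if s < 0.5]
    dem_won = [s for s in shares if s > 0.5]
    l, k = len(rep_won), len(dem_won)
    if l == 0 or k == 0:
        raise ValueError("undefined when one party wins every seat")
    # Angles of the segments joining each party's center of mass to the
    # pivot point (l/n, 1/2).
    y_r = sum(rep_won) / l
    y_d = sum(dem_won) / k
    theta_r = math.atan((0.5 - y_r) / (l / (2 * n)))
    theta_d = math.atan((y_d - 0.5) / (k / (2 * n)))
    # Normalize the angle difference by pi/2 so the value lies in (-1, 1).
    return 2 * (theta_d - theta_r) / math.pi
```

A vote distribution symmetric about 0.5 gives a declination of zero, while packing one party's voters into few districts pushes the value away from zero.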
TFDASH: A Fairness, Stability, and Efficiency Aware Rate Control Approach for Multiple Clients over DASH | Dynamic adaptive streaming over HTTP (DASH) has recently been widely deployed
in the Internet and adopted in the industry. It, however, does not impose any
adaptation logic for selecting the quality of video fragments requested by
clients and suffers from lackluster performance with respect to a number of
desirable properties: efficiency, stability, and fairness when multiple players
compete for a bottleneck link. In this paper, we propose a throughput-friendly
DASH (TFDASH) rate control scheme for video streaming with multiple clients
over DASH to well balance the trade-offs among efficiency, stability, and
fairness. The core idea behind guaranteeing fairness and high efficiency
(bandwidth utilization) is to avoid OFF periods during the downloading process
for all clients, i.e., the bandwidth is perfectly subscribed or
over-subscribed, with bandwidth utilization approaching 100\%. We also propose
a dual-threshold buffer model to solve the instability problem caused by the
above idea. Finally, by integrating these novel components, we propose a
probability-driven rate adaptation logic that takes into account several key
factors that most influence visual quality, including buffer occupancy, video
playback quality, video bit-rate switching frequency and amplitude, to
guarantee high-quality video streaming. Our experiments evidently demonstrate
the superior performance of the proposed method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Predicting language diversity with complex network | Evolution and propagation of the world's languages is a complex phenomenon,
driven, to a large extent, by social interactions. Multilingual society can be
seen as a system of interacting agents, where the interaction leads to a
modification of the language spoken by the individuals. Two people can reach
the state of full linguistic compatibility due to the positive interactions,
like transfer of loanwords. But, on the other hand, if they speak entirely
different languages, they will separate from each other. These simple
observations make network science the most suitable framework to describe
and analyze dynamics of language change. Although many mechanisms have been
explained, we lack a qualitative description of the scaling behavior for
different sizes of a population. Here we address the issue of the language
diversity in societies of different sizes, and we show that local interactions
are crucial to capture characteristics of the empirical data. We propose a
model of social interactions, extending an earlier idea, that explains the
growth of language diversity with the size of the population of a country or
society.
We argue that high clustering and network disintegration are the most important
characteristics of models that properly describe the empirical data. Furthermore,
we resolve the contradiction between previous models and the Solomon Islands case.
Our results demonstrate the importance of the topology of the network, and the
rewiring mechanism in the process of language change.
| 1 | 1 | 0 | 0 | 0 | 0 |
The paradox of Vito Volterra's predator-prey model | This article is dedicated to the late Giorgio Israel. The aim
of this article is to propose, on the one hand, a brief history of modeling
starting from the works of Fibonacci, Robert Malthus, Pierre François Verhulst
and then Vito Volterra and, on the other hand, to present the main hypotheses
of the very famous but very little known predator-prey model elaborated in the
1920s by Volterra in order to solve a problem posed by his son-in-law, Umberto
D'Ancona. It is thus shown that, contrary to a widely-held notion, Volterra's
model is realistic and his seminal work laid the groundwork for modern
population dynamics and mathematical ecology, including seasonality, migration,
pollution and more. 1. A short history of modeling. 1.1. The Malthusian model.
If the first scientific view of population growth seems to be that of Leonardo
Fibonacci [2], also called Leonardo of Pisa, whose famous sequence of numbers
was presented in his Liber abaci (1202) as a solution to a population growth
problem, the modern foundations of population dynamics clearly date from Thomas
Robert Malthus [20]. Considering an ideal population consisting of a single
homogeneous animal species, that is, neglecting the variations in age, size and
any periodicity for birth or mortality, and which lives alone in an invariable
environment or coexists with other species without any direct or indirect
influence, he founded in 1798, with his celebrated claim "Population, when
unchecked, increases in a geometrical ratio", the paradigm of exponential
growth. This consists in assuming that the increase of the number $N(t)$ of
individuals of this population, during a short interval of time, is
proportional to $N(t)$. This translates to the following differential equation:
(1) $\frac{dN(t)}{dt} = \epsilon N(t)$
where $\epsilon$ is a constant factor of proportionality that represents the
growth coefficient or growth rate. By integrating (1) we obtain the law of
exponential growth or law of Malthusian growth (see Fig. 1). This law, which
does not take into account the limits imposed by the environment on growth and
which is in disagreement with the actual facts, had a profound influence on
Charles Darwin's work on natural selection. Indeed, Darwin [1] founded the idea
of survival of the fittest on the
1. According to Frontier and Pichod-Viale [3], the correct terminology should be
population kinetics, since the interaction between species cannot be
represented by forces.
2. A population is defined as the set of individuals of the same species living
on the same territory and able to reproduce among themselves.
| 0 | 0 | 0 | 0 | 1 | 0 |
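Integrating equation (1) by separation of variables gives the Malthusian law of exponential growth explicitly:

```latex
\frac{dN(t)}{dt} = \epsilon N(t)
\;\Longrightarrow\;
\int \frac{dN}{N} = \int \epsilon \, dt
\;\Longrightarrow\;
N(t) = N(0)\, e^{\epsilon t}.
```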
Distributive Minimization Comprehensions and the Polynomial Hierarchy | A categorical point of view about minimization in subrecursive classes is
presented by extending the concept of Symmetric Monoidal Comprehension to that
of Distributive Minimization Comprehension. This is achieved by endowing the
former with coproducts and a finality condition for coalgebras over the
endofunctor sending X to ${1}\oplus{X}$ to perform a safe minimization
operator. By relying on the characterization given by Bellantoni, a tiered
structure is presented from which one can obtain the levels of the Polytime
Hierarchy as those classes of partial functions obtained after a certain number
of minimizations.
| 1 | 0 | 1 | 0 | 0 | 0 |
On the differentiability of hairs for Zorich maps | Devaney and Krych showed that for the exponential family $\lambda e^z$, where
$0<\lambda <1/e$, the Julia set consists of uncountably many pairwise disjoint
simple curves tending to $\infty$. Viana proved that these curves are smooth.
In this article we consider a quasiregular counterpart of the exponential map,
the so-called Zorich maps, and generalize Viana's result to these maps.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Real-time Vehicle Placement Problem | Motivated by ride-sharing platforms' efforts to reduce their riders' wait
times for a vehicle, this paper introduces a novel problem of placing vehicles
to fulfill real-time pickup requests in a spatially and temporally changing
environment. The real-time nature of this problem makes it fundamentally
different from other placement and scheduling problems, as it requires not only
real-time placement decisions but also handling real-time request dynamics,
which are influenced by human mobility patterns. We use a dataset of ten
million ride requests from four major U.S. cities to show that the requests
exhibit significant self-similarity. We then propose distributed online
learning algorithms for the real-time vehicle placement problem and bound their
expected performance under this observed self-similarity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Energy Harvesting Communication Using Finite-Capacity Batteries with Internal Resistance | Modern systems will increasingly rely on energy harvested from their
environment. Such systems utilize batteries to smoothen out the random
fluctuations in harvested energy. These fluctuations induce highly variable
battery charge and discharge rates, which affect the efficiencies of practical
batteries that typically have non-zero internal resistances. In this paper, we
study an energy harvesting communication system using a finite battery with
non-zero internal resistance. We adopt a dual-path architecture, in which
harvested energy can be directly used, or stored and then used. In a frame,
both time and power can be split between energy storage and data transmission.
For a single frame, we derive an analytical expression for the rate optimal
time and power splitting ratios between harvesting energy and transmitting
data. We then optimize the time and power splitting ratios for a group of
frames, assuming non-causal knowledge of harvested power and fading channel
gains, by giving an approximate solution. When only the statistics of the
energy arrivals and channel gains are known, we derive a dynamic programming
based policy and propose three sub-optimal policies, which are shown to
perform competitively. In summary, our study suggests that battery internal
resistance significantly impacts the design and performance of energy
harvesting communication systems and must be taken into account.
| 1 | 0 | 1 | 0 | 0 | 0 |
A Novel Algorithm for Optimal Electricity Pricing in a Smart Microgrid Network | The evolution of smart microgrid and its demand-response characteristics not
only will change the paradigms of the century-old electric grid but also will
shape the electricity market. In this new market scenario, those who were once
only energy consumers may now act as sellers, owing to the excess energy generated by
newly deployed distributed generators (DG). The smart microgrid will use the
existing electrical transmission network and a pay per use transportation cost
without implementing new transmission lines which involve a massive capital
investment. In this paper, we propose a novel algorithm to minimize the
electricity price with the optimal trading of energy between sellers and buyers
of the smart microgrid network. The algorithm is capable of solving the optimal
power allocation problem (with optimal transmission cost) for a microgrid
network in a polynomial time without modifying the actual marginal costs of
power generation. We mathematically formulate the problem as a nonlinear
non-convex program and decompose it to separate the optimal marginal cost
model from the electricity allocation model. Then, we develop a
divide-and-conquer method to minimize the electricity price by jointly solving
the optimal marginal cost model and electricity allocation problems. To
evaluate the performance of the solution method, we develop and simulate the
model with different marginal cost functions and compare it with a first come
first serve electricity allocation method.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Reinforcement Learning Approach to Jointly Adapt Vehicular Communications and Planning for Optimized Driving | Our premise is that autonomous vehicles must optimize communications and
motion planning jointly. Specifically, a vehicle must adapt its motion plan
staying cognizant of communications rate related constraints and adapt the use
of communications while being cognizant of motion planning related restrictions
that may be imposed by the on-road environment. To this end, we formulate a
reinforcement learning problem wherein an autonomous vehicle jointly chooses
(a) a motion planning action that executes on-road and (b) a communications
action of querying sensed information from the infrastructure. The goal is to
optimize the driving utility of the autonomous vehicle. We apply the Q-learning
algorithm to make the vehicle learn the optimal policy, which makes the optimal
choice of planning and communications actions at any given time. We demonstrate
the ability of the optimal policy to smartly adapt communications and planning
actions, while achieving large driving utilities, using simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
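A tabular sketch of the joint planning-and-communications Q-learning described above. The environment interface `step(s, a) -> (reward, next_state)`, the function name, and all hyperparameters are illustrative assumptions, not the paper's setup:

```python
import random

def q_learning(states, plan_actions, comm_actions, step,
               episodes=500, horizon=100, alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular Q-learning over the *joint* action space (planning x comms),
    # as the abstract describes: each decision picks a motion-planning
    # action and a communications (query) action together.
    actions = [(p, c) for p in plan_actions for c in comm_actions]
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(horizon):
            # epsilon-greedy exploration over joint actions
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            r, s2 = step(s, a)
            target = r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```

Coupling the two choices in one action lets the learned policy trade a costly infrastructure query against the driving utility it unlocks, which a separately trained planner and radio scheduler could not do.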
Differentially Private High Dimensional Sparse Covariance Matrix Estimation | In this paper, we study the problem of estimating the covariance matrix under
differential privacy, where the underlying covariance matrix is assumed to be
sparse and of high dimensions. We propose a new method, called DP-Thresholding,
to achieve a non-trivial $\ell_2$-norm based error bound, which is
significantly better than the existing ones obtained by adding noise directly to the
empirical covariance matrix. We also extend the $\ell_2$-norm based error bound
to a general $\ell_w$-norm based one for any $1\leq w\leq \infty$, and show
that they share the same upper bound asymptotically. Our approach can be easily
extended to local differential privacy. Experiments on the synthetic datasets
show consistent results with our theoretical claims.
| 1 | 0 | 0 | 1 | 0 | 0 |
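The "perturb, then threshold" idea can be sketched as follows. The noise scale and threshold below are placeholders, not the paper's DP-Thresholding calibration, and the sensitivity argument assumes bounded rows of the data matrix:

```python
import numpy as np

def dp_threshold_cov(X, epsilon, threshold, seed=0):
    # Sketch: add symmetric Gaussian noise to the empirical covariance,
    # then zero small entries to exploit the assumed sparsity. The noise
    # scale is a placeholder, assuming rows of X are bounded so that the
    # per-entry sensitivity is O(1/n).
    n, p = X.shape
    cov = X.T @ X / n
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 2.0 / (n * epsilon), (p, p))
    noise = np.triu(noise) + np.triu(noise, 1).T  # keep the matrix symmetric
    noisy = cov + noise
    noisy[np.abs(noisy) < threshold] = 0.0        # thresholding step
    return noisy
```

Thresholding after noising is what lets the error bound depend on the sparsity level rather than the full dimension, which is the abstract's point of comparison with naive noise addition.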
Stochastic Functional Gradient Path Planning in Occupancy Maps | Planning safe paths is a major building block in robot autonomy. It has been
an active field of research for several decades, with a plethora of planning
methods. Planners can be generally categorised as either trajectory optimisers
or sampling-based planners. The latter is the predominant planning paradigm for
occupancy maps. Trajectory optimisation entails major algorithmic changes to
tackle contextual information gaps caused by incomplete sensor coverage of the
map. However, the benefits are substantial, as trajectory optimisers can reason
on the trade-off between path safety and efficiency.
In this work, we improve our previous work on stochastic functional gradient
planners. We introduce a novel expressive path representation based on kernel
approximation that allows cost-effective model updates based on stochastic
samples. The main drawback of the previous stochastic functional gradient
planner was the cubic cost, stemming from its non-parametric path
representation. Our novel approximate kernel based model, on the other hand,
has a fixed linear cost that depends solely on the number of features used to
represent the path. We show that the stochasticity of the samples is crucial
for the planner and present comparisons to other state-of-the-art planning
methods in both simulation and with real occupancy data. The experiments
demonstrate the advantages of the stochastic approximate kernel method for path
planning in occupancy maps.
| 1 | 0 | 0 | 0 | 0 | 0 |
MM2RTB: Bringing Multimedia Metrics to Real-Time Bidding | In display advertising, users' online ad experiences are important for the
advertising effectiveness. However, users have not been well accommodated in
real-time bidding (RTB). This further influences their site visits and
perception of the displayed banner ads. In this paper, we propose a novel
computational framework which brings multimedia metrics, like the contextual
relevance, the visual saliency and the ad memorability into RTB to improve the
users' ad experiences as well as maintain the benefits of the publisher and the
advertiser. We aim at developing a vigorous ecosystem by optimizing the
trade-offs among all stakeholders. The framework considers the scenario of a
webpage with multiple ad slots. Our experimental results show that the benefits
of the advertiser and the user can be significantly improved if the publisher
would slightly sacrifice his short-term revenue. The improved benefits will
increase the advertising requests (demand) and the site visits (supply), which
can further boost the publisher's revenue in the long run.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generic coexistence of Fermi arcs and Dirac cones on the surface of time-reversal invariant Weyl semimetals | The hallmark of Weyl semimetals is the existence of open constant-energy
contours on their surface -- the so-called Fermi arcs -- connecting Weyl
points. Here, we show that for time-reversal symmetric realizations of Weyl
semimetals these Fermi arcs in many cases coexist with closed Fermi pockets
originating from surface Dirac cones pinned to time-reversal invariant momenta.
The existence of Fermi pockets is required for certain Fermi-arc connectivities
due to additional restrictions imposed by the six $\mathbb{Z}_2$ topological
invariants characterizing a generic time-reversal invariant Weyl semimetal. We
show that a change of the Fermi-arc connectivity generally leads to a different
topology of the surface Fermi surface, and identify the half-Heusler compound
LaPtBi under in-plane compressive strain as a material that realizes this
surface Lifshitz transition. We also discuss universal features of this
coexistence in quasi-particle interference spectra.
| 0 | 1 | 0 | 0 | 0 | 0 |
Superfluid Field response to Edge dislocation motion | We study the dynamic response of a superfluid field to a moving edge
dislocation line to which the field is minimally coupled. We use a dissipative
Gross-Pitaevskii equation, and determine the initial conditions by solving the
equilibrium version of the model. We consider the subsequent time evolution of
the field for both glide and climb dislocation motion and analyze the results
for a range of values of the constant speed $V_D$ of the moving dislocation. We
find that the type of motion of the dislocation line is very important in
determining the time evolution of the superfluid field distribution associated
with it. Climb motion of the dislocation line induces increasing asymmetry, as
function of time, in the field profile, with part of the probability being, as
it were, left behind. On the other hand, glide motion has no effect on the
symmetry properties of the superfluid field distribution. Damping of the
superfluid field due to excitations associated with the moving dislocation line
occurs in both cases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Koopman Operator Spectrum and Data Analysis | We examine spectral operator-theoretic properties of linear and nonlinear
dynamical systems with equilibrium and quasi-periodic attractors and use such
properties to characterize a class of datasets and introduce a new notion of
the principal dimension of the data.
| 0 | 1 | 1 | 0 | 0 | 0 |
Exploiting Physical Dynamics to Detect Actuator and Sensor Attacks in Mobile Robots | Mobile robots are cyber-physical systems where the cyberspace and the
physical world are strongly coupled. Attacks against mobile robots can
transcend cyber defenses and escalate into disastrous consequences in the
physical world. In this paper, we focus on the detection of active attacks that
are capable of directly influencing robot mission operation. Through leveraging
physical dynamics of mobile robots, we develop RIDS, a novel robot intrusion
detection system that can detect actuator attacks as well as sensor attacks for
nonlinear mobile robots subject to stochastic noises. We implement and evaluate
RIDS on a Khepera mobile robot against concrete attack scenarios via various
attack channels including signal interference, sensor spoofing, logic bomb, and
physical damage. Evaluation of 20 experiments shows that the averages of false
positive rates and false negative rates are both below 1%. Average detection
delay for each attack remains within 0.40s.
| 1 | 0 | 0 | 0 | 0 | 0 |
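A residual-based detector of the kind this abstract describes can be sketched in one dimension: predict the next state from the dynamics model, compare against the measurement, and flag residuals beyond a noise threshold. The dynamics model, noise level, and injected attack below are hypothetical stand-ins, not the RIDS implementation.

```python
import numpy as np

def detect(measured, commanded_v, dt=0.1, sigma=0.05, z=3.0):
    """Flag time steps where measured motion deviates from the dynamics model.

    A minimal 1-D stand-in: position should advance by commanded_v * dt.
    Residuals beyond z standard deviations of the assumed noise are flagged.
    """
    predicted = measured[:-1] + commanded_v[:-1] * dt
    residual = measured[1:] - predicted
    return np.abs(residual) > z * sigma

rng = np.random.default_rng(2)
v = np.ones(100)                          # commanded velocity
pos = np.cumsum(v * 0.1) + rng.normal(0, 0.01, 100)  # noisy measured position
pos[60:] += 0.5                           # injected attack: sudden 0.5 m offset
alarms = detect(pos, v)                   # True exactly where the jump occurs
```

The real system handles nonlinear dynamics and stochastic noise jointly for actuator and sensor channels; this sketch only illustrates the predict-compare-threshold pattern.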
Unsupervised Neural Machine Translation | In spite of the recent success of neural machine translation (NMT) in
standard benchmarks, the lack of large parallel corpora poses a major practical
problem for many language pairs. There have been several proposals to alleviate
this issue with, for instance, triangulation and semi-supervised learning
techniques, but they still require a strong cross-lingual signal. In this work,
we completely remove the need for parallel data and propose a novel method to
train an NMT system in a completely unsupervised manner, relying on nothing but
monolingual corpora. Our model builds upon the recent work on unsupervised
embedding mappings, and consists of a slightly modified attentional
encoder-decoder model that can be trained on monolingual corpora alone using a
combination of denoising and backtranslation. Despite the simplicity of the
approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014
French-to-English and German-to-English translation. The model can also profit
from small parallel corpora, and attains 21.81 and 15.24 points when combined
with 100,000 parallel sentences, respectively. Our implementation is released
as an open source project.
| 1 | 0 | 0 | 0 | 0 | 0 |
Geometric Methods for Robust Data Analysis in High Dimension | Machine learning and data analysis now find both scientific and industrial
application in biology, chemistry, geology, medicine, and physics. These
applications rely on large quantities of data gathered from automated sensors
and user input. Furthermore, the dimensionality of many datasets is extreme:
more details are being gathered about single user interactions or sensor
readings. All of these applications encounter problems with a common theme: use
observed data to make inferences about the world. Our work obtains the first
provably efficient algorithms for Independent Component Analysis (ICA) in the
presence of heavy-tailed data. The main tool in this result is the centroid
body (a well-known topic in convex geometry), along with optimization and
random walks for sampling from a convex body. This is the first algorithmic use
of the centroid body and it is of independent theoretical interest, since it
effectively replaces the estimation of covariance from samples, and is more
generally accessible.
This reduction relies on a non-linear transformation of samples from an
intersection of halfspaces (i.e., a simplex) to samples which are approximately
from a linearly transformed product distribution. Through this transformation
of samples, which can be done efficiently, one can then use an ICA algorithm to
recover the vertices of the intersection of halfspaces.
Finally, we again use ICA as an algorithmic primitive to construct an
efficient solution to the widely-studied problem of learning the parameters of
a Gaussian mixture model. Our algorithm again transforms samples from a
Gaussian mixture model into samples which fit into the ICA model and, when
processed by an ICA algorithm, result in recovery of the mixture parameters.
Our algorithm is effective even when the number of Gaussians in the mixture
grows polynomially with the ambient dimension.
| 1 | 0 | 0 | 0 | 0 | 0 |
Modules Over the Ring of Ponderation Functions with Applications to a Class of Integral Operators | In this paper we introduce new modules over the ring of ponderation
functions, thereby recovering classical results in harmonic analysis from the
perspective of ring theory.
Moreover, we prove that the Laplace, Fourier and Hankel transforms generate
certain modules over the ring of ponderation functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Strict monotonicity of principal eigenvalues of elliptic operators in $\mathbb{R}^d$ and risk-sensitive control | This paper studies the eigenvalue problem on $\mathbb{R}^d$ for a class of
second order, elliptic operators of the form $\mathscr{L} =
a^{ij}\partial_{x_i}\partial_{x_j} + b^{i}\partial_{x_i} + f$, associated with
non-degenerate diffusions. We show that strict monotonicity of the principal
eigenvalue of the operator with respect to the potential function $f$ fully
characterizes the ergodic properties of the associated ground state diffusion,
and the uniqueness of the ground state, and we present a comprehensive study of
the eigenvalue problem from this point of view. This allows us to extend or
strengthen various results in the literature for a class of viscous
Hamilton-Jacobi equations of ergodic type with smooth coefficients to equations
with measurable drift and potential. In addition, we establish the strong
duality for the equivalent infinite dimensional linear programming formulation
of these ergodic control problems. We also apply these results to the study of
the infinite horizon risk-sensitive control problem for diffusions, and
establish existence of optimal Markov controls, verification of optimality
results, and the continuity of the controlled principal eigenvalue with respect
to stationary Markov controls.
| 0 | 0 | 1 | 0 | 0 | 0 |
Best Practices for Applying Deep Learning to Novel Applications | This report is targeted to groups who are subject matter experts in their
application but deep learning novices. It contains practical advice for those
interested in testing the use of deep neural networks on applications that are
novel for deep learning. We suggest making your project more manageable by
dividing it into phases. For each phase this report contains numerous
recommendations and insights to assist novice practitioners.
| 1 | 0 | 0 | 0 | 0 | 0 |
Precision Interfaces | Building interactive tools to support data analysis is hard because it is not
always clear what to build and how to build it. To address this problem, we
present Precision Interfaces, a semi-automatic system to generate task-specific
data analytics interfaces. Precision Interfaces can turn a log of executed
programs into an interface, by identifying micro-variations between the
programs and mapping them to interface components. This paper focuses on SQL
query logs, but we can generalize the approach to other languages. Our system
operates in two steps: it first builds an interaction graph, which describes how
the queries can be transformed into each other. Then, it finds a set of UI
components that covers a maximal number of transformations. To restrict the
domain of changes to be detected, our system uses a domain-specific language,
PILang. We give a full description of Precision Interfaces' components,
showcase an early prototype on real program logs and discuss future research
opportunities.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gaussian Approximation of a Risk Model with Stationary Hawkes Arrivals of Claims | We consider a classical risk process with arrival of claims following a
stationary Hawkes process. We study the asymptotic regime when the premium rate
and the baseline intensity of the claim arrival process are large and the claim
sizes are small. The main goal of this article is to establish a diffusion
approximation by verifying a functional central limit theorem of this model and
to compute both the finite-time and infinite-time horizon ruin probabilities.
Numerical results will also be given.
| 0 | 0 | 0 | 0 | 0 | 1 |
Data Modelling for the Evaluation of Virtualized Network Functions Resource Allocation Algorithms | To conduct a more realistic evaluation on Virtualized Network Functions
resource allocation algorithms, researchers need data on: (1) potential NF
chains (policies), (2) traffic flows passing through these NF chains, (3) how
the dynamic traffic changes affect the NFs (scale out/in) and (4) different
data center architectures for the NFC. However, there are no publicly available
real data sets on NF chains or on the traffic that passes through them.
Therefore, we have used data from previous empirical analyses and made some
assumptions to derive the data required to evaluate resource allocation
algorithms for VNFs. We developed four programs to model the gathered data and
generate the required data. All gathered data and data modelling programs are
publicly available in a GitHub repository.
| 1 | 0 | 0 | 0 | 0 | 0 |
Understanding Negations in Information Processing: Learning from Replicating Human Behavior | Information systems experience an ever-growing volume of unstructured data,
particularly in the form of textual materials. This represents a rich source of
information from which one can create value for people, organizations and
businesses. For instance, recommender systems can benefit from automatically
understanding preferences based on user reviews or social media. However, it is
difficult for computer programs to correctly infer meaning from narrative
content. One major challenge is negations that invert the interpretation of
words and sentences. As a remedy, this paper proposes a novel learning strategy
to detect negations: we apply reinforcement learning to find a policy that
replicates the human perception of negations based on an exogenous response,
such as a user rating for reviews. Our method yields several benefits, as it
eliminates the former need for expensive and subjective manual labeling in an
intermediate stage. Moreover, the inferred policy can be used to derive
statistical inferences and implications regarding how humans process and act on
negations.
| 1 | 0 | 0 | 1 | 0 | 0 |
A study of sliding motion of a solid body on a rough surface with asymmetric friction | Recent studies show interest in materials with asymmetric friction forces. We
investigate the terminal motion of a solid body with a circular contact area. We
assume that the friction forces are asymmetric and orthotropic. Two cases of pressure
distribution are analyzed: Hertz and Boussinesq laws. Equations for friction
force and moment are formulated and solved for these cases. Numerical results
show significant impact of the asymmetry of friction on the motion. Our results
can be used for more accurate prediction of contact behavior of bodies made
from new materials with asymmetric surface textures.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fisher information matrix of binary time series | A common approach to analyzing categorical correlated time series data is to
fit a generalized linear model (GLM) with past data as covariate inputs. There
remain challenges to conducting inference for short time series length. By
treating the historical data as covariate inputs, standard errors of estimates
of GLM parameters computed using the empirical Fisher information do not fully
account for the auto-correlation in the data. To overcome this serious limitation,
we derive the exact conditional Fisher information matrix of a general logistic
autoregressive model with endogenous covariates for any series length $T$.
Moreover, we also develop an iterative computational formula that allows for
relatively easy implementation of the proposed estimator. Our simulation
studies show that confidence intervals derived using the exact Fisher
information matrix tend to be narrower than those utilizing the empirical
Fisher information matrix while maintaining type I error rates at or below
nominal levels. Further, we establish that the exact Fisher information matrix
approaches, as $T$ tends to infinity, the asymptotic Fisher information matrix
previously derived for binary time series data. The developed exact conditional
Fisher information matrix is applied to time-series data on respiratory rate
among a cohort of expectant mothers where it is found to provide narrower
confidence intervals for functionals of scientific interest and lead to greater
statistical power when compared to the empirical Fisher information matrix.
| 0 | 0 | 1 | 1 | 0 | 0 |
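A minimal sketch of the logistic autoregressive model with an endogenous covariate, and of the conditional Fisher information evaluated along an observed series, may help fix ideas. The single-lag structure and parameter values are illustrative; the paper's exact iterative computational formula is not reproduced here.

```python
import numpy as np

def simulate_logistic_ar(T, beta0=-0.5, beta1=1.5, seed=4):
    """Binary series with logit P(y_t = 1 | y_{t-1}) = beta0 + beta1 * y_{t-1}."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T, dtype=int)
    for t in range(1, T):
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * y[t - 1])))
        y[t] = int(rng.random() < p)
    return y

def conditional_fisher(y, beta0, beta1):
    """Conditional Fisher information of (beta0, beta1) given the history.

    Each term is the standard GLM contribution p(1-p) x x^T, with the
    endogenous covariate x = (1, y_{t-1}).
    """
    I = np.zeros((2, 2))
    for t in range(1, len(y)):
        x = np.array([1.0, float(y[t - 1])])
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * y[t - 1])))
        I += p * (1.0 - p) * np.outer(x, x)
    return I

y = simulate_logistic_ar(500)
I = conditional_fisher(y, -0.5, 1.5)   # 2x2, positive definite for mixed series
```

Standard errors from `I` condition on the realized lagged covariates, which is the distinction from the empirical Fisher information the abstract draws.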
A Fourier analytic approach to inhomogeneous Diophantine approximation | In this paper, we study inhomogeneous Diophantine approximation with rational
numbers of reduced form. The central object to study is the set $W(f,\theta)$
as follows, \begin{eqnarray*} \left\{x\in [0,1]:\left
|x-\frac{m+\theta(n)}{n}\right|<\frac{f(n)}{n}\text{ for infinitely many
coprime pairs of numbers } m,n\right\}, \end{eqnarray*} where
$\{f(n)\}_{n\in\mathbb{N}}$ and $\{\theta(n)\}_{n\in\mathbb{N}}$ are sequences
of real numbers in $[0,1/2]$. We will completely determine the Hausdorff
dimension of $W(f,\theta)$ in terms of $f$ and $\theta$. As a by-product, we
also obtain a new sufficient condition for $W(f,\theta)$ to have full Lebesgue
measure and this result is closely related to the study of \ds with extra
conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Screening in perturbative approaches to LSS | A specific value for the cosmological constant, \Lambda, can account for
late-time cosmic acceleration. However, motivated by the so-called cosmological
constant problem(s), several alternative mechanisms have been explored. To
date, a host of well-studied dynamical dark energy and modified gravity models
exists. Going beyond \Lambda CDM often comes with additional degrees of freedom
(dofs). For these to pass existing observational tests, an efficient screening
mechanism must be in place. The linear and quasi-linear regimes of structure
formation are ideal probes of such dofs and can capture the onset of screening.
We propose here a semi-phenomenological treatment to account for screening
dynamics on LSS observables, with special emphasis on Vainshtein-type
screening.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thin films with precisely engineered nanostructures | Synthesis of rationally designed nanostructured materials with optimized
mechanical properties, e.g., high strength with considerable ductility,
requires rigorous control of diverse microstructural parameters including the
mean size, size dispersion and spatial distribution of grains. However,
currently available synthesis techniques can seldom satisfy these requirements.
Here, we report a new methodology to synthesize thin films with unprecedented
microstructural control via systematic, in-situ seeding of nanocrystals into
amorphous precursor films. When the amorphous films are subsequently
crystallized by thermal annealing, the nanocrystals serve as preferential grain
nucleation sites and control their microstructure. We demonstrate the
capability of this approach by precisely tailoring the size, geometry and
spatial distribution of nanostructured grains in structural (TiAl) as well as
functional (TiNi) thin films. The approach, which is applicable to a broad
class of metallic alloys and ceramics, enables explicit microstructural control
of thin film materials for a wide spectrum of applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
End-to-End Network Delay Guarantees for Real-Time Systems using SDN | We propose a novel framework that reduces the management and integration
overheads for real-time network flows by leveraging the capabilities
(especially global visibility and management) of software-defined networking
(SDN) architectures. Given the specifications of flows that must meet hard
real-time requirements, our framework synthesizes paths through the network and
associated switch configurations - to guarantee that these flows meet their
end-to-end timing requirements. In doing so, our framework makes SDN
architectures "delay-aware" (recall that SDN is otherwise unable to reason
about delays), making it easier to use such architectures in safety-critical
and other latency-sensitive applications. We demonstrate our principles as well
as the feasibility of our approach using both exhaustive simulations and
experiments with real hardware switches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Efficient cold outflows driven by cosmic rays in high redshift galaxies and their global effects on the IGM | We present semi-analytical models of galactic outflows in high redshift
galaxies driven by both hot thermal gas and non-thermal cosmic rays. Thermal
pressure alone may not sustain a large-scale outflow in low-mass galaxies (i.e.,
$M\sim 10^8$~M$_\odot$) in the presence of supernovae (SNe) feedback with
large mass loading. We show that inclusion of cosmic ray pressure allows
outflow solutions even in these galaxies. In massive galaxies for the same
energy efficiency, cosmic ray driven winds can propagate to larger distances
compared to purely thermally driven winds. On average, gas in the cosmic ray
driven winds has a lower temperature, which could aid its detection through
absorption lines in the spectra of background sources. Using our constrained
semi-analytical models of galaxy formation (that explains the observed UV
luminosity functions of galaxies) we study the influence of cosmic ray driven
winds on the properties of the intergalactic medium (IGM) at different
redshifts. In particular, we study the volume filling factor, average
metallicity, cosmic ray and magnetic field energy densities for models invoking
atomic cooled and molecular cooled halos. We show that the cosmic rays in the
IGM could have enough energy that can be transferred to the thermal gas in
presence of magnetic fields to influence the thermal history of the
intergalactic medium. The significant volume filling and resulting strength of
IGM magnetic fields can also account for recent $\gamma$-ray observations of
blazars.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Effectiveness of Discretizing Quantitative Attributes in Linear Classifiers | Learning algorithms that learn linear models often have high representation
bias on real-world problems. In this paper, we show that this representation
bias can be greatly reduced by discretization. Discretization is a common
procedure in machine learning that is used to convert a quantitative attribute
into a qualitative one. It is often motivated by the limitation of some
learners to qualitative data. Discretization loses information, as fewer
distinctions between instances are possible using discretized data relative to
undiscretized data. In consequence, where discretization is not essential, it
might appear desirable to avoid it. However, it has been shown that
discretization often substantially reduces the error of the linear generative
Bayesian classifier naive Bayes. This motivates a systematic study of the
effectiveness of discretizing quantitative attributes for other linear
classifiers. In this work, we study the effect of discretization on the
performance of linear classifiers optimizing three distinct discriminative
objective functions --- logistic regression (optimizing negative
log-likelihood), support vector classifiers (optimizing hinge loss) and a
zero-hidden layer artificial neural network (optimizing mean-square-error). We
show that discretization can greatly increase the accuracy of these linear
discriminative learners by reducing their representation bias, especially on
big datasets. We substantiate our claims with an empirical study on $42$
benchmark datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
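The preprocessing this abstract studies, converting a quantitative attribute into a qualitative one and feeding it to a linear learner, can be sketched with equal-frequency binning plus one-hot encoding. The bin count and data below are illustrative assumptions.

```python
import numpy as np

def discretize(x, n_bins=5):
    """Equal-frequency discretization of one quantitative attribute."""
    # Interior bin edges at empirical quantiles of the attribute.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)          # integer bin index in [0, n_bins)

def one_hot(codes, n_bins=5):
    """One-hot encoding: each bin becomes its own binary input feature."""
    return np.eye(n_bins)[codes]

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                 # a quantitative attribute
X = one_hot(discretize(x))                # (1000, 5) qualitative representation

# A linear model over the one-hot bins can fit any piecewise-constant
# function of x, which a single linear term in x cannot -- this is the
# representation-bias reduction the abstract describes.
```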
Kustaanheimo-Stiefel transformation with an arbitrary defining vector | The Kustaanheimo-Stiefel (KS) transformation depends on the choice of some
preferred direction in the Cartesian 3D space. This choice, seldom explicitly
mentioned, amounts typically to the direction of the first or the third
coordinate axis in celestial mechanics and atomic physics, respectively. The
present work develops a canonical KS transformation with an arbitrary preferred
direction, indicated by what we call a defining vector. Using a mix of vector
and quaternion algebra, we formulate the transformation in a reference frame
independent manner. The link between the oscillator and Keplerian first
integrals is given. As an example of the present formulation, the Keplerian
motion in a rotating frame is re-investigated.
| 0 | 0 | 1 | 0 | 0 | 0 |
A dynamic game approach to distributionally robust safety specifications for stochastic systems | This paper presents a new safety specification method that is robust against
errors in the probability distribution of disturbances. Our proposed
distributionally robust safe policy maximizes the probability of a system
remaining in a desired set for all times, subject to the worst possible
disturbance distribution in an ambiguity set. We propose a dynamic game
formulation of constructing such policies and identify conditions under which a
non-randomized Markov policy is optimal. Based on this existence result, we
develop a practical design approach to safety-oriented stochastic controllers
with limited information about disturbance distributions. This control method
can be used to minimize another cost function while ensuring safety in a
probabilistic way. However, an associated Bellman equation involves
infinite-dimensional minimax optimization problems since the disturbance
distribution may have a continuous density. To resolve computational issues, we
propose a duality-based reformulation method that converts the
infinite-dimensional minimax problem into a semi-infinite program that can be
solved using existing convergent algorithms. We prove that there is no duality
gap, and that this approach thus preserves optimality. The results of numerical
tests confirm that the proposed method is robust against distributional errors
in disturbances, while a standard stochastic safety specification tool is not.
| 1 | 0 | 1 | 0 | 0 | 0 |
Entity Linking for Queries by Searching Wikipedia Sentences | We present a simple yet effective approach for linking entities in queries.
The key idea is to search sentences similar to a query from Wikipedia articles
and directly use the human-annotated entities in the similar sentences as
candidate entities for the query. Then, we employ a rich set of features, such
as link-probability, context-matching, word embeddings, and relatedness among
candidate entities as well as their related entities, to rank the candidates
under a regression based framework. The advantages of our approach lie in two
aspects, which contribute to the ranking process and final linking result.
First, it can greatly reduce the number of candidate entities by filtering out
irrelevant entities with the words in the query. Second, we can obtain the
query sensitive prior probability in addition to the static link-probability
derived from all Wikipedia articles. We conduct experiments on two benchmark
datasets on entity linking for queries, namely the ERD14 dataset and the GERDAQ
dataset. Experimental results show that our method outperforms state-of-the-art
systems and yields 75.0% in F1 on the ERD14 dataset and 56.9% on the GERDAQ
dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards personalized human AI interaction - adapting the behavior of AI agents using neural signatures of subjective interest | Reinforcement Learning AI commonly uses reward/penalty signals that are
objective and explicit in an environment -- e.g. game score, completion time,
etc. -- in order to learn the optimal strategy for task performance. However,
Human-AI interaction for such AI agents should include additional reinforcement
that is implicit and subjective -- e.g. human preferences for certain AI
behavior -- in order to adapt the AI behavior to idiosyncratic human
preferences. Such adaptations would mirror naturally occurring processes that
increase trust and comfort during social interactions. Here, we show how a
hybrid brain-computer-interface (hBCI), which detects an individual's level of
interest in objects/events in a virtual environment, can be used to adapt the
behavior of a Deep Reinforcement Learning AI agent that is controlling a
virtual autonomous vehicle. Specifically, we show that the AI learns a driving
strategy that maintains a safe distance from a lead vehicle and, most notably,
preferentially slows the vehicle when the human passengers of the vehicle
encounter objects of interest. This adaptation affords an additional 20\%
viewing time for subjectively interesting objects. This is the first
demonstration of how an hBCI can be used to provide implicit reinforcement to
an AI agent in a way that incorporates user preferences into the control
system.
| 1 | 0 | 0 | 1 | 0 | 0 |
Relational recurrent neural networks | Memory-based neural networks model temporal data by leveraging an ability to
remember information for long periods. It is unclear, however, whether they
also have an ability to perform complex relational reasoning with the
information they remember. Here, we first confirm our intuitions that standard
memory architectures may struggle at tasks that heavily involve an
understanding of the ways in which entities are connected -- i.e., tasks
involving relational reasoning. We then improve upon these deficits by using a
new memory module -- a \textit{Relational Memory Core} (RMC) -- which employs
multi-head dot product attention to allow memories to interact. Finally, we
test the RMC on a suite of tasks that may profit from more capable relational
reasoning across sequential information, and show large gains in RL domains
(e.g. Mini PacMan), program evaluation, and language modeling, achieving
state-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord
datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
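The memory-interaction mechanism of the Relational Memory Core is multi-head dot-product attention over memory slots. A single-head NumPy sketch with random weights and illustrative sizes (not the paper's architecture or dimensions) shows the core computation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attend(memory, Wq, Wk, Wv):
    """Single-head dot-product attention letting memory slots interact."""
    Q, K, V = memory @ Wq, memory @ Wk, memory @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return scores @ V        # each slot becomes a weighted mix of all slots

rng = np.random.default_rng(3)
slots, d = 8, 16
M = rng.normal(size=(slots, d))             # memory matrix: one row per slot
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
M_new = attend(M, Wq, Wk, Wv)               # updated memory, same shape
```

Letting slots attend to one another is what allows relational reasoning over remembered information, in contrast to memories that are only read and written independently.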
A Scalable Discrete-Time Survival Model for Neural Networks | There is currently great interest in applying neural networks to prediction
tasks in medicine. It is important for predictive models to be able to use
survival data, where each patient has a known follow-up time and
event/censoring indicator. This avoids information loss when training the model
and enables generation of predicted survival curves. In this paper, we describe
a discrete-time survival model that is designed to be used with neural
networks, which we refer to as Nnet-survival. The model is trained with the
maximum likelihood method using minibatch stochastic gradient descent (SGD).
The use of SGD enables rapid convergence and application to large datasets that
do not fit in memory. The model is flexible, so that the baseline hazard rate
and the effect of the input data on hazard probability can vary with follow-up
time. It has been implemented in the Keras deep learning framework, and source
code for the model and several examples is available online. We demonstrate the
performance of the model on both simulated and real data and compare it to
existing models Cox-nnet and Deepsurv.
| 0 | 0 | 0 | 1 | 0 | 0 |
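The discrete-time survival likelihood such a model is trained with can be written down directly: a subject contributes survival terms for every interval before the event or censoring time, plus an event or censoring term at that interval. The interval structure and names below are illustrative, not the Keras implementation.

```python
import numpy as np

def discrete_surv_nll(hazard, event_interval, observed):
    """Mean negative log-likelihood for a discrete-time survival model.

    hazard[i, j]      : predicted hazard for subject i in time interval j
    event_interval[i] : index of the interval containing event or censoring
    observed[i]       : 1 if the event was observed, 0 if censored
    """
    n = hazard.shape[0]
    nll = 0.0
    for i in range(n):
        k = event_interval[i]
        # Survive every interval strictly before interval k.
        nll -= np.sum(np.log1p(-hazard[i, :k]))
        if observed[i]:
            nll -= np.log(hazard[i, k])      # event occurred in interval k
        else:
            nll -= np.log1p(-hazard[i, k])   # censored: survived interval k
    return nll / n
```

In a neural network, `hazard` would be a sigmoid output per interval and this loss would be minimized over minibatches with SGD; a flexible baseline corresponds to letting the per-interval outputs vary freely with follow-up time.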
A driven-dissipative spin chain model based on exciton-polariton condensates | An infinite chain of driven-dissipative condensate spins with uniform
nearest-neighbor coherent coupling is solved analytically and investigated
numerically. Above a critical occupation threshold the condensates undergo
spontaneous spin bifurcation (becoming magnetized) forming a binary chain of
spin-up or spin-down states. Minimization of the bifurcation threshold
determines the magnetic order as a function of the coupling strength. This
allows control of multiple magnetic orders via adiabatic (slow ramping of)
pumping. In addition to ferromagnetic and anti-ferromagnetic ordered states we
show the formation of a paired-spin ordered state $\left|\dots \uparrow
\uparrow \downarrow \downarrow \dots \right\rangle$ as a consequence of the
phase degree of freedom between condensates.
| 0 | 1 | 0 | 0 | 0 | 0 |
Projected Primal-Dual Gradient Flow of Augmented Lagrangian with Application to Distributed Maximization of the Algebraic Connectivity of a Network | In this paper, a projected primal-dual gradient flow of augmented Lagrangian
is presented to solve convex optimization problems that are not necessarily
strictly convex. The optimization variables are restricted by a convex set with
computable projection operation on its tangent cone as well as equality
constraints. As a supplement of the analysis in
\cite{niederlander2016distributed}, we show that the projected dynamical system
converges to one of the saddle points and hence finds an optimal solution.
Moreover, the problem of distributedly maximizing the algebraic connectivity of
an undirected network by optimizing the port gains of each node (base
station) is considered. The original semi-definite programming (SDP) problem
is relaxed into a nonlinear programming (NP) problem that will be solved by the
aforementioned projected dynamical system. Numerical examples show the
convergence of the aforementioned algorithm to one of the optimal solutions.
The effect of the relaxation is illustrated empirically with numerical
examples. A methodology is presented so that the number of iterations needed to
reach the equilibrium is suppressed. Complexity per iteration of the algorithm
is illustrated with numerical examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Short Note on Almost Sure Convergence of Bayes Factors in the General Set-Up | Although there is a significant literature on the asymptotic theory of Bayes
factors, the set-ups considered are usually specialized and often involve
independent and identically distributed data. Even in such specialized cases,
mostly weak consistency results are available. In this article, for the first
time ever, we derive the almost sure convergence theory of Bayes factor in the
general set-up that includes even dependent data and misspecified models.
Somewhat surprisingly, the key to the proof of such a general theory is a
simple application of a result of Shalizi (2009) to a well-known identity
satisfied by the Bayes factor.
| 0 | 0 | 1 | 1 | 0 | 0 |
Rainbow matchings in properly-coloured multigraphs | Aharoni and Berger conjectured that in any bipartite multigraph that is
properly edge-coloured by $n$ colours with at least $n + 1$ edges of each
colour there must be a matching that uses each colour exactly once. In this
paper we consider the same question without the bipartiteness assumption. We
show that in any multigraph with edge multiplicities $o(n)$ that is properly
edge-coloured by $n$ colours with at least $n + o(n)$ edges of each colour
there must be a matching of size $n-O(1)$ that uses each colour at most once.
| 1 | 0 | 0 | 0 | 0 | 0 |
Recursive computation of the invariant distribution of Markov and Feller processes | This paper provides a general and abstract approach to approximate ergodic
regimes of Markov and Feller processes. More precisely, we show that the
recursive algorithm presented in Lamberton & Pages (2002) and based on
simulation algorithms of stochastic schemes with decreasing step can be used to
build invariant measures for general Markov and Feller processes. We also
propose applications in three different configurations: approximation of
Markov-switching Brownian diffusion ergodic regimes using the Euler scheme,
approximation of Markov Brownian diffusion ergodic regimes with the Milstein
scheme, and approximation of the ergodic regimes of general diffusions with
jump components.
| 0 | 0 | 1 | 0 | 0 | 0 |
Secondary resonances and the boundary of effective stability of Trojan motions | One of the most interesting features in the libration domain of co-orbital
motions is the existence of secondary resonances. For some combinations of
physical parameters, these resonances occupy a large fraction of the domain of
stability and rule the dynamics within the stable tadpole region. In this work,
we present an application of a recently introduced `basic Hamiltonian model' Hb
for Trojan dynamics, in Paez and Efthymiopoulos (2015), Paez, Locatelli and
Efthymiopoulos (2016): we show that the inner border of the secondary resonance
of lowermost order, as defined by Hb, provides a good estimation of the region
in phase-space for which the orbits remain regular regardless of the orbital
parameters of the system. The computation of this boundary is straightforward
by combining a resonant normal form calculation in conjunction with an
`asymmetric expansion' of the Hamiltonian around the libration points, which
speeds up convergence. Applications to the determination of the effective
stability domain for exoplanetary Trojans (planet-sized objects or asteroids)
which may accompany giant exoplanets are discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Qualitative Measurements of Policy Discrepancy for Return-based Deep Q-Network | The deep Q-network (DQN) and return-based reinforcement learning are two
promising algorithms proposed in recent years. DQN brings advances to complex
sequential decision problems, while return-based algorithms have advantages in
making use of sample trajectories. In this paper, we propose a general
framework to combine DQN and most of the return-based reinforcement learning
algorithms, named R-DQN. We show that the performance of traditional DQN can be
improved effectively by introducing return-based reinforcement learning. In
order to further improve the R-DQN, we design a strategy with two measurements
which can qualitatively measure the policy discrepancy. Moreover, we give the
two measurements' bounds in the proposed R-DQN framework. We show that
algorithms with our strategy can accurately express the trace coefficient and
achieve a better approximation to return. The experiments, conducted on several
representative tasks from the OpenAI Gym library, validate the effectiveness of
the proposed measurements. The results also show that the algorithms with our
strategy outperform the state-of-the-art methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Probabilistic Constraints on the Mass and Composition of Proxima b | Recent studies regarding the habitability, observability, and possible
orbital evolution of the indirectly detected exoplanet Proxima b have mostly
assumed a planet with $M \sim 1.3$ $M_\oplus$, a rocky composition, and an
Earth-like atmosphere or none at all. In order to assess these assumptions, we
use previous studies of the radii, masses, and compositions of super-Earth
exoplanets to probabilistically constrain the mass and radius of Proxima b,
assuming an isotropic inclination probability distribution. We find it is ~90%
likely that the planet's density is consistent with a rocky composition;
conversely, it is at least 10% likely that the planet has a significant amount
of ice or an H/He envelope. If the planet does have a rocky composition, then
we find expectation values and 95% confidence intervals of
$\left<M\right>_\text{rocky} = 1.63_{-0.72}^{+1.66}$ $M_\oplus$ for its mass
and $\left<R\right>_\text{rocky} = 1.07_{-0.31}^{+0.38}$ $R_\oplus$ for its
radius.
| 0 | 1 | 0 | 0 | 0 | 0 |
Explicit Salem sets, Fourier restriction, and metric Diophantine approximation in the $p$-adic numbers | We exhibit the first explicit examples of Salem sets in $\mathbb{Q}_p$ of
every dimension $0 < \alpha < 1$ by showing that certain sets of
well-approximable $p$-adic numbers are Salem sets. We construct measures
supported on these sets that satisfy essentially optimal Fourier decay and
upper regularity conditions, and we observe that these conditions imply that
the measures satisfy strong Fourier restriction inequalities. We also partially
generalize our results to higher dimensions. Our results extend theorems of
Kaufman, Papadimitropoulos, and Hambrook from the real to the $p$-adic setting.
| 0 | 0 | 1 | 0 | 0 | 0 |
Tunneling of the hard-core model on finite triangular lattices | We consider the hard-core model on finite triangular lattices with Metropolis
dynamics. Under suitable conditions on the triangular lattice dimensions, this
interacting particle system has three maximum-occupancy configurations and we
investigate its high-fugacity behavior by studying tunneling times, i.e., the
first hitting times between these maximum-occupancy configurations, and
the mixing time. The proof method relies on the analysis of the corresponding
state space using geometrical and combinatorial properties of the hard-core
configurations on finite triangular lattices, in combination with known results
for first hitting times of Metropolis Markov chains in the equivalent
zero-temperature limit. In particular, we show how the order of magnitude of
the expected tunneling times depends on the triangular lattice dimensions in
the low-temperature regime and prove the asymptotic exponentiality of the
rescaled tunneling time leveraging the intrinsic symmetry of the state space.
| 0 | 1 | 1 | 0 | 0 | 0 |
Spectral State Compression of Markov Processes | Model reduction of Markov processes is a basic problem in modeling
state-transition systems. Motivated by the state aggregation approach rooted in
control theory, we study the statistical state compression of a finite-state
Markov chain from empirical trajectories. Through the lens of spectral
decomposition, we study the rank and features of Markov processes, as well as
properties like representability, aggregatability, and lumpability. We develop
a class of spectral state compression methods for three tasks: (1) estimate the
transition matrix of a low-rank Markov model, (2) estimate the leading subspace
spanned by Markov features, and (3) recover latent structures of the state
space like state aggregation and lumpable partition. The proposed methods
provide an unsupervised learning framework for identifying Markov features and
clustering states. We provide upper bounds for the estimation errors and nearly
matching minimax lower bounds. Numerical studies are performed on synthetic
data and a dataset of New York City taxi trips.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multi-band characterization of the hot Jupiters: WASP-5b, WASP-44b and WASP-46b | We have carried out a campaign to characterize the hot Jupiters WASP-5b,
WASP-44b and WASP-46b using multiband photometry collected at the
Observatório do Pico Dos Dias in Brazil. We have determined the planetary
physical properties and new transit ephemerides for these systems. The new
orbital parameters and physical properties of WASP-5b and WASP-44b are
consistent with previous estimates. In the case of WASP-46b, there is some
disagreement among previous results. We provide a new determination
of the radius of this planet and help clarify the previous differences. We also
studied the transit time variations including our new measurements. No clear
variation from a linear trend was found for the systems WASP-5b and WASP-44b.
In the case of WASP-46b, we found evidence of deviations indicating the
presence of a companion, but a statistical analysis of the existing transit
times points to a signal due to the sampling rather than to a new planet.
Finally, we studied
the fractional radius variation as a function of wavelength for these systems.
The broad-band spectra of WASP-5b and WASP-44b are mostly flat. In the case
of WASP-46b we found a trend, but further measurements are necessary to confirm
this finding.
| 0 | 1 | 0 | 0 | 0 | 0 |
Non-Fermi liquid at the FFLO quantum critical point | When a 2D superconductor is subjected to a strong in-plane magnetic field,
Zeeman polarization of the Fermi surface can give rise to inhomogeneous FFLO
order with a spatially modulated gap. Further increase of the magnetic field
eventually drives the system into a normal metal state. Here, we perform a
renormalization group analysis of this quantum phase transition, starting from
an appropriate low-energy theory recently introduced by Piazza et al. (Ref.1).
We compute one-loop flow equations within the controlled dimensional
regularization scheme with fixed dimension of Fermi surface, expanding in
$\epsilon = 5/2 - d$. We find a new stable non-Fermi liquid fixed point and
discuss its critical properties. One of the most interesting aspects of the
FFLO non-Fermi liquid scenario is that the quantum critical point is
potentially naked, with the scaling regime observable down to arbitrarily low
temperatures. In order to study this possibility, we perform a general analysis
of competing instabilities, which suggests that only charge density wave order
is enhanced in the vicinity of the quantum critical point.
| 0 | 1 | 0 | 0 | 0 | 0 |
Unsupervised Ensemble Regression | Consider a regression problem where there is no labeled data and the only
observations are the predictions $f_i(x_j)$ of $m$ experts $f_{i}$ over many
samples $x_j$. With no knowledge on the accuracy of the experts, is it still
possible to accurately estimate the unknown responses $y_{j}$? Can one still
detect the least or most accurate experts? In this work we propose a framework
to study these questions, based on the assumption that the $m$ experts have
uncorrelated deviations from the optimal predictor. Assuming the first two
moments of the response are known, we develop methods to detect the best and
worst regressors, and derive U-PCR, a novel principal components approach for
unsupervised ensemble regression. We provide theoretical support for U-PCR and
illustrate its improved accuracy over the ensemble mean and median on a variety
of regression problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Accurate approximation of the distributions of the 3D Poisson-Voronoi typical cell geometrical features | Although Poisson-Voronoi diagrams have interesting mathematical properties,
there is still much to discover about the geometrical properties of their grains.
Through simulations, many authors were able to obtain numerical approximations
of the moments of the distributions of more or less all geometrical
characteristics of the grain. Furthermore, many proposals on how to get close
parametric approximations to the real distributions were put forward by several
authors. In this paper we show that, by exploiting the scaling property of the
underlying Poisson process, we are able to derive the distribution of the main
geometrical features of the grain for every value of the intensity parameter.
Moreover, we use a sophisticated simulation program to construct a close Monte
Carlo based approximation for the distributions of interest. Using this, we
also determine the closest approximating distributions within the mentioned
frequently used parametric classes of distributions and conclude that these
approximations can be quite accurate.
| 0 | 0 | 0 | 1 | 0 | 0 |
Flux-Stabilized Majorana Zero Modes in Coupled One-Dimensional Fermi Wires | One promising avenue to study one-dimensional ($1$D) topological phases is to
realize them in synthetic materials such as cold atomic gases. Intriguingly, it
is possible to realize Majorana boundary modes in a $1$D number-conserving
system consisting of two fermionic chains coupled only by pair-hopping
processes. It is commonly believed that significant interchain single-particle
tunneling necessarily destroys these Majorana modes, as it spoils the
$\mathbb{Z}_2$ fermion parity symmetry that protects them. In this Letter, we
present a new mechanism to overcome this obstacle, by piercing a (synthetic)
magnetic $\pi$-flux through each plaquette of the Fermi ladder. Using
bosonization, we show that in this case there exists an exact leg-interchange
symmetry that is robust to interchain hopping, and acts as fermion parity at
long wavelengths. We utilize density matrix renormalization group and exact
diagonalization to verify that the resulting model exhibits Majorana boundary
modes up to large single-particle tunnelings, comparable to the intrachain
hopping strength. Our work highlights the unusual impacts of different
topologically trivial band structures on these interaction-driven topological
phases, and identifies a distinct route to stabilizing Majorana boundary modes
in $1$D fermionic ladders.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonlinear Kalman Filtering for Censored Observations | The use of Kalman filtering, as well as its nonlinear extensions, for the
estimation of system variables and parameters has played a pivotal role in many
fields of scientific inquiry where observations of the system are restricted to
a subset of variables. However in the case of censored observations, where
measurements of the system beyond a certain detection point are impossible, the
estimation problem is complicated. Without appropriate consideration, censored
observations can lead to inaccurate estimates. Motivated by the work of [1], we
develop a modified version of the extended Kalman filter to handle the case of
censored observations in nonlinear systems. We validate this methodology in a
simple oscillator system first, showing its ability to accurately reconstruct
state variables and track system parameters when observations are censored.
Finally, we utilize the nonlinear censored filter to analyze censored datasets
from patients with hepatitis C and human immunodeficiency virus.
| 0 | 0 | 1 | 1 | 0 | 0 |
Advances in Atomic Resolution In Situ Environmental Transmission Electron Microscopy and 1 Angstrom Aberration Corrected In Situ Electron Microscopy | Advances in atomic resolution in situ environmental transmission electron
microscopy for direct probing of gas-solid reactions, including at very high
temperatures are described. In addition, recent developments of dynamic real
time in situ studies at the Angstrom level using a hot stage in an aberration
corrected environment are presented. In situ data from Pt and Pd nanoparticles
on carbon with the corresponding FFT (optical diffractogram) illustrate an
achieved resolution of 0.11 nm at 500 °C and higher in a double aberration
corrected TEM and STEM instrument employing a wider gap objective pole piece.
The new results open up opportunities for dynamic studies of materials in an
aberration corrected environment.
| 0 | 1 | 0 | 0 | 0 | 0 |
An improved high order finite difference method for non-conforming grid interfaces for the wave equation | This paper presents an extension of a recently developed high order finite
difference method for the wave equation on a grid with non-conforming
interfaces. The stability proof of the existing methods relies on the
interpolation operators being norm-contracting, which is satisfied by the
second and fourth order operators, but not by the sixth order operator. We
construct new penalty terms to impose interface conditions such that the
stability proof does not require the norm-contracting condition. As a
consequence, the sixth order accurate scheme is also provably stable. Numerical
experiments demonstrate the improved stability and accuracy properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Refined estimates for simple blow-ups of the scalar curvature equation on S^n | In their work on a sharp compactness theorem for the Yamabe problem, Khuri,
Marques and Schoen apply a refined blow-up analysis (what we call `second order
blow-up argument' in this article) to obtain highly accurate approximate
solutions for the Yamabe equation. As for the conformal scalar curvature
equation on S^n with n > 3, we examine the second order blow-up argument and
obtain refined estimates for a blow-up sequence near a simple blow-up point. The
estimate involves local effect from the Taylor expansion of the scalar
curvature function, global effect from other blow-up points, and the balance
formula as expressed in the Pohozaev identity in an essential way.
| 0 | 0 | 1 | 0 | 0 | 0 |
The discrete logarithm problem over prime fields: the safe prime case. The Smart attack, non-canonical lifts and logarithmic derivatives | In this brief note we connect the discrete logarithm problem over prime
fields in the safe prime case to the logarithmic derivative.
| 1 | 0 | 1 | 0 | 0 | 0 |
DGCNN: Disordered Graph Convolutional Neural Network Based on the Gaussian Mixture Model | Convolutional neural networks (CNNs) can be applied to graph similarity
matching, in which case they are called graph CNNs. Graph CNNs are attracting
increasing attention due to their effectiveness and efficiency. However, the
existing convolution approaches focus only on regular data forms and require
the transfer of the graph or key node neighborhoods of the graph into the same
fixed form. During this transfer process, structural information of the graph
can be lost, and some redundant information can be incorporated. To overcome
this problem, we propose the disordered graph convolutional neural network
(DGCNN) based on the Gaussian mixture model, which extends the CNN by adding a
preprocessing layer called the disordered graph convolutional layer (DGCL). The
DGCL uses a Gaussian mixture function to realize the mapping between the
convolution kernel and the nodes in the neighborhood of the graph. The output
of the DGCL is the input of the CNN. We further implement a
backward-propagation optimization process of the convolutional layer by which
we incorporate the feature-learning model of the irregular node neighborhood
structure into the network. Thereafter, the optimization of the convolution
kernel becomes part of the neural network learning process. The DGCNN can
accept arbitrarily scaled and disordered neighborhood graph structures as the
receptive fields of CNNs, which reduces information loss during graph
transformation. Finally, we perform experiments on multiple standard graph
datasets. The results show that the proposed method outperforms the
state-of-the-art methods in graph classification and retrieval.
| 1 | 0 | 0 | 1 | 0 | 0 |
Profile Estimation for Partial Functional Partially Linear Single-Index Model | This paper studies a \textit{partial functional partially linear single-index
model} that consists of a functional linear component as well as a linear
single-index component. This model generalizes many well-known existing models
and is suitable for more complicated data structures. However, its estimation
inherits the difficulties and complexities from both components and makes it a
challenging problem, which calls for new methodology. We propose a novel
profile B-spline method to estimate the parameters by approximating the unknown
nonparametric link function in the single-index component part with B-spline,
while the linear slope function in the functional component part is estimated
by the functional principal component basis. The consistency and asymptotic
normality of the parametric estimators are derived, and the global convergence
of the proposed estimator of the linear slope function is also established.
Notably, the latter convergence is optimal in the minimax sense. A
two-stage procedure is implemented to estimate the nonparametric link function,
and the resulting estimator possesses the optimal global rate of convergence.
Furthermore, the convergence rate of the mean squared prediction error for a
predictor is also obtained. Empirical properties of the proposed procedures are
studied through Monte Carlo simulations. A real data example is also analyzed
to illustrate the power and flexibility of the proposed methodology.
| 0 | 0 | 1 | 1 | 0 | 0 |
Optimal $k$-Coverage Charging Problem | Wireless rechargeable sensor networks, consisting of sensor nodes with
rechargeable batteries and mobile chargers to replenish their batteries, have
gradually become a promising solution to the bottleneck of energy limitation
that hinders the wide deployment of wireless sensor networks (WSN). In this
paper, we focus on the mobile charger scheduling and path optimization scenario
in which the $k$-coverage ability of a network system needs to be maintained.
We formulate the optimal $k$-coverage charging problem of finding a feasible
path for a mobile charger to charge a set of sensor nodes within their
estimated charging time windows under the constraint of maintaining the
$k$-coverage ability of the network system, with an objective of minimizing the
energy consumption on traveling per tour. We show the hardness of the problem:
even finding a feasible path for the trivial case of the problem is
NP-complete, with no polynomial-time constant-factor approximation algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Facebook's gender divide | Online social media are information resources that can have a transformative
power in society. While the Web was envisioned as an equalizing force that
allows everyone to access information, the digital divide prevents large
amounts of people from being present online. Online social media in particular
are prone to gender inequality, an important issue given the link between
social media use and employment. Understanding gender inequality in social
media is a challenging task due to the necessity of data sources that can
provide unbiased measurements across multiple countries. Here we show how the
Facebook Gender Divide (FGD), a metric based on a dataset including more than
1.4 billion users in 217 countries, explains various aspects of worldwide
gender inequality. Our analysis shows that the FGD encodes gender equality
indices in education, health, and economic opportunity. We find network effects
that suggest that using social media has an added value for women. Furthermore,
we find that low values of the FGD precede the approach of countries towards
economic gender equality. Our results suggest that online social networks,
while suffering evident gender imbalance, may lower the barriers that women
have to access informational resources and help to narrow the economic gender
gap.
| 1 | 0 | 0 | 0 | 0 | 0 |
Knowing the past improves cooperation in the future | Cooperation is the cornerstone of human evolutionary success. Like no other
species, we champion the sacrifice of personal benefits for the common good,
and we work together to achieve what we are unable to achieve alone. Knowledge
and information from past generations is thereby often instrumental in ensuring
we keep cooperating rather than deteriorating to less productive ways of
coexistence. Here we present a mathematical model based on evolutionary game
theory that shows how using the past as the benchmark for evolutionary success,
rather than just current performance, significantly improves cooperation in the
future. Interestingly, the details of just how the past is taken into account
play only second-order importance, whether it be a weighted average of past
payoffs or just a single payoff value from the past. Cooperation is promoted
because information from the past disables fast invasions of defectors, thus
enhancing the long-term benefits of cooperative behavior.
| 1 | 0 | 0 | 0 | 1 | 0 |
BMO estimate of lacunary Fourier series on nonabelian discrete groups | We show that the classical equivalence between the BMO norm and the $L^2$
norm of a lacunary Fourier series has an analogue on any discrete group $G$
equipped with a conditionally negative function.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multi-objective training of Generative Adversarial Networks with multiple discriminators | Recent literature has demonstrated promising results for training Generative
Adversarial Networks by employing a set of discriminators, in contrast to the
traditional game involving one generator against a single adversary. Such
methods perform single-objective optimization on some simple consolidation of
the losses, e.g. an arithmetic average. In this work, we revisit the
multiple-discriminator setting by framing the simultaneous minimization of
losses provided by different models as a multi-objective optimization problem.
Specifically, we evaluate the performance of multiple gradient descent and the
hypervolume maximization algorithm on a number of different datasets. Moreover,
we argue that the previously proposed methods and hypervolume maximization can
all be seen as variations of multiple gradient descent in which the update
direction can be computed efficiently. Our results indicate that hypervolume
maximization presents a better compromise between sample quality and
computational cost than previous methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Application of Surface Coil for Nuclear Magnetic Resonance Studies of Semi-conducting Thin Films | We conduct a comprehensive set of tests of performance of surface coils used
for nuclear magnetic resonance (NMR) study of quasi 2-dimensional samples. We
report ${^{115} \rm{In}}$ and ${^{31} \rm{P}}$ NMR measurements on InP,
semi-conducting thin substrate samples. Surface coils of both zig-zag
meander-line and concentric spiral geometries were used. We compare reception
sensitivity and signal-to-noise ratio (SNR) of NMR signal obtained by using
surface-type coils to that obtained by standard solenoid-type coils. As
expected, we find that surface-type coils provide better sensitivity for NMR
study of thin films samples. Moreover, we compare the reception sensitivity of
different types of the surface coils. We identify the optimal geometry of the
surface coils for a given application and/or direction of the applied magnetic
field.
| 0 | 1 | 0 | 0 | 0 | 0 |
A universal coarse K-theory | In this paper, we construct an equivariant coarse homology theory with values
in the category of non-commutative motives of Blumberg, Gepner and Tabuada,
with coefficients in any small additive category. Equivariant coarse K-theory
is obtained from the latter by passing to global sections. The present
construction extends joint work of the first named author with Engel,
Kasprowski and Winges by promoting the codomain of the equivariant coarse
K-homology functor to non-commutative motives.
| 0 | 0 | 1 | 0 | 0 | 0 |
Parameters of Three Selected Model Galactic Potentials Based on the Velocities of Objects at Distances up to 200 kpc | This paper is a continuation of our recent paper devoted to refining the
parameters of three-component (bulge, disk, halo) axisymmetric model Galactic
gravitational potentials differing by the expression for the dark matter halo
using the velocities of distant objects. In all models the bulge and disk
potentials are described by the Miyamoto-Nagai expressions. In our previous
paper we used the Allen-Santillán (I), Wilkinson-Evans (II), and
Navarro-Frenk-White (III) models to describe the halo. In this paper we use a
spherical logarithmic Binney potential (model IV), a Plummer sphere (model V),
and a Hernquist potential (model VI) to describe the halo. A set of present-day
observational data in the range of Galactocentric distances R from 0 to 200 kpc
is used to refine the parameters of the listed models, which are employed most
commonly at present. The model rotation curves are fitted to the observed
velocities by taking into account the constraints on the local matter density
and the vertical force. Model VI looks best among the three models considered
here from the viewpoint of the achieved accuracy of fitting the model rotation
curves to the measurements. This model is close to the Navarro-Frenk-White
model III refined and considered best in our previous paper, which is shown
using the integration of the orbits of two globular clusters, Lynga 7 and NGC
5053, as an example.
| 0 | 1 | 0 | 0 | 0 | 0 |
Giant Thermal Conductivity Enhancement in Multilayer MoS2 under Highly Compressive Strain | Multilayer MoS2 possesses highly anisotropic thermal conductivities along
in-plane and cross-plane directions that could hamper heat dissipation in
electronics. With about 9% cross-plane compressive strain created by
hydrostatic pressure in a diamond anvil cell, we observed an approximately
12-fold increase in the cross-plane thermal conductivity of multilayer MoS2. Our
experimental and theoretical studies reveal that this drastic change arises
from the greatly strengthened interlayer interaction and heavily modified
phonon dispersions along cross-plane direction, with negligible contribution
from electronic thermal conductivity, despite its enhancement of 4 orders of
magnitude. The anisotropic thermal conductivity in the multilayer MoS2 at
ambient environment becomes almost isotropic under highly compressive strain,
effectively transitioning from 2D to 3D heat dissipation. This strain tuning
approach also makes possible parallel tuning of structural, thermal and
electrical properties, and can be extended to the whole family of 2D van der
Waals solids, down to two-layer systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Interrogation of spline surfaces with application to isogeometric design and analysis of lattice-skin structures | A novel surface interrogation technique is proposed to compute the
intersection of curves with spline surfaces in isogeometric analysis. The
intersection points are determined in one shot, without resorting to a
Newton-Raphson iteration or successive refinement. Surface-curve intersection
usually requires the solution of a system of nonlinear equations. It is assumed
that the surface is given in form of a spline, such as a NURBS, T-spline or
Catmull-Clark subdivision surface, and is convertible into a collection of
Bézier patches. First, a hierarchical bounding volume tree is used to
efficiently identify the Bézier patches with a convex-hull intersecting the
convex-hull of a given curve segment. For ease of implementation convex-hulls
are approximated with k-dops (discrete orientation polytopes). Subsequently,
the intersections of the identified Bézier patches with the curve segment are
determined with a matrix-based implicit representation leading to the
computation of a sequence of small singular value decompositions (SVDs). As an
application of the developed interrogation technique the isogeometric design
and analysis of lattice-skin structures is investigated. Current additive
manufacturing technologies make it possible to produce up to metre size parts
with designed geometric features reaching down to submillimetre scale. The skin
is a spline surface that is usually created in a computer-aided design (CAD)
system and the periodic lattice to be fitted consists of unit cells, each
containing a small number of struts. The lattice-skin structure is generated by
projecting selected lattice nodes onto the surface after determining the
intersection of unit cell edges with the surface. For mechanical analysis, the
skin is modelled as a Kirchhoff-Love thin-shell and the lattice as a
pin-jointed truss. The two types of structures are coupled with a standard
Lagrange multiplier approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
On recursive computation of coprime factorizations of rational matrices | We propose general computational procedures based on descriptor state-space
realizations to compute coprime factorizations of rational matrices with
minimum degree denominators. Enhanced recursive pole dislocation techniques are
developed, which allow one to successively place all poles of the factors into a
given "good" domain of the complex plane. The resulting McMillan degree of the
denominator factor is equal to the number of poles lying in the complementary
"bad" region and therefore is minimal. The new pole dislocation techniques are
employed to compute coprime factorizations with proper and stable factors of
arbitrary improper rational matrices and coprime factorizations with inner
denominators. The proposed algorithms work for arbitrary descriptor
representations, regardless of whether they are stabilizable or detectable.
| 1 | 0 | 1 | 0 | 0 | 0 |
On Dark Matter Interactions with the Standard Model through an Anomalous $Z'$ | We study electroweak scale Dark Matter (DM) whose interactions with baryonic
matter are mediated by a heavy anomalous $Z'$. We emphasize that when the DM is
a Majorana particle, its low-velocity annihilations are dominated by loop
suppressed annihilations into the gauge bosons, rather than by p-wave or
chirally suppressed annihilations into the SM fermions. Because the $Z'$ is
anomalous, these kinds of DM models can be realized only as effective field
theories (EFTs) with a well-defined cutoff, where heavy spectator fermions
restore gauge invariance at high energies. We formulate these EFTs, estimate
their cutoff and properly take into account the effect of the Chern-Simons
terms one obtains after the spectator fermions are integrated out. We find
that, while collider and direct detection experiments usually provide the
strongest bounds for light DM, the bounds at higher masses are heavily dominated
by indirect detection experiments, due to strong annihilation into $W^+W^-$,
$ZZ$, $Z\gamma$ and possibly into $gg$ and $\gamma\gamma$. We emphasize that
these annihilation channels are generically significant because of the
structure of the EFT, and therefore these models are prone to strong indirect
detection constraints. Even though we focus on selected $Z'$ models for
illustrative purposes, our setup is completely generic and can be used for
analyzing the predictions of any anomalous $Z'$-mediated DM model with
arbitrary charges.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spin diffusion from an inhomogeneous quench in an integrable system | Generalised hydrodynamics predicts universal ballistic transport in
integrable lattice systems when prepared in generic inhomogeneous initial
states. However, the ballistic contribution to transport can vanish in systems
with additional discrete symmetries. Here we perform large-scale numerical
simulations of spin dynamics in the anisotropic Heisenberg $XXZ$ spin-$1/2$
chain starting from an inhomogeneous mixed initial state which is symmetric
with respect to a combination of spin-reversal and spatial reflection. In the
isotropic and easy-axis regimes we find non-ballistic spin transport which we
analyse in detail in terms of scaling exponents of the transported
magnetisation and scaling profiles of the spin density. While in the easy-axis
regime we find accurate evidence of normal diffusion, the spin transport in the
isotropic case is clearly super-diffusive, with the scaling exponent very close
to $2/3$, but with universal scaling dynamics which obeys the diffusion
equation in nonlinearly scaled time.
| 0 | 1 | 0 | 0 | 0 | 0 |
Morphological Simplification of Archaeological Fracture Surfaces | We propose to employ scale spaces of mathematical morphology to
hierarchically simplify fracture surfaces of complementarily fitting
archaeological fragments. This representation preserves contact and is
insensitive to different kinds of abrasion affecting the exact complementarity
of the original fragments. We present a pipeline for morphologically
simplifying fracture surfaces, based on their Lipschitz nature; its core is a
new embedding of fracture surfaces to simultaneously compute both closing and
opening morphological operations, using distance transforms.
| 1 | 0 | 0 | 0 | 0 | 0 |
A note on a separating system of rational invariants for finite dimensional generic algebras | The paper deals with the construction of a separating system of rational
invariants for finite dimensional generic algebras. In the process, an
approach to a rough classification of finite dimensional algebras is offered
by attaching certain quadratic forms to them.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Modality-Adaptive Method for Segmenting Brain Tumors and Organs-at-Risk in Radiation Therapy Planning | In this paper we present a method for simultaneously segmenting brain tumors
and an extensive set of organs-at-risk for radiation therapy planning of
glioblastomas. The method combines a contrast-adaptive generative model for
whole-brain segmentation with a new spatial regularization model of tumor shape
using convolutional restricted Boltzmann machines. We demonstrate
experimentally that the method is able to adapt to image acquisitions that
differ substantially from any available training data, ensuring its
applicability across treatment sites; that its tumor segmentation accuracy is
comparable to that of the current state of the art; and that it captures most
organs-at-risk sufficiently well for radiation therapy planning purposes. The
proposed method may be a valuable step towards automating the delineation of
brain tumors and organs-at-risk in glioblastoma patients undergoing radiation
therapy.
| 0 | 0 | 0 | 1 | 0 | 0 |
Protein Pattern Formation | Protein pattern formation is essential for the spatial organization of many
intracellular processes like cell division, flagellum positioning, and
chemotaxis. A prominent example of intracellular patterns is the pole-to-pole
oscillation of Min proteins in \textit{E. coli}, whose biological
function is to ensure precise cell division. Cell polarization, a prerequisite
for processes such as stem cell differentiation and cell polarity in yeast, is
also mediated by a diffusion-reaction process. More generally, these functional
modules of cells serve as model systems for self-organization, one of the core
principles of life. Under which conditions spatio-temporal patterns emerge, and
how these patterns are regulated by biochemical and geometrical factors are
major aspects of current research. Here we review recent theoretical and
experimental advances in the field of intracellular pattern formation, focusing
on general design principles and fundamental physical mechanisms.
| 0 | 0 | 0 | 0 | 1 | 0 |
What do we need to build explainable AI systems for the medical domain? | Artificial intelligence (AI) generally and machine learning (ML) specifically
demonstrate impressive practical success in many different application domains,
e.g. in autonomous driving, speech recognition, or recommender systems. Deep
learning approaches, trained on extremely large data sets or using
reinforcement learning methods, have even exceeded human performance in visual
tasks, particularly on playing games such as Atari, or mastering the game of
Go. Even in the medical domain there are remarkable results. The central
problem of such models is that they are regarded as black-box models and even
if we understand the underlying mathematical principles, they lack an explicit
declarative knowledge representation, hence have difficulty in generating the
underlying explanatory structures. This calls for systems that make
decisions transparent, understandable and explainable. A major motivation for
our approach is the rising importance of legal and privacy aspects. The new
European General Data Protection Regulation, entering into force on May 25th,
2018, will make black-box approaches difficult to use in business. This does
not imply a ban on
automatic learning approaches or an obligation to explain everything all the
time; however, there must be a possibility to make the results re-traceable on
demand. In this paper we outline some of our research topics in the context of
the relatively new area of explainable-AI with a focus on the application in
medicine, which is a very special domain. This is due to the fact that medical
professionals are working mostly with distributed heterogeneous and complex
sources of data. In this paper we concentrate on three sources: images, *omics
data and text. We argue that research in explainable-AI would generally help to
facilitate the implementation of AI/ML in the medical domain, and specifically
help to facilitate transparency and trust.
| 1 | 0 | 0 | 1 | 0 | 0 |
Targeted matrix completion | Matrix completion is a problem that arises in many data-analysis settings
where the input consists of a partially-observed matrix (e.g., recommender
systems, traffic matrix analysis, etc.). Classical approaches to matrix
completion assume that the input partially-observed matrix is low rank. The
success of these methods depends on the number of observed entries and the rank
of the matrix; the larger the rank, the more entries need to be observed in
order to accurately complete the matrix. In this paper, we deal with matrices
that are not necessarily low rank themselves, but rather they contain low-rank
submatrices. We propose Targeted, which is a general framework for completing
such matrices. In this framework, we first extract the low-rank submatrices and
then apply a matrix-completion algorithm to these low-rank submatrices as well
as the remainder matrix separately. Although for the completion itself we use
state-of-the-art completion methods, our results demonstrate that Targeted
achieves significantly smaller reconstruction errors than other classical
matrix-completion methods. One of the key technical contributions of the paper
lies in the identification of the low-rank submatrices from the input
partially-observed matrices.
| 1 | 0 | 0 | 1 | 0 | 0 |
Coresets for Dependency Networks | Many applications infer the structure of a probabilistic graphical model from
data to elucidate the relationships between variables. But how can we train
graphical models on a massive data set? In this paper, we show how to construct
coresets (compressed data sets which can be used as a proxy for the original
data and have provably bounded worst-case error) for Gaussian dependency
networks
(DNs), i.e., cyclic directed graphical models over Gaussians, where the parents
of each variable are its Markov blanket. Specifically, we prove that Gaussian
DNs admit coresets of size independent of the size of the data set.
Unfortunately, this does not extend to DNs over members of the exponential
family in general. As we will prove, Poisson DNs do not admit small coresets.
Despite this worst-case result, we will provide an argument why our coreset
construction for DNs can still work well in practice on count data. To
corroborate our theoretical results, we empirically evaluated the resulting
Core DNs on real data sets. The results
| 1 | 0 | 0 | 1 | 0 | 0 |
Relative stability of a ferroelectric state in (Na0.5Bi0.5)TiO3-based compounds under substitutions: Role of a tolerance factor in expansion of the temperature interval of stable ferroelectric state | The influence of the B-site ion substitutions in
(1-x)(Bi1/2Na1/2)TiO3-xBaTiO3 system of solid solutions on the relative
stability of the ferroelectric and antiferroelectric phases has been studied.
The ions of zirconium, tin, along with (In0.5Nb0.5), (Fe0.5Nb0.5), (Al0.5V0.5)
ion complexes have been used as substituting elements. An increase in the
concentration of the substituting ion results in a near-linear variation in the
size of the crystal lattice cell. Along with the cell size variation a change
in the relative stability of the ferroelectric and antiferroelectric phases
takes place according to the changes of the tolerance factor of the solid
solution. An increase in the tolerance factor leads to the increase in the
temperature of the ferroelectric-antiferroelectric phase transition, and vice
versa. All obtained results demonstrate the predominant influence of the ion
size factor on the relative stability of the ferroelectric and
antiferroelectric states in the (Na0.5Bi0.5)TiO3-based solid solutions and
indicate the way for raising the temperature of the
ferroelectric-antiferroelectric phase transition.
| 0 | 1 | 0 | 0 | 0 | 0 |
Data-driven modelling and validation of aircraft inbound-stream at some major European airports | This paper presents an exhaustive study on the arrivals process at eight
important European airports. Using inbound traffic data, we define, compare,
and contrast a data-driven Poisson and PSRA point process. Although there is
sufficient evidence that the interarrivals might follow an exponential
distribution, this finding does not directly translate to evidence that the
arrivals stream is Poisson. The main reason is that finite-capacity constraints
impose a correlation structure to the arrivals stream, which a Poisson model
cannot capture. We show the weaknesses, and to some extent the difficulties,
of using a Poisson process to model the arrivals stream with good
approximation. On the
other hand, our innovative non-parametric, data-driven PSRA model, predicts
quite well and captures important properties of the typical arrivals stream.
| 0 | 0 | 0 | 1 | 0 | 0 |
Exact evolution equation for the effective potential | We derive a new exact evolution equation for the scale dependence of an
effective action. The corresponding equation for the effective potential
permits a useful truncation. This allows one to deal with the infrared problems
of theories with massless modes in less than four dimensions, which are relevant
for the high temperature phase transition in particle physics or the
computation of critical exponents in statistical mechanics.
| 0 | 1 | 0 | 0 | 0 | 0 |