title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
On Achievable Rates of AWGN Energy-Harvesting Channels with Block Energy Arrival and Non-Vanishing Error Probabilities | This paper investigates the achievable rates of an additive white Gaussian
noise (AWGN) energy-harvesting (EH) channel with an infinite battery. The EH
process is characterized by a sequence of blocks of harvested energy, which is
known causally at the source. The harvested energy remains constant within a
block while the harvested energy across different blocks is characterized by a
sequence of independent and identically distributed (i.i.d.) random variables.
The blocks have length $L$, which can be interpreted as the coherence time of
the energy arrival process. If $L$ is a constant or grows sublinearly in the
blocklength $n$, we fully characterize the first-order term in the asymptotic
expansion of the maximum transmission rate subject to a fixed tolerable error
probability $\varepsilon$. The first-order term is known as the
$\varepsilon$-capacity. In addition, we obtain lower and upper bounds on the
second-order term in the asymptotic expansion, which reveal that the second
order term scales as $\sqrt{\frac{L}{n}}$ for any $\varepsilon$ less than
$1/2$. The lower bound is obtained through analyzing the save-and-transmit
strategy. If $L$ grows linearly in $n$, we obtain lower and upper bounds on the
$\varepsilon$-capacity, which coincide whenever the cumulative distribution
function (cdf) of the EH random variable is continuous and strictly increasing.
In order to achieve the lower bound, we propose a novel adaptive
save-and-transmit strategy, which chooses different save-and-transmit codes
across blocks according to the energy variation across the blocks.
| 1 | 0 | 1 | 0 | 0 | 0 |
Probabilistic Projection of Subnational Total Fertility Rates | We consider the problem of probabilistic projection of the total fertility
rate (TFR) for subnational regions. We seek a method that is consistent with
the UN's recently adopted Bayesian method for probabilistic TFR projections for
all countries, and works well for all countries. We assess various possible
methods using subnational TFR data for 47 countries. We find that the method
that performs best in terms of out-of-sample predictive performance and also in
terms of reproducing the within-country correlation in TFR is a method that
scales the national trajectory by a region-specific scale factor that is
allowed to vary slowly over time. This supports the hypothesis of Watkins
(1990, 1991) that within-country TFR converges over time in response to
country-specific factors, and extends the Watkins hypothesis to the last 50
years and to a much wider range of countries around the world.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimising the topological information of the $A_\infty$-persistence groups | Persistent homology typically studies the evolution of homology groups
$H_p(X)$ (with coefficients in a field) along a filtration of topological
spaces. $A_\infty$-persistence extends this theory by analysing the evolution
of subspaces such as $V := \text{Ker}\, {\Delta_n}_{| H_p(X)} \subseteq
H_p(X)$, where $\{\Delta_m\}_{m\geq1}$ denotes a structure of
$A_\infty$-coalgebra on $H_*(X)$. In this paper we illustrate how
$A_\infty$-persistence can be useful beyond persistent homology by discussing
the topological meaning of $V$, which is the most basic form of
$A_\infty$-persistence group. In addition, we explore how to choose
$A_\infty$-coalgebras along a filtration to make the $A_\infty$-persistence
groups carry more faithful information.
| 1 | 0 | 1 | 0 | 0 | 0 |
Transmission XMCD-PEEM imaging of an engineered vertical FEBID cobalt nanowire with a domain wall | Using focused electron-beam-induced deposition (FEBID), we fabricate
vertical, platinum-coated cobalt nanowires with a controlled three-dimensional
structure. The latter is engineered to feature bends along the height: these
are used as pinning sites for domain walls, the presence of which we
investigate using X-ray Magnetic Circular Dichroism (XMCD) coupled to
PhotoEmission Electron Microscopy (PEEM). The vertical geometry of our sample
combined with the low incidence of the X-ray beam produce an extended wire
shadow which we use to recover the wire's magnetic configuration. In this
transmission configuration, the whole sample volume is probed, thus
circumventing the limitation of PEEM to surfaces. This article reports on the
first study of magnetic nanostructures standing perpendicular to the substrate
with XMCD-PEEM. The use of this technique in shadow mode enabled us to confirm
the presence of a domain wall (DW) without direct imaging of the nanowire.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Deep Neural Network Surrogate for High-Dimensional Random Partial Differential Equations | Developing efficient numerical algorithms for the solution of high
dimensional random Partial Differential Equations (PDEs) has been a challenging
task due to the well-known curse of dimensionality. We present a new solution
framework for these problems based on a deep learning approach. Specifically,
the random PDE is approximated by a feed-forward fully-connected deep residual
network, with either strong or weak enforcement of initial and boundary
constraints. The framework is mesh-free, and can handle irregular computational
domains. Parameters of the approximating deep neural network are determined
iteratively using variants of the Stochastic Gradient Descent (SGD) algorithm.
The satisfactory accuracy of the proposed frameworks is numerically
demonstrated on diffusion and heat conduction problems, in comparison with the
converged Monte Carlo-based finite element results.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fluid photonic crystal from colloidal quantum dots | We study optical forces acting upon semiconductor quantum dots and the force
driven motion of the dots in a colloid. In the spectral range of exciton
transitions in quantum dots, when the photon energy is close to the exciton
energy, the polarizability of the dots is drastically increased. This leads to a
resonant increase of both the gradient and the scattering contributions to the
optical force, which enables efficient manipulation of the dots. We
reveal that the optical grating of the colloid leads to the formation of a
fluid photonic crystal with spatially periodic circulating fluxes and density
of the dots. Pronounced resonant dielectric response of semiconductor quantum
dots enables a separation of the quantum dots with different exciton
frequencies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Subject Selection on a Riemannian Manifold for Unsupervised Cross-subject Seizure Detection | Inter-subject variability poses a challenge in
cross-subject brain signal analysis. A new algorithm for
subject-selection based on clustering covariance matrices on a Riemannian
manifold is proposed. After unsupervised selection of the subsets of relevant
subjects, data in a cluster is mapped to a tangent space at the mean point of
covariance matrices in that cluster and an SVM classifier on labeled data from
relevant subjects is trained. Experiments on an EEG seizure database show that
the proposed method increases the accuracy over state-of-the-art from 86.83% to
89.84% and specificity from 87.38% to 89.64% while reducing the false positive
rate/hour from 0.8/hour to 0.77/hour.
| 1 | 0 | 0 | 1 | 0 | 0 |
On a Surprising Oversight by John S. Bell in the Proof of his Famous Theorem | Bell inequalities are usually derived by assuming locality and realism, and
therefore experimental violations of Bell inequalities are usually taken to
imply violations of either locality or realism, or both. But, after reviewing
an oversight by Bell, here we derive the Bell-CHSH inequality by assuming only
that Bob can measure along the directions b and b' simultaneously while Alice
measures along either a or a', and likewise Alice can measure along the
directions a and a' simultaneously while Bob measures along either b or b',
without assuming locality. The observed violations of the Bell-CHSH inequality
therefore simply verify the manifest impossibility of measuring along the
directions b and b' (or along the directions a and a') simultaneously, in any
realizable EPR-Bohm type experiment.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the higher derivatives of the inverse tangent function | In this paper, we find explicit formulas for higher order derivatives of the
inverse tangent function. More precisely, we study polynomials which are
induced from the higher-order derivatives of arctan(x). Successively, we give
generating functions, recurrence relations and some particular properties for
these polynomials. Connections to Chebyshev, Fibonacci, Lucas and Matching
polynomials are established.
| 0 | 0 | 1 | 0 | 0 | 0 |
Science with e-ASTROGAM (A space mission for MeV-GeV gamma-ray astrophysics) | e-ASTROGAM (enhanced ASTROGAM) is a breakthrough Observatory space mission,
with a detector composed by a Silicon tracker, a calorimeter, and an
anticoincidence system, dedicated to the study of the non-thermal Universe in
the photon energy range from 0.3 MeV to 3 GeV - the lower energy limit can be
pushed to energies as low as 150 keV for the tracker, and to 30 keV for
calorimetric detection. The mission is based on an advanced space-proven
detector technology, with unprecedented sensitivity, angular and energy
resolution, combined with polarimetric capability. Thanks to its performance in
the MeV-GeV domain, substantially improving on its predecessors, e-ASTROGAM will
open a new window on the non-thermal Universe, making pioneering observations
of the most powerful Galactic and extragalactic sources, elucidating the nature
of their relativistic outflows and their effects on the surroundings. With a
line sensitivity in the MeV energy range one to two orders of magnitude better
than previous generation instruments, e-ASTROGAM will determine the origin of
key isotopes fundamental to the understanding of supernova explosions and the
chemical evolution of our Galaxy. The mission will provide unique data of
significant interest to a broad astronomical community, complementary to
powerful observatories such as LIGO-Virgo-GEO600-KAGRA, SKA, ALMA, E-ELT, TMT,
LSST, JWST, Athena, CTA, IceCube, KM3NeT, and LISA.
| 0 | 1 | 0 | 0 | 0 | 0 |
$J_1$-$J_2$ square lattice antiferromagnetism in the orbitally quenched insulator MoOPO$_4$ | We report magnetic and thermodynamic properties of a $4d^1$ (Mo$^{5+}$)
magnetic insulator MoOPO$_4$ single crystal, which realizes a $J_1$-$J_2$
Heisenberg spin-$1/2$ model on a stacked square lattice. The specific-heat
measurements show a magnetic transition at 16 K which is also confirmed by
magnetic susceptibility, ESR, and neutron diffraction measurements. Magnetic
entropy deduced from the specific heat corresponds to a two-level degree of
freedom per Mo$^{5+}$ ion, and the effective moment from the susceptibility
corresponds to the spin-only value. Using {\it ab initio} quantum chemistry
calculations we demonstrate that the Mo$^{5+}$ ion hosts a purely spin-$1/2$
magnetic moment, indicating negligible effects of spin-orbit interaction. The
quenched orbital moments originate from the large displacement of Mo ions
inside the MoO$_6$ octahedra along the apical direction. The ground state is
shown by neutron diffraction to support a collinear Néel-type magnetic order,
and a spin-flop transition is observed around an applied magnetic field of 3.5
T. The magnetic phase diagram is reproduced by a mean-field calculation
assuming a small easy-axis anisotropy in the exchange interactions. Our results
suggest $4d$ molybdates as an alternative playground to search for model
quantum magnets.
| 0 | 1 | 0 | 0 | 0 | 0 |
Eventness: Object Detection on Spectrograms for Temporal Localization of Audio Events | In this paper, we introduce the concept of Eventness for audio event
detection, which can, in part, be thought of as an analogue to Objectness from
computer vision. The key observation behind the eventness concept is that audio
events reveal themselves as 2-dimensional time-frequency patterns with specific
textures and geometric structures in spectrograms. These time-frequency
patterns can then be viewed analogously to objects occurring in natural images
(with the exception that scaling and rotation invariance properties do not
apply). With this key observation in mind, we pose the problem of detecting
monophonic or polyphonic audio events as an equivalent visual object(s)
detection problem under partial occlusion and clutter in spectrograms. We adapt
a state-of-the-art visual object detection model to evaluate the audio event
detection task on publicly available datasets. The proposed network has
comparable results with a state-of-the-art baseline and is more robust on
minority events. Provided large-scale datasets, we hope that our proposed
conceptual model of eventness will be beneficial to the audio signal processing
community towards improving performance of audio event detection.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bayesian Scale Estimation for Monocular SLAM Based on Generic Object Detection for Correcting Scale Drift | This work proposes a new, online algorithm for estimating the local scale
correction to apply to the output of a monocular SLAM system and obtain an as
faithful as possible metric reconstruction of the 3D map and of the camera
trajectory. Within a Bayesian framework, it integrates observations from a
deep-learning based generic object detector and a prior on the evolution of the
scale drift. For each observation class, a predefined prior on the heights of
the class objects is used. This allows us to define the observation likelihood.
Due to the scale drift inherent to monocular SLAM systems, we integrate a rough
model on the dynamics of scale drift. Quantitative evaluations of the system
are presented on the KITTI dataset, and compared with different approaches. The
results show a superior performance of our proposal in terms of relative
translational error when compared to other monocular systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Linear-time Algorithm for Orthogonal Watchman Route Problem with Minimum Bends | Given an orthogonal polygon $P$ with $n$ vertices, the goal of the
watchman route problem is to find a path $S$ of minimum length in $P$
such that every point of the polygon $P$ is visible from at least one
point of $S$. In other words, in the watchman route problem we must
compute a shortest watchman route inside a simple polygon with $n$ vertices
such that all points interior to the polygon and on its boundary are
visible from at least one point on the route. When both the route and the
polygon are orthogonal, the problem is called the orthogonal watchman route
problem. One variant of this problem seeks an orthogonal path with the
minimum number of bends. We present a linear-time algorithm for the orthogonal
watchman route problem in which the given polygon is monotone. Our algorithm
can also be used for simple orthogonal polygons $P$ for which the dual graph
induced by the vertical decomposition of $P$ is a path; such polygons are
called path polygons.
| 1 | 0 | 0 | 0 | 0 | 0 |
A note on surjectivity of piecewise affine mappings | A standard theorem in nonsmooth analysis states that a piecewise affine
function $F:\mathbb R^n\rightarrow\mathbb R^n$ is surjective if it is
coherently oriented in that the linear parts of its selection functions all
have the same nonzero determinant sign. In this note we prove that surjectivity
already follows from coherent orientation of the selection functions which are
active on the unbounded sets of a polyhedral subdivision of the domain
corresponding to $F$. A side bonus of the argumentation is a short proof of the
classical statement that an injective piecewise affine function is coherently
oriented.
| 0 | 0 | 1 | 0 | 0 | 0 |
Intrinsic Gaussian processes on complex constrained domains | We propose a class of intrinsic Gaussian processes (in-GPs) for
interpolation, regression and classification on manifolds with a primary focus
on complex constrained domains or irregular shaped spaces arising as subsets or
submanifolds of $\mathbb{R}$, $\mathbb{R}^2$, $\mathbb{R}^3$ and beyond. For example, in-GPs can accommodate
spatial domains arising as complex subsets of Euclidean space. in-GPs respect
the potentially complex boundary or interior conditions as well as the
intrinsic geometry of the spaces. The key novelty of the proposed approach is
to utilise the relationship between heat kernels and the transition density of
Brownian motion on manifolds for constructing and approximating valid and
computationally feasible covariance kernels. This enables in-GPs to be
practically applied in great generality, while existing approaches for
smoothing on constrained domains are limited to simple special cases. The broad
utility of the in-GP approach is illustrated through simulation studies and
data examples.
| 0 | 0 | 0 | 1 | 0 | 0 |
Adaptive Interference Removal for Un-coordinated Radar/Communication Co-existence | Most existing approaches to co-existing communication/radar systems assume
that the radar and communication systems are coordinated, i.e., they share
information, such as relative position, transmitted waveforms and channel
state. In this paper, we consider an un-coordinated scenario where a
communication receiver is to operate in the presence of a number of radars, of
which only a sub-set may be active, which poses the problem of estimating the
active waveforms and the relevant parameters thereof, so as to cancel them
prior to demodulation. Two algorithms are proposed for such a joint waveform
estimation/data demodulation problem, both exploiting sparsity of a proper
representation of the interference and of the vector containing the errors of
the data block, so as to implement an iterative joint interference removal/data
demodulation process. The former algorithm is based on classical on-grid
compressed sensing (CS), while the latter forces an atomic norm (AN)
constraint: in both cases the radar parameters and the communication
demodulation errors can be estimated by solving a convex problem. We also
propose a way to improve the efficiency of the AN-based algorithm. The
performance of these algorithms is demonstrated through extensive simulations,
taking into account a variety of conditions concerning both the interferers and
the respective channel states.
| 1 | 0 | 0 | 1 | 0 | 0 |
Learning Credible Models | In many settings, it is important that a model be capable of providing
reasons for its predictions (i.e., the model must be interpretable). However,
the model's reasoning may not conform with well-established knowledge. In such
cases, while interpretable, the model lacks \textit{credibility}. In this work,
we formally define credibility in the linear setting and focus on techniques
for learning models that are both accurate and credible. In particular, we
propose a regularization penalty, expert yielded estimates (EYE), that
incorporates expert knowledge about well-known relationships among covariates
and the outcome of interest. We give both theoretical and empirical results
comparing our proposed method to several other regularization techniques.
Across a range of settings, experiments on both synthetic and real data show
that models learned using the EYE penalty are significantly more credible than
those learned using other penalties. Applied to a large-scale patient risk
stratification task, our proposed technique results in a model whose top
features overlap significantly with known clinical risk factors, while still
achieving good predictive performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
The common patterns of abundance: the log series and Zipf's law | In a language corpus, the probability that a word occurs $n$ times is often
proportional to $1/n^2$. Assigning rank, $s$, to words according to their
abundance, $\log s$ vs $\log n$ typically has a slope of minus one. That simple
Zipf's law pattern also arises in the population sizes of cities, the sizes of
corporations, and other patterns of abundance. By contrast, for the abundances
of different biological species, the probability of a population of size $n$ is
typically proportional to $1/n$, declining exponentially for larger $n$, the
log series pattern. This article shows that the differing patterns of Zipf's
law and the log series arise as the opposing endpoints of a more general
theory. The general theory follows from the generic form of all probability
patterns as a consequence of conserved average values and the associated
invariances of scale. To understand the common patterns of abundance, the
generic form of probability distributions plus the conserved average abundance
is sufficient. The general theory includes cases that are between the Zipf and
log series endpoints, providing a broad framework for analyzing widely observed
abundance patterns.
| 0 | 0 | 0 | 0 | 1 | 0 |
Modular meta-learning | Many prediction problems, such as those that arise in the context of
robotics, have a simplifying underlying structure that could accelerate
learning. In this paper, we present a strategy for learning a set of neural
network modules that can be combined in different ways. We train different
modular structures on a set of related tasks and generalize to new tasks by
composing the learned modules in new ways. We show this improves performance in
two robotics-related problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Universal experimental test for the role of free charge carriers in thermal Casimir effect within a micrometer separation range | We propose a universal experiment to measure the differential Casimir force
between a Au-coated sphere and two halves of a structured plate covered with a
P-doped Si overlayer. The concentration of free charge carriers in the
overlayer is chosen slightly below the critical one, for which the phase
transition from dielectric to metal occurs. One half of the structured plate is
insulating, while its second half is made of gold. For the former we consider
two different structures, one consisting of bulk high-resistivity Si and the
other of a layer of silica followed by bulk high-resistivity Si. The
differential Casimir force is computed within the Lifshitz theory using four
approaches that have been proposed in the literature to account for the role of
free charge carriers in metallic and dielectric materials interacting with
quantum fluctuations. According to these approaches, Au at low frequencies is
described by either the Drude or the plasma model, whereas the free charge
carriers in dielectric materials at room temperature are either taken into
account or disregarded. It is shown that the values of differential Casimir
forces, computed in the micrometer separation range using these four
approaches, are widely distinct from each other and can be easily discriminated
experimentally. It is shown that for all approaches the thermal component of
the differential Casimir force is sufficiently large for direct observation.
The possible errors and uncertainties in the proposed experiment are estimated
and its importance for the theory of quantum fluctuations is discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gauss-Bonnet for matrix conformally rescaled Dirac | We derive an explicit formula for the scalar curvature over a two-torus with
a Dirac operator conformally rescaled by a globally diagonalizable matrix. We
show that the Gauss-Bonnet theorem holds and extend the result to all Riemann
surfaces with Dirac operators modified in the same way.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multi-Pose Face Recognition Using Hybrid Face Features Descriptor | This paper presents a multi-pose face recognition approach using a hybrid face
features descriptor (HFFD). The HFFD is a face descriptor containing rich
discriminant information, created by fusing frequency-based features
extracted through both wavelet and DCT analysis of several different poses of
2D face images. The main aim of this method is to represent multi-pose face
images using dominant frequency components while still achieving reasonable
performance compared to recent multi-pose face recognition methods.
HFFD-based face recognition tends to achieve better performance than recent
2D-based face recognition methods. In addition, it is also sufficient to
handle the large face variability due to pose variations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Freed-Moore K-theory | The twisted equivariant K-theory given by Freed and Moore is a K-theory which
unifies twisted equivariant complex K-theory, Atiyah's `Real' K-theory, and
their variants. In a general setting, we formulate this K-theory by using
Fredholm operators, and establish basic properties such as the Bott periodicity
and the Thom isomorphism. We also provide formulations of the K-theory based on
Karoubi's gradations in both infinite and finite dimensions, clarifying their
relationship with the Fredholm formulation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dirac dispersion and non-trivial Berry's phase in three-dimensional semimetal RhSb3 | We report observations of magnetoresistance, quantum oscillations and
angle-resolved photoemission in RhSb$_3$, an unfilled skutterudite semimetal
with low carrier density. The calculated electronic band structure of RhSb$_3$
entails a $Z_2$ quantum number $\nu_0=0,\nu_1=\nu_2=\nu_3=1$ in analogy to
strong topological insulators, and inverted linear valence/conduction bands
that touch at discrete points close to the Fermi level, in agreement with
angle-resolved photoemission results. Transport experiments reveal an
unsaturated linear magnetoresistance that approaches a factor of 200 at 60 T
magnetic fields, and quantum oscillations observable up to 150~K that are
consistent with a large Fermi velocity ($\sim 1.3\times 10^6$ ms$^{-1}$), high
carrier mobility ($\sim 14$ $m^2$/Vs), and small three dimensional hole pockets
with nontrivial Berry phase. A very small, sample-dependent effective mass that
falls as low as $0.015(7)$ bare masses scales with Fermi velocity, suggesting
RhSb$_3$ is a new class of zero-gap three-dimensional Dirac semimetal.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fréchet Analysis Of Variance For Random Objects | Fréchet mean and variance provide a way of obtaining mean and variance for
general metric space valued random variables and can be used for statistical
analysis of data objects that lie in abstract spaces devoid of algebraic
structure and operations. Examples of such spaces include covariance matrices,
graph Laplacians of networks and univariate probability distribution functions.
We derive a central limit theorem for Fréchet variance under mild regularity
conditions, utilizing empirical process theory, and also provide a consistent
estimator of the asymptotic variance. These results lead to a test to compare k
populations based on Fréchet variance for general metric space valued data
objects, with emphasis on comparing means and variances. We examine the finite
sample performance of this inference procedure through simulation studies for
several special cases that include probability distributions and graph
Laplacians, which leads to tests to compare populations of networks. The
proposed methodology has good finite sample performance in simulations for
different kinds of random objects. We illustrate the proposed methods with data
on mortality profiles of various countries and resting state Functional
Magnetic Resonance Imaging data.
| 0 | 0 | 1 | 1 | 0 | 0 |
Backward Simulation of Stochastic Process using a Time Reverse Monte Carlo method | The "backward simulation" of a stochastic process is defined as the
stochastic dynamics that trace a time-reversed path from the target region to
the initial configuration. If the probabilities calculated by the original
simulation are easily restored from those obtained by backward dynamics, we can
use it as a computational tool. It is shown that the naive approach to backward
simulation does not work as expected. As a remedy, the Time Reverse Monte Carlo
method (TRMC) based on the ideas of Sequential Importance Sampling (SIS) and
Sequential Monte Carlo (SMC) is proposed and successfully tested with a
stochastic typhoon model and the Lorenz 96 model. TRMC with SMC, which contains
resampling steps, is shown to be more efficient for simulations with a larger
number of time steps. A limitation of TRMC and its relation to the Bayes
formula are also discussed.
| 0 | 1 | 0 | 1 | 0 | 0 |
Spectral Inference Networks: Unifying Spectral Methods With Deep Learning | We present Spectral Inference Networks, a framework for learning
eigenfunctions of linear operators by stochastic optimization. Spectral
Inference Networks generalize Slow Feature Analysis to generic symmetric
operators, and are closely related to Variational Monte Carlo methods from
computational physics. As such, they can be a powerful tool for unsupervised
representation learning from video or pairs of data. We derive a training
algorithm for Spectral Inference Networks that addresses the bias in the
gradients due to finite batch size and allows for online learning of multiple
eigenfunctions. We show results of training Spectral Inference Networks on
problems in quantum mechanics and feature learning for videos on synthetic
datasets as well as the Arcade Learning Environment. Our results demonstrate
that Spectral Inference Networks accurately recover eigenfunctions of linear
operators, can discover interpretable representations from video and find
meaningful subgoals in reinforcement learning environments.
| 0 | 0 | 0 | 1 | 0 | 0 |
Heavy tailed spatial autocorrelation models | Appropriate models for spatially autocorrelated data account for the fact
that observations are not independent. A popular model in this context is the
simultaneous autoregressive (SAR) model, which allows one to model the spatial
dependency structure of a response variable and the influence of covariates on
this variable. This spatial regression model assumes that the error follows a
normal distribution. Since this assumption cannot always be met, it is
necessary to extend this model to other error distributions. We propose the
extension to the $t$-distribution, the tSAR model, which can be used if we
observe heavy tails in the fitted residuals of the SAR model. In addition, we
provide a variance estimate that considers the spatial structure of a variable
which helps us to specify inputs for our models. An extended simulation study
shows that the proposed estimators of the tSAR model are performing well and in
an application to fire danger we see that the tSAR model is a notable
improvement compared to the SAR model.
| 0 | 0 | 0 | 1 | 0 | 0 |
Only Bayes should learn a manifold (on the estimation of differential geometric structure from data) | We investigate learning of the differential geometric structure of a data
manifold embedded in a high-dimensional Euclidean space. We first analyze
kernel-based algorithms and show that under the usual regularizations,
non-probabilistic methods cannot recover the differential geometric structure,
but instead find mostly linear manifolds or spaces equipped with teleports. To
properly learn the differential geometric structure, non-probabilistic methods
must apply regularizations that enforce large gradients, which go against
common wisdom. We repeat the analysis for probabilistic methods and find that
under reasonable priors, the geometric structure can be recovered. Fully
exploiting the recovered structure, however, requires the development of
stochastic extensions to classic Riemannian geometry. We take early steps in
that regard. Finally, we partly extend the analysis to modern models based on
neural networks, thereby highlighting geometric and probabilistic shortcomings
of current deep generative models.
| 0 | 0 | 0 | 1 | 0 | 0 |
Predictive modelling of training loads and injury in Australian football | To investigate whether training load monitoring data could be used to predict
injuries in elite Australian football players, data were collected from elite
athletes over 3 seasons at an Australian football club. Loads were quantified
using GPS devices, accelerometers and player perceived exertion ratings.
Absolute and relative training load metrics were calculated for each player
each day (rolling average, exponentially weighted moving average, acute:chronic
workload ratio, monotony and strain). Injury prediction models (regularised
logistic regression, generalised estimating equations, random forests and
support vector machines) were built for non-contact, non-contact time-loss and
hamstring specific injuries using the first two seasons of data. Injury
predictions were generated for the third season and evaluated using the area
under the receiver operating characteristic curve (AUC). Predictive performance was
only marginally better than chance for models of non-contact and non-contact
time-loss injuries (AUC$<$0.65). The best performing model was a multivariate
logistic regression for hamstring injuries (best AUC=0.76). Learning curves
suggested logistic regression was underfitting the load-injury relationship and
that using a more complex model or increasing the amount of model building data
may lead to future improvements. Injury prediction models built using training
load data from a single club showed poor ability to predict injuries when
tested on previously unseen data, suggesting they are limited as a daily
decision tool for practitioners. Focusing the modelling approach on specific
injury types and increasing the amount of training data may lead to the
development of improved predictive models for injury prevention.
| 0 | 0 | 0 | 1 | 0 | 0 |
Level structure of deeply bound levels of the $c^3Σ_g^+$ state of $^{87}\text{Rb}_2$ | We spectroscopically investigate the hyperfine, rotational and Zeeman
structure of the vibrational levels $\text{v}'=0$, $7$, $13$ within the
electronically excited $c^3\Sigma_g^+$ state of $^{87}\text{Rb}_2$ for magnetic
fields of up to $1000\,\text{G}$. As spectroscopic methods we use short-range
photoassociation of ultracold Rb atoms as well as photoexcitation of ultracold
molecules which have been previously prepared in several well-defined quantum
states of the $a^3\Sigma_u^+$ potential. As a byproduct, we present optical
two-photon transfer of weakly bound Feshbach molecules into $a^3\Sigma_u^+$,
$\text{v}=0$ levels featuring different nuclear spin quantum numbers. A simple
model reproduces well the molecular level structures of the $c^3\Sigma_g^+$
vibrational states and provides a consistent assignment of the measured
resonance lines. Furthermore, the model can be used to predict the relative
transition strengths of the lines. From fits to the data we extract for each
vibrational level the rotational constant, the effective spin-spin interaction
constant, as well as the Fermi contact parameter and (for the first time) the
anisotropic hyperfine constant. In an alternative approach, we perform
coupled-channel calculations where we fit the relevant potential energy curves,
spin-orbit interactions and hyperfine functions. The calculations reproduce the
measured hyperfine level term frequencies with an average uncertainty of
$\pm9\:$MHz, similar to that of the simple model. From these fits we obtain a
section of the potential energy curve for the $c^3\Sigma_g^+$ state which can
be used for predicting the level structure for the vibrational manifold
$\text{v}'=0$ to $13$ of this electronic state.
| 0 | 1 | 0 | 0 | 0 | 0 |
The mod 2 cohomology of the infinite families of Coxeter groups of type B and D as almost Hopf rings | We describe a Hopf ring structure on the direct sum of the cohomology groups
$\bigoplus_{n \geq 0} H^* \left( B_n; \mathbb{Z}_2 \right)$ of the Coxeter
groups of type $B_n$, and an almost-Hopf ring structure on the direct sum of
the cohomology groups $\bigoplus_{n \geq 0} H^* \left( D_n; \mathbb{Z}_2
\right)$ of the Coxeter groups of type $D_n$, with coefficients in the field
with two elements $\mathbb{Z}_2$. We give presentations with generators and
relations, determine additive bases and compute the Steenrod algebra action.
The generators are described both in terms of a geometric construction by De
Concini and Salvetti and in terms of their restriction to elementary abelian
2-subgroups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Geometry-Based Optimization of One-Way Quantum Computation Measurement Patterns | In one-way quantum computation (1WQC) model, an initial highly entangled
state called a graph state is used to perform universal quantum computations by
a sequence of adaptive single-qubit measurements and post-measurement Pauli-X
and Pauli-Z corrections. The needed computations are organized as measurement
patterns, or simply patterns, in the 1WQC model. The entanglement operations in
a pattern can be shown by a graph which together with the set of its input and
output qubits is called the geometry of the pattern. Since a one-way quantum
computation pattern is based on quantum measurements, which are fundamentally
nondeterministic evolutions, there must be conditions over geometries to
guarantee determinism. Causal flow is a sufficient and generalized flow (gflow)
is a necessary and sufficient condition over geometries to identify a
dependency structure for the measurement sequences in order to achieve
determinism. Previously, three optimization methods have been proposed to
simplify 1WQC patterns which are called standardization, signal shifting and
Pauli simplification. These optimizations can be performed using measurement
calculus formalism by rewriting rules. However, maintaining and searching these
rules in the library can be complicated with respect to implementation.
Moreover, serial execution of these rules is time-consuming due to executing
many ineffective commutation rules. To overcome this problem, in this paper, a
new scheme is proposed to perform optimization techniques on patterns with flow
or gflow only based on their geometries instead of using rewriting rules.
Furthermore, the proposed scheme obtains the maximally delayed gflow order for
geometries with flow. It is shown that the time complexity of the proposed
approach is improved over the previous ones.
| 1 | 0 | 0 | 0 | 0 | 0 |
Categorical entropy for Fourier-Mukai transforms on generic abelian surfaces | In this note, we shall compute the categorical entropy of an autoequivalence
on a generic abelian surface.
| 0 | 0 | 1 | 0 | 0 | 0 |
Machine Learning Approach to RF Transmitter Identification | With the development and widespread use of wireless devices in recent years
(mobile phones, Internet of Things, Wi-Fi), the electromagnetic spectrum has
become extremely crowded. In order to counter security threats posed by rogue
or unknown transmitters, it is important to identify RF transmitters not by the
data content of the transmissions but based on the intrinsic physical
characteristics of the transmitters. RF waveforms represent a particular
challenge because of the extremely high data rates involved and the potentially
large number of transmitters present in a given location. These factors outline
the need for rapid fingerprinting and identification methods that go beyond the
traditional hand-engineered approaches. In this study, we investigate the
application of machine learning (ML) strategies to the classification and
identification problems, and the use of wavelets to reduce the amount of data required. Four
different ML strategies are evaluated: deep neural nets (DNN), convolutional
neural nets (CNN), support vector machines (SVM), and multi-stage training
(MST) using accelerated Levenberg-Marquardt (A-LM) updates. The A-LM MST method
preconditioned by wavelets was by far the most accurate, achieving 100%
classification accuracy of transmitters, as tested using data originating from
12 different transmitters. We discuss strategies for extension of MST to a much
larger number of transmitters.
| 1 | 0 | 0 | 1 | 0 | 0 |
Analyzing Chaos in Higher Order Disordered Quartic-Sextic Klein-Gordon Lattices Using $q$-Statistics | In the study of subdiffusive wave-packet spreading in disordered Klein-Gordon
(KG) nonlinear lattices, a central open question is whether the motion
continues to be chaotic despite decreasing densities, or tends to become
quasi-periodic as nonlinear terms become negligible. In a recent study of such
KG particle chains with quartic (4th order) anharmonicity in the on-site
potential, it was shown that $q$-Gaussian probability distribution functions of
sums of position observables with $q > 1$ always approach pure Gaussians
($q=1$) in the long time limit and hence the motion of the full system is
ultimately "strongly chaotic". In the present paper, we show that these results
continue to hold even when a sextic (6th order) term is gradually added to the
potential and ultimately prevails over the 4th order anharmonicity, despite
expectations that the dynamics is more "regular", at least in the regime of
small oscillations. Analyzing this system in the subdiffusive energy domain
using $q$-statistics, we demonstrate that groups of oscillators centered around
the initially excited one (as well as the full chain) possess strongly chaotic
dynamics and are thus far from any quasi-periodic torus, for times as long as
$t=10^9$.
| 0 | 1 | 0 | 0 | 0 | 0 |
UV/EUV High-Throughput Spectroscopic Telescope: A Next Generation Solar Physics Mission white paper | The origin of the activity in the solar corona is a long-standing problem in
solar physics. Recent satellite observations, such as Hinode, Solar Dynamics
Observatory (SDO), and the Interface Region Imaging Spectrograph (IRIS), show the
detailed characteristics of the solar atmosphere and try to reveal the energy
transfer from the photosphere to the corona through the magnetic fields and its
energy conversion by various processes. However, quantitative estimation of
energy transfer along the magnetic field is not enough. There are mainly two
reasons why it is difficult to observe the energy transfer from photosphere to
corona: 1) the spatial resolution gap between the photosphere (a few 0.1 arcsec) and
the corona (a few arcsec), and 2) a lack of temperature coverage. Furthermore, there is
not enough observational knowledge of the physical parameters in the energy
dissipation region. There are mainly three reasons why it is difficult to
observe in the vicinity of the energy dissipation region: 1) the small spatial
scale, 2) the short time scale, and 3) the low emission. It is generally believed that
energy dissipation occurs on very small scales and its duration is very
short (about 10 seconds). Further, the density in the dissipation region might be very
low. Therefore, the high spatial and temporal resolution UV/EUV spectroscopic
observation with wide temperature coverage is crucial to estimate the energy
transport from photosphere to corona quantitatively and diagnose the plasma
dynamics in the vicinity of the energy dissipation region. The main science target
for the telescope is the quantitative estimation of the energy transfer from the
photosphere to the corona, and clarification of the plasma dynamics in the
vicinity of the energy dissipation region, which is the key region for coronal
heating, solar wind acceleration, and/or solar flare, by the high spatial and
temporal resolution UV/EUV spectroscopy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Investigating Collaboration Within Online Communities: Software Development Vs. Artistic Creation | Online creative communities have been able to develop large, open source
software (OSS) projects like Linux and Firefox through the successful
collaborations carried out over the Internet. These communities have also
expanded to creative arts domains such as animation, video games, and music.
Despite their growing popularity, the factors that lead to successful
collaborations in these communities are not entirely understood. In the
following, I describe my PhD research project aimed at improving communication,
collaboration, and retention in creative arts communities, starting from the
experience gained from the literature about OSS communities.
| 1 | 0 | 0 | 0 | 0 | 0 |
Safe Non-blocking Synchronization in Ada 202x | The mutual-exclusion property of locks stands in the way to scalability of
parallel programs on many-core architectures. Locks do not allow progress
guarantees, because a task may fail inside a critical section and keep holding
a lock that blocks other tasks from accessing shared data. With non-blocking
synchronization, the drawbacks of locks are avoided by synchronizing access to
shared data by atomic read-modify-write operations. To incorporate non-blocking
synchronization in Ada 202x, programmers must be able to reason about the
behavior and performance of tasks in the absence of protected objects and
rendezvous. We therefore extend Ada's memory model by synchronized types, which
support the expression of memory ordering operations at a sufficient level of
detail. To mitigate the complexity associated with non-blocking
synchronization, we propose concurrent objects as a novel high-level language
construct. Entities of a concurrent object execute in parallel, due to a
fine-grained, optimistic synchronization mechanism. Synchronization is framed
by the semantics of concurrent entry execution. The programmer is only required
to label shared data accesses in the code of concurrent entries. Labels
constitute memory-ordering operations expressed through attributes. To the best
of our knowledge, this is the first approach to provide a non-blocking
synchronization construct as a first-class citizen of a high-level programming
language. We illustrate the use of concurrent objects by several examples.
| 1 | 0 | 0 | 0 | 0 | 0 |
Rotational Unit of Memory | The concepts of unitary evolution matrices and associative memory have
boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art
performance in a variety of sequential tasks. However, RNNs still have a limited
capacity to manipulate long-term memory. To bypass this weakness, the most
successful applications of RNNs use external techniques such as attention
mechanisms. In this paper we propose a novel RNN model that unifies the
state-of-the-art approaches: Rotational Unit of Memory (RUM). The core of RUM
is its rotational operation, which is, naturally, a unitary matrix, providing
architectures with the power to learn long-term dependencies by overcoming the
vanishing and exploding gradients problem. Moreover, the rotational unit also
serves as associative memory. We evaluate our model on synthetic memorization,
question answering and language modeling tasks. RUM learns the Copying Memory
task completely and improves the state-of-the-art result in the Recall task.
RUM's performance in the bAbI Question Answering task is comparable to that of
models with attention mechanism. We also improve the state-of-the-art result to
1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB)
task, which signifies the applicability of RUM to real-world sequential
data. The universality of our construction, at the core of RNN, establishes RUM
as a promising approach to language modeling, speech recognition and machine
translation.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Bayesian Approach for Inferring Local Causal Structure in Gene Regulatory Networks | Gene regulatory networks play a crucial role in controlling an organism's
biological processes, which is why there is significant interest in developing
computational methods that are able to extract their structure from
high-throughput genetic data. A typical approach consists of a series of
conditional independence tests on the covariance structure meant to
progressively reduce the space of possible causal models. We propose a novel
efficient Bayesian method for discovering the local causal relationships among
triplets of (normally distributed) variables. In our approach, we score the
patterns in the covariance matrix in one go and we incorporate the available
background knowledge in the form of priors over causal structures. Our method
is flexible in the sense that it allows for different types of causal
structures and assumptions. We apply the approach to the task of inferring gene
regulatory networks by learning regulatory relationships between gene
expression levels. We show that our algorithm produces stable and conservative
posterior probability estimates over local causal structures that can be used
to derive an honest ranking of the most meaningful regulatory relationships. We
demonstrate the stability and efficacy of our method both on simulated data and
on real-world data from an experiment on yeast.
| 0 | 0 | 0 | 1 | 1 | 0 |
$η$-metric structures | In this paper, we discuss recent results about generalized metric spaces and
fixed point theory. We introduce the notion of $\eta$-cone metric spaces, give
some topological properties and prove some fixed point theorems for contractive
type maps on these spaces. In particular, we show that these $\eta$-cone metric
spaces are natural generalizations of both cone metric spaces and metric type
spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Statistical Analysis of Precipitation Events | In the present paper we demonstrate the results of a statistical analysis of
some characteristics of precipitation events and propose a theoretical
explanation of the models in terms of mixed Poisson and mixed exponential
distributions, based on information-theoretical entropy reasoning. The
proposed models can also be treated as the result of following
the popular Bayesian approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation | Vision-language navigation (VLN) is the task of navigating an embodied agent
to carry out natural language instructions inside real 3D environments. In this
paper, we study how to address three critical challenges for this task: the
cross-modal grounding, the ill-posed feedback, and the generalization problems.
First, we propose a novel Reinforced Cross-Modal Matching (RCM) approach that
enforces cross-modal grounding both locally and globally via reinforcement
learning (RL). Particularly, a matching critic is used to provide an intrinsic
reward to encourage global matching between instructions and trajectories, and
a reasoning navigator is employed to perform cross-modal grounding in the local
visual scene. Evaluation on a VLN benchmark dataset shows that our RCM model
significantly outperforms existing methods by 10% on SPL and achieves the new
state-of-the-art performance. To improve the generalizability of the learned
policy, we further introduce a Self-Supervised Imitation Learning (SIL) method
to explore unseen environments by imitating its own past good decisions. We
demonstrate that SIL can approximate a better and more efficient policy, which
tremendously minimizes the success rate performance gap between seen and unseen
environments (from 30.7% to 11.7%).
| 1 | 0 | 0 | 0 | 0 | 0 |
Proton Beam Intensity Upgrades for the Neutrino Program at Fermilab | Fermilab is committed to upgrading its accelerator complex towards the
intensity frontier to pursue HEP research in the neutrino sector and beyond.
The upgrade has two steps: 1) the Proton Improvement Plan (PIP), which is
underway, has its primary goal to start providing 700 kW beam power on NOvA
target by the end of 2017 and 2) the foreseen PIP-II will replace the existing
LINAC, a 400 MeV injector to the Booster, by an 800 MeV superconducting LINAC
by the middle of next decade, with output beam intensity from the Booster
increased significantly and the beam power on the NOvA target increased to 1.2
MW. In any case, the Fermilab Booster is going to play a very significant role
for the next two decades. In this context, we have recently developed and
commissioned an innovative beam injection scheme for the Booster called "early
injection scheme." This scheme is already in operation and has the potential to
increase the Booster beam intensity from the PIP design goal by a considerable
amount with a reduced beam emittance and beam loss. In this paper, we will
present results from our experience with the new scheme in operation, its
current status, and future plans.
| 0 | 1 | 0 | 0 | 0 | 0 |
On indecomposable $τ$-rigid modules over cluster-tilted algebras of tame type | For a given cluster-tilted algebra $A$ of tame type, it is proved that
different indecomposable $\tau$-rigid $A$-modules have different dimension
vectors. This is motivated by Fomin-Zelevinsky's denominator conjecture for
cluster algebras. As an application, we establish a weak version of the
denominator conjecture for cluster algebras of tame type. Namely, we show that
different cluster variables have different denominators with respect to a given
cluster for a cluster algebra of tame type. Our approach involves
Iyama-Yoshino's construction of subfactors of triangulated categories. In
particular, we obtain a description of the subfactors of cluster categories of
tame type with respect to an indecomposable rigid object, which is of
independent interest.
| 0 | 0 | 1 | 0 | 0 | 0 |
Tensors Come of Age: Why the AI Revolution will help HPC | This article discusses how the automation of tensor algorithms, based on A
Mathematics of Arrays and Psi Calculus, and a new way to represent numbers,
Unum Arithmetic, enables mechanically provable, scalable, portable, and more
numerically accurate software.
| 1 | 0 | 0 | 0 | 0 | 0 |
Discriminative conditional restricted Boltzmann machine for discrete choice and latent variable modelling | Conventional methods of estimating latent behaviour generally use attitudinal
questions, which are subjective, and these survey questions may not always be
available. We hypothesize that an alternative approach can be used for latent
variable estimation through undirected graphical models, for instance
non-parametric artificial neural networks. In this study, we explore the use of
generative non-parametric modelling methods to estimate latent variables from
prior choice distribution without the conventional use of measurement
indicators. A restricted Boltzmann machine is used to represent latent
behaviour factors by analyzing the relationship information between the
observed choices and explanatory variables. The algorithm is adapted for latent
behaviour analysis in a discrete choice scenario, and we use a graphical approach
to evaluate and understand the semantic meaning from estimated parameter vector
values. We illustrate our methodology on a financial instrument choice dataset
and perform statistical analysis on parameter sensitivity and stability. Our
findings show that, through non-parametric statistical tests, we can extract
useful latent information on the behaviour of latent constructs with machine
learning methods, and that these constructs have a strong and significant
influence on the choice process. Furthermore, our modelling framework shows robustness in input
variability through sampling and validation.
| 1 | 0 | 0 | 0 | 0 | 0 |
LLASSO: A linear unified LASSO for multicollinear situations | We propose a rescaled LASSO, by premultipying the LASSO with a matrix term,
namely linear unified LASSO (LLASSO) for multicollinear situations. Our
numerical study has shown that the LLASSO is comparable with other sparse
modeling techniques and often outperforms the LASSO and elastic net. Our
findings open new perspectives on the continued use of the LASSO for sparse modeling and
variable selection. We conclude our study by pointing out that the LLASSO can be
solved by the same efficient algorithm used for the LASSO, and we suggest
following the same construction technique for other penalized estimators.
| 0 | 0 | 0 | 1 | 0 | 0 |
Filamentary fragmentation in a turbulent medium | We present the results of smoothed particle hydrodynamic simulations
investigating the evolution and fragmentation of filaments that are accreting
from a turbulent medium. We show that the presence of turbulence, and the
resulting inhomogeneities in the accretion flow, play a significant role in the
fragmentation process. Filaments which experience a weakly turbulent accretion
flow fragment in a two-tier hierarchical fashion, similar to the fragmentation
pattern seen in the Orion Integral Shaped Filament. Increasing the energy in
the turbulent velocity field results in more sub-structure within the
filaments, and one sees a shift from gravity-dominated fragmentation to
turbulence-dominated fragmentation. The sub-structure formed in the filaments
is elongated and roughly parallel to the longitudinal axis of the filament,
similar to the fibres seen in observations of Taurus, and suggests that the
fray and fragment scenario is a possible mechanism for the production of
fibres. We show that the formation of these fibre-like structures is linked to
the vorticity of the velocity field inside the filament and the filament's
accretion from an inhomogeneous medium. Moreover, we find that accretion is
able to drive and sustain roughly sonic levels of turbulence inside the
filaments, but is not able to prevent radial collapse once the filaments become
supercritical. However, the supercritical filaments which contain fibre-like
structures do not collapse radially, suggesting that fibrous filaments may not
necessarily become radially unstable once they reach the critical line-density.
| 0 | 1 | 0 | 0 | 0 | 0 |
Uncovering the role of flow rate in redox-active polymer flow batteries: simulation of reaction distributions with simultaneous mixing in tanks | Redox flow batteries (RFBs) are potential solutions for grid-scale energy
storage, and deeper understanding of the effect of flow rate on RFB performance
is needed to develop efficient, low-cost designs. In this study we highlight
the importance of modeling tanks, which can limit the charge/discharge capacity
of redox-active polymer (RAP) based RFBs. The losses due to tank mixing
dominate over the polarization-induced capacity losses that arise due to
resistive processes in the reactor. A porous electrode model is used to
separate these effects by predicting the time variation of active species
concentration in electrodes and tanks. A simple transient model based on
species conservation laws developed in this study reveals that charge
utilization and polarization are affected by two dimensionless numbers
quantifying (1) flow rate relative to stoichiometric flow and (2) size of flow
battery tanks relative to the reactor. The RFB's utilization is shown to
increase monotonically with flow rate, reaching 90% of the theoretical value
only when the flow rate exceeds twenty times the stoichiometric value. We also
identify polarization due to irreversibilities inherent to RFB architecture as
a result of tank mixing and current distribution internal to the reactor, and
this polarization dominates over that resulting from ohmic resistances
particularly when cycling RFBs at low flow rates and currents. These findings
are summarized in a map of utilization and polarization that can be used to
select adequate flow rate for a given tank size.
| 0 | 1 | 0 | 0 | 0 | 0 |
The magnetocaloric effect from the point of view of Tsallis non-extensive thermostatistics | In this work we have analyzed the magnetocaloric effect (MCE) from the
Tsallis thermostatistics formalism (TTF) point of view. The problem discussed
here is a two level system MCE. We have calculated, both analytically and
numerically, the entropy of this system as a function of the Tsallis parameter
(the well-known q-parameter), whose value depends on the extensivity (q<1) or
non-extensivity (q>1) of the system. Since we consider this MCE not to depend
on the initial conditions, which classifies our system as non-extensive, we
used several q-parameters greater than one to understand the effect of the
nonextensive formalism on the entropy as well as on the magnetocaloric potential,
$\Delta S$. We have plotted several curves that show precisely the behavior of
this effect when treated with non-extensive statistics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Reheating, thermalization and non-thermal gravitino production in MSSM inflation | In the framework of MSSM inflation, matter and gravitino production are here
investigated through the decay of the fields which are coupled to the udd
inflaton, a gauge invariant combination of squarks. After the end of inflation,
the flat direction oscillates about the minimum of its potential, losing at
each oscillation about 56% of its energy into bursts of gauge/gaugino and
scalar quanta when crossing the origin. These particles then acquire a large
inflaton VEV-induced mass and decay perturbatively into the MSSM quanta and
gravitinos, transferring the inflaton energy very efficiently via instant
preheating. Regarding thermalization, we show that the MSSM degrees of freedom
thermalize very quickly, yet not immediately by virtue of the large vacuum
expectation value of the inflaton, which breaks the $SU(3)_C\times U(1)_Y$
symmetry into a residual $U(1)$. The energy transfer to the MSSM quanta is very
efficient, since full thermalization is achieved after only $\mathcal{O}(40)$
complete oscillations. The udd inflaton thus provides an extremely efficient
reheating of the Universe, with a temperature
$T_{reh}=\mathcal{O}(10^8\mathrm{GeV})$ that allows for instance several
mechanisms of baryogenesis. We also compute the gravitino number density from
the perturbative decay of the flat direction and of the SUSY multiplet. We find
that the gravitinos are produced in negligible amounts and satisfy cosmological
bounds such as the Big Bang Nucleosynthesis (BBN) and Dark Matter (DM)
constraints.
| 0 | 1 | 0 | 0 | 0 | 0 |
Emotion Controlled Spectrum Mobility Scheme for Efficient Syntactic Interoperability In Cognitive Radio Based Internet of Vehicles | Blind spots are one of the causes of road accidents in the hilly and flat
areas. These blind spot accidents can be decreased by establishing an Internet
of Vehicles (IoV) using Vehicle-2-Vehicle (V2V) and Vehicle-2-Infrastructure
(V2I) communication systems. But the problem with these IoV systems is that most of
them use DSRC or a single Radio Access Technology (RAT) as the wireless
technology, which has proven to fail to support efficient communication
between vehicles. Recently, Cognitive Radio (CR) based IoV systems have proven to be
among the best wireless communication systems for vehicular networks. However,
spectrum mobility is a challenging task to keep CR based vehicular networks
interoperable and has not been addressed sufficiently in existing research. In
our previous research work, the Cognitive Radio Site (CR-Site) has been
proposed as an in-vehicle CR-device, which can be utilized to establish efficient
IoV systems. In this paper, we have introduced the Emotions Inspired
Cognitive Agent (EIC_Agent) based spectrum mobility mechanism in CR-Site and
proposed a novel emotions controlled spectrum mobility scheme for efficient
syntactic interoperability between vehicles. For this purpose, a probabilistic
deterministic finite automaton using fear factor is proposed to perform
efficient spectrum mobility using fuzzy logic. In addition, the quantitative
computation of different fear intensity levels has been performed with the help
of fuzzy logic. The system has been tested using active data from different GSM
service providers on Mangla-Mirpur road. This is supplemented by extensive
simulation experiments which validate the proposed scheme for CR based
high-speed vehicular networks. The qualitative comparison with the
existing state-of-the-art has proven the superiority of the proposed emotions
controlled syntactic interoperable spectrum mobility scheme within cognitive
radio based IoV systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Rota-Baxter modules toward derived functors | In this paper we study Rota-Baxter modules with emphasis on the role played
by the Rota-Baxter operators and resulting difference between Rota-Baxter
modules and the usual modules over an algebra. We introduce the concepts of
free, projective, injective and flat Rota-Baxter modules. We give the
construction of free modules and show that there are enough projective,
injective and flat Rota-Baxter modules to provide the corresponding resolutions
for derived functors.
| 0 | 0 | 1 | 0 | 0 | 0 |
Video Pandemics: Worldwide Viral Spreading of Psy's Gangnam Style Video | Viral videos can reach global penetration traveling through international
channels of communication similarly to real diseases starting from a
well-localized source. In past centuries, disease fronts propagated in a
concentric spatial fashion from the source of the outbreak via the short
range human contact network. The emergence of long-distance air-travel changed
these ancient patterns. However, recently, Brockmann and Helbing have shown
that concentric propagation waves can be reinstated if propagation time and
distance is measured in the flight-time and travel volume weighted underlying
air-travel network. Here, we adopt this method for the analysis of viral meme
propagation in Twitter messages, and define a similar weighted network distance
in the communication network connecting the countries and states of the world. We
recover a wave-like behavior on average and assess the randomizing effect of
non-locality of spreading. We show that a similar result can be recovered from
Google Trends data as well.
| 1 | 0 | 0 | 0 | 0 | 0 |
Orthogonal foliations on Riemannian manifolds | In this work, we find an equation that relates the Ricci curvature of a
Riemannian manifold $M$ and the second fundamental forms of two orthogonal
foliations of complementary dimensions, $\mathcal{F}$ and $\mathcal{F}^{\bot}$,
defined on $M$. Using this equation, we show a sufficient condition for the
manifold $M$ to be locally a Riemannian product of the leaves of $\mathcal{F}$
and $\mathcal{F}^{\bot}$, if one of the foliations is totally umbilical. We
also prove an integral formula for such foliations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mapping Objects to Persistent Predicates | Logic Programming through Prolog has been widely used to supply
persistence in many systems that need to store knowledge. Some implementations of
the Prolog programming language used to supply persistence have bidirectional
interfaces with other programming languages, above all with Object-Oriented
Programming languages. Nowadays, tools and frameworks are lacking for the
development of systems that use logic-predicate persistence in an easy and agile
form. More specifically, an object-oriented and logic persistence provider is
needed that allows objects to be manipulated in main memory while the
persistence of these objects takes the form of Logic Programming predicates. The
present work introduces an object-Prolog declarative mapping alternative to be
supported by an object-oriented and logic persistence provider. The proposed
alternative consists in a correspondence of Logic Programming predicates
with an Object-Oriented approach, where each element of Logic
Programming is matched by an Object-Oriented element. The Object-Oriented
representation of Logic Programming predicates makes it easy to
manipulate the elements that compose a knowledge base.
| 1 | 0 | 0 | 0 | 0 | 0 |
Efficient modified Jacobi-Bernstein basis transformations | In the paper, we show that the transformations between modified Jacobi and
Bernstein bases of the constrained space of polynomials of degree at most $n$
can be performed with the complexity $O(n^2)$. As a result, the algorithm of
degree reduction of Bézier curves that was first presented in (Bhrawy et al.,
J. Comput. Appl. Math. 302 (2016), 369--384), and then corrected in (Lu and
Xiang, J. Comput. Appl. Math. 315 (2017), 65--69), can be significantly
improved, since the necessary transformations are done in those papers with the
complexity $O(n^3)$. The comparison of running times shows that our
transformations are also faster in practice.
| 0 | 0 | 1 | 0 | 0 | 0 |
Cospectral mates for the union of some classes in the Johnson association scheme | Let $n\geq k\geq 2$ be two integers and $S$ a subset of $\{0,1,\dots,k-1\}$.
The graph $J_{S}(n,k)$ has as vertices the $k$-subsets of the $n$-set
$[n]=\{1,\dots,n\}$ and two $k$-subsets $A$ and $B$ are adjacent if $|A\cap
B|\in S$. In this paper, we use Godsil-McKay switching to prove that for $m\geq
0$, $k\geq \max(m+2,3)$ and $S = \{0, 1, ..., m\}$, the graphs $J_S(3k-2m-1,k)$
are not determined by spectrum and for $m\geq 2$, $n\geq 4m+2$ and $S =
\{0,1,...,m\}$ the graphs $J_{S}(n,2m+1)$ are not determined by spectrum. We
also report some computational searches for Godsil-McKay switching sets in the
union of classes in the Johnson scheme for $k\leq 5$.
| 1 | 0 | 1 | 0 | 0 | 0 |
Electromagnetic properties of terbium gallium garnet at millikelvin temperatures and single photon energy | Electromagnetic properties of single crystal terbium gallium garnet (TGG) are
characterised from room temperature down to millikelvin temperatures using the whispering
gallery mode method. Microwave spectroscopy is performed at low powers
equivalent to a few photons in energy and conducted as functions of the
magnetic field and temperature. A phase transition is detected close to the
temperature of 3.5 K. This is observed for multiple whispering gallery modes
causing an abrupt negative frequency shift and a change in transmission due to
extra losses in the new phase caused by a change in complex magnetic
susceptibility.
| 0 | 1 | 0 | 0 | 0 | 0 |
The abundance of compact quiescent galaxies since z ~ 0.6 | We set out to quantify the number density of quiescent massive compact
galaxies at intermediate redshifts. We determine structural parameters based on
i-band imaging using the CFHT equatorial SDSS Stripe 82 (CS82) survey (~170 sq.
degrees) taking advantage of an exquisite median seeing of ~0.6''. We select
compact massive (M > 5x10^10 M_sun) galaxies within the redshift range of
0.2<z<0.6. The large volume sampled allows us to decrease the effect of cosmic
variance that has hampered the calculation of the number density for this
enigmatic population in many previous studies. We undertake an exhaustive
analysis in an effort to untangle the various findings inherent to the diverse
definition of compactness present in the literature. We find that the absolute
number of compact galaxies is very dependent on the adopted definition and can
change by up to a factor of >10. We systematically measure a factor of ~5 more
compact galaxies at the same redshift than previously reported on smaller
fields with HST imaging, which are more affected by cosmic variance. This means
that the decrease in number density from z ~ 1.5 to z ~ 0.2 might be only of a
factor of ~2-5, significantly smaller than previously reported. This
supports progenitor bias as the main contributor to the size evolution. This
milder decrease is roughly compatible with the predictions from recent
numerical simulations. Only the most extreme compact galaxies, with Reff <
1.5x( M/10^11 M_sun)^0.75 and M > 10^10.7 M_sun, appear to drop in number by a
factor of ~20 and hence likely experience a noticeable size evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Combining Probabilistic Load Forecasts | Probabilistic load forecasts provide comprehensive information about future
load uncertainties. In recent years, many methodologies and techniques have
been proposed for probabilistic load forecasting. Forecast combination, a
widely recognized best practice in point forecasting literature, has never been
formally adopted to combine probabilistic load forecasts. This paper proposes a
constrained quantile regression averaging (CQRA) method to create an improved
ensemble from several individual probabilistic forecasts. We formulate the CQRA
parameter estimation problem as a linear program with the objective of
minimizing the pinball loss, with the constraints that the parameters are
nonnegative and sum to one. We demonstrate the effectiveness of the
proposed method using two publicly available datasets, the ISO New England data
and Irish smart meter data. Compared with the best individual probabilistic
forecast, the ensemble can reduce the pinball score by 4.39% on average. The
proposed ensemble also demonstrates superior performance over nine other
benchmark ensembles.
| 0 | 0 | 0 | 1 | 0 | 0 |
Stability and instability of the sub-extremal Reissner-Nordström black hole interior for the Einstein-Maxwell-Klein-Gordon equations in spherical symmetry | We show non-linear stability and instability results in spherical symmetry
for the interior of a charged black hole -approaching a sub-extremal
Reissner-Nordström background fast enough at infinity- in presence of a
massive and charged scalar field, motivated by the strong cosmic censorship
conjecture in that setting :
1. Stability: We prove that spherically symmetric characteristic initial
data to the Einstein-Maxwell-Klein-Gordon equations approaching a
Reissner-Nordström background with a sufficiently decaying polynomial decay
rate on the event horizon gives rise to a space-time possessing a Cauchy
horizon in a neighbourhood of time-like infinity. Moreover if the decay is even
stronger, we prove that the spacetime metric admits a continuous extension to
the Cauchy horizon. This generalizes the celebrated stability result of
Dafermos for Einstein-Maxwell-real-scalar-field in spherical symmetry.
2. Instability: We prove that for the class of space-times considered in the
stability part, whose scalar field in addition obeys a polynomial averaged-L^2
(consistent) lower bound on the event horizon, the scalar field obeys an
integrated lower bound transversally to the Cauchy horizon. As a consequence we
prove that the non-degenerate energy is infinite on any null surface crossing
the Cauchy horizon and the curvature of a geodesic vector field blows up at the
Cauchy horizon near time-like infinity. This generalizes an instability result
due to Luk and Oh for Einstein-Maxwell-real-scalar-field in spherical symmetry.
This instability of the black hole interior can also be viewed as a step
towards the resolution of the C^2 strong cosmic censorship conjecture for
one-ended asymptotically flat initial data.
| 0 | 0 | 1 | 0 | 0 | 0 |
Using JAGS for Bayesian Cognitive Diagnosis Modeling: A Tutorial | In this article, the JAGS software program is systematically introduced to
fit common Bayesian cognitive diagnosis models (CDMs), including the
deterministic inputs, noisy "and" gate (DINA) model, the deterministic inputs,
noisy "or" gate (DINO) model, the linear logistic model, the reduced
reparameterized unified model (rRUM), and the log-linear CDM (LCDM). The
unstructured latent structural model and the higher-order latent structural
model are both introduced. We also show how to extend those models to consider
the polytomous attributes, the testlet effect, and the longitudinal diagnosis.
Finally, an empirical example is presented as a tutorial to illustrate how to
use the JAGS code in R.
| 0 | 0 | 0 | 1 | 0 | 0 |
Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network | Model-based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT
are computationally expensive. To address this problem, we recently proposed a
deep convolutional neural network (CNN) for low-dose X-ray CT and won the
second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the
textures were not fully recovered. To address this problem, here we propose a
novel framelet-based denoising algorithm using wavelet residual network which
synergistically combines the expressive power of deep learning and the
performance guarantee from the framelet-based denoising algorithms. The new
algorithms were inspired by the recent interpretation of the deep convolutional
neural network (CNN) as a cascaded convolution framelet signal representation.
Extensive experimental results confirm that the proposed networks have
significantly improved performance and preserve the detailed texture of the
original images.
| 1 | 0 | 0 | 1 | 0 | 0 |
Static Analysis of Deterministic Negotiations | Negotiation diagrams are a model of concurrent computation akin to workflow
Petri nets. Deterministic negotiation diagrams, equivalent to the much studied
and used free-choice workflow Petri nets, are surprisingly amenable to
verification. Soundness (a property close to deadlock-freedom) can be decided
in PTIME. Further, other fundamental questions like computing summaries or the
expected cost, can also be solved in PTIME for sound deterministic negotiation
diagrams, while they are PSPACE-complete in the general case.
In this paper we generalize and explain these results. We extend the
classical "meet-over-all-paths" (MOP) formulation of static analysis problems
to our concurrent setting, and introduce Mazurkiewicz-invariant analysis
problems, which encompass the questions above and new ones. We show that any
Mazurkiewicz-invariant analysis problem can be solved in PTIME for sound
deterministic negotiations whenever it is in PTIME for sequential
flow-graphs---even though the flow-graph of a deterministic negotiation diagram
can be exponentially larger than the diagram itself. This gives a common
explanation for the low complexity of all the analysis questions studied so far.
Finally, we show that classical gen/kill analyses are also an instance of our
framework, and obtain a PTIME algorithm for detecting anti-patterns in
free-choice workflow Petri nets.
Our result is based on a novel decomposition theorem, of independent
interest, showing that sound deterministic negotiation diagrams can be
hierarchically decomposed into (possibly overlapping) smaller sound diagrams.
| 1 | 0 | 0 | 0 | 0 | 0 |
Wild character varieties, meromorphic Hitchin systems and Dynkin diagrams | The theory of Hitchin systems is something like a "global theory of Lie
groups", where one works over a Riemann surface rather than just at a point.
We'll describe how one can take this analogy a few steps further by attempting
to make precise the class of rich geometric objects that appear in this story
(including the non-compact case), and discuss their classification, outlining a
theory of "Dynkin diagrams" as a step towards classifying some examples of such
objects.
| 0 | 1 | 1 | 0 | 0 | 0 |
A mathematical characterization of confidence as valid belief | Confidence is a fundamental concept in statistics, but there is a tendency to
misinterpret it as probability. In this paper, I argue that an intuitively and
mathematically more appropriate interpretation of confidence is through
belief/plausibility functions, in particular, those that satisfy a certain
validity property. Given their close connection with confidence, it is natural
to ask how a valid belief/plausibility function can be constructed directly.
The inferential model (IM) framework provides such a construction, and here I
prove a complete-class theorem stating that, for every nominal confidence
region, there exists a valid IM whose plausibility regions are contained by the
given confidence region. This characterization has implications for the
understanding and communication of statistics, and highlights the importance of belief
functions and the IM framework.
| 0 | 0 | 1 | 1 | 0 | 0 |
Optimal Control Problems with Symmetry Breaking Cost Functions | We investigate symmetry reduction of optimal control problems for
left-invariant control systems on Lie groups, with partial symmetry breaking
cost functions. Our approach emphasizes the role of variational principles and
considers a discrete-time setting as well as the standard continuous-time
formulation. Specifically, we recast the optimal control problem as a
constrained variational problem with a partial symmetry breaking Lagrangian and
obtain the Euler--Poincaré equations from a variational principle. By
applying a Legendre transformation to it, we recover the Lie-Poisson equations
obtained by A. D. Borum [Master's Thesis, University of Illinois at
Urbana-Champaign, 2015] in the same context. We also discretize the variational
principle in time and obtain the discrete-time Lie-Poisson equations. We
illustrate the theory with some practical examples including a motion planning
problem in the presence of an obstacle.
| 1 | 0 | 1 | 0 | 0 | 0 |
Multiplying a Gaussian Matrix by a Gaussian Vector | We provide a new and simple characterization of the multivariate generalized
Laplace distribution. In particular, this result implies that the product of a
Gaussian matrix with independent and identically distributed columns by an
independent isotropic Gaussian vector follows a symmetric multivariate
generalized Laplace distribution.
| 0 | 0 | 1 | 1 | 0 | 0 |
NEON+: Accelerated Gradient Methods for Extracting Negative Curvature for Non-Convex Optimization | Accelerated gradient (AG) methods are breakthroughs in convex optimization,
improving the convergence rate of the gradient descent method for optimization
with smooth functions. However, the analysis of AG methods for non-convex
optimization is still limited. It remains an open question whether AG methods
from convex optimization can accelerate the convergence of the gradient descent
method for finding local minimum of non-convex optimization problems. This
paper provides an affirmative answer to this question. In particular, we
analyze two renowned variants of AG methods (namely Polyak's Heavy Ball method
and Nesterov's Accelerated Gradient method) for extracting the negative
curvature from random noise, which is central to escaping from saddle points.
By leveraging the proposed AG methods for extracting the negative curvature, we
present a new AG algorithm with double loops for non-convex
optimization~\footnote{this is in contrast to a single-loop AG algorithm
proposed in a recent manuscript~\citep{AGNON}, which directly analyzed the
Nesterov's AG method for non-convex optimization and appeared online on
November 29, 2017. However, we emphasize that our work is an independent work,
which is inspired by our earlier work~\citep{NEON17} and is based on a
different novel analysis.}, which converges to second-order stationary point
$\x$ such that $\|\nabla f(\x)\|\leq \epsilon$ and $\nabla^2 f(\x)\geq
-\sqrt{\epsilon} I$ with $\widetilde O(1/\epsilon^{1.75})$ iteration
complexity, improving that of the gradient descent method by a factor of
$\epsilon^{-0.25}$ and matching the best iteration complexity of second-order
Hessian-free methods for non-convex optimization.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal Energy Distribution with Energy Packet Networks | We use Energy Packet Network paradigms to investigate energy distribution
problems in a computer system with energy harvesting and storages units. Our
goal is to minimize both the overall average response time of jobs at
workstations and the total rate of energy lost in the network. Energy is lost
when it arrives at idle workstations that have no jobs to serve. Energy is also lost in
storage leakages. We assume that the total rate of energy harvesting and the
rate of jobs arriving at workstations are known. We also consider a special
case in which the total rate of energy harvesting is sufficiently large so that
workstations are less busy. In this case, energy is more likely to be sent to
an idle workstation. Optimal solutions are obtained which minimize both the
overall response time and energy loss under the constraint of a fixed energy
harvesting rate.
| 1 | 0 | 0 | 0 | 0 | 0 |
A refined version of Grothendieck's anabelian conjecture for hyperbolic curves over finite fields | In this paper we prove a refined version of a theorem by Tamagawa and
Mochizuki on isomorphisms between (tame) arithmetic fundamental groups of
hyperbolic curves over finite fields, where one "ignores" the information
provided by a "small" set of primes.
| 0 | 0 | 1 | 0 | 0 | 0 |
A discussion about LNG Experiment: Irreversible or Reversible Generation of the OR Logic Gate? | In a recent paper M. Lopez-Suarez, I. Neri, and L. Gammaitoni (LNG) present a
concrete realization of the Boolean OR irreversible gate, but contrary to the
standard Landauer principle, with an arbitrarily small dissipation of energy. A
good Popperian falsification! In this paper we discuss a theoretical
description of the LNG device, which is in fact a 3in/3out self-reversible
realization of the involved OR gate, thereby satisfying the Landauer
principle of no dissipation of energy, contrary to the LNG conclusions. The
different point of view is due to a different interpretation of the two outputs
corresponding to the inputs 10 and 01, which are considered by LNG
indistinguishable, thus producing a non-reversible realization of the standard
2in/1out gate. By contrast, still considering these two outputs
indistinguishable, under a suitable normalization function of the cantilever
angles, the experimental results obtained by the LNG device coincide with the
OR connective obtained from the third output of the self-reversible 3in/3out CL
gate by the Inputs-Ancilla->Garbage-Output procedure. Thus, by
self-reversibility, this realization is without dissipation of energy according
to the Landauer principle. Furthermore, using the self-reversible Toffoli gate,
it is possible to obtain from the LNG device the realization of the connective
AND by adopting another normalization function on the cantilever angles. Finally,
by other suitable normalization procedures on cantilever angles it is possible
to obtain also the NOR and NAND logic connectives, and in a more
sophisticated way the XOR and NXOR connectives, in a self-reversible way. All
this leads to the introduction of a universal logic machine consisting of the LNG
device plus a memory containing all the necessary angle normalization functions;
by choosing one of these functions, the machine produces the logic
connectives listed above in a self-reversible way.
| 1 | 0 | 0 | 0 | 0 | 0 |
On $(σ,δ)$-skew McCoy modules | Let $(\sigma,\delta)$ be a quasi derivation of a ring $R$ and $M_R$ a right
$R$-module. In this paper, we introduce the notion of $(\sigma,\delta)$-skew
McCoy modules which extends the notion of McCoy modules and $\sigma$-skew McCoy
modules. This concept can be regarded also as a generalization of
$(\sigma,\delta)$-skew Armendariz modules. Some properties of this concept are
established and some connections between $(\sigma,\delta)$-skew McCoyness and
$(\sigma,\delta)$-compatible reduced modules are examined. Also, we study the
property $(\sigma,\delta)$-skew McCoy of some skew triangular matrix extensions
$V_n(M,\sigma)$, for any nonnegative integer $n\geq 2$. As a consequence, we
obtain: (1) $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if
$M[x]/M[x](x^n)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy, and (2)
$M_R$ is $\sigma$-skew McCoy if and only if $M[x;\sigma]/M[x;\sigma](x^n)$ is
$\overline{\sigma}$-skew McCoy.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bivariate Exponentiated Generalized Linear Exponential Distribution with Applications in Reliability Analysis | The aim of this paper is to define a bivariate exponentiated generalized
linear exponential distribution based on Marshall-Olkin shock model.
Statistical and reliability properties of this distribution are discussed. This
includes quantiles, moments, stress-strength reliability, joint reliability
function, joint reversed (hazard) rate functions and joint mean waiting time
function. Moreover, the hazard rate, the availability and the mean residual
lifetime functions for a parallel system, are established. One data set is
analyzed, and it is observed that the proposed distribution provides a better
fit than Marshall-Olkin bivariate exponential, bivariate generalized
exponential and bivariate generalized linear failure rate distributions.
Simulation studies are presented to estimate both the relative absolute bias,
and the relative mean square error for the distribution parameters based on
complete data.
| 0 | 0 | 1 | 1 | 0 | 0 |
Soft-proton exchange on Magnesium-oxide-doped substrates: a route toward efficient and power-resistant nonlinear converters | Despite its attractive features, Congruent-melted Lithium Niobate (CLN)
suffers from Photo-Refractive Damage (PRD). This light-induced refractive-index
change hampers the use of CLN when high-power densities are in play, a typical
regime in integrated optics. The resistance to PRD can be largely improved by
doping the lithium-niobate substrates with magnesium oxide. However, the
fabrication of waveguides on MgO-doped substrates is not as effective as for
CLN: either the resistance to PRD is strongly reduced by the waveguide
fabrication process (as it happens in Ti-indiffused waveguides) or the
nonlinear conversion efficiency is lowered (as it occurs in annealed-proton
exchange). Here we fabricate, for the first time, waveguides starting from
MgO-doped substrates using the Soft-Proton Exchange (SPE) technique and we show
that this third way represents a promising alternative. We demonstrate that SPE
allows one to produce refractive-index profiles almost identical to those produced
on CLN without reducing the nonlinearity in the substrate. We also prove that
the SPE does not substantially affect the resistance to PRD. Since the
fabrication recipe is identical between CLN and MgO-doped substrates, we
believe that SPE might outperform standard techniques to fabricate robust and
efficient waveguides for high-intensity-beam confinement.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimized Cost per Click in Taobao Display Advertising | Taobao, as the largest online retail platform in the world, provides billions
of online display advertising impressions for millions of advertisers every
day. For commercial purposes, the advertisers bid for specific spots and target
crowds to compete for business traffic. The platform chooses the most suitable
ads to display in tens of milliseconds. Common pricing methods include cost per
mille (CPM) and cost per click (CPC). Traditional advertising systems target
certain traits of users and ad placements with fixed bids, essentially regarded
as coarse-grained matching of bid and traffic quality. However, the fixed bids
set by the advertisers competing for different quality requests cannot fully
optimize the advertisers' key requirements. Moreover, the platform has to be
responsible for the business revenue and user experience. Thus, we proposed a
bid optimizing strategy called optimized cost per click (OCPC) which
automatically adjusts the bid to achieve finer matching of bid and traffic
quality of page view (PV) request granularity. Our approach optimizes
advertisers' demands, platform business revenue and user experience and as a
whole improves traffic allocation efficiency. We have validated our approach in
Taobao display advertising system in production. The online A/B test shows our
algorithm yields substantially better results than the previous fixed-bid approach.
| 1 | 0 | 0 | 1 | 0 | 0 |
Recursive constructions and their maximum likelihood decoding | We consider recursive decoding techniques for RM codes, their subcodes, and
newly designed codes. For moderate lengths up to 512, we obtain near-optimum
decoding with feasible complexity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Development of a 32-channel ASIC for an X-ray APD Detector onboard the ISS | We report on the design and performance of a mixed-signal application
specific integrated circuit (ASIC) dedicated to avalanche photodiodes (APDs) in
order to detect hard X-ray emissions in a wide energy band onboard the
International Space Station. To realize wide-band detection from 20 keV to 1
MeV, we use Ce:GAGG scintillators, each coupled to an APD, with low-noise
front-end electronics capable of achieving a minimum energy detection threshold
of 20 keV. The developed ASIC has the ability to read out 32-channel APD
signals using 0.35 $\mu$m CMOS technology, and an analog amplifier at the input
stage is designed to suppress the capacitive noise primarily arising from the
large detector capacitance of the APDs. The ASIC achieves a performance of 2099
e$^{-}$ + 1.5 e$^{-}$/pF at root mean square (RMS) with a wide 300 fC dynamic
range. Coupling a reverse-type APD with a Ce:GAGG scintillator, we obtain an
energy resolution of 6.7% (FWHM) at 662 keV and a minimum detectable energy of
20 keV at room temperature (20 $^{\circ}$C). Furthermore, we examine the
radiation tolerance for space applications by using a 90 MeV proton beam,
confirming that the ASIC is free of single-event effects and can operate
properly without serious degradation in analog and digital processing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Impact of Detour-Aware Policies on Maximizing Profit in Ridesharing | This paper provides efficient solutions to maximize profit for commercial
ridesharing services, under a pricing model with detour-based discounts for
passengers. We propose greedy heuristics for real-time ride matching that offer
different trade-offs between optimality and speed. Simulations on New York City
(NYC) taxi trip data show that our heuristics are up to 90% optimal and 10^5
times faster than the (necessarily) exponential-time optimal algorithm.
Commercial ridesharing service providers generate significant savings by
matching multiple ride requests using heuristic methods. The resulting savings
are typically shared between the service provider (in the form of increased
profit) and the ridesharing passengers (in the form of discounts). It is not
clear a priori how this split should be effected, since higher discounts would
encourage more ridesharing, thereby increasing total savings, but the fraction
of savings taken as profit is reduced. We simulate a scenario where the
decisions of the passengers to opt for ridesharing depend on the discount
offered by the service provider. We provide an adaptive learning algorithm
IDFLA that learns the optimal profit-maximizing discount factor for the
provider. An evaluation over NYC data shows that IDFLA, on average, learns the
optimal discount factor in under 16 iterations.
Finally, we investigate the impact of imposing a detour-aware routing policy
based on sequential individual rationality, a recently proposed concept. Such
restricted policies offer a better ride experience, increasing the provider's
market share, but at the cost of decreased average per-ride profit due to the
reduced number of matched rides. We construct a model that captures these
opposing effects, wherein simulations based on NYC data show that a 7% increase
in market share would suffice to offset the decreased average per-ride profit.
| 1 | 0 | 1 | 0 | 0 | 0 |
Optimal lower exponent for the higher gradient integrability of solutions to two-phase elliptic equations in two dimensions | We study the higher gradient integrability of distributional solutions $u$ to
the equation $div(\sigma \nabla u) = 0$ in dimension two, in the case when the
essential range of $\sigma$ consists of only two elliptic matrices, i.e.,
$\sigma\in\{\sigma_1, \sigma_2\}$ a.e. in $\Omega$.
In [4], for every pair of elliptic matrices $\sigma_1$ and $\sigma_2$,
exponents $p_{\sigma_1,\sigma_2}\in(2,+\infty)$ and $q_{\sigma_1,\sigma_2}\in
(1,2)$ have been characterised so that if $u\in
W^{1,q_{\sigma_1,\sigma_2}}(\Omega)$ is a solution to the elliptic equation, then
$\nabla u\in L^{p_{\sigma_1,\sigma_2}}_{\rm weak}(\Omega)$ and the optimality
of the upper exponent $p_{\sigma_1,\sigma_2}$ has been proved. In this paper we
complement the above result by proving the optimality of the lower exponent
$q_{\sigma_1,\sigma_2}$. Precisely, we show that for every arbitrarily small
$\delta$, one can find a particular microgeometry, i.e., an arrangement of the
sets $\sigma^{-1}(\sigma_1)$ and $\sigma^{-1}(\sigma_2)$, for which there
exists a solution $u$ to the corresponding elliptic equation such that $\nabla
u \in L^{q_{\sigma_1,\sigma_2}-\delta}$, but $\nabla u \notin
L^{q_{\sigma_1,\sigma_2}}.$ The existence of such optimal microgeometries is
achieved by convex integration methods, adapting to the present setting the
geometric constructions provided in [2] for the isotropic case.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the existence of young embedded clusters at high Galactic latitude | Careful analyses of photometric and star count data available for the nine
putative young clusters identified by Camargo et al. (2015, 2016) at high
Galactic latitudes reveal that none of the groups contain early-type stars, and
most are not significant density enhancements above field level. 2MASS colours
for stars in the groups match those of unreddened late-type dwarfs and giants,
as expected for contamination by (mostly) thin disk objects. A simulation of
one such field using only typical high latitude foreground stars yields a
colour-magnitude diagram that is very similar to those constructed by Camargo
et al. (2015, 2016) as evidence for their young groups as well as the means of
deriving their reddenings and distances. Although some of the fields are
coincident with clusters of galaxies, one must conclude that there is no
evidence that the putative clusters are extremely young stellar groups.
| 0 | 1 | 0 | 0 | 0 | 0 |
The risk of contagion spreading and its optimal control in the economy | The global crisis of 2008 provoked a heightened interest among scientists to
study the phenomenon, its propagation and negative consequences. The process of
modelling the spread of a virus is commonly used in epidemiology. Conceptually,
the spread of a disease among a population is similar to the contagion process
in the economy. This similarity allows us to consider contagion in the world
financial system using the same mathematical model of infection spread that is
often used in epidemiology. Our research focuses on the dynamic behaviour of
contagion spreading in the global financial network. We test the effect of
infection through a systemic spread of risk across the network of national
banking systems. An optimal control problem is then formulated to simulate
a control that may avoid significant financial losses. The results show that
the proposed approach describes the reality of the world economy well, and
emphasizes the importance of international relations between countries for
financial stability.
| 0 | 0 | 0 | 0 | 0 | 1 |
A Learning-to-Infer Method for Real-Time Power Grid Topology Identification | Identifying arbitrary topologies of power networks in real time is a
computationally hard problem due to the number of hypotheses that grows
exponentially with the network size. A new "Learning-to-Infer" variational
inference method is developed for efficient inference of every line status in
the network. Optimizing the variational model is transformed to and solved as a
discriminative learning problem based on Monte Carlo samples generated with
power flow simulations. A major advantage of the developed Learning-to-Infer
method is that the labeled data used for training can be generated quickly, in
arbitrarily large amounts, and at very little cost. As a result, the power
of offline training is fully exploited to learn very complex classifiers for
effective real-time topology identification. The proposed methods are evaluated
in the IEEE 30, 118 and 300 bus systems. Excellent performance in identifying
arbitrary power network topologies in real time is achieved even with
relatively simple variational models and a reasonably small amount of data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Convergence analysis of belief propagation for pairwise linear Gaussian models | Gaussian belief propagation (BP) has been widely used for distributed
inference in large-scale networks such as the smart grid, sensor networks, and
social networks, where local measurements/observations are scattered over a
wide geographical area. One particular case is when two neighboring agents
share a common observation. For example, to estimate voltage in the direct
current (DC) power flow model, the current measurement over a power line is
proportional to the voltage difference between two neighboring buses. When
applying the Gaussian BP algorithm to this type of problem, the convergence
condition remains an open issue. In this paper, we analyze the convergence
properties of Gaussian BP for this pairwise linear Gaussian model. We show
analytically that the updating information matrix converges at a geometric rate
to a unique positive definite matrix with arbitrary positive semidefinite
initial value, and we further provide the necessary and sufficient condition
for the convergence of the belief mean vector to the optimal estimate.
| 1 | 0 | 0 | 1 | 0 | 0 |
Banach-Alaoglu theorem for Hilbert $H^*$-module | We provided an analogue Banach-Alaoglu theorem for Hilbert $H^*$-module. We
construct a $\Lambda$-weak$^*$ topology on a Hilbert $H^*$-module over a proper
$H^*$-algebra $\Lambda$, such that the unit ball is compact with respect to
$\Lambda$-weak$^*$ topology.
| 0 | 0 | 1 | 0 | 0 | 0 |
An Overflow Free Fixed-point Eigenvalue Decomposition Algorithm: Case Study of Dimensionality Reduction in Hyperspectral Images | We consider the problem of enabling robust range estimation of eigenvalue
decomposition (EVD) algorithm for a reliable fixed-point design. The simplicity
of fixed-point circuitry has always made it tempting to implement EVD
algorithms in fixed-point arithmetic. Working towards an effective fixed-point
design, integer bit-width allocation is a significant step which has a crucial
impact on accuracy and hardware efficiency. This paper investigates the
shortcomings of the existing range estimation methods while deriving bounds for
the variables of the EVD algorithm. In light of these shortcomings, we introduce
a range estimation approach based on vector and matrix norm properties together
with a scaling procedure that maintains all the assets of an analytical method.
The method derives robust and tight bounds for the variables of the EVD
algorithm. The bounds derived using the proposed approach remain the same for any
input matrix and are also independent of the number of iterations or size of
the problem. Some benchmark hyperspectral data sets have been used to evaluate
the efficiency of the proposed technique. It was found that by the proposed
range estimation approach, all the variables generated during the computation
of the Jacobi EVD are bounded within $\pm1$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Incentivized Advertising: Treatment Effect and Adverse Selection | Incentivized advertising is a new ad format that is gaining popularity in
digital mobile advertising. In incentivized advertising, the publisher rewards
users for watching an ad. An endemic issue here is adverse selection, where
reward-seeking users select into incentivized ad placements to obtain rewards.
Adverse selection reduces the publisher's ad profit as well as poses a
difficulty to causal inference of the effectiveness of incentivized
advertising. To this end, we develop a treatment effect model that allows and
controls for unobserved adverse selection, and estimate the model using data
from a mobile gaming app that offers both incentivized and non-incentivized
ads. We find that rewarding users to watch an ad has an overall positive effect
on the ad conversion rate. A user is 27% more likely to convert when being
rewarded to watch an ad. However, there is a negative offsetting effect that
reduces the effectiveness of incentivized ads. Some users are averse to delayed
rewards: they prefer to collect their rewards immediately after watching the
incentivized ads instead of pursuing the content of the ads further. For the
subset of users who are averse to delayed rewards, the treatment effect is only
13%, while it can be as high as 47% for other users.
| 0 | 0 | 0 | 1 | 0 | 0 |
Prospects for indirect MeV Dark Matter detection with Gamma Rays in light of Cosmic Microwave Background Constraints | The self-annihilation of dark matter particles with mass in the MeV range can
produce gamma rays via prompt or secondary radiation. The annihilation rate for
such light dark matter particles is however tightly constrained by cosmic
microwave background (CMB) data. Here we explore the possibility of discovering
MeV dark matter annihilation with future MeV gamma-ray telescopes taking into
account the latest and future CMB constraints. We study the optimal energy
window as a function of the dominant annihilation final state. We consider both
the (conservative) case of the dwarf spheroidal galaxy Draco and the (more
optimistic) case of the Galactic center. We find that for certain channels,
including those with one or two monochromatic photon(s) and one or two neutral
pion(s), a detectable gamma-ray signal is possible for both targets under
consideration, and compatible with CMB constraints. For other annihilation
channels, however, including all leptonic annihilation channels and two charged
pions, CMB data rule out any significant signal of dark matter annihilation at
future MeV gamma-ray telescopes from dwarf galaxies, but possibly not for the
Galactic center.
| 0 | 1 | 0 | 0 | 0 | 0 |
Special cases of the orbifold version of Zvonkine's $r$-ELSV formula | We prove the orbifold version of Zvonkine's $r$-ELSV formula in two special
cases: the case of $r=2$ (complete $3$-cycles) for any genus $g\geq 0$ and the
case of any $r\geq 1$ for genus $g=0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
From homogeneous metric spaces to Lie groups | We study connected, locally compact metric spaces with transitive isometry
groups. For all $\epsilon \in \mathbb{R}^+$, each such space is
$(1,\epsilon)$-quasi-isometric to a Lie group equipped with a left-invariant
metric. Further, every metric Lie group is $(1, C)$-quasi-isometric to a
solvable Lie group, and every simply connected metric Lie group is $(1,
C)$-quasi-isometrically homeomorphic to a solvable-by-compact metric Lie group.
While any contractible Lie group may be made isometric to a solvable group,
only those that are solvable and of type (R) may be made isometric to a
nilpotent Lie group, in which case the nilpotent group is the nilshadow of the
group. Finally, we give a complete metric characterisation of metric Lie groups
for which there exists an automorphic dilation. These coincide with the metric
spaces that are locally compact, connected, homogeneous, and admit a metric
dilation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Growth of Sobolev norms for abstract linear Schrödinger Equations | We prove an abstract theorem giving a $\langle t\rangle^\epsilon$ bound
($\forall \epsilon>0$) on the growth of the Sobolev norms in linear
Schrödinger equations of the form $i \dot \psi = H_0 \psi + V(t) \psi $ when
the time $t \to \infty$. The abstract theorem is applied to several cases,
including the cases where (i) $H_0$ is the Laplace operator on a Zoll manifold
and $V(t)$ a pseudodifferential operator of order smaller than 2; (ii) $H_0$ is
the (resonant or nonresonant) harmonic oscillator in $\mathbb{R}^d$ and $V(t)$ a
pseudodifferential operator of order smaller than $H_0$ depending in a
quasiperiodic way on time. The proof is obtained by first conjugating the
system to some normal form in which the perturbation is a smoothing operator
and then applying the results of \cite{MaRo}.
| 0 | 0 | 1 | 0 | 0 | 0 |
Trust-Based Collaborative Filtering: Tackling the Cold Start Problem Using Regular Equivalence | User-based Collaborative Filtering (CF) is one of the most popular approaches
to create recommender systems. This approach is based on finding the most
relevant k users from whose rating history we can extract items to recommend.
CF, however, suffers from data sparsity and the cold-start problem since users
often rate only a small fraction of available items. One solution is to
incorporate additional information into the recommendation process such as
explicit trust scores that are assigned by users to others or implicit trust
relationships that result from social connections between users. Such
relationships typically form a very sparse trust network, which can be utilized
to generate recommendations for users based on people they trust. In our work,
we explore the use of a measure from network science, i.e. regular equivalence,
applied to a trust network to generate a similarity matrix that is used to
select the k-nearest neighbors for recommending items. We evaluate our approach
on Epinions and we find that we can outperform related methods for tackling
cold-start users in terms of recommendation accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Experimental parametric study of the self-coherent camera | Direct imaging of exoplanets requires the detection of very faint objects
orbiting close to very bright stars. In this context, the SPICES mission was
proposed to the European Space Agency for planet characterization at visible
wavelength. SPICES is a 1.5m space telescope which uses a coronagraph to
strongly attenuate the central source. However, small optical aberrations,
which appear even in space telescopes, dramatically decrease coronagraph
performance. To reduce these aberrations, we want to estimate, directly on the
coronagraphic image, the electric field, and, with the help of a deformable
mirror, correct the wavefront upstream of the coronagraph. We propose an
instrument, the Self-Coherent Camera (SCC) for this purpose. By adding a small
"reference hole" into the Lyot stop, located after the coronagraph, we can
produce interferences in the focal plane, using the coherence of the stellar
light. We developed algorithms to decode the information contained in these
Fizeau fringes and retrieve an estimation of the field in the focal plane.
After briefly recalling the SCC principle, we will present the results of a
study, based on both experiment and numerical simulation, analyzing the impact
of the size of the reference hole.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Comparison of Parallel Graph Processing Implementations | The rapidly growing number of large network analysis problems has led to the
emergence of many parallel and distributed graph processing systems---one
survey in 2014 identified over 80. Since then, the landscape has evolved; some
packages have become inactive while more are being developed. Determining the
best approach for a given problem is infeasible for most developers. To enable
easy, rigorous, and repeatable comparison of the capabilities of such systems,
we present an approach and associated software for analyzing the performance
and scalability of parallel, open-source graph libraries. We demonstrate our
approach on five graph processing packages: GraphMat, the Graph500, the Graph
Algorithm Platform Benchmark Suite, GraphBIG, and PowerGraph using synthetic
and real-world datasets. We examine previously overlooked aspects of parallel
graph processing performance such as phases of execution and energy usage for
three algorithms: breadth first search, single source shortest paths, and
PageRank, and compare our results to Graphalytics.
| 1 | 0 | 0 | 0 | 0 | 0 |
A note on optimization with Morse polynomials | In this paper we prove that the gradient ideal of a Morse polynomial is
radical. This gives a generic class of polynomials whose gradient ideals are
radical. As a consequence, we recover a previous result that the unconstrained
polynomial optimization problem for Morse polynomials has finite convergence.
| 0 | 0 | 1 | 0 | 0 | 0 |
Compression of Deep Neural Networks for Image Instance Retrieval | Image instance retrieval is the problem of retrieving images from a database
which contain the same object. Convolutional Neural Network (CNN) based
descriptors are becoming the dominant approach for generating {\it global image
descriptors} for the instance retrieval problem. One major drawback of
CNN-based {\it global descriptors} is that uncompressed deep neural network
models require hundreds of megabytes of storage making them inconvenient to
deploy in mobile applications or in custom hardware. In this work, we study the
problem of neural network model compression focusing on the image instance
retrieval task. We study quantization, coding, pruning and weight sharing
techniques for reducing model size for the instance retrieval problem. We
provide extensive experimental results on the trade-off between retrieval
performance and model size for different types of networks on several data sets
providing the most comprehensive study on this topic. We compress models to the
order of a few MBs: two orders of magnitude smaller than the uncompressed
models while achieving negligible loss in retrieval performance.
| 1 | 0 | 0 | 0 | 0 | 0 |