title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment | Network embeddings, which learn low-dimensional representations for each
vertex in a large-scale network, have received considerable attention in recent
years. For a wide range of applications, vertices in a network are typically
accompanied by rich textual information such as user profiles, paper abstracts,
etc. We propose to incorporate semantic features into network embeddings by
matching important words between text sequences for all pairs of vertices. We
introduce a word-by-word alignment framework that measures the compatibility of
embeddings between word pairs, and then adaptively accumulates these alignment
features with a simple yet effective aggregation function. In experiments, we
evaluate the proposed framework on three real-world benchmarks for downstream
tasks, including link prediction and multi-label vertex classification. Results
demonstrate that our model outperforms state-of-the-art network embedding
methods by a large margin.
| 1 | 0 | 0 | 0 | 0 | 0 |
Spin Precession Experiments for Light Axionic Dark Matter | Axion-like particles are promising candidates to make up the dark matter of
the universe, but it is challenging to design experiments that can detect them
over their entire allowed mass range. Dark matter in general, and in particular
axion-like particles and hidden photons, can be as light as roughly $10^{-22}
\;\rm{eV}$ ($\sim 10^{-8} \;\rm{Hz}$), with astrophysical anomalies providing
motivation for the lightest masses ("fuzzy dark matter"). We propose
experimental techniques for direct detection of axion-like dark matter in the
mass range from roughly $10^{-13} \;\rm{eV}$ ($\sim 10^2 \;\rm{Hz}$) down to
the lowest possible masses. In this range, these axion-like particles act as a
time-oscillating magnetic field coupling only to spin, inducing effects such as
a time-oscillating torque and periodic variations in the spin-precession
frequency with the frequency and direction set by fundamental physics. We show
how these signals can be measured using existing experimental technology,
including torsion pendulums, atomic magnetometers, and atom interferometry.
These experiments demonstrate a strong discovery capability, with future
iterations of these experiments capable of pushing several orders of magnitude
past current astrophysical bounds.
| 0 | 1 | 0 | 0 | 0 | 0 |
Persistence barcodes and Laplace eigenfunctions on surfaces | We obtain restrictions on the persistence barcodes of Laplace-Beltrami
eigenfunctions and their linear combinations on compact surfaces with
Riemannian metrics. Some applications to uniform approximation by linear
combinations of Laplace eigenfunctions are also discussed.
| 0 | 0 | 1 | 0 | 0 | 0 |
DFT study of ionic liquids adsorption on circumcoronene shaped graphene | Carbon materials have a range of properties, such as high electrical
conductivity, high specific surface area, and mechanical flexibility, that are
relevant for electrochemical applications. Carbon materials are utilised in
energy conversion-and-storage devices along with electrolytes of complementary
properties. In this work, we study the interaction of highly concentrated
electrolytes (ionic liquids) at a model carbon surface (circumcoronene) using
density functional theory methods. Our results indicate the decisive role of
the dispersion interactions that noticeably strengthen the circumcoronene-ion
interaction. Also, we focus on the adsorption of halide anions as the
electrolytes containing these ions are promising for practical use in
supercapacitors and solar cells.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the overestimation of the largest eigenvalue of a covariance matrix | In this paper, we use a new approach to prove that the largest eigenvalue of
the sample covariance matrix of a normally distributed vector is bigger than
the true largest eigenvalue with probability 1 when the dimension is infinite.
We prove a similar result for the smallest eigenvalue.
| 0 | 0 | 1 | 1 | 0 | 0 |
Thermodynamic Mechanism of Life and Aging | Life is a complex biological phenomenon represented by numerous chemical,
physical and biological processes performed by a biothermodynamic
system/cell/organism. Both living organisms and inanimate objects are subject
to aging, a biological and physicochemical process characterized by changes in
biological and thermodynamic state. Thus, the same physical laws govern
processes in both animate and inanimate matter. All life processes lead to
change of an organism's state. The change of biological and thermodynamic state
of an organism in time underlies all three kinds of aging (chronological,
biological and thermodynamic). Life and aging of an organism both start at the
moment of fertilization and continue through the entire lifespan. Fertilization
represents formation of a new organism. The new organism represents a new
thermodynamic system. From the very beginning, it changes its state by changing
thermodynamic parameters. The change of thermodynamic parameters is observed as
aging and can be related to change in entropy. Entropy is thus the parameter
that is related to all others and describes aging in the best manner. In the
beginning, entropy change appears as a consequence of accumulation of matter
(growth). Later, decomposition and configurational changes dominate, as a
consequence of various chemical reactions (free radical, decomposition,
fragmentation, accumulation of lipofuscin-like substances...).
| 0 | 0 | 0 | 0 | 1 | 0 |
Flat families of point schemes for connected graded algebras | We study truncated point schemes of connected graded algebras as families
over the parameter space of varying relations for the algebras, proving that
the families are flat over the open dense locus where the point schemes achieve
the expected (i.e. minimal) dimension.
When the truncated point scheme is zero-dimensional we obtain its number of
points counted with multiplicity via a Chow ring computation. This latter
application in particular confirms a conjecture of Brazfield to the effect that
a generic two-generator, two-relator 4-dimensional Artin-Schelter regular
algebra has seventeen truncated point modules of length six.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multi-task Learning in the Computerized Diagnosis of Breast Cancer on DCE-MRIs | Hand-crafted features extracted from dynamic contrast-enhanced magnetic
resonance images (DCE-MRIs) have shown strong predictive abilities in
characterization of breast lesions. However, heterogeneity across medical image
datasets hinders the generalizability of these features. One of the sources of
the heterogeneity is the variation of MR scanner magnet strength, which has a
strong influence on image quality, leading to variations in the extracted image
features. Thus, statistical decision algorithms need to account for such data
heterogeneity. Despite the variations, we hypothesize that there exist
underlying relationships between the features extracted from the datasets
acquired with different magnet strength MR scanners. We compared the use of a
multi-task learning (MTL) method that incorporates those relationships during
the classifier training to support vector machines run on a merged dataset that
includes cases with various MRI strength images. As a result, higher predictive
power is achieved with the MTL method.
| 0 | 1 | 0 | 0 | 0 | 0 |
Real-time Distracted Driver Posture Classification | In this paper, we present a new dataset for "distracted driver" posture
estimation. In addition, we propose a novel system that achieves a 95.98%
driving posture classification accuracy. The system consists of a
genetically-weighted ensemble of Convolutional Neural Networks (CNNs). We show
that a weighted ensemble of classifiers using a genetic algorithm yields
better classification confidence. We also study the effect of different visual
elements (i.e. hands and face) in distraction detection and classification by
means of face and hand localizations. Finally, we present a thinned version of
our ensemble that could achieve a 94.29% classification accuracy and operate in
a real-time environment.
| 1 | 0 | 0 | 0 | 0 | 0 |
Time Series Cube Data Model | The purpose of this document is to create a data model and its serialization
for expressing generic time series data. Already existing IVOA data models are
reused as much as possible. The model is also made as generic as possible to be
open to new extensions but at the same time closed for modifications. This
enables maintaining interoperability throughout different versions of the data
model. We define the necessary building blocks for metadata discovery,
serialization of time series data and understanding it by clients. We present
several categories of time series science cases with examples of
implementation. We also take into account the most pressing topics for time
series providers like tracking original images for every individual point of a
light curve or time-derived axes like frequency for gravitational wave
analysis. The main motivation for the creation of a new model is to provide a
unified time series data publishing standard - not only for light curves but
also more generic time series data, e.g., radial velocity curves, power
spectra, hardness ratios, provenance linkage, etc. Flexibility is the most
crucial part of our model - we are not dependent on any physical domain or
frame models. While images and spectra are already stable and standardized
products, the time-series-related domains are still not fully evolved and
new ones will likely emerge in the near future. That is why we need to keep models
like Time Series Cube DM independent of any underlying physical models. In our
opinion, this is the only correct and sustainable way for future development of
IVOA standards.
| 1 | 1 | 0 | 0 | 0 | 0 |
Markov modeling of peptide folding in the presence of protein crowders | We use Markov state models (MSMs) to analyze the dynamics of a
$\beta$-hairpin-forming peptide in Monte Carlo (MC) simulations with
interacting protein crowders, for two different types of crowder proteins
[bovine pancreatic trypsin inhibitor (BPTI) and GB1]. In these systems, at the
temperature used, the peptide can be folded or unfolded and bound or unbound to
crowder molecules. Four or five major free-energy minima can be identified. To
estimate the dominant MC relaxation times of the peptide, we build MSMs using a
range of different time resolutions or lag times. We show that stable
relaxation-time estimates can be obtained from the MSM eigenfunctions through
fits to autocorrelation data. The eigenfunctions remain sufficiently accurate
to permit stable relaxation-time estimation down to small lag times, at which
point simple estimates based on the corresponding eigenvalues have large
systematic uncertainties. The presence of the crowders has a stabilizing
effect on the peptide, especially with BPTI crowders, which can be attributed
to a reduced unfolding rate $k_\text{u}$, while the folding rate $k_\text{f}$
is left largely unchanged.
| 0 | 0 | 0 | 0 | 1 | 0 |
Ultra-fast magnetization manipulation using single femtosecond light and hot-electrons pulse | Current induced magnetization manipulation is a key issue for spintronic
application. Therefore, deterministic switching of the magnetization at the
picosecond timescale with a single electronic pulse represents a major step
towards future developments of ultrafast spintronics. Here, we have studied
the ultrafast magnetization dynamics in engineered Gd$_x$[FeCo]$_{1-x}$-based structures
to compare the effects of femtosecond laser and hot-electron pulses. We
demonstrate that a single femtosecond hot-electron pulse allows a
deterministic magnetization reversal in both Gd-rich and FeCo-rich alloys
similarly to a femtosecond laser pulse. In addition, we show that the limiting
factor of such manipulation for perpendicular magnetized films arises from the
multi-domain formation due to dipolar interaction. By performing time resolved
measurements under various fields, we demonstrate that the same magnetization
dynamics is observed for both light and hot-electron excitation and that the
full magnetization reversal takes place within 5 ps. The energy efficiency of
the ultra-fast current induced magnetization manipulation is optimized thanks
to the ballistic transport of hot electrons before reaching the GdFeCo magnetic
layer.
| 0 | 1 | 0 | 0 | 0 | 0 |
Grain Boundary Resistance in Copper Interconnects from an Atomistic Model to a Neural Network | Orientation effects on the resistivity of copper grain boundaries are studied
systematically with two different atomistic tight binding methods. A
methodology is developed to model the resistivity of grain boundaries using the
Embedded Atom Model, tight binding methods and non-equilibrium Green's functions
(NEGF). The methodology is validated against first principles calculations for
small, ultra-thin body grain boundaries (< 5 nm) with 6.4% deviation in the
resistivity. A statistical ensemble of 600 large, random structures with grains
is studied. For structures with three grains, it is found that the distribution
of resistivities is close to normal. Finally, a compact model for grain
boundary resistivity is constructed based on a neural network.
| 0 | 1 | 0 | 0 | 0 | 0 |
Some remarkable infinite product identities involving Fibonacci and Lucas numbers | By applying the classic telescoping summation formula and its variants to
identities involving inverse hyperbolic tangent functions having inverse powers
of the golden ratio as arguments and employing subtle properties of the
Fibonacci and Lucas numbers, we derive interesting general infinite product
identities involving these numbers.
| 0 | 0 | 1 | 0 | 0 | 0 |
Limit Theorems in Mallows Distance for Processes with Gibbsian Dependence | In this paper, we explore the connection between convergence in distribution
and Mallows distance in the context of positively associated random variables.
Our results extend some known invariance principles for sequences with FKG
property. Applications to processes with Gibbsian dependence structures are
included.
| 0 | 0 | 1 | 0 | 0 | 0 |
Localized-endemic state transition in the susceptible-infected-susceptible model on networks | There is a longstanding debate concerning the absence of a threshold for the
susceptible-infected-susceptible spreading model on networks with a localized
state. The key to resolving this controversy is the dynamical interaction
pattern, which has not been uncovered. Here we show that the interaction
driving the localized-endemic state transition is not the global interaction
between a node and all the other nodes on the network, but exists at the level
of a super node composed of a highly connected node and its neighbors. The
internal interactions within a super node induce a localized state with a
limited lifetime, while the interactions between neighboring super nodes via a
path of two hops enable them to avoid being trapped in the absorbing state,
marking the onset of the endemic state. The hybrid interactions give highly
connected nodes an exponentially increasing infection density, which truly
accounts for the null threshold. These results are crucial for correctly
understanding diverse recurrent contagion phenomena.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analytic approximation of solutions of parabolic partial differential equations with variable coefficients | A complete family of solutions for the one-dimensional reaction-diffusion
equation \[ u_{xx}(x,t)-q(x)u(x,t) = u_t(x,t) \] with a coefficient $q$
depending on $x$ is constructed. The solutions represent the images of the heat
polynomials under the action of a transmutation operator. Their use allows one
to obtain an explicit solution of the noncharacteristic Cauchy problem for the
considered equation with sufficiently regular Cauchy data as well as to solve
initial boundary value problems numerically. In this paper Dirichlet
boundary conditions are considered; however, the proposed method can easily be
extended to other standard boundary conditions. The proposed numerical method
is shown to achieve good accuracy.
| 0 | 0 | 1 | 0 | 0 | 0 |
Final-State Constrained Optimal Control via a Projection Operator Approach | In this paper we develop a numerical method to solve nonlinear optimal
control problems with final-state constraints. Specifically, we extend the
PRojection Operator based Newton's method for Trajectory Optimization (PRONTO),
which was proposed by Hauser for unconstrained optimal control problems. While
in the standard method final-state constraints can be only approximately
handled by means of a terminal penalty, in this work we propose a methodology
to meet the constraints exactly. Moreover, our method guarantees recursive
feasibility of the final-state constraint. This is an appealing property
especially in real-time applications in which one would like to be able to stop
the computation even if the desired tolerance has not been reached, but still
satisfy the constraints. Following the same conceptual idea of PRONTO, the
proposed strategy is based on two main steps which (differently from the
standard scheme) preserve the feasibility of the final-state constraints: (i)
solve a quadratic approximation of the nonlinear problem to find a descent
direction, and (ii) get a (feasible) trajectory by means of a feedback law
(which turns out to be a nonlinear projection operator). To find the (feasible)
descent direction we take advantage of final-state constrained Linear Quadratic
optimal control methods, while the second step is performed by suitably
designing a constrained version of the trajectory tracking projection operator.
The effectiveness of the proposed strategy is tested on the optimal state
transfer of an inverted pendulum.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Tracking with Model Mismatch for Fast and Safe Planning: an SOS Optimization Approach | In the pursuit of real-time motion planning, a commonly adopted practice is
to compute a trajectory by running a planning algorithm on a simplified,
low-dimensional dynamical model, and then employ a feedback tracking controller
that tracks such a trajectory by accounting for the full, high-dimensional
system dynamics. While this strategy of planning with model mismatch generally
yields fast computation times, there are no guarantees of dynamic feasibility,
which hampers application to safety-critical systems. Building upon recent work
that addressed this problem through the lens of Hamilton-Jacobi (HJ)
reachability, we devise an algorithmic framework whereby one computes, offline,
for a pair of "planner" (i.e., low-dimensional) and "tracking" (i.e.,
high-dimensional) models, a feedback tracking controller and associated
tracking bound. This bound is then used as a safety margin when generating
motion plans via the low-dimensional model. Specifically, we harness the
computational tool of sum-of-squares (SOS) programming to design a bilinear
optimization algorithm for the computation of the feedback tracking controller
and associated tracking bound. The algorithm is demonstrated via numerical
experiments, with an emphasis on investigating the trade-off between the
increased computational scalability afforded by SOS and its intrinsic
conservativeness. Collectively, our results enable scaling the appealing
strategy of planning with model mismatch to systems that are beyond the reach
of HJ analysis, while maintaining safety guarantees.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Gaussian Stochastic Process Emulation | We consider estimation of the parameters of a Gaussian Stochastic Process
(GaSP), in the context of emulation (approximation) of computer models for
which the outcomes are real-valued scalars. The main focus is on estimation of
the GaSP parameters through various generalized maximum likelihood methods,
mostly involving finding posterior modes; this is because full Bayesian
analysis in computer model emulation is typically prohibitively expensive. The
posterior modes that are studied arise from objective priors, such as the
reference prior. These priors have been studied in the literature for the
situation of an isotropic covariance function or under the assumption of
separability in the design of inputs for model runs used in the GaSP
construction. In this paper, we consider more general designs (e.g., a Latin
Hypercube Design) with a class of commonly used anisotropic correlation
functions, which can be written as a product of isotropic correlation
functions, each having an unknown range parameter and a fixed roughness
parameter. We discuss properties of the objective priors and marginal
likelihoods for the parameters of the GaSP and establish the posterior
propriety of the GaSP parameters, but our main focus is to demonstrate that
certain parameterizations result in more robust estimation of the GaSP
parameters than others, and that some parameterizations that are in common use
should clearly be avoided. These results are applicable to many frequently used
covariance functions, e.g., power exponential, Matérn, rational quadratic
and spherical covariance. We also generalize the results to the GaSP model with
a nugget parameter. Both theoretical and numerical evidence is presented
concerning the performance of the studied procedures.
| 0 | 0 | 1 | 1 | 0 | 0 |
Observing the Atmospheres of Known Temperate Earth-sized Planets with JWST | Nine transiting Earth-sized planets have recently been discovered around
nearby late M dwarfs, including the TRAPPIST-1 planets and two planets
discovered by the MEarth survey, GJ 1132b and LHS 1140b. These planets are the
smallest known planets that may have atmospheres amenable to detection with
JWST. We present model thermal emission and transmission spectra for each
planet, varying composition and surface pressure of the atmosphere. We base
elemental compositions on those of Earth, Titan, and Venus and calculate the
molecular compositions assuming chemical equilibrium, which can strongly depend
on temperature. Both thermal emission and transmission spectra are sensitive to
the atmospheric composition; thermal emission spectra are sensitive to surface
pressure and temperature. We predict the observability of each planet's
atmosphere with JWST. GJ 1132b and TRAPPIST-1b are excellent targets for
emission spectroscopy with JWST/MIRI, requiring fewer than 10 eclipse
observations. Emission photometry for TRAPPIST-1c requires 5-15 eclipses; LHS
1140b and TRAPPIST-1d, TRAPPIST-1e, and TRAPPIST-1f, which could possibly have
surface liquid water, may be accessible with photometry. Seven of the nine
planets are strong candidates for transmission spectroscopy measurements with
JWST, though the number of transits required depends strongly on the planets'
actual masses. Using the measured masses, fewer than 20 transits are required
for a 5 sigma detection of spectral features for GJ 1132b and six of the
TRAPPIST-1 planets. Dedicated campaigns to measure the atmospheres of these
nine planets will allow us, for the first time, to probe formation and
evolution processes of terrestrial planetary atmospheres beyond our solar
system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Front Propagation for Nonlocal KPP Reaction-Diffusion Equations in Periodic Media | We study front propagation phenomena for a large class of nonlocal KPP-type
reaction-diffusion equations in oscillatory environments, which model various
forms of population growth with periodic dependence. The nonlocal diffusion is
an anisotropic integro-differential operator of order $\alpha \in (0,2)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Distributed Functional Observers for LTI Systems | We study the problem of designing distributed functional observers for LTI
systems. Specifically, we consider a setting consisting of a state vector that
evolves over time according to a dynamical process. A set of nodes distributed
over a communication network wish to collaboratively estimate certain functions
of the state. We first show that classical existence conditions for the design
of centralized functional observers do not directly translate to the
distributed setting, due to the coupling that exists between the dynamics of
the functions of interest and the diverse measurements at the various nodes.
Accordingly, we design transformations that reveal such couplings and identify
portions of the corresponding dynamics that are locally detectable at each
sensor node. We provide sufficient conditions on the network, along with state
estimate update and exchange rules for each node, that guarantee asymptotic
reconstruction of the functions at each sensor node.
| 0 | 0 | 1 | 0 | 0 | 0 |
Wavelet eigenvalue regression for $n$-variate operator fractional Brownian motion | In this contribution, we extend the methodology proposed in Abry and Didier
(2017) to obtain the first joint estimator of the real parts of the Hurst
eigenvalues of $n$-variate OFBM. The procedure consists of a wavelet regression
on the log-eigenvalues of the sample wavelet spectrum. The estimator is shown
to be consistent for any time reversible OFBM and, under stronger assumptions,
also asymptotically normal starting from either continuous or discrete time
measurements. Simulation studies establish the finite sample effectiveness of
the methodology and illustrate its benefits compared to univariate-like
(entrywise) analysis. As an application, we revisit the well-known self-similar
character of Internet traffic by applying the proposed methodology to 4-variate
time series of modern, high quality Internet traffic data. The analysis reveals
the presence of a rich multivariate self-similarity structure.
| 0 | 0 | 1 | 1 | 0 | 0 |
Deep Neural Network for Analysis of DNA Methylation Data | Many studies have demonstrated that DNA methylation, which occurs in the
context of a CpG site, has a strong correlation with diseases, including
cancer. There is a strong interest in analyzing DNA methylation data to find
how to distinguish different subtypes of the tumor. However, conventional
statistical methods are not suitable for analyzing the high-dimensional DNA
methylation data with bounded support. In order to explicitly capture the
properties of the data, we design a deep neural network, which is composed of
several stacked binary restricted Boltzmann machines, to learn the
low-dimensional deep features of the DNA methylation data. Experiments show
these features perform best in breast cancer DNA methylation data cluster
analysis, compared with some state-of-the-art methods.
| 0 | 0 | 0 | 1 | 1 | 0 |
One look at the rating of scientific publications and corresponding toy-model | A toy-model of publications and citations processes is proposed. The model
shows that the role of randomness in the processes is essential and cannot be
ignored. Some other aspects of the rating of scientific publications are discussed.
| 1 | 0 | 0 | 1 | 0 | 0 |
Alignment, Orientation, and Coulomb Explosion of Difluoroiodobenzene Studied with the Pixel Imaging Mass Spectrometry (PImMS) Camera | Laser-induced adiabatic alignment and mixed-field orientation of
2,6-difluoroiodobenzene (C6H3F2I) molecules are probed by Coulomb explosion
imaging following either near-infrared strong-field ionization or
extreme-ultraviolet multi-photon inner-shell ionization using free-electron
laser pulses. The resulting photoelectrons and fragment ions are captured by a
double-sided velocity map imaging spectrometer and projected onto two
position-sensitive detectors. The ion side of the spectrometer is equipped with
the Pixel Imaging Mass Spectrometry (PImMS) camera, a time-stamping pixelated
detector that can record the hit positions and arrival times of up to four ions
per pixel per acquisition cycle. Thus, the time-of-flight trace and ion
momentum distributions for all fragments can be recorded simultaneously. We
show that we can obtain a high degree of one- and three-dimensional alignment
and mixed-field orientation, and compare the Coulomb explosion process induced
at both wavelengths.
| 0 | 1 | 0 | 0 | 0 | 0 |
Unsupervised Machine Learning of Open Source Russian Twitter Data Reveals Global Scope and Operational Characteristics | We developed and used a collection of statistical methods (unsupervised
machine learning) to extract relevant information from a Twitter-supplied data
set consisting of alleged Russian trolls who (allegedly) attempted to influence
the 2016 US Presidential election. These unsupervised statistical methods allow
fast identification of (i) emergent language communities within the troll
population, (ii) the transnational scope of the operation and (iii) operational
characteristics of trolls that can be used for future identification. Using
natural language processing, manifold learning and Fourier analysis, we
identify an operation that includes not only the 2016 US election, but also the
French National and both local and national German elections. We show the
resulting troll population is composed of users with common, but clearly
customized, behavioral characteristics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Distributed, scalable and gossip-free consensus optimization with application to data analysis | Distributed algorithms for solving additive or consensus optimization
problems commonly rely on first-order or proximal splitting methods. These
algorithms generally come with restrictive assumptions and at best enjoy a
linear convergence rate. Hence, they can require many iterations or
communications among agents to converge. In many cases, however, we do not seek
a highly accurate solution for consensus problems. Based on this, we propose
controlled relaxation of the coupling in the problem which allows us to compute
an approximate solution, where the accuracy of the approximation can be
controlled by the level of relaxation. The relaxed problem can be efficiently
solved in a distributed way using a combination of primal-dual interior-point
methods (PDIPMs) and message-passing. This algorithm purely relies on
second-order methods and thus requires far fewer iterations and communications
to converge. This is illustrated in numerical experiments, showing its superior
performance compared to existing methods.
| 0 | 0 | 1 | 0 | 0 | 0 |
Consistency of Maximum Likelihood for Continuous-Space Network Models | Network analysis needs tools to infer distributions over graphs of arbitrary
size from a single graph. Assuming the distribution is generated by a
continuous latent space model which obeys certain natural symmetry and
smoothness properties, we establish three levels of consistency for
non-parametric maximum likelihood inference as the number of nodes grows: (i)
the estimated locations of all nodes converge in probability on their true
locations; (ii) the distribution over locations in the latent space converges
on the true distribution; and (iii) the distribution over graphs of arbitrary
size converges.
| 0 | 0 | 1 | 1 | 0 | 0 |
Consistency Results for Stationary Autoregressive Processes with Constrained Coefficients | We consider stationary autoregressive processes with coefficients restricted
to an ellipsoid, which includes autoregressive processes with absolutely
summable coefficients. We provide consistency results under different norms for
the estimation of such processes using constrained and penalized estimators. As
an application we show some weak form of universal consistency. Simulations
show that directly including the constraint in the estimation can lead to more
robust results.
| 0 | 0 | 0 | 1 | 0 | 0 |
Higher order mobile coverage control with application to localization | Most current results on coverage control using mobile sensors require that
one partitioned cell is associated with precisely one sensor. In this paper, we
consider a class of coverage control problems involving higher order Voronoi
partitions, motivated by applications where more than one sensor is required to
monitor and cover one cell. Such applications are frequent in scenarios
requiring the sensors to localize targets. We introduce a framework depending
on a coverage performance function incorporating higher order Voronoi cells and
then design a gradient-based controller which allows the multi-sensor system to
achieve a local equilibrium in a distributed manner. The convergence properties
are studied and related to the Lloyd algorithm. We also study the extension to
coverage of a discrete set of points. In addition, we provide a number of real
world scenarios where our framework can be applied. Simulation results are also
provided to show the controller performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Borg's Periodicity Theorems for first order self-adjoint systems with complex potentials | A self-adjoint first order system with Hermitian $\pi$-periodic potential
$Q(z)$, integrable on compact sets, is considered. It is shown that all zeros
of $\Delta + 2e^{-i\int_0^\pi \Im q dt}$ are double zeros if and only if this
self-adjoint system is unitarily equivalent to one in which $Q(z)$ is
$\frac{\pi}{2}$-periodic. Furthermore, the zeros of $\Delta - 2e^{-i\int_0^\pi
\Im q dt}$ are all double zeros if and only if the associated self-adjoint
system is unitarily equivalent to one in which $Q(z) = \sigma_2 Q(z) \sigma_2$.
Here $\Delta$ denotes the discriminant of the system and $\sigma_0$, $\sigma_2$
are Pauli matrices. Finally, it is shown that all instability intervals vanish
if and only if $Q = r\sigma_0 + q\sigma_2$, for some real valued $\pi$-periodic
functions $r$ and $q$ integrable on compact sets.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Ubiquity of Information Inconsistency for Conjugate Priors | Informally, "Information Inconsistency" is the property that has been
observed in many Bayesian hypothesis testing and model selection procedures
whereby the Bayesian conclusion does not become definitive when the data seems
to become definitive. An example is that, when performing a t-test using
standard conjugate priors, the Bayes factor of the alternative hypothesis to
the null hypothesis remains bounded as the t statistic grows to infinity. This
paper shows that information inconsistency is ubiquitous in Bayesian hypothesis
testing under conjugate priors. Yet the title does not fully describe the
paper, since we also show that theoretically recommended priors, including
scale mixtures of conjugate priors and adaptive priors, are information
consistent. Hence the paper is simply a forceful warning that the use of
conjugate priors in testing and model selection is highly problematic and
should be replaced by the information-consistent alternatives.
| 0 | 0 | 1 | 1 | 0 | 0 |
Competition evolution of Rayleigh-Taylor bubbles | Material mixing induced by a Rayleigh-Taylor instability occurs ubiquitously
in either nature or engineering when a light fluid pushes against a heavy
fluid, accompanied by the formation and evolution of chaotic bubbles. Its
general evolution involves two mechanisms: bubble-merge and bubble-competition.
The former obeys a universal evolution law and has been well-studied, while the
latter depends on many factors and has not been well-recognized. In this paper,
we establish a theory for the latter to clarify and quantify the longstanding
open question: the dependence of bubble evolution on the dominant factors of
arbitrary density ratio, broadband initial perturbations and various material
properties (e.g., viscosity, miscibility, surface tension). Evolution of the
most important characteristic quantities, i.e., the diameter of dominant bubble
$D$ and the height of bubble zone $h$, is derived: (i) the $D$ expands
self-similarly with steady aspect ratio $\beta \equiv D/h \thickapprox (1{\rm{
+ }}A)/4$, depending only on dimensionless density ratio $A$, and (ii) the $h$
grows quadratically with constant growth coefficient $\alpha \equiv h/(Ag{t^2})
\thickapprox [2\phi/{\ln}(2{\eta _{\rm{0}}})]^2$, depending on both
dimensionless initial perturbation amplitude ${\eta _{\rm{0}}}$ and
material-property-associated linear growth rate ratio
$\phi\equiv\Gamma_{actual}/\Gamma_{ideal}\leqslant1$. The theory successfully
explains the continued puzzle about the widely varying $\alpha\in (0.02,0.12)$
in experiments and simulations, conducted at all values of $A \in (0,1)$ and
widely varying values of ${\eta _{\rm{0}}} \in [{10^{ - 7}},{10^{ - 2}}]$ with
different materials. The good agreement between theory and experiments implies
that the majority of actual mixing depends on initial perturbations and material
properties, to which more attention should be paid in either natural or
engineering problems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonlinear oblique projections | We construct nonlinear oblique projections along subalgebras of nilpotent Lie
algebras in terms of the Baker-Campbell-Hausdorff multiplication. We prove that
these nonlinear projections are real analytic on every Schubert cell of the
Grassmann manifold whose points are the subalgebras of the nilpotent Lie
algebra under consideration.
| 0 | 0 | 1 | 0 | 0 | 0 |
Granger Mediation Analysis of Multiple Time Series with an Application to fMRI | It has become increasingly popular to perform mediation analysis for complex
data from sophisticated experimental studies. In this paper, we present Granger
Mediation Analysis (GMA), a new framework for causal mediation analysis of
multiple time series. This framework is motivated by a functional magnetic
resonance imaging (fMRI) experiment where we are interested in estimating the
mediation effects between a randomized stimulus time series and brain activity
time series from two brain regions. The stable unit treatment assumption for
causal mediation analysis is thus unrealistic for this type of time series
data. To address this challenge, our framework integrates two types of models:
causal mediation analysis across the variables and vector autoregressive models
across the temporal observations. We further extend this framework to handle
multilevel data to address individual variability and correlated errors between
the mediator and the outcome variables. These models not only provide valid
causal mediation for time series data but also model the causal dynamics across
time. We show that the modeling parameters in our models are identifiable, and
we develop computationally efficient methods to maximize the likelihood-based
optimization criteria. Simulation studies show that our method reduces the
estimation bias and improves statistical power, compared to existing approaches.
On a real fMRI data set, our approach not only infers the causal effects of
brain pathways but also accurately captures the feedback effect of the outcome
region on the mediator region.
| 0 | 0 | 0 | 1 | 0 | 0 |
Parameterization of Sequence of MFCCs for DNN-based voice disorder detection | In this article a DNN-based system for detection of three common voice
disorders (vocal nodules, polyps and cysts; laryngeal neoplasm; unilateral
vocal paralysis) is presented. The input to the algorithm is an audio recording
(at least 3 seconds long) of the sustained vowel sound /a:/. The algorithm was
developed as part of the "2018 FEMH Voice Data Challenge" organized by Far
Eastern Memorial Hospital and obtained a score value (defined in the challenge
specification) of 77.44. This was the second-best result before the final
submission. The final challenge results were not yet known at the time of
writing this document. The document also reports changes made for the final
submission, which improved the cross-validation score by 0.6 percentage points.
| 1 | 0 | 0 | 0 | 0 | 0 |
Near-Optimal Adversarial Policy Switching for Decentralized Asynchronous Multi-Agent Systems | A key challenge in multi-robot and multi-agent systems is generating
solutions that are robust to other self-interested or even adversarial parties
who actively try to prevent the agents from achieving their goals. The
practicality of existing works addressing this challenge is limited to only
small-scale synchronous decision-making scenarios or a single agent planning
its best response against a single adversary with fixed, procedurally
characterized strategies. In contrast, this paper considers a more realistic
class of problems where a team of asynchronous agents with limited observation
and communication capabilities need to compete against multiple strategic
adversaries with changing strategies. This problem necessitates agents that can
coordinate to detect changes in adversary strategies and plan the best response
accordingly. Our approach first optimizes a set of stratagems that represent
these best responses. These optimized stratagems are then integrated into a
unified policy that can detect and respond when the adversaries change their
strategies. The near-optimality of the proposed framework is established
theoretically as well as demonstrated empirically in simulation and hardware.
| 1 | 0 | 0 | 0 | 0 | 0 |
Structural Connectome Validation Using Pairwise Classification | In this work, we study the extent to which structural connectomes and
topological derivative measures are unique to individual changes within human
brains. To do so, we classify structural connectome pairs from two large
longitudinal datasets as either belonging to the same individual or not. Our
data is comprised of 227 individuals from the Alzheimer's Disease Neuroimaging
Initiative (ADNI) and 226 from the Parkinson's Progression Markers Initiative
(PPMI). We achieve 0.99 area under the ROC curve score for features which
represent either weights or network structure of the connectomes (node degrees,
PageRank and local efficiency). Our approach may be useful for eliminating
noisy features as a preprocessing step in brain aging studies and early
diagnosis classification problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Robust Kalman Filter | A Robust Markov Decision Process (RMDP) is a sequential decision making model
that accounts for uncertainty in the parameters of dynamic systems. This
uncertainty introduces difficulties in learning an optimal policy, especially
for environments with large state spaces. We propose two algorithms, RTD-DQN
and Deep-RoK, for solving large-scale RMDPs using nonlinear approximation
schemes such as deep neural networks. The RTD-DQN algorithm incorporates the
robust Bellman temporal difference error into a robust loss function, yielding
robust policies for the agent. The Deep-RoK algorithm is a robust Bayesian
method, based on the Extended Kalman Filter (EKF), that accounts for both the
uncertainty in the weights of the approximated value function and the
uncertainty in the transition probabilities, improving the robustness of the
agent. We provide theoretical results for our approach and test the proposed
algorithms on a continuous state domain.
| 1 | 0 | 0 | 1 | 0 | 0 |
Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach | The Fisher information matrix (FIM) is a fundamental quantity to represent
the characteristics of a stochastic model, including deep neural networks
(DNNs). The present study reveals novel statistics of FIM that are universal
among a wide class of DNNs. To this end, we use random weights and large width
limits, which enable us to utilize mean field theories. We investigate the
asymptotic statistics of the FIM's eigenvalues and reveal that most of them are
close to zero while the maximum takes a huge value. This implies that the
eigenvalue distribution has a long tail. Because the landscape of the parameter
space is defined by the FIM, it is locally flat in most dimensions, but
strongly distorted in others. We also demonstrate the potential usage of the
derived statistics through two exercises. First, small eigenvalues that induce
flatness can be connected to a norm-based capacity measure of generalization
ability. Second, the maximum eigenvalue that induces the distortion enables us
to quantitatively estimate an appropriately sized learning rate for gradient
methods to converge.
| 0 | 0 | 0 | 1 | 0 | 0 |
An extension problem and trace Hardy inequality for the sublaplacian on $H$-type groups | In this paper we study the extension problem for the sublaplacian on an
$H$-type group and use the solutions to prove trace Hardy and Hardy
inequalities for fractional powers of the sublaplacian.
| 0 | 0 | 1 | 0 | 0 | 0 |
Centered Isotonic Regression: Point and Interval Estimation for Dose-Response Studies | Univariate isotonic regression (IR) has been used for nonparametric
estimation in dose-response and dose-finding studies. One undesirable property
of IR is the prevalence of piecewise-constant stretches in its estimates,
whereas the dose-response function is usually assumed to be strictly
increasing. We propose a simple modification to IR, called centered isotonic
regression (CIR). CIR's estimates are strictly increasing in the interior of
the dose range. In the absence of monotonicity violations, CIR and IR both
return the original observations. Numerical examination indicates that for
sample sizes typical of dose-response studies and with realistic dose-response
curves, CIR provides a substantial reduction in estimation error compared with
IR when monotonicity violations occur. We also develop analytical interval
estimates for IR and CIR, with good coverage behavior. An R package implements
these point and interval estimates.
| 0 | 0 | 0 | 1 | 0 | 0 |
Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection | Measuring domain relevance of data and identifying or selecting well-fit
domain data for machine translation (MT) is a well-studied topic, but denoising
is not yet. Denoising concerns a different type of data quality and
tries to reduce the negative impact of data noise on MT training, in
particular, neural MT (NMT) training. This paper generalizes methods for
measuring and selecting data for domain MT and applies them to denoising NMT
training. The proposed approach uses trusted data and a denoising curriculum
realized by online data selection. Intrinsic and extrinsic evaluations of the
approach show its significant effectiveness for NMT to train on data with
severe noise.
| 0 | 0 | 0 | 1 | 0 | 0 |
Discovering Signals from Web Sources to Predict Cyber Attacks | Cyber attacks are growing in frequency and severity. Over the past year alone
we have witnessed massive data breaches that stole personal information of
millions of people and wide-scale ransomware attacks that paralyzed critical
infrastructure of several countries. Combating the rising cyber threat calls
for a multi-pronged strategy, which includes predicting when these attacks will
occur. The intuition driving our approach is this: during the planning and
preparation stages, hackers leave digital traces of their activities on both
the surface web and dark web in the form of discussions on platforms like
hacker forums, social media, blogs and the like. These data provide predictive
signals that allow anticipating cyber attacks. In this paper, we describe
machine learning techniques based on deep neural networks and autoregressive
time series models that leverage external signals from publicly available Web
sources to forecast cyber attacks. Performance of our framework across ground
truth data over real-world forecasting tasks shows that our methods yield a
significant lift in F1 for the top signals on predicted cyber
attacks. Our results suggest that, when deployed, our system will be able to
provide an effective line of defense against various types of targeted cyber
attacks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Higher-degree Smoothness of Perturbations I | In this paper and its sequels, we give a unified treatment of the
higher-degree smoothness of admissible perturbations and related results used
in the global perturbation method for GW and Floer theories.
| 0 | 0 | 1 | 0 | 0 | 0 |
Space Telescope and Optical Reverberation Mapping Project. V. Optical Spectroscopic Campaign and Emission-Line Analysis for NGC 5548 | We present the results of an optical spectroscopic monitoring program
targeting NGC 5548 as part of a larger multi-wavelength reverberation mapping
campaign. The campaign spanned six months and achieved an almost daily cadence
with observations from five ground-based telescopes. The H$\beta$ and He II
$\lambda$4686 broad emission-line light curves lag that of the 5100 $\AA$
optical continuum by $4.17^{+0.36}_{-0.36}$ days and $0.79^{+0.35}_{-0.34}$
days, respectively. The H$\beta$ lag relative to the 1158 $\AA$ ultraviolet
continuum light curve measured by the Hubble Space Telescope is $\sim$50%
longer than that measured against the optical continuum, and the lag
difference is consistent with the observed lag between the optical and
ultraviolet continua. This suggests that the characteristic radius of the
broad-line region is $\sim$50% larger than the value inferred from optical data
alone. We also measured velocity-resolved emission-line lags for H$\beta$ and
found a complex velocity-lag structure with shorter lags in the line wings,
indicative of a broad-line region dominated by Keplerian motion. The responses
of both the H$\beta$ and He II $\lambda$4686 emission lines to the driving
continuum changed significantly halfway through the campaign, a phenomenon also
observed for C IV, Ly $\alpha$, He II(+O III]), and Si IV(+O IV]) during the
same monitoring period. Finally, given the optical luminosity of NGC 5548
during our campaign, the measured H$\beta$ lag is a factor of five shorter than
the expected value implied by the $R_\mathrm{BLR} - L_\mathrm{AGN}$ relation
based on the past behavior of NGC 5548.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Inexact Regularized Newton Framework with a Worst-Case Iteration Complexity of $\mathcal{O}(ε^{-3/2})$ for Nonconvex Optimization | An algorithm for solving smooth nonconvex optimization problems is proposed
that, in the worst-case, takes $\mathcal{O}(\epsilon^{-3/2})$ iterations to
drive the norm of the gradient of the objective function below a prescribed
positive real number $\epsilon$ and can take $\mathcal{O}(\epsilon^{-3})$
iterations to drive the leftmost eigenvalue of the Hessian of the objective
above $-\epsilon$. The proposed algorithm is a general framework that covers a
wide range of techniques including quadratically and cubically regularized
Newton methods, such as the Adaptive Regularisation using Cubics (ARC) method
and the recently proposed Trust-Region Algorithm with Contractions and
Expansions (TRACE). The generality of our method is achieved through the
introduction of generic conditions that each trial step is required to satisfy,
which in particular allow for inexact regularized Newton steps to be used.
These conditions center around a new subproblem that can be approximately
solved to obtain trial steps that satisfy the conditions. A new instance of the
framework, distinct from ARC and TRACE, is described that may be viewed as a
hybrid between quadratically and cubically regularized Newton methods.
Numerical results demonstrate that our hybrid algorithm outperforms a cubically
regularized Newton method.
| 0 | 0 | 1 | 0 | 0 | 0 |
An inverse problem for Maxwell's equations with Lipschitz parameters | We consider an inverse boundary value problem for Maxwell's equations, which
aims to recover the electromagnetic material properties of a body from
measurements on the boundary. We show that a Lipschitz continuous conductivity,
electric permittivity, and magnetic permeability are uniquely determined by
knowledge of all tangential electric and magnetic fields on the boundary of the
body at a fixed frequency.
| 0 | 0 | 1 | 0 | 0 | 0 |
Context Aware Robot Navigation using Interactively Built Semantic Maps | We discuss the process of building semantic maps, how to interactively label
entities in them, and how to use them to enable context-aware navigation
behaviors in human environments. We utilize planar surfaces, such as walls and
tables, and static objects, such as door signs, as features for our semantic
mapping approach. Users can interactively annotate these features by having the
robot follow them, entering the label through a mobile app, and performing a
pointing gesture toward the landmark of interest. Our gesture based approach
can reliably estimate which object is being pointed at and detect ambiguous
gestures with probabilistic modeling. Our person-following method attempts to
maximize future utility by searching over future actions, assuming a
constant-velocity model for the human. We describe a method to extract metric
goals from
a semantic map landmark and to plan a human aware path that takes into account
the personal spaces of people. Finally, we demonstrate context-awareness for
person following in two scenarios: interactive labeling and door passing. We
believe that future navigation approaches and service robotics applications can
be made more effective by further exploiting the structure of human
environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
Disentangling by Partitioning: A Representation Learning Framework for Multimodal Sensory Data | Multimodal sensory data resembles the form of information perceived by humans
for learning, and is easy to obtain in large quantities. Compared to unimodal
data, synchronization of concepts between modalities in such data provides
supervision for disentangling the underlying explanatory factors of each
modality. Previous work leveraging multimodal data has mainly focused on
retaining only the modality-invariant factors while discarding the rest. In
this paper, we present a partitioned variational autoencoder (PVAE) and several
training objectives to learn disentangled representations, which encode not
only the shared factors, but also modality-dependent ones, into separate latent
variables. Specifically, PVAE integrates a variational inference framework and
a multimodal generative model that partitions the explanatory factors and
conditions only on the relevant subset of them for generation. We evaluate our
model on two parallel speech/image datasets, and demonstrate its ability to
learn disentangled representations by qualitatively exploring within-modality
and cross-modality conditional generation with semantics and styles specified
by examples. For quantitative analysis, we evaluate the classification accuracy
of automatically discovered semantic units. Our PVAE can achieve over 99%
accuracy on both modalities.
| 0 | 0 | 0 | 1 | 0 | 0 |
Blind Gain and Phase Calibration via Sparse Spectral Methods | Blind gain and phase calibration (BGPC) is a bilinear inverse problem
involving the determination of unknown gains and phases of the sensing system,
and the unknown signal, jointly. BGPC arises in numerous applications, e.g.,
blind albedo estimation in inverse rendering, synthetic aperture radar
autofocus, and sensor array auto-calibration. In some cases, sparse structure
in the unknown signal alleviates the ill-posedness of BGPC. Recently there has
been renewed interest in solutions to BGPC with careful analysis of error
bounds. In this paper, we formulate BGPC as an eigenvalue/eigenvector problem,
and propose to solve it via power iteration, or in the sparsity or joint
sparsity case, via truncated power iteration. Under certain assumptions, the
unknown gains, phases, and the unknown signal can be recovered simultaneously.
Numerical experiments show that power iteration algorithms work not only in the
regime predicted by our main results, but also in regimes where theoretical
analysis is limited. We also show that our power iteration algorithms for BGPC
compare favorably with competing algorithms in adversarial conditions, e.g.,
with noisy measurement or with a bad initial estimate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Reconciling cooperation, biodiversity and stability in complex ecological communities | Empirical observations show that ecological communities can have a huge
number of coexisting species, even with a limited number of resources.
These ecosystems are characterized by multiple types of interactions, in
particular displaying cooperative behaviors. However, standard modeling of
population dynamics based on Lotka-Volterra type of equations predicts that
ecosystem stability should decrease as the number of species in the community
increases and that cooperative systems are less stable than communities with
only competitive and/or exploitative interactions. Here we propose a stochastic
model of population dynamics, which includes exploitative interactions as well
as cooperative interactions induced by cross-feeding. The model is exactly
solved and we obtain results for relevant macro-ecological patterns, such as
species abundance distributions and correlation functions. In the large system
size limit, any number of species can coexist for a very general class of
interaction networks and stability increases as the number of species grows.
For pure mutualistic/commensalistic interactions we determine the topological
properties of the network that guarantee species coexistence. We also show that
the stationary state is globally stable and that inferring species interactions
through species abundance correlation analysis may be misleading. Our
theoretical approach thus shows that appropriate models of cooperation
naturally lead to a resolution of the long-standing complexity-stability
paradox and explain how highly biodiverse communities can coexist.
| 0 | 0 | 0 | 0 | 1 | 0 |
The Young L Dwarf 2MASS J11193254-1137466 is a Planetary-Mass Binary | We have discovered that the extremely red, low-gravity L7 dwarf 2MASS
J11193254-1137466 is a 0.14" (3.6 AU) binary using Keck laser guide star
adaptive optics imaging. 2MASS J11193254-1137466 has previously been identified
as a likely member of the TW Hydrae Association (TWA). Using our updated
photometric distance and proper motion, a kinematic analysis based on the
BANYAN II model gives an 82% probability of TWA membership. At TWA's 10$\pm$3
Myr age and using hot-start evolutionary models, 2MASS J11193254-1137466AB is a
pair of $3.7^{+1.2}_{-0.9}$ $M_{\rm Jup}$ brown dwarfs, making it the
lowest-mass binary discovered to date. We estimate an orbital period of
$90^{+80}_{-50}$ years. One component is marginally brighter in $K$ band but
fainter in $J$ band, making this a probable flux-reversal binary, the first
discovered with such a young age. We also imaged the spectrally similar TWA L7
dwarf WISEA J114724.10-204021.3 with Keck and found no sign of binarity. Our
evolutionary model-derived $T_{\rm eff}$ estimate for WISEA J114724.10-204021.3
is $\approx$230 K higher than for 2MASS J11193254-1137466AB, at odds with their
spectral similarity. This discrepancy suggests that WISEA J114724.10-204021.3
may actually be a tight binary with masses and temperatures very similar to
2MASS J11193254-1137466AB, or it further supports the idea that near-infrared
spectra of young ultracool dwarfs are shaped by factors other than temperature
and gravity. 2MASS J11193254-1137466AB will be an essential benchmark for
testing evolutionary and atmospheric models in the young planetary-mass regime.
| 0 | 1 | 0 | 0 | 0 | 0 |
Diversity of Abundance Patterns of Light Neutron-capture Elements in Very-metal-poor Stars | We determine the abundances of neutron-capture elements from Sr to Eu for
five very-metal-poor stars (-3<[Fe/H]<-2) in the Milky Way halo to reveal the
origin of light neutron-capture elements. Previous spectroscopic studies have
shown evidence of at least two components in the r-process; one referred to as
the "main r-process" and the other as the "weak r-process," which is mainly
responsible for producing heavy and light neutron-capture elements,
respectively. Observational studies of metal-poor stars suggest that there is a
universal pattern in the main r-process, similar to the abundance pattern of
the r-process component of solar-system material. Still, it is uncertain
whether the abundance pattern of the weak r-process shows universality or
diversity, due to the sparseness of measured light neutron-capture elements. We
have detected the key elements, Mo, Ru, and Pd, in five target stars to give an
answer to this question. The abundance patterns of light neutron-capture
elements from Sr to Pd suggest a diversity in the weak r-process. In
particular, scatter in the abundance ratio between Ru and Pd is significant
when the abundance patterns are normalized at Zr. Our results are compared with
the elemental abundances predicted by nucleosynthesis models of supernovae with
parameters such as electron fraction or proto-neutron-star mass, to investigate
sources of such diversity in the abundance patterns of light neutron-capture
elements. This paper shows that the variation in the abundances of observed
stars can be explained with a small range of parameters, which can serve as
constraints on future modeling of supernova models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hypergames and Cyber-Physical Security for Control Systems | The identification of the Stuxnet worm in 2010 provided a highly publicized
example of a cyber attack used to damage an industrial control system
physically. This raised public awareness about the possibility of similar
attacks against other industrial targets -- including critical infrastructure.
In this paper, we use hypergames to analyze how adversarial perturbations can
be used to manipulate a system using optimal control. Hypergames form an
extension of game theory that enables us to model strategic interactions where
the players may have significantly different perceptions of the game(s) they
are playing. Past work with hypergames has been limited to relatively simple
interactions consisting of a small set of discrete choices for each player, but
here, we apply hypergames to larger systems with continuous variables. We find
that manipulating constraints can be a more effective attacker strategy than
directly manipulating objective function parameters. Moreover, the attacker
need not change the underlying system to carry out a successful attack -- it
may be sufficient to deceive the defender controlling the system. It is
possible to scale our approach up to even larger systems, but the ability to do
so will depend on the characteristics of the system in question, and we
identify several characteristics that will make those systems amenable to
hypergame analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
Measuring the academic reputation through citation networks via PageRank | The objective assessment of the prestige of an academic institution is a
difficult and hotly debated task. In the last few years, different types of
University Rankings have been proposed to quantify the excellence of different
research institutions in the world. Albeit met with criticism in some cases,
the relevance of university rankings is being increasingly acknowledged:
indeed, rankings are having a major impact on the design of research policies,
both at the institutional and governmental level. Yet, the debate on what
rankings are {\em exactly} measuring is enduring. Here, we address the issue by
measuring a quantitative and reliable proxy of the academic reputation of a given
institution and by evaluating its correlation with different university
rankings. Specifically, we study citation patterns among universities in five
different Web of Science Subject Categories and use the PageRank algorithm on the
five resulting citation networks. The rationale behind our work is that
scientific citations are driven by the reputation of the reference so that the
PageRank algorithm is expected to yield a rank which reflects the reputation of
an academic institution in a specific field. Our results allow us to quantify
the prestige of a set of institutions in a certain research field based only on
hard bibliometric data. Given the volume of the data analysed, our findings are
statistically robust and less prone to bias, at odds with the ad-hoc surveys often
employed by ranking bodies in order to attain similar results. Because our
findings correlate extremely well with the ARWU Subject rankings,
the approach we propose in our paper may open the door to new, Academic Ranking
methodologies that go beyond current methods by reconciling the qualitative
evaluation of Academic Prestige with its quantitative measurements via
publication impact.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Model Order Reduction Algorithm for Estimating the Absorption Spectrum | The ab initio description of the spectral interior of the absorption spectrum
poses both a theoretical and computational challenge for modern electronic
structure theory. Due to the often spectrally dense character of this domain in
the quantum propagator's eigenspectrum for medium-to-large sized systems,
traditional approaches based on the partial diagonalization of the propagator
often encounter oscillatory and stagnating convergence. Electronic structure
methods which solve the molecular response problem through the solution of
spectrally shifted linear systems, such as the complex polarization propagator,
offer an alternative approach which is agnostic to the underlying spectral
density or domain location. This generality comes at a seemingly high
computational cost associated with solving a large linear system for each
spectral shift in some discretization of the spectral domain of interest. We
present a novel, adaptive solution based on model order reduction techniques
via interpolation. Model order reduction reduces the computational complexity
of mathematical models and is ubiquitous in the simulation of dynamical
systems. The efficiency and effectiveness of the proposed algorithm in the ab
initio prediction of X-Ray absorption spectra is demonstrated using a test set
of challenging water clusters which are spectrally dense in the neighborhood of
the oxygen K-edge. Based on a single, user-defined tolerance we automatically
determine the order of the reduced models and approximate the absorption
spectrum up to the given tolerance. We also illustrate that the automatically
determined model order increases logarithmically with the problem dimension,
compared to a linear increase of the number of eigenvalues within the energy
window. Furthermore, we observed that the computational cost of the proposed
algorithm only scales quadratically with respect to the problem dimension.
| 1 | 1 | 0 | 0 | 0 | 0 |
Uniform rank gradient, cost and local-global convergence | We analyze the rank gradient of finitely generated groups with respect to
sequences of subgroups of finite index that do not necessarily form a chain, by
connecting it to the cost of p.m.p. actions. We generalize several results that
were only known for chains before. The connection is made by the notion of
local-global convergence.
In particular, we show that for a finitely generated group $\Gamma$ with
fixed price $c$, every Farber sequence has rank gradient $c-1$. By adapting
Lackenby's trichotomy theorem to this setting, we also show that in a finitely
presented amenable group, every sequence of subgroups with index tending to
infinity has vanishing rank gradient.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimal design of a model energy conversion device | Fuel cells, batteries, thermochemical and other energy conversion devices
involve the transport of a number of (electro-)chemical species through
distinct materials so that they can meet and react at specified multi-material
interfaces. Therefore, morphology or arrangement of these different materials
can be critical in the performance of an energy conversion device. In this
paper, we study a model problem motivated by a solar-driven thermochemical
conversion device that splits water into hydrogen and oxygen. We formulate the
problem as a system of coupled multi-material reaction-diffusion equations
where each species diffuses selectively through a given material and where the
reaction occurs at multi-material interfaces. We express the problem of optimal
design of the material arrangement as a saddle point problem and obtain an
effective functional which shows that regions with very fine phase mixtures of
the material arise naturally. To explore this further, we introduce a
phase-field formulation of the optimal design problem, and numerically study
selected examples.
| 0 | 1 | 1 | 0 | 0 | 0 |
Mining Communication Data in a Music Community: A Preliminary Analysis | Comments play an important role within online creative communities because
they make it possible to foster the production and improvement of authors'
artifacts. We investigate how comment-based communication helps shape members'
behavior within online creative communities. In this paper, we report the
results of a preliminary study aimed at mining the communication network of a
music community for collaborative songwriting, where users collaborate online
by first uploading new songs and then by adding new tracks and providing
feedback in the form of comments.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multidimensional Vlasov--Poisson Simulations with High-order Monotonicity- and Positivity-preserving Schemes | We develop new numerical schemes for Vlasov--Poisson equations with
high-order accuracy. Our methods are based on a spatially
monotonicity-preserving (MP) scheme and are modified suitably so that
positivity of the distribution function is also preserved. We adopt an
efficient semi-Lagrangian time integration scheme that is more accurate and
computationally less expensive than the three-stage TVD Runge-Kutta
integration. We apply our spatially fifth- and seventh-order schemes to a suite
of simulations of collisionless self-gravitating systems and electrostatic
plasma simulations, including linear and nonlinear Landau damping in one
dimension and Vlasov--Poisson simulations in a six-dimensional phase space. The
high-order schemes achieve a significantly improved accuracy in comparison with
the third-order positive-flux-conserved scheme adopted in our previous study.
With the semi-Lagrangian time integration, the computational cost of our
high-order schemes does not significantly increase, but remains roughly the
same as that of the third-order scheme. Vlasov--Poisson simulations on $128^3
\times 128^3$ mesh grids have been successfully performed on a massively
parallel computer.
| 0 | 1 | 0 | 0 | 0 | 0 |
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy | Markov Chain Monte Carlo based Bayesian data analysis has now become the
method of choice for analyzing and interpreting data in almost all disciplines
of science. In astronomy, over the last decade, we have also seen a steady
increase in the number of papers that employ Monte Carlo based Bayesian
analysis. New, efficient Monte Carlo based methods are continuously being
developed and explored. In this review, we first explain the basics of Bayesian
theory and discuss how to set up data analysis problems within this framework.
Next, we provide an overview of various Monte Carlo based methods for
performing Bayesian data analysis. Finally, we discuss advanced ideas that
enable us to tackle complex problems and thus hold great promise for the
future. We also distribute downloadable computer software (available at
this https URL ) that implements some of the algorithms and
examples discussed here.
| 0 | 1 | 0 | 1 | 0 | 0 |
Stable splitting of mapping spaces via nonabelian Poincaré duality | We use nonabelian Poincaré duality to recover the stable splitting of
compactly supported mapping spaces, $\rm{Map_c}$$(M,\Sigma^nX)$, where $M$ is a
parallelizable $n$-manifold. Our method for deriving this splitting is new, and
naturally extends to give a more general stable splitting of the space of
compactly supported sections of a certain bundle on $M$ with fibers
$\Sigma^nX$, twisted by the tangent bundle of $M$. This generalization
incorporates possible $O(n)$-actions on $X$ as well as accommodating
non-parallelizable manifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
Static Gesture Recognition using Leap Motion | In this report, an automated bartender system was developed for making orders
in a bar using hand gestures. The gesture recognition of the system was
developed using Machine Learning techniques, where the model was trained to
classify gestures using collected data. The final model used in the system
reached an average accuracy of 95%. The system raised ethical concerns both in
terms of user interaction and of having such a system in a real-world scenario,
but it could initially work as a complement to a real bartender.
| 1 | 0 | 0 | 1 | 0 | 0 |
B-spline-like bases for $C^2$ cubics on the Powell-Sabin 12-split | For spaces of constant, linear, and quadratic splines of maximal smoothness
on the Powell-Sabin 12-split of a triangle, the so-called S-bases were recently
introduced. These are simplex spline bases with B-spline-like properties on the
12-split of a single triangle, which are tied together across triangles in a
Bézier-like manner.
In this paper we give a formal definition of an S-basis in terms of certain
basic properties. We proceed to investigate the existence of S-bases for the
aforementioned spaces and additionally the cubic case, resulting in an
exhaustive list. From their nature as simplex splines, we derive simple
differentiation and recurrence formulas to other S-bases. We establish a
Marsden identity that gives rise to various quasi-interpolants and domain
points forming an intuitive control net, in terms of which conditions for
$C^0$-, $C^1$-, and $C^2$-smoothness are derived.
| 1 | 0 | 0 | 0 | 0 | 0 |
Synthesizing Correlations with Computational Likelihood Approach: Vitamin C Data | It is known that the primary source of dietary vitamin C is fruit and
vegetables and the plasma level of vitamin C has been considered a good
surrogate biomarker of vitamin C intake by fruit and vegetable consumption. To
combine the information about association between vitamin C intake and the
plasma level of vitamin C, numerical approximation methods for the likelihood
function of the correlation coefficient are studied. The least squares approach is
used to estimate a log-likelihood function by a function from a space of
B-splines having desirable mathematical properties. The likelihood interval
from the Highest Likelihood Regions (HLR) is used for further inference. This
approach can be easily extended to the realm of meta-analysis involving sample
correlations from different studies by use of an approximated combined
likelihood function. The sample correlations between vitamin C intake and serum
level of vitamin C from many studies are used to illustrate application of this
approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Axion detection via Topological Casimir Effect | We propose a new table-top experimental configuration for the direct
detection of dark matter axions with mass in the $(10^{-6} \rm eV - 10^{-2} \rm
eV)$ range using non-perturbative effects in a system with non-trivial spatial
topology. Different from most experimental setups found in literature on direct
dark matter axion detection, which rely on $\dot{\theta}$ or
$\vec{\nabla}\theta$, we found that our system is in principle sensitive to a
static $\theta\geq 10^{-14}$ and can also be used to set limit on the
fundamental constant $\theta_{\rm QED}$ which becomes the fundamental
observable parameter of the Maxwell system if some conditions are met.
The connection with the Witten effect, in which the induced electric charge $e'$
is proportional to $\theta$ and the magnetic monopole becomes a dyon with
non-vanishing $e'=-e \frac{\theta}{2\pi}$, is also discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Novel Feature-Based Clustering of Micro-Panel Data (CluMP) | Micro-panel data are collected and analysed in many research and industry
areas. Cluster analysis of micro-panel data is an unsupervised learning
exploratory method identifying subgroup clusters in a data set which include
homogeneous objects in terms of the development dynamics of monitored
variables. The supply of clustering methods tailored to micro-panel data is
limited. The present paper focuses on a feature-based clustering method,
introducing a novel two-step characteristic-based approach designed for this
type of data. The proposed CluMP method aims to identify clusters that are at
least as internally homogeneous and externally heterogeneous as those obtained
by alternative methods already implemented in the statistical system R. We
compare the clustering performance of the devised algorithm with two extant
methods using simulated micro-panel data sets. Our approach has yielded similar
or better outcomes than the other methods, the advantage of the proposed
algorithm being time efficiency which makes it applicable for large data sets.
| 0 | 0 | 0 | 1 | 0 | 0 |
Ab initio effective Hamiltonians for cuprate superconductors | Ab initio low-energy effective Hamiltonians of two typical high-temperature
copper-oxide superconductors, whose mother compounds are La$_2$CuO$_4$ and
HgBa$_2$CuO$_4$, are derived by utilizing the multi-scale ab initio scheme for
correlated electrons (MACE). The effective Hamiltonians obtained in the present
study serve as platforms for future studies that accurately solve the low-energy
effective Hamiltonians beyond density functional theory. They allow further
study of the superconducting mechanism from first principles on a quantitative
basis without adjustable parameters, not only for the available
cuprates but also for the future design of higher Tc in general. More concretely,
we derive effective Hamiltonians for three variations: 1) a one-band Hamiltonian
for the antibonding orbital generated from the strongly hybridized Cu
$3d_{x^2-y^2}$ and O $2p_\sigma$ orbitals; 2) a two-band Hamiltonian constructed
from the antibonding orbital and the Cu $3d_{3z^2-r^2}$ orbital hybridized mainly
with the apex oxygen $p_z$ orbital; 3) a three-band Hamiltonian consisting mainly
of Cu $3d_{x^2-y^2}$ orbitals and two O $2p_\sigma$ orbitals. Differences
between the Hamiltonians for La$_2$CuO$_4$ and HgBa$_2$CuO$_4$, which have
relatively low and high critical temperatures, respectively, at optimally doped
compounds, are elucidated. The main differences are summarized as follows: i) the
oxygen $2p_\sigma$ orbitals lie farther below the Cu $d_{x^2-y^2}$ orbital for
the La compound (~3.7 eV) than for the Hg compound (~2.4 eV) in the three-band
Hamiltonian. This causes a substantial difference in the character of the
$d_{x^2-y^2}$-$2p_\sigma$ antibonding band at the Fermi level and makes the
effective onsite Coulomb interaction U larger for the La compound than for the
Hg compound in the two- and one-band Hamiltonians. ii) The ratio of the
second-neighbor to the nearest-neighbor transfer, t'/t, is also substantially
different: ~0.26 for the Hg compound and ~0.15 for the La compound in the one-band Hamiltonian.
| 0 | 1 | 0 | 0 | 0 | 0 |
Longitudinal data analysis using matrix completion | In clinical practice and biomedical research, measurements are often
collected sparsely and irregularly in time while the data acquisition is
expensive and inconvenient. Examples include measurements of spine bone mineral
density, cancer growth through mammography or biopsy, a progression of defect
of vision, or assessment of gait in patients with neurological disorders. Since
the data collection is often costly and inconvenient, estimation of progression
from sparse observations is of great interest for practitioners.
From the statistical standpoint, such data is often analyzed in the context
of a mixed-effect model where time is treated as both random and fixed effect.
Alternatively, researchers analyze Gaussian processes or functional data where
observations are assumed to be drawn from a certain distribution of processes.
These models are flexible but rely on probabilistic assumptions and require
very careful implementation.
In this study, we propose an alternative elementary framework for analyzing
longitudinal data, relying on matrix completion. Our method yields point
estimates of progression curves by iterative application of the SVD. Our
framework covers multivariate longitudinal data, regression and can be easily
extended to other settings.
We apply our methods to understand trends of progression of motor impairment
in children with Cerebral Palsy. Our model approximates individual progression
curves and explains 30% of the variability. Low-rank representation of
progression trends enables discovering that subtypes of Cerebral Palsy exhibit
different progression trends.
| 0 | 0 | 0 | 1 | 0 | 0 |
Emission-line Diagnostics of Nearby HII Regions Including Supernova Hosts | We present a new model of the optical nebular emission from HII regions by
combining the results of the Binary Population and Spectral Synthesis (BPASS)
code with the photoionization code CLOUDY (Ferland et al. 1998). We explore a
variety of emission-line diagnostics of these star-forming HII regions and
examine the effects of metallicity and interacting binary evolution on the
nebular emission-line production. We compare the line emission properties of HII
regions with model stellar populations, and provide new constraints on their
stellar populations and supernova progenitors. We find that models including
massive binary stars can successfully match all the observational constraints
and provide reasonable age and mass estimation of the HII regions and supernova
progenitors.
| 0 | 1 | 0 | 0 | 0 | 0 |
On monomial linearisation and supercharacters of pattern subgroups | Column closed pattern subgroups $U$ of the finite upper unitriangular groups
$U_n(q)$ are defined as sets of matrices in $U_n(q)$ having zeros in a
prescribed set of columns besides the diagonal ones. We explain Jedlitschky's
construction of monomial linearisation and apply this to $C U$ yielding a
generalisation of Yan's coadjoint cluster representations. Then we give a
complete classification of the resulting supercharacters, by describing the
resulting orbits and determining the Hom-spaces between orbit modules.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Structural Fate of Individual Multicomponent Metal-Oxide Nanoparticles in Polymer Nanoreactors | Multicomponent nanoparticles can be synthesized with either homogeneous or
phase-segregated architectures depending on the synthesis conditions and
elements incorporated. To understand the parameters that determine their
structural fate, multicomponent metal-oxide nanoparticles consisting of
combinations of Co, Ni, and Cu were synthesized via scanning probe block
copolymer lithography and characterized using correlated electron microscopy.
These studies revealed that the miscibility, ratio of the metallic components,
and the synthesis temperature determine the crystal structure and architecture
of the nanoparticles. A Co-Ni-O system forms a rock salt structure largely due
to the miscibility of CoO and NiO, while Cu-Ni-O, which has large miscibility
gaps, forms either homogeneous oxides, heterojunctions, or alloys depending on
the annealing temperature and composition. Moreover, a higher-order
structure, Co-Ni-Cu-O, was found to follow the behavior of lower-order
systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Compression of Unsupervised Neural Nets by Pruning Weak Connections | Unsupervised neural nets such as Restricted Boltzmann Machines (RBMs) and Deep
Belief Networks (DBNs) are powerful in automatic feature extraction, unsupervised
weight initialization and density estimation. In this paper, we demonstrate that
the parameters of these neural nets can be dramatically reduced without
affecting their performance. We describe a method to reduce the parameters
required by an RBM, which is the basic building block for deep architectures.
Further, we propose an unsupervised sparse deep architecture selection
algorithm to form sparse deep neural networks. Experimental results show that
there is virtually no loss in either generative or discriminative performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
A matrix generalization of a theorem of Fine | In 1947 Nathan Fine gave a beautiful product for the number of binomial
coefficients $\binom{n}{m}$, for $m$ in the range $0 \leq m \leq n$, that are
not divisible by $p$. We give a matrix product that generalizes Fine's formula,
simultaneously counting binomial coefficients with $p$-adic valuation $\alpha$
for each $\alpha \geq 0$. For each $n$ this information is naturally encoded in
a polynomial generating function, and the sequence of these polynomials is
$p$-regular in the sense of Allouche and Shallit. We also give a further
generalization to multinomial coefficients.
| 1 | 0 | 1 | 0 | 0 | 0 |
Training Multi-Task Adversarial Network For Extracting Noise-Robust Speaker Embedding | In noisy environments, achieving robust speaker
recognition performance is still a challenging task. Motivated by the promising performance
of multi-task training in a variety of image processing tasks, we explore the
potential of multi-task adversarial training for learning a noise-robust
speaker embedding. In this paper we present a novel framework which consists of
three components: an encoder that extracts a noise-robust speaker embedding; a
classifier that classifies the speakers; and a discriminator that discriminates the
noise type of the speaker embedding. In addition, we propose a training strategy
using the training accuracy as an indicator to stabilize the multi-class
adversarial optimization process. We conduct our experiments on the English and
Mandarin corpus and the experimental results demonstrate that our proposed
multi-task adversarial training method could greatly outperform the other
methods without adversarial training in noisy environments. Furthermore,
experiments indicate that our method is also able to improve the speaker
verification performance in the clean condition.
| 1 | 0 | 0 | 0 | 0 | 0 |
Playing Games with Bounded Entropy | In this paper, we consider zero-sum repeated games in which the maximizer is
restricted to strategies requiring no more than a limited amount of randomness.
In particular, we analyze the maxmin payoff of the maximizer in two models: the
first model forces the maximizer to randomize her action in each stage just by
conditioning her decision on the outcomes of a given random source sequence,
whereas, in the second model, the maximizer is a team of players who are free
to privately randomize their corresponding actions but do not have access to
any explicit source of shared randomness needed for cooperation. The works of
Gossner and Vieille, and Gossner and Tomala adopted the method of types to
establish their results; however, we utilize the idea of random hashing which
is the core of randomness extractors in the information theory literature. In
addition, we adopt the well-studied tool of simulation of a source from another
source. By utilizing these tools, we are able to simplify the prior results and
generalize them as well. We characterize the maxmin payoff of the maximizer in
the repeated games under study. Specifically, the maxmin payoff of the first
model is fully described by the function $J(h)$ which is the maximum payoff
that the maximizer can secure in a one-shot game by choosing mixed strategies
of entropy at most $h$. In the second part of the paper, we study the
computational aspects of $J(h)$. We offer three explicit lower bounds on the
entropy-payoff trade-off curve. To do this, we provide and utilize new results
for the set of distributions that guarantee a certain payoff for Alice. In
particular, we study how this set of distributions shrinks as we increase the
security level. While the use of total variation distance is common in game
theory, our derivation indicates the suitability of utilizing the
Rényi divergence of order two.
| 1 | 0 | 0 | 0 | 0 | 0 |
Exact Hausdorff and packing measures for random self-similar code-trees with necks | Random code-trees with necks were introduced recently to generalise the
notion of $V$-variable and random homogeneous sets. While it is known that the
Hausdorff and packing dimensions coincide irrespective of overlaps, their exact
Hausdorff and packing measure has so far been largely ignored. In this article
we consider the general question of an appropriate gauge function for positive
and finite Hausdorff and packing measure. We first survey the current state of
knowledge and establish some bounds on these gauge functions. We then show that
self-similar code-trees do not admit a gauge function that simultaneously gives
positive and finite Hausdorff measure almost surely. This surprising result is
in stark contrast to the random recursive model and sheds some light on the
question of whether $V$-variable sets interpolate between random homogeneous
and random recursive sets. We conclude by discussing implications of our
results.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Modules over a G-set | Let R be a commutative ring with unity, M a module over R and let S be a
G-set for a finite group G. We define the set MS to be the set of elements
expressed as formal finite sums of a form similar to the elements of the group
ring RG. The set MS is a module over the group ring RG under addition and
the scalar multiplication similar to the RG-module MG. With this notion, we not
only generalize but also unify the theories of both of the group algebra and
the group module, and we also establish some significant properties of MS. In
particular, we describe a method for decomposing a given RG-module MS as a
direct sum of RG-submodules. Furthermore, we settle the semisimplicity problem
for MS in terms of the properties of M, S and G.
| 0 | 0 | 1 | 0 | 0 | 0 |
Polynomial-time algorithms for the Longest Induced Path and Induced Disjoint Paths problems on graphs of bounded mim-width | We give the first polynomial-time algorithms on graphs of bounded maximum
induced matching width (mim-width) for problems that are not locally checkable.
In particular, we give $n^{\mathcal{O}(w)}$-time algorithms on graphs of
mim-width at most $w$, when given a decomposition, for the following problems:
Longest Induced Path, Induced Disjoint Paths and $H$-Induced Topological Minor
for fixed $H$. Our results imply that the following graph classes have
polynomial-time algorithms for these three problems: Interval and Bi-Interval
graphs, Circular Arc, Permutation and Circular Permutation graphs, Convex
graphs, $k$-Trapezoid, Circular $k$-Trapezoid, $k$-Polygon, Dilworth-$k$ and
Co-$k$-Degenerate graphs for fixed $k$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bug or Not? Bug Report Classification Using N-Gram IDF | Previous studies have found that a significant number of bug reports are
misclassified between bugs and non-bugs, and that manually classifying bug
reports is a time-consuming task. To address this problem, we propose a bug
report classification model with N-gram IDF, a theoretical extension of
Inverse Document Frequency (IDF) for handling words and phrases of any length.
N-gram IDF enables us to extract key terms of any length from texts; these key
terms can be used as features to classify bug reports. We build
classification models with logistic regression and random forest using features
from N-gram IDF and topic modeling, which is widely used in various software
engineering tasks. With a publicly available dataset, our results show that our
N-gram IDF-based models have superior performance to the topic-based models
on all of the evaluated cases. Our models show promising results and have a
potential to be extended to other software engineering tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Prior Information Guided Regularized Deep Learning for Cell Nucleus Detection | Cell nuclei detection is a challenging research topic because of limitations
in cellular image quality and diversity of nuclear morphology, i.e. varying
nuclei shapes, sizes, and overlaps between multiple cell nuclei. This has been
a topic of enduring interest with promising recent success shown by deep
learning methods. These methods train Convolutional Neural Networks (CNNs) with
a training set of input images and known, labeled nuclei locations. Many such
methods are supplemented by spatial or morphological processing. Using a set of
canonical cell nuclei shapes, prepared with the help of a domain expert, we
develop a new approach that we call Shape Priors with Convolutional Neural
Networks (SP-CNN). We further extend the network to introduce a shape prior
(SP) layer and then allow it to become trainable (i.e. optimizable). We call
this network tunable SP-CNN (TSP-CNN). In summary, we present new network
structures that can incorporate 'expected behavior' of nucleus shapes via two
components: learnable layers that perform the nucleus detection and a fixed
processing part that guides the learning with prior information. Analytically,
we formulate two new regularization terms that are targeted at: 1) learning the
shapes, 2) reducing false positives while simultaneously encouraging detection
inside the cell nucleus boundary. Experimental results on two challenging
datasets reveal that the proposed SP-CNN and TSP-CNN can outperform
state-of-the-art alternatives.
| 1 | 0 | 0 | 1 | 0 | 0 |
Anisotropic hydrodynamic turbulence in accretion disks | Recently, the vertical shear instability (VSI) has become an attractive
purely hydrodynamic candidate for the anomalous angular momentum transport
required for weakly ionized accretion disks. In direct three-dimensional
numerical simulations of VSI turbulence in disks, a meridional circulation
pattern was observed that is opposite to the usual viscous flow behavior. Here,
we investigate whether this feature can possibly be explained by an anisotropy
of the VSI turbulence. Using three-dimensional hydrodynamical simulations, we
calculate the turbulent Reynolds stresses relevant for angular momentum
transport for a representative section of a disk.
We find that the vertical stress is significantly stronger than the radial
stress. Using our results in viscous disk simulations with different viscosity
coefficients for the radial and vertical direction, we find good agreement with
the VSI turbulence for the stresses and meridional flow; this provides
additional evidence for the anisotropy. The results are important with respect
to the transport of small embedded particles in disks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optimal Algorithms for Distributed Optimization | In this paper, we study the optimal convergence rate for distributed convex
optimization problems in networks. We model the communication restrictions
imposed by the network as a set of affine constraints and provide optimal
complexity bounds for four different setups, namely: the function $F(x)
\triangleq \sum_{i=1}^{m}f_i(x)$ is strongly convex and smooth, either
strongly convex or smooth or just convex. Our results show that Nesterov's
accelerated gradient descent on the dual problem can be executed in a
distributed manner and obtains the same optimal rates as in the centralized
version of the problem (up to constant or logarithmic factors) with an
additional cost related to the spectral gap of the interaction matrix. Finally,
we discuss some extensions to the proposed setup, such as proximal-friendly
functions, time-varying graphs, and improvement of the condition numbers.
| 1 | 0 | 0 | 1 | 0 | 0 |
Evaluation of Classical Features and Classifiers in Brain-Computer Interface Tasks | Brain-Computer Interface (BCI) uses brain signals in order to provide a new
method for communication between human and outside world. Feature extraction,
selection and classification are among the main matters of concerns in signal
processing stage of BCI. In this article, we present our findings about the
most effective features and classifiers in some brain tasks. Six different
groups of classical features and twelve classifiers have been examined in nine
datasets of brain signal. The results indicate that energy of brain signals in
$\alpha$ and $\beta$ frequency bands, together with some statistical parameters,
are more effective compared to the other types of extracted features. In
addition, the Bayesian classifier with a Gaussian distribution assumption, as
well as the Support Vector Machine (SVM), are shown to classify different BCI
datasets more accurately than the other classifiers. We believe that the results can give an
insight about a strategy for blind classification of brain signals in
brain-computer interface.
| 1 | 0 | 0 | 1 | 0 | 0 |
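As a hedged sketch of the band-power features this abstract singles out, the following computes signal energy in the $\alpha$ (8-13 Hz) and $\beta$ (13-30 Hz) bands with a naive DFT; the function names, epoch length, and 128 Hz sampling rate are illustrative assumptions.

```python
import math

def band_energy(signal, fs, f_lo, f_hi):
    """Energy of the signal in the band [f_lo, f_hi] Hz via a naive DFT
    (adequate for short epochs; a real pipeline would use an FFT)."""
    n = len(signal)
    energy = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            energy += (re * re + im * im) / n
    return energy

fs = 128                                                        # assumed rate
sig = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]  # 10 Hz tone
alpha = band_energy(sig, fs, 8, 13)    # captures the 10 Hz component
beta = band_energy(sig, fs, 13, 30)    # essentially zero for this signal
```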
On exceptional compact homogeneous geometries of type C3 | We provide a uniform framework to study the exceptional homogeneous compact
geometries of type C3. This framework is then used to show that these are
simply connected, answering a question by Kramer and Lytchak, and to calculate
the full automorphism groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Intensity estimation of transaction arrivals on the intraday electricity market | In the following paper we present a simple intensity estimation method of
transaction arrivals on the intraday electricity market. Assuming a
distribution for the interarrival times, we apply maximum likelihood estimation.
The method's performance is briefly tested using German Intraday Continuous
data. Despite the simplicity of the method, the results are encouraging. The
supplementary materials containing the R-codes and the data are attached to
this paper.
| 0 | 0 | 0 | 1 | 0 | 1 |
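A minimal sketch of this kind of estimator, assuming exponentially distributed interarrival times (the distributional choice and names are illustrative; the paper's attached R code implements the actual method):

```python
import random

def estimate_intensity(arrival_times):
    """MLE of a constant arrival intensity under i.i.d. exponential
    interarrival times: lambda_hat = (number of gaps) / (sum of gaps)."""
    gaps = [t1 - t0 for t0, t1 in zip(arrival_times, arrival_times[1:])]
    return len(gaps) / sum(gaps)

# Simulate a Poisson arrival stream with true intensity 2.0 events per hour.
random.seed(0)
t, times = 0.0, []
for _ in range(5000):
    t += random.expovariate(2.0)
    times.append(t)

lam = estimate_intensity(times)   # close to the true value 2.0
```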
Capacitive Mechanism of Oxygen Functional Groups on Carbon Surface in Supercapacitors | Oxygen functional groups are one of the most important subjects in the study
of the electrochemical properties of carbon materials: they can change the
wettability, conductivity and pore size distributions of carbon materials, and
can undergo redox reactions. In the electrode materials of carbon-based
supercapacitors, the oxygen functional groups have widely been used to improve
the capacitive performance. In this paper, we not only analyzed the reasons for
the increase in capacity promoted by oxygen functional groups in the
charge-discharge cycling tests, but also analyzed the mechanism by which
pseudocapacitance is provided by the oxygen functional groups in acid/alkaline
aqueous electrolytes. Moreover, we also discussed the effect of
the oxygen functional groups in electrochemical impedance spectroscopy.
| 0 | 1 | 0 | 0 | 0 | 0 |
On conditional least squares estimation for affine diffusions based on continuous time observations | We study asymptotic properties of conditional least squares estimators for
the drift parameters of two-factor affine diffusions based on continuous time
observations. We distinguish three cases: subcritical, critical and
supercritical. For all the drift parameters, in the subcritical and
supercritical cases, asymptotic normality and asymptotic mixed normality are
proved, while in the critical case, non-standard asymptotic behavior is
described.
| 0 | 0 | 1 | 1 | 0 | 0 |
Discretization error estimates for penalty formulations of a linearized Canham-Helfrich type energy | This paper is concerned with minimization of a fourth-order linearized
Canham-Helfrich energy subject to Dirichlet boundary conditions on curves
inside the domain. Such problems arise in the modeling of the mechanical
interaction of biomembranes with embedded particles. There, the curve
conditions result from the imposed particle--membrane coupling. We prove
almost-$H^{\frac{5}{2}}$ regularity of the solution and then consider two
possible penalty formulations. For the combination of these penalty
formulations with a Bogner-Fox-Schmit finite element discretization we prove
discretization error estimates which are optimal in view of the solution's
reduced regularity. The error estimates are based on a general estimate for
linear penalty problems in Hilbert spaces. Finally, we illustrate the
theoretical results by numerical computations. An important feature of the
presented discretization is that it does not require resolving the particle
boundary. This is crucial in order to avoid re-meshing if the presented problem
arises as subproblem in a model where particles are allowed to move or rotate.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Humphreys conjecture on support varieties of tilting modules | Let $G$ be a simply-connected semisimple algebraic group over an
algebraically closed field of characteristic $p$, assumed to be larger than the
Coxeter number. The "support variety" of a $G$-module $M$ is a certain closed
subvariety of the nilpotent cone of $G$, defined in terms of cohomology for the
first Frobenius kernel $G_1$. In the 1990s, Humphreys proposed a conjectural
description of the support varieties of tilting modules; this conjecture has
been proved for $G = \mathrm{SL}_n$ in earlier work of the second author.
In this paper, we show that for any $G$, the support variety of a tilting
module always contains the variety predicted by Humphreys, and that they
coincide (i.e., the Humphreys conjecture is true) when $p$ is sufficiently
large. We also prove variants of these statements involving "relative support
varieties."
| 0 | 0 | 1 | 0 | 0 | 0 |
Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks | Machine learning models have been widely used in security applications such
as intrusion detection, spam filtering, and virus or malware detection.
However, it is well-known that adversaries are always trying to adapt their
attacks to evade detection. For example, an email spammer may guess what
features spam detection models use and modify or remove those features to avoid
detection. There has been some work on making machine learning models more
robust to such attacks. However, one simple but promising approach called {\em
randomization} is underexplored. This paper proposes a novel
randomization-based approach to improve robustness of machine learning models
against evasion attacks. The proposed approach incorporates randomization into
both model training time and model application time (meaning when the model is
used to detect attacks). We also apply this approach to random forest, an
existing ML method which already has some degree of randomness. Experiments on
intrusion detection and spam filtering data show that our approach further
improves the robustness of the random-forest method. We also discuss how this approach
can be applied to other ML models.
| 0 | 0 | 0 | 1 | 0 | 0 |
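A toy sketch of the two randomization points this abstract describes, at training time and at application time; the stump sub-models and data are hypothetical stand-ins for the actual random-forest construction.

```python
import random

def train_stump(X, y, feat):
    """Hypothetical sub-model: threshold one feature at the class midpoint."""
    pos = [row[feat] for row, lbl in zip(X, y) if lbl == 1]
    neg = [row[feat] for row, lbl in zip(X, y) if lbl == 0]
    thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return feat, thr

def train_randomized_ensemble(X, y, n_models, rng):
    """Training-time randomization: each stump is fit on a randomly
    drawn feature, so no single fixed feature set can be targeted."""
    n_feat = len(X[0])
    return [train_stump(X, y, rng.randrange(n_feat)) for _ in range(n_models)]

def randomized_predict(ensemble, x, rng):
    """Application-time randomization: a random sub-model answers each
    query, so an evader probing the detector sees a moving boundary."""
    feat, thr = rng.choice(ensemble)
    return 1 if x[feat] > thr else 0

rng = random.Random(0)
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]   # toy training data
y = [0, 0, 1, 1]
ensemble = train_randomized_ensemble(X, y, n_models=4, rng=rng)
pred = randomized_predict(ensemble, [0.85, 0.85], rng)
```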
Further extension of the generalized Hurwitz-Lerch Zeta function of two variables | The main aim of this paper is to give a new generalization of Hurwitz-Lerch
Zeta function of two variables. Also, we investigate several interesting
properties such as integral representations, summation formula and a connection
with generalized hypergeometric function. To strengthen the main results we
also consider many important special cases.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Physarum-inspired model for the probit-based stochastic user equilibrium problem | Stochastic user equilibrium is an important issue in traffic assignment
problems; traditional models for the stochastic user equilibrium problem are
designed as mathematical programming problems. In this article, a
Physarum-inspired model for the probit-based stochastic user equilibrium
problem is proposed. There are two main contributions of our work. On the one
hand, the original Physarum model is modified to find the shortest path in
directed traffic networks with two-way traffic characteristics. On the other
hand, the modified Physarum-inspired model can obtain the equilibrium flows
when the traveller's perceived transportation cost follows a normal
distribution. The proposed method consists of a two-step procedure. First, the
modified Physarum model is applied to get the
auxiliary flows. Second, the auxiliary flows are averaged to obtain the
equilibrium flows. Numerical examples are conducted to illustrate the
performance of the proposed method, which is compared with the Method of
Successive Averages.
| 1 | 0 | 0 | 0 | 0 | 0 |
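The averaging step of the two-step procedure, and the Method of Successive Averages baseline it is compared against, can be sketched on a hypothetical two-link network (the link cost functions, demand, and names are illustrative):

```python
def msa(aux_flow, x0, iters=2000):
    """Method of Successive Averages: blend the current flows with freshly
    computed auxiliary flows using the diminishing step size 1/k."""
    x = list(x0)
    for k in range(1, iters + 1):
        y = aux_flow(x)
        x = [xi + (yi - xi) / k for xi, yi in zip(x, y)]
    return x

# Hypothetical two-link network with demand 10 and link costs
# c1 = 1 + x1, c2 = 2 + x2; the auxiliary step assigns all demand
# to the currently cheaper link (all-or-nothing).
def aux(x):
    return [10.0, 0.0] if 1 + x[0] <= 2 + x[1] else [0.0, 10.0]

flows = msa(aux, [10.0, 0.0])   # equilibrium: x1 = 5.5, x2 = 4.5
```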
Going Higher in First-Order Quantifier Alternation Hierarchies on Words | We investigate quantifier alternation hierarchies in first-order logic on
finite words. Levels in these hierarchies are defined by counting the number of
quantifier alternations in formulas. We prove that one can decide membership of
a regular language in the levels $\mathcal{B}{\Sigma}_2$ (finite boolean
combinations of formulas having only one alternation) and ${\Sigma}_3$
(formulas having only two alternations and beginning with an existential
block). Our proofs work by considering a deeper problem, called separation,
which, once solved for lower levels, allows us to solve membership for higher
levels.
| 1 | 0 | 0 | 0 | 0 | 0 |
Invariance of Ideal Limit Points | Let $\mathcal{I}$ be an analytic P-ideal [respectively, a summable ideal] on
the positive integers and let $(x_n)$ be a sequence taking values in a metric
space $X$. First, it is shown that the set of ideal limit points of $(x_n)$ is
an $F_\sigma$-set [resp., a closed set]. Let us assume that $X$ is also
separable and the ideal $\mathcal{I}$ satisfies certain additional assumptions,
which are however met by several well-known examples, e.g., the collection of
sets with zero asymptotic density, sets with zero logarithmic density, and some
summable ideals. Then, it is shown that the set of ideal limit points of
$(x_n)$ is equal to the set of ideal limit points of almost all its
subsequences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimizing the Wisdom of the Crowd: Inference, Learning, and Teaching | The unprecedented demand for large amounts of data has catalyzed the trend of
combining human insights with machine learning techniques, which facilitate the
use of crowdsourcing to enlist label information both effectively and
efficiently. The classic work on crowdsourcing mainly focuses on the label
inference problem under the categorization setting. However, inferring the true
label requires sophisticated aggregation models that usually can only perform
well under certain assumptions. Meanwhile, no matter how complicated the
aggregation model is, the true model that generated the crowd labels remains
unknown. Therefore, the label inference problem can never infer the ground
truth perfectly. Since the crowdsourced labels are abundant and aggregation
discards such rich annotation information
(e.g., which worker provided which labels), we believe that it is critical to
take the diverse labeling abilities of the crowdsourcing workers as well as
their correlations into consideration. To address the above challenge, we
propose to tackle three research problems, namely inference, learning, and
teaching.
| 0 | 0 | 0 | 1 | 0 | 0 |
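As a hedged aside, the classic aggregation baseline this abstract argues is lossy can be written in a few lines; the labels below are made up:

```python
from collections import Counter

def majority_vote(labels_by_item):
    """Per item, keep only the most common worker label; note this discards
    who said what, the information the abstract proposes to exploit."""
    return [Counter(lbls).most_common(1)[0][0] for lbls in labels_by_item]

# Three items, three workers each.
votes = [["spam", "spam", "ham"], ["ham", "ham", "ham"], ["spam", "ham", "spam"]]
agg = majority_vote(votes)
```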
Learning With Errors and Extrapolated Dihedral Cosets | The hardness of the learning with errors (LWE) problem is one of the most
fruitful resources of modern cryptography. In particular, it is one of the most
prominent candidates for secure post-quantum cryptography. Understanding its
quantum complexity is therefore an important goal. We show that under quantum
polynomial time reductions, LWE is equivalent to a relaxed version of the
dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent
of extrapolation varies with the LWE noise rate. By considering different
extents of extrapolation, our result generalizes Regev's famous proof that if
DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a
connection between eDCP and Childs and Van Dam's algorithm for generalized
hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE
might not require the full power of solving DCP, but rather only a solution for
its relaxed version, eDCP, which could be easier.
| 1 | 0 | 0 | 0 | 0 | 0 |