Column types: ID is int64 (range 1–21k); TITLE is a string of 7–239 characters; ABSTRACT is a string of 7–2.76k characters; the six topic labels (Computer Science, Physics, Mathematics, Statistics, Quantitative Biology, Quantitative Finance) are binary int64 values (0/1).

ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance
---|---|---|---|---|---|---|---|---|
16,301 | Fault Localization for Declarative Models in Alloy | Fault localization is a popular research topic and many techniques have been
proposed to locate faults in imperative code, e.g. C and Java. In this paper,
we focus on the problem of fault localization for declarative models in Alloy
-- a first order relational logic with transitive closure. We introduce
AlloyFL, the first set of fault localization techniques for faulty Alloy models
that leverage multiple test formulas. AlloyFL is also the first set of fault
localization techniques at the AST node granularity. We implement in AlloyFL
both spectrum-based and mutation-based fault localization techniques, as well
as techniques that are based on Alloy's built-in unsat core. We introduce new
metrics to measure the accuracy of AlloyFL and systematically evaluate AlloyFL
on 38 real faulty models and 9000 mutant models. The results show that the
mutation-based fault localization techniques are significantly more accurate
than other types of techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
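As a minimal sketch of the spectrum-based side of such techniques (using hypothetical coverage data, not AlloyFL's actual implementation), the Ochiai formula ranks components by how strongly their coverage correlates with failing tests:

```python
# Sketch of spectrum-based fault localization with the Ochiai suspiciousness
# metric. The spectra below are hypothetical, purely for illustration.
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """failed_cov/passed_cov: failing/passing tests that cover the component."""
    if total_failed == 0 or failed_cov == 0:
        return 0.0
    return failed_cov / math.sqrt(total_failed * (failed_cov + passed_cov))

# Hypothetical spectra: component -> (#failing tests covering it, #passing)
spectra = {"n1": (2, 0), "n2": (1, 3), "n3": (0, 4)}
total_failed = 2

# Rank components from most to least suspicious.
ranking = sorted(spectra, key=lambda c: -ochiai(*spectra[c], total_failed))
```

Here `n1`, covered by both failing tests and no passing test, scores 1.0 and ranks first.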
16,302 | Comparing the fractal basins of attraction in the Hill problem with oblateness and radiation | The basins of convergence, associated with the roots (attractors) of a
complex equation, are revealed in the Hill problem with oblateness and
radiation, using a large variety of numerical methods. Three cases are
investigated, regarding the values of the oblateness and radiation. In all
cases, a systematic and thorough scan of the complex plane is performed in
order to determine the basins of attraction of the several iterative schemes.
The correlations between the attracting domains and the corresponding required
number of iterations are also illustrated and discussed. Our numerical analysis
strongly suggests that the basins of convergence, with the highly fractal basin
boundaries, produce extraordinary and beautiful formations on the complex
plane.
| 0 | 1 | 0 | 0 | 0 | 0 |
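The basin-of-convergence scan described above follows a generic recipe: iterate a root-finding scheme from each point of a grid on the complex plane and record which attractor is reached and in how many steps. A sketch using Newton's method on $z^3 - 1$ (a standard illustrative equation, not the Hill-problem equation itself):

```python
# Generic basin-of-convergence computation for an iterative scheme on the
# complex plane, illustrated with Newton's method on f(z) = z^3 - 1.
import cmath

def newton_basin(z, roots, f, df, max_iter=50, tol=1e-10):
    """Iterate Newton's method from z; return (index of root reached, #iterations)."""
    for n in range(max_iter):
        z = z - f(z) / df(z)
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i, n + 1
    return -1, max_iter  # non-convergent starting point

# The three cube roots of unity are the attractors.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
f = lambda z: z**3 - 1
df = lambda z: 3 * z**2

idx, iters = newton_basin(complex(0.5, 0.5), roots, f, df)
```

Scanning a dense grid of starting points and coloring each by `idx` (with intensity given by `iters`) produces the fractal basin pictures the abstract refers to.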
16,303 | Parameters for Generalized Hecke Algebras in Type B | The irreducible representations of full support in the rational Cherednik
category $\mathcal{O}_c(W)$ attached to a Coxeter group $W$ are in bijection
with the irreducible representations of an associated Iwahori-Hecke algebra.
Recent work has shown that the irreducible representations in
$\mathcal{O}_c(W)$ of arbitrary given support are similarly governed by certain
generalized Hecke algebras. In this paper we compute the parameters for these
generalized Hecke algebras in the remaining previously unknown cases,
corresponding to the parabolic subgroup $B_n \times S_k$ in $B_{n+k}$ for $k
\geq 2$ and $n \geq 0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,304 | Arbitrage-Free Interpolation in Models of Market Observable Interest Rates | Models which postulate lognormal dynamics for interest rates which are
compounded according to market conventions, such as forward LIBOR or forward
swap rates, can be constructed initially in a discrete tenor framework.
Interpolating interest rates between maturities in the discrete tenor structure
is equivalent to extending the model to continuous tenor. The present paper
sets forth an alternative way of performing this extension; one which preserves
the Markovian properties of the discrete tenor models and guarantees the
positivity of all interpolated rates.
| 0 | 0 | 0 | 0 | 0 | 1 |
16,305 | Bayesian Simultaneous Estimation for Means in $k$ Sample Problems | This paper is concerned with the simultaneous estimation of $k$ population
means when one suspects that the $k$ means are nearly equal. As an alternative
to the preliminary test estimator based on the test statistics for testing
hypothesis of equal means, we derive Bayesian and minimax estimators which
shrink individual sample means toward a pooled mean estimator given under the
hypothesis. It is shown that both the preliminary test estimator and the
Bayesian minimax shrinkage estimators are further improved by shrinking the
pooled mean estimator. The performance of the proposed shrinkage estimators is
investigated by simulation.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,306 | The Camassa--Holm Equation and The String Density Problem | In this paper we review the recent progress in the (indefinite) string
density problem and its applications to the Camassa--Holm equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,307 | A simple efficient density estimator that enables fast systematic search | This paper introduces a simple and efficient density estimator that enables
fast systematic search. To show its advantage over the commonly used kernel
density estimator, we apply it to outlying aspects mining. Outlying aspects
mining discovers feature subsets (or subspaces) that describe how a query stands
out from a given dataset. The task demands a systematic search of subspaces. We
observe that existing outlying aspects miners are restricted to datasets of
small size and dimensionality because they employ a kernel density estimator,
which is computationally expensive, for subspace assessments. We show that a
recent outlying aspects miner can run orders of magnitude faster by simply
replacing its density estimator with the proposed density estimator, enabling
it to deal with large datasets with thousands of dimensions that would
otherwise be impossible.
| 1 | 0 | 0 | 1 | 0 | 0 |
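The trade-off the abstract describes can be illustrated with a toy estimator (not the paper's actual method): a kernel density estimate costs O(n) per query, while a precomputed histogram answers each query in O(1) after O(n) preprocessing:

```python
# Toy histogram density estimator illustrating the O(1)-per-query trade-off
# versus kernel density estimation. Not the paper's proposed estimator.

def build_histogram(data, n_bins, lo, hi):
    """Precompute bin counts once; return an O(1) density query function."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in data:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    n = len(data)
    # Density = (fraction of points in the query's bin) / bin width.
    return lambda q: counts[min(int((q - lo) / width), n_bins - 1)] / (n * width)

data = [0.1, 0.2, 0.25, 0.8, 0.85]
density = build_histogram(data, n_bins=4, lo=0.0, hi=1.0)
```

With 2 of the 5 points in the first bin of width 0.25, `density(0.2)` evaluates to 2 / (5 × 0.25) = 1.6.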
16,308 | Perches, Post-holes and Grids | The "Planning in the Early Medieval Landscape" project (PEML)
<this http URL>,
funded by the Leverhulme Trust, has organized and collated a substantial
quantity of images, and has used this as evidence to support the hypothesis
that Anglo-Saxon building construction was based on grid-like planning
structures based on fixed modules or quanta of measurement. We report on the
development of some statistical contributions to the debate concerning this
hypothesis. In practice the PEML images correspond to data arising in a wide
variety of different forms. It does not seem feasible to produce a single
automatic method which can be applied uniformly to all such images; even the
initial chore of cleaning up an image (removing extraneous material such as
legends and physical features which do not bear on the planning hypothesis)
typically presents a separate and demanding challenge for each different image.
Moreover care must be taken, even in the relatively straightforward cases of
clearly defined ground-plans (for example for large ecclesiastical buildings of
the period), to consider exactly what measurements might be relevant. We report
on pilot statistical analyses concerning three different situations. These
establish not only the presence of underlying structure (which indeed is often
visually obvious), but also provide an account of the numerical evidence
supporting the deduction that such structure is present. We contend that
statistical methodology thus contributes to the larger historical debate and
provides useful input to the wide and varied range of evidence that has to be
debated.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,309 | Sharp bounds for population recovery | The population recovery problem is a basic problem in noisy unsupervised
learning that has attracted significant research attention in recent years
[WY12,DRWY12, MS13, BIMP13, LZ15,DST16]. A number of different variants of this
problem have been studied, often under assumptions on the unknown distribution
(such as that it has restricted support size). In this work we study the sample
complexity and algorithmic complexity of the most general version of the
problem, under both the bit-flip and erasure noise models. We give essentially
matching upper and lower sample complexity bounds for both noise models, and
efficient algorithms matching these sample complexity bounds up to polynomial
factors.
| 1 | 0 | 1 | 1 | 0 | 0 |
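The two noise models can be illustrated with a toy corruption step (the parameterization below, with a retention parameter mu, is one common convention and is an assumption here, as is all of the code; this is not the paper's algorithm):

```python
# Toy illustration of the two noise models in population recovery: a sample
# binary string is corrupted coordinate-wise by bit flips or by erasures.
import random

def bitflip(x, mu, rng):
    """Flip each bit independently with probability (1 - mu) / 2."""
    p = (1 - mu) / 2
    return [b ^ (rng.random() < p) for b in x]

def erase(x, mu, rng):
    """Replace each bit by '?' independently with probability 1 - mu."""
    return [b if rng.random() < mu else "?" for b in x]

rng = random.Random(0)
x = [1, 0, 1, 1]
y_flip = bitflip(x, mu=0.5, rng=rng)
y_erase = erase(x, mu=0.5, rng=rng)
```

The recovery problem is then to estimate the unknown distribution over strings `x` given only many independently corrupted samples like `y_flip` or `y_erase`.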
16,310 | MultiBUGS: A parallel implementation of the BUGS modelling framework for faster Bayesian inference | MultiBUGS (this https URL) is a new version of the general-purpose
Bayesian modelling software BUGS that implements a generic algorithm for
parallelising Markov chain Monte Carlo (MCMC) algorithms to speed up posterior
inference of Bayesian models. The algorithm parallelises evaluation of the
product-form likelihoods formed when a parameter has many children in the
directed acyclic graph (DAG) representation; and parallelises sampling of
conditionally-independent sets of parameters. A heuristic algorithm is used to
decide which approach to use for each parameter and to apportion computation
across computational cores. This enables MultiBUGS to automatically parallelise
the broad range of statistical models that can be fitted using BUGS-language
software, making the dramatic speed-ups of modern multi-core computing
accessible to applied statisticians, without requiring any experience of
parallel programming. We demonstrate the use of MultiBUGS on simulated data
designed to mimic a hierarchical e-health linked-data study of methadone
prescriptions including 425,112 observations and 20,426 random effects.
Posterior inference for the e-health model takes several hours in existing
software, but MultiBUGS can perform inference in only 28 minutes using 48
computational cores.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,311 | Road to safe autonomy with data and formal reasoning | We present an overview of recently developed data-driven tools for safety
analysis of autonomous vehicles and advanced driver assist systems. The core
algorithms combine model-based, hybrid system reachability analysis with
sensitivity analysis of components with unknown or inaccessible models. We
illustrate the applicability of this approach with a new case study of
emergency braking systems in scenarios with two or three vehicles. This problem
is representative of the most common type of rear-end crashes, which is
relevant for safety analysis of automatic emergency braking (AEB) and forward
collision avoidance systems. We show that our verification tool can effectively
prove the safety of certain scenarios (specified by several parameters like
braking profiles, initial velocities, uncertainties in position and reaction
times), and also compute the severity of accidents for unsafe scenarios.
Through hundreds of verification experiments, we quantified the safety envelope
of the system across relevant parameters. These results show that the approach
is promising for design, debugging and certification. We also show how the
reachability analysis can be combined with statistical information about the
parameters, to assess the risk level of the control system, which in turn is
essential, for example, for determining Automotive Safety Integrity Levels
(ASIL) for the ISO26262 standard.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,312 | Social Events in a Time-Varying Mobile Phone Graph | The large-scale study of human mobility has been significantly enhanced over
the last decade by the massive use of mobile phones in urban populations.
Studying the activity of mobile phones allows us, not only to infer social
networks between individuals, but also to observe the movements of these
individuals in space and time. In this work, we investigate how these two
related sources of information can be integrated within the context of
detecting and analyzing large social events. We show that large social events
can be characterized not only by an anomalous increase in activity of the
antennas in the neighborhood of the event, but also by an increase in social
relationships of the attendants present in the event. Moreover, having detected
a large social event via increased antenna activity, we can use the network
connections to infer whether an unobserved user was present at the event. More
precisely, we address the following three challenges: (i) automatically
detecting large social events via increased antenna activity; (ii)
characterizing the social cohesion of the detected event; and (iii) analyzing
the feasibility of inferring whether unobserved users were in the event.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,313 | Range Assignment of Base-Stations Maximizing Coverage Area without Interference | We study the problem of assigning non-overlapping geometric objects centered
at a given set of points such that the sum of area covered by them is
maximized. If the points are placed on a straight-line and the objects are
disks, then the problem is solvable in polynomial time. However, we show that
the problem is NP-hard even for the simplest objects, such as disks or squares, in
${\mathbb{R}}^2$. Eppstein [CCCG, pages 260--265, 2016] proposed a polynomial
time algorithm for maximizing the sum of radii (or perimeter) of
non-overlapping balls or disks when the points are arbitrarily placed on a
plane. We show that Eppstein's algorithm for maximizing sum of perimeter of the
disks in ${\mathbb{R}}^2$ gives a $2$-approximation solution for the sum of
area maximization problem. We propose a PTAS for our problem. These
approximation results are extendible to higher dimensions. All these
approximation results hold for the area maximization problem by regular convex
polygons with even number of edges centered at the given points.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,314 | A Polylogarithm Solution to the Epsilon--Delta Problem | Let $f$ be a continuous real function defined in a subset of the real line.
The standard definition of continuity at a point $x$ allows us to correlate any
given epsilon with a (possibly $x$-dependent) delta value. This pairing is
known as the epsilon--delta relation of $f$. In this work, we demonstrate the
existence of a privileged choice of delta in the sense that it is continuous,
invertible, maximal and it is the solution of a simple functional equation. We
also introduce an algorithm that can be used to numerically calculate this map
in polylogarithm time, proving the computability of the epsilon--delta
relation. Finally, some examples are analyzed in order to showcase the accuracy
and effectiveness of these methods, even when the explicit formula for the
aforementioned privileged function is unknown due to the lack of analytical
tools for solving the functional equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,315 | Pricing of debt and equity in a financial network with comonotonic endowments | In this paper we present formulas for the valuation of debt and equity of
firms in a financial network under comonotonic endowments. We demonstrate that
the comonotonic setting provides a lower bound to the price of debt under
Eisenberg-Noe financial networks with consistent marginal endowments. Such
financial networks encode the interconnection of firms through debt claims. The
proposed pricing formulas consider the realized, endogenous, recovery rate on
debt claims. Special consideration will be given to the setting in which firms
only invest in a risk-free bond and a common risky asset following a geometric
Brownian motion.
| 0 | 0 | 0 | 0 | 0 | 1 |
16,316 | Resolvent expansion for the Schrödinger operator on a graph with infinite rays | We consider the Schrödinger operator on a combinatorial graph consisting of
a finite graph and a finite number of discrete half-lines, all jointed
together, and compute an asymptotic expansion of its resolvent around the
threshold $0$. Precise expressions are obtained for the first few coefficients
of the expansion in terms of the generalized eigenfunctions. This result
justifies the classification of threshold types solely by growth properties of
the generalized eigenfunctions. By choosing an appropriate free operator a
priori possessing no zero eigenvalue or zero resonance we can simplify the
expansion procedure as much as that on the single discrete half-line.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,317 | Space-Filling Fractal Description of Ion-induced Local Thermal Spikes in Molecular Solid of ZnO | Anions of the molecules ZnO and O2, and of atomic Zn and O, constitute the mass spectra
of the species sputtered from pellets of molecular solid of ZnO under Cs+
irradiation. Their normalized yields are independent of the energy of the
irradiating Cs+. Collision cascades cannot explain the simultaneous sputtering
of atoms and molecules. We propose that the origin of the molecular
sublimation, dissociation and subsequent emission is the result of localized
thermal spikes induced by individual Cs+ ions. The fractal dimension of binary
collision cascades of atomic recoils in the irradiated ZnO solid increases with
reduction in the energy of recoils. Upon reaching the collision diameters of
atomic dimensions, the space-filling fractal-like transition occurs where
cascades transform into thermal spikes. These localized thermal spikes induce
sublimation, dissociation and sputtering from the region. The calculated rates
of the subliming and dissociating species due to localized thermal spikes agree
well with the experimental results.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,318 | Integrable modules over affine Lie superalgebras sl(1|n)^ | We describe the category of integrable sl(1|n)^ -modules with the positive
central charge and show that the irreducible modules provide the full set of
irreducible representations for the corresponding simple vertex algebra.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,319 | LocDyn: Robust Distributed Localization for Mobile Underwater Networks | How to self-localize large teams of underwater nodes using only noisy range
measurements? How to do it in a distributed way, and incorporating dynamics
into the problem? How to reject outliers and produce trustworthy position
estimates? The stringent acoustic communication channel and the accuracy needs
of our geophysical survey application demand faster and more accurate
localization methods. We approach dynamic localization as a MAP estimation
problem where the prior encodes dynamics, and we devise a convex relaxation
method that takes advantage of previous estimates at each measurement
acquisition step; The algorithm converges at an optimal rate for first order
methods. LocDyn is distributed: there is no fusion center responsible for
processing acquired data and the same simple computations are performed for
each node. LocDyn is accurate: experiments attest to a smaller positioning
error than a comparable Kalman filter. LocDyn is robust: it rejects outlier
noise, while competing methods degrade in terms of positioning error.
| 1 | 0 | 1 | 1 | 0 | 0 |
16,320 | High-speed X-ray imaging spectroscopy system with Zynq SoC for solar observations | We have developed a system combining a back-illuminated
Complementary-Metal-Oxide-Semiconductor (CMOS) imaging sensor and Xilinx Zynq
System-on-Chip (SoC) device for a soft X-ray (0.5-10 keV) imaging spectroscopy
observation of the Sun to investigate the dynamics of the solar corona. Because
typical timescales of energy release phenomena in the corona span a few minutes
at most, we aim to obtain the corresponding energy spectra and derive the
physical parameters, i.e., temperature and emission measure, every few tens of
seconds or less for future solar X-ray observations. An X-ray photon-counting
technique, with a frame rate of a few hundred frames per second or more, can
achieve such results. We used the Zynq SoC device to achieve the requirements.
Zynq contains an ARM processor core, which is also known as the Processing
System (PS) part, and a Programmable Logic (PL) part in a single chip. We use
the PL and PS to control the sensor and seamless recording of data to a storage
system, respectively. We aim to use the system for the third flight of the
Focusing Optics Solar X-ray Imager (FOXSI-3) sounding rocket experiment for the
first photon-counting X-ray imaging and spectroscopy of the Sun.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,321 | Cooperative Multi-Sender Index Coding | In this paper, we propose a new coding scheme and establish new bounds on the
capacity region for the multi-sender unicast index-coding problem. We revisit
existing partitioned Distributed Composite Coding (DCC) proposed by Sadeghi et
al. and identify its limitations in the implementation of multi-sender
composite coding and in the strategy of sender partitioning. We then propose
two new coding components to overcome these limitations and develop a
multi-sender Cooperative Composite Coding (CCC). We show that CCC can strictly
improve upon partitioned DCC, and is the key to achieve optimality for a number
of index-coding instances. The usefulness of CCC and its special cases is
illuminated via non-trivial examples, and the capacity region is established
for each example. Comparisons between CCC and other non-cooperative schemes in
recent works are also provided to further demonstrate the advantage of CCC.
| 1 | 0 | 1 | 0 | 0 | 0 |
16,322 | Comment on "Eshelby twist and correlation effects in diffraction from nanocrystals" [J. Appl. Phys. 117, 164304 (2015)] | The aim of this comment is to show that anisotropic effects and image fields
should not be omitted as they are in the publication of A. Leonardi, S. Ryu, N.
M. Pugno, and P. Scardi (LRPS) [J. Appl. Phys. 117, 164304 (2015)] on Pd <011>
cylindrical nanowires containing an axial screw dislocation. Indeed, according
to our previous study [Phys. Rev. B 88, 224101 (2013)], the axial displacement
field along the nanowire exhibits both a radial and an azimuthal dependence
with a twofold symmetry due to the <011> orientation. As a consequence, the
deviatoric strain term used by LRPS is not suitable to analyze the anisotropic
strain fields that should be observed in their atomistic simulations. In this
comment, we first illustrate the importance of anisotropy in <011> Pd nanowire
by calculating the azimuthal dependence of the deviatoric strain term. Then the
expression of the anisotropic elastic field is recalled in term of strain
tensor components to show that image fields should be also considered. The
other aspect of this comment concerns the supposed loss of correlation along
the nanorod caused by the twist. It is claimed, for instance, by LRPS that: "As
an effect of the dislocation strain and twist, if the cylinder is long enough,
upper/lower regions tend to lose correlation, as if the rod were made of
different sub-domains.". This assertion appears to us misleading since for any
twist the position of all the atoms in the nanorod is perfectly defined and
therefore prevents any loss of correlation. To clarify this point, it should be
specified that this apparent loss of correlation cannot be ascribed to the
twisted state of the nanowire but is rather due to a limitation of the X-ray
powder diffraction. Considering for instance coherent X-ray diffraction, we
show an example of high twist where the simulated diffractogram presents a
clear signature of the perfect correlation.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,323 | Variational Community Partition with Novel Network Structure Centrality Prior | In this paper, we propose a novel two-stage optimization method for network
community partition, which is based on inherent network structure information.
The introduced optimization approach utilizes the new network centrality
measure of both links and vertices to construct the key affinity description of
the given network, where the direct similarities between graph nodes or nodal
features are not available to obtain the classical affinity matrix. Indeed,
the calculated network centrality information captures the essential structure
of the network and hence provides a proper measure for detecting network
communities, which also introduces a `confidence' criterion for referencing new
labeled benchmark nodes. For the resulting challenging combinatorial
optimization problem of graph clustering, the proposed method iteratively
employs an efficient convex optimization algorithm developed under a new
variational perspective of primal and dual. Experiments over both artificial and real-world
network datasets demonstrate that the proposed optimization strategy of
community detection significantly improves result accuracy and outperforms the
state-of-the-art algorithms in terms of accuracy and reliability.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,324 | Possible formation pathways for the low density Neptune-mass planet HAT-P-26b | We investigate possible pathways for the formation of the low density
Neptune-mass planet HAT-P-26b. We use two different formation models, based on
pebble and planetesimal accretion respectively, which include gas accretion,
disk migration and simple photoevaporation. The models track the atmospheric
oxygen abundance, in addition to the orbital period and mass of the forming
planets, which we compare to HAT-P-26b. We find that pebble accretion can
explain this planet more naturally than planetesimal accretion, which fails completely unless
we artificially enhance the disk metallicity significantly. Pebble accretion
models can reproduce HAT-P-26b with either a high initial core mass and low
amount of envelope enrichment through core erosion or pebbles dissolution, or
the opposite, with both scenarios being possible. Assuming a low envelope
enrichment factor as expected from convection theory and comparable to the
values we can infer from the D/H measurements in Uranus and Neptune, our most
probable formation pathway for HAT-P-26b is through pebble accretion starting
around 10 AU early in the disk's lifetime.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,325 | Shape Classification using Spectral Graph Wavelets | Spectral shape descriptors have been used extensively in a broad spectrum of
geometry processing applications ranging from shape retrieval and segmentation
to classification. In this paper, we propose a spectral graph wavelet
approach for 3D shape classification using the bag-of-features paradigm. In an
effort to capture both the local and global geometry of a 3D shape, we present
a three-step feature description framework. First, local descriptors are
extracted via the spectral graph wavelet transform having the Mexican hat
wavelet as a generating kernel. Second, mid-level features are obtained by
embedding local descriptors into the visual vocabulary space using the
soft-assignment coding step of the bag-of-features model. Third, a global
descriptor is constructed by aggregating mid-level features weighted by a
geodesic exponential kernel, resulting in a matrix representation that
describes the frequency of appearance of nearby codewords in the vocabulary.
Experimental results on two standard 3D shape benchmarks demonstrate the effectiveness of
the proposed classification approach in comparison with state-of-the-art
methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,326 | Dynamical inverse problem for Jacobi matrices | We consider the inverse dynamical problem for the dynamical system with
discrete time associated with the semi-infinite Jacobi matrix. We solve the
inverse problem for such a system and answer a question on the characterization
of the inverse data. As a by-product we give a necessary and sufficient
condition for a measure on the real line to be the spectral measure of a
semi-infinite discrete Schrödinger operator.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,327 | A short note on the computation of the generalised Jacobsthal function for paired progressions | Jacobsthal's function was recently generalised for the case of paired
progressions. It was proven that a specific bound of this function is
sufficient for the truth of Goldbach's conjecture and of the prime pairs
conjecture as well. We extended and adapted algorithms described for the
computation of the common Jacobsthal function, and computed respective function
values of the paired Jacobsthal function for primorial numbers for primes up to
73. All these values fulfil the conjectured specific bound. In addition to this
note, we provide a detailed review of the algorithmic approaches and the
complete computational results in ancillary files.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,328 | Improved SVD-based Initialization for Nonnegative Matrix Factorization using Low-Rank Correction | Due to the iterative nature of most nonnegative matrix factorization
(\textsc{NMF}) algorithms, initialization is a key aspect as it significantly
influences both the convergence and the final solution obtained. Many
initialization schemes have been proposed for NMF, among which one of the most
popular class of methods are based on the singular value decomposition (SVD).
However, these SVD-based initializations do not satisfy a rather natural
condition, namely that the error should decrease as the rank of factorization
increases. In this paper, we propose a novel SVD-based \textsc{NMF}
initialization to specifically address this shortcoming by taking into account
the SVD factors that were discarded to obtain a nonnegative initialization.
This method, referred to as nonnegative SVD with low-rank correction
(NNSVD-LRC), allows us to significantly reduce the initial error at a
negligible additional computational cost using the low-rank structure of the
discarded SVD factors. NNSVD-LRC has two other advantages compared to previous
SVD-based initializations: (1) it provably generates sparse initial factors,
and (2) it is faster as it only requires to compute a truncated SVD of rank
$\lceil r/2 + 1 \rceil$ where $r$ is the factorization rank of the sought NMF
decomposition (as opposed to a rank-$r$ truncated SVD for other methods). We
show on several standard dense and sparse data sets that our new method
competes favorably with state-of-the-art SVD-based initializations for NMF.
| 1 | 0 | 0 | 1 | 0 | 0 |
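The family of SVD-based initializations the paper improves upon can be sketched as follows (an NNDSVD-like scheme: take a truncated SVD and keep, per rank-1 term, the sign choice with the larger nonnegative part; this illustrates the baseline idea, not NNSVD-LRC itself):

```python
# Sketch of a basic SVD-based NMF initialization (NNDSVD-style), the class of
# methods the paper's NNSVD-LRC improves on. Not the paper's own algorithm.
import numpy as np

def nnsvd_init(X, r):
    """Nonnegative initial factors W (m x r) and H (r x n) from a rank-r SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    W = np.zeros((X.shape[0], r))
    H = np.zeros((r, X.shape[1]))
    for k in range(r):
        u, v = U[:, k], Vt[k, :]
        # Split each singular vector into nonnegative and nonpositive parts,
        # keep whichever sign choice carries more energy.
        up, vp = np.maximum(u, 0), np.maximum(v, 0)
        un, vn = np.maximum(-u, 0), np.maximum(-v, 0)
        if np.linalg.norm(up) * np.linalg.norm(vp) >= np.linalg.norm(un) * np.linalg.norm(vn):
            W[:, k], H[k, :] = np.sqrt(s[k]) * up, np.sqrt(s[k]) * vp
        else:
            W[:, k], H[k, :] = np.sqrt(s[k]) * un, np.sqrt(s[k]) * vn
    return W, H

X = np.abs(np.random.default_rng(0).normal(size=(6, 5)))
W, H = nnsvd_init(X, 2)
```

Discarding the negative parts of each rank-1 term is exactly the information loss the paper's low-rank correction targets.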
16,329 | Connectivity jamming game for physical layer attack in peer to peer networks | Because of the open access nature of wireless communications, wireless
networks can suffer from malicious activity, such as jamming attacks, aimed at
undermining the network's ability to sustain communication links and acceptable
throughput. One important consideration when designing networks is to
appropriately tune the network topology and its connectivity so as to support
the communication needs of those participating in the network. This paper
examines the problem of interference attacks that are intended to harm
connectivity and throughput, and illustrates the method of mapping network
performance parameters into the metric of topographic connectivity.
Specifically, this paper arrives at anti-jamming strategies aimed at coping
with interference attacks through a unified stochastic game. In such a
framework, an entity trying to protect a network faces a dilemma: (i) the
underlying motivations for the adversary can be quite varied, which depends
largely on the network's characteristics such as power and distance; (ii) the
metrics for such an attack can be incomparable (e.g., network connectivity and
total throughput). To deal with the problem of such incomparable metrics, this
paper proposes using the attack's expected duration as a unifying metric to
compare distinct attack metrics, because a longer duration of unsuccessful
attack implies a higher cost. Based on this common metric, a mechanism of
maxmin selection for an attack prevention strategy is suggested.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,330 | Global Symmetries, Counterterms, and Duality in Chern-Simons Matter Theories with Orthogonal Gauge Groups | We study three-dimensional gauge theories based on orthogonal groups.
Depending on the global form of the group these theories admit discrete
$\theta$-parameters, which control the weights in the sum over topologically
distinct gauge bundles. We derive level-rank duality for these topological
field theories. Our results may also be viewed as level-rank duality for
$SO(N)_{K}$ Chern-Simons theory in the presence of background fields for
discrete global symmetries. In particular, we include the required counterterms
and analysis of the anomalies. We couple our theories to charged matter and
determine how these counterterms are shifted by integrating out massive
fermions. By gauging discrete global symmetries we derive new boson-fermion
dualities for vector matter, and present the phase diagram of theories with
two-index tensor fermions, thus extending previous results for $SO(N)$ to other
global forms of the gauge group.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,331 | Tail sums of Wishart and GUE eigenvalues beyond the bulk edge | Consider the classical Gaussian unitary ensemble of size $N$ and the real
Wishart ensemble $W_N(n,I)$. In the limits as $N \to \infty$ and $N/n \to
\gamma > 0$, the expected number of eigenvalues that exit the upper bulk edge
is less than one, 0.031 and 0.170 respectively, the latter number being
independent of $\gamma$. These statements are consequences of quantitative
bounds on tail sums of eigenvalues outside the bulk which are established here
for applications in high dimensional covariance matrix estimation.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,332 | Ultrafast Energy Transfer with Competing Channels: Non-equilibrium Foerster and Modified Redfield Theories | We derive equations of motion for the reduced density matrix of a molecular
system which undergoes energy transfer dynamics competing with fast internal
conversion channels. Environmental degrees of freedom of such a system have no
time to relax to quasi-equilibrium in the electronic excited state of the donor
molecule, and thus the conditions of validity of Foerster and Modified Redfield
theories in their standard formulations do not apply. We derive non-equilibrium
versions of the two well-known rate theories and apply them to the case of
carotenoid-chlorophyll energy transfer. Although our reduced density matrix
approach does not account for the formation of vibronic excitons, it still
confirms the important role of the donor ground-state vibrational states in
establishing the resonance energy transfer conditions. We show that it is
essential to work with a theory valid in strong system-bath interaction regime
to obtain correct dependence of the rates on donor-acceptor energy gap.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,333 | Observation of non-Fermi liquid behavior in hole doped Eu2Ir2O7 | The Weyl semimetallic compound Eu2Ir2O7 along with its hole doped derivatives
(which is achieved by substituting trivalent Eu by divalent Sr) are
investigated through transport, magnetic and calorimetric studies. The
metal-insulator transition (MIT) temperature is found to be substantially
reduced with hole doping, and for 10% Sr doping the composition is metallic down
to temperatures as low as 5 K. These doped compounds are found to violate the
Mott-Ioffe-Regel condition for minimum electrical conductivity and show
distinct signature of non-Fermi liquid behavior at low temperature. The MIT in
the doped compounds does not correlate with the magnetic transition point and
Anderson-Mott type disorder induced localization may be attributed to the
ground state insulating phase. The observed non-Fermi liquid behavior can be
understood on the basis of a disorder-induced distribution of the spin-orbit
coupling parameter, which is markedly different for Ir4+ and Ir5+ ions.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,334 | Galactic Outflows, Star Formation Histories, and Timescales in Starburst Dwarf Galaxies from STARBIRDS | Winds are predicted to be ubiquitous in low-mass, actively star-forming
galaxies. Observationally, winds have been detected in relatively few local
dwarf galaxies, with even fewer constraints placed on their timescales. Here,
we compare galactic outflows traced by diffuse, soft X-ray emission from
Chandra Space Telescope archival observations to the star formation histories
derived from Hubble Space Telescope imaging of the resolved stellar populations
in six starburst dwarfs. We constrain the longevity of a wind to have an upper
limit of 25 Myr based on galaxies whose starburst activity has already
declined, although a larger sample is needed to confirm this result. We find an
average 16% efficiency for converting the mechanical energy of stellar feedback
to thermal, soft X-ray emission on the 25 Myr timescale, somewhat higher than
simulations predict. The outflows have likely been sustained for timescales
comparable to the duration of the starbursts (i.e., hundreds of Myr), after taking
into account the time for the development and cessation of the wind. The wind
timescales imply that material is driven to larger distances in the
circumgalactic medium than estimated by assuming short, 5-10 Myr starburst
durations, and that less material is recycled back to the host galaxy on short
timescales. In the detected outflows, the expelled hot gas shows various
morphologies which are not consistent with a simple biconical outflow
structure. The sample and analysis are part of a larger program, the STARBurst
IRregular Dwarf Survey (STARBIRDS), aimed at understanding the lifecycle and
impact of starburst activity in low-mass systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,335 | Compact microwave kinetic inductance nanowire galvanometer for cryogenic detectors at 4.2 K | We present a compact current sensor based on a superconducting microwave
lumped-element resonator with a nanowire kinetic inductor, operating at 4.2 K.
The sensor is suitable for multiplexed readout in GHz range of large-format
arrays of cryogenic detectors. The device consists of a lumped-element resonant
circuit, fabricated from a single 4-nm-thick superconducting layer of niobium
nitride. Thus, the fabrication and operation are significantly simplified in
comparison to state-of-the-art approaches. Because the resonant circuit is
inductively coupled to the feed line, the current to be measured can be
injected directly without the need for an impedance-matching circuit, reducing the
system complexity. With the proof-of-concept device we measured a current noise
floor $\delta I_{min}$ of 10 pA/Hz$^{1/2}$ at 10 kHz. Furthermore, we demonstrate the
ability of our sensor to amplify a pulsed response of a superconducting
nanowire single-photon detector using a GHz-range carrier for effective
frequency-division multiplexing.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,336 | Evolution of dust extinction curves in galaxy simulation | To understand the evolution of extinction curve, we calculate the dust
evolution in a galaxy using smoothed particle hydrodynamics simulations
incorporating stellar dust production, dust destruction in supernova shocks,
grain growth by accretion and coagulation, and grain disruption by shattering.
The dust species are separated into carbonaceous dust and silicate. The
evolution of the grain size distribution is considered by dividing the grain
population into large and small grains, which allows us to estimate extinction curves. We
examine the dependence of extinction curves on the position, gas density, and
metallicity in the galaxy, and find that extinction curves are flat at $t
\lesssim 0.3$ Gyr because stellar dust production dominates the total dust
abundance. The 2175 \AA\ bump and far-ultraviolet (FUV) rise become prominent
after dust growth by accretion. At $t \gtrsim 3$ Gyr, shattering works
efficiently in the outer disc and low density regions, so extinction curves
show a very strong 2175 \AA\ bump and steep FUV rise. The extinction curves at
$t\gtrsim 3$ Gyr are consistent with the Milky Way extinction curve, which
implies that we successfully included the necessary dust processes in the
model. The outer disc component caused by stellar feedback has an extinction
curve with a weaker 2175 \AA\ bump and a flatter FUV slope. The strong
contribution of carbonaceous dust tends to underproduce the FUV rise in the
Small Magellanic Cloud extinction curve, which supports selective loss of small
carbonaceous dust in the galaxy. The snapshots at young ages also explain the
extinction curves in high-redshift quasars.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,337 | Witten Deformation And Some Topics Relating To It | This is a simple reading report of professor Weiping Zhang's lectures. In
this article we will mainly introduce the basic ideas of Witten deformation,
which were first introduced by Edward Witten, and some applications of it.
The first part of this article mainly focuses on deformation of Dirac operators
and some important analytic facts about the deformed Dirac operators. In the
second part of this article some applications of Witten deformation will be
given; to be more specific, analytic proofs of the Poincar\'e-Hopf index
theorem and the real Morse inequalities will be given. Also we will use Witten
deformation to prove that the Thom-Smale complex is quasi-isomorphic to the
de Rham complex (Witten suggested that the Thom-Smale complex can be recovered from
his deformation and his suggestion was first realized by Helffer and
Sj\"ostrand; the proof in this article is given by Bismut and Zhang).
In the last part, an analytic proof of the Atiyah vanishing theorem via Witten
deformation will be given.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,338 | Inverse sensitivity of plasmonic nanosensors at the single-molecule limit | Recent work using plasmonic nanosensors in a clinically relevant detection
assay reports extreme sensitivity based upon a mechanism termed 'inverse
sensitivity', whereby reduction of substrate concentration increases reaction
rate, even at the single-molecule limit. This near-homoeopathic mechanism
contradicts the law of mass action. The assay involves deposition of silver
atoms upon gold nanostars, changing their absorption spectrum. Multiple
additional aspects of the assay appear to be incompatible with settled chemical
knowledge, in particular the detection of tiny numbers of silver atoms on a
background of the classic 'silver mirror reaction'. Finally, it is estimated
here that the reported spectral changes require some $2.5\times 10^{11}$ times more silver
atoms than are likely to be produced. It is suggested that alternative
explanations must be sought for the original observations.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,339 | On approximation of Ginzburg-Landau minimizers by $\mathbb S^1$-valued maps in domains with vanishingly small holes | We consider a two-dimensional Ginzburg-Landau problem on an arbitrary domain
with a finite number of vanishingly small circular holes. A special choice of
scaling relation between the material and geometric parameters (Ginzburg-Landau
parameter vs hole radius) is motivated by a recently discovered phenomenon of
vortex phase separation in superconducting composites. We show that, for each
hole, the degrees of minimizers of the Ginzburg-Landau problems in the classes
of $\mathbb S^1$-valued and $\mathbb C$-valued maps, respectively, are the
same. The presence of two parameters that are widely separated on a logarithmic
scale constitutes the principal difficulty of the analysis that is based on
energy decomposition techniques.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,340 | A Theory of Solvability for Lossless Power Flow Equations -- Part II: Conditions for Radial Networks | This two-part paper details a theory of solvability for the power flow
equations in lossless power networks. In Part I, we derived a new formulation
of the lossless power flow equations, which we term the fixed-point power flow.
The model is parameterized by several graph-theoretic matrices -- the power
network stiffness matrices -- which quantify the internal coupling strength of
the network. In Part II, we leverage the fixed-point power flow to study power
flow solvability. For radial networks, we derive parametric conditions which
guarantee the existence and uniqueness of a high-voltage power flow solution,
and construct examples for which the conditions are also necessary. Our
conditions (i) imply convergence of the fixed-point power flow iteration, (ii)
unify and extend recent results on solvability of decoupled power flow, (iii)
directly generalize the textbook two-bus system results, and (iv) provide new
insights into how the structure and parameters of the grid influence power flow
solvability.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,341 | Coordination game in bidirectional flow | We have introduced evolutionary game dynamics to a one-dimensional
cellular automaton to investigate the evolution and maintenance of cooperative
avoiding behavior of self-driven particles in bidirectional flow. In our model,
there are two kinds of particles, which are right-going particles and
left-going particles. They often face opponent particles, so that they swerve
to the right or left stochastically in order to avoid conflicts. The particles
reinforce their preferences of the swerving direction after their successful
avoidance. The preference is also weakened by a memory-loss effect.
The result of our simulation indicates that cooperative avoiding behavior is
achieved, i.e., swerving directions of the particles are unified, when the
density of particles is close to 1/2 and the memory-loss rate is small.
Furthermore, when the right-going particles occupy the majority of the system,
we observe that their flow increases when the number of left-going particles,
which prevent the smooth movement of right-going particles, becomes large. We
also find that the critical memory-loss rate of the cooperative avoiding
behavior strongly depends on the size of the system: a small system can sustain
the cooperative avoiding behavior over a wider range of memory-loss rates than
a large system.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,342 | Leaderboard Effects on Player Performance in a Citizen Science Game | Quantum Moves is a citizen science game that investigates the ability of
humans to solve complex physics challenges that are intractable for computers.
During the launch of Quantum Moves in April 2016 the game's leaderboard
function broke down resulting in a "no leaderboard" game experience for some
players for a couple of days (though their scores were still displayed). The
subsequent quick fix of an all-time Top 5 leaderboard, and the following
long-term implementation of a personalized relative-position (infinite)
leaderboard provided us with a unique opportunity to compare and investigate
the effect of different leaderboard implementations on player performance in a
points-driven citizen science game.
All three conditions were live sequentially during the game's initial influx
of more than 150,000 players that stemmed from global press attention on
Quantum Moves due to the publication of a Nature paper about the use of Quantum
Moves in solving a specific quantum physics problem. Thus, it has been possible
to compare the three conditions and their influence on the performance (defined
as a player's quality of game play related to a high-score) of over 4500 new
players. These 4500-odd players in our three leaderboard conditions have a
similar demographic background based upon the time-window over which the
implementations occurred and controlled against Player ID tags. Our results
placed the Condition 1 experience over Condition 3 and in some cases even over
Condition 2, which goes against the general assumption that leaderboards enhance
gameplay, and against their subsequent overuse as an oft-relied-upon element that
designers slap onto a game to enhance its appeal. Our study thus questions the
use of leaderboards as general performance enhancers in gamification contexts
and brings some empirical rigor to an often under-reported but overused
phenomenon.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,343 | Marchenko-based target replacement, accounting for all orders of multiple reflections | In seismic monitoring one is usually interested in the response of a changing
target zone, embedded in a static inhomogeneous medium. We introduce an
efficient method which predicts reflection responses at the earth's surface for
different target-zone scenarios, from a single reflection response at the
surface and a model of the changing target zone. The proposed process consists
of two main steps. In the first step, the response of the original target zone
is removed from the reflection response, using the Marchenko method. In the
second step, the modelled response of a new target zone is inserted between the
overburden and underburden responses. The method fully accounts for all orders
of multiple scattering and, in the elastodynamic case, for wave conversion. For
monitoring purposes, only the second step needs to be repeated for each
target-zone model. Since the target zone covers only a small part of the entire
medium, the proposed method is much more efficient than repeated modelling of
the entire reflection response.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,344 | Bayesian model checking: A comparison of tests | Two procedures for checking Bayesian models are compared using a simple test
problem based on the local Hubble expansion. Over four orders of magnitude,
p-values derived from a global goodness-of-fit criterion for posterior
probability density functions (Lucy 2017) agree closely with posterior
predictive p-values. The former can therefore serve as an effective proxy for
the difficult-to-calculate posterior predictive p-values.
| 0 | 1 | 0 | 1 | 0 | 0 |
16,345 | Idempotents in Intersection of the Kernel and the Image of Locally Finite Derivations and $\mathcal E$-derivations | Let $K$ be a field of characteristic zero, $\mathcal A$ a $K$-algebra and
$\delta$ a $K$-derivation of $\mathcal A$ or $K$-$\mathcal E$-derivation of
$\mathcal A$ (i.e., $\delta=\operatorname{Id}_A-\phi$ for some $K$-algebra
endomorphism $\phi$ of $\mathcal A$). Motivated by the Idempotent conjecture
proposed in [Z4], we first show that for every idempotent $e$ lying in both the
kernel ${\mathcal A}^\delta$ and the image $\operatorname{Im}\delta \!:=\delta
({\mathcal A})$ of $\delta$, the principal ideal $(e)\subseteq
\operatorname{Im} \delta$ if $\delta$ is a locally finite $K$-derivation or a
locally nilpotent $K$-$\mathcal E$-derivation of $\mathcal A$; and $e{\mathcal
A}, {\mathcal A}e \subseteq \operatorname{Im} \delta$ if $\delta$ is a locally
finite $K$-$\mathcal E$-derivation of $\mathcal A$. Consequently, the
Idempotent conjecture holds for all locally finite $K$-derivations and all
locally nilpotent $K$-$\mathcal E$-derivations of $\mathcal A$. We then show
that $1_{\mathcal A} \in \operatorname{Im} \delta$ (if and) only if $\delta$
is surjective, which generalizes the same result [GN, W] for locally nilpotent
$K$-derivations of commutative $K$-algebras to locally finite $K$-derivations
and $K$-$\mathcal E$-derivations $\delta$ of all $K$-algebras $\mathcal A$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,346 | A connection between the good property of an Artinian Gorenstein local ring and that of its quotient modulo socle | Following Roos, we say that a local ring $R$ is good if all finitely
generated $R$-modules have rational Poincaré series over $R$, sharing a
common denominator. Rings with the Backelin-Roos property and generalised Golod
rings are good due to results of Levin and Avramov respectively. Let $R$ be an
Artinian Gorenstein local ring. The ring $R$ is shown to have the Backelin-Roos
property if $R/\operatorname{soc}(R)$ is a Golod ring. Furthermore, the ring $R$ is
generalised Golod if and only if $R/\operatorname{soc}(R)$ is so.
We explore when connected sums of Artinian Gorenstein local rings are good.
We provide a uniform argument to show that stretched, almost stretched
Gorenstein rings are good and show further that the Auslander-Reiten conjecture
holds true for such rings. We prove that Gorenstein rings of multiplicity at
most eleven are good. We recover a result of Rossi-Şega on the good
property of compressed Gorenstein local rings in a stronger form by a shorter
argument.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,347 | Towards Efficient Verification of Population Protocols | Population protocols are a well established model of computation by
anonymous, identical finite state agents. A protocol is well-specified if from
every initial configuration, all fair executions reach a common consensus. The
central verification question for population protocols is the
well-specification problem: deciding if a given protocol is well-specified.
Esparza et al. have recently shown that this problem is decidable, but with
very high complexity: it is at least as hard as the Petri net reachability
problem, which is EXPSPACE-hard, and for which only algorithms of non-primitive
recursive complexity are currently known.
In this paper we introduce the class WS3 of well-specified strongly-silent
protocols and we prove that it is suitable for automatic verification. More
precisely, we show that WS3 has the same computational power as general
well-specified protocols, and captures standard protocols from the literature.
Moreover, we show that the membership problem for WS3 reduces to solving
boolean combinations of linear constraints over $\mathbb{N}$. This allowed us to develop
the first software able to automatically prove well-specification for all of
the infinitely many possible inputs.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,348 | Simplicial Homotopy Theory, Link Homology and Khovanov Homology | The purpose of this note is to point out that simplicial methods and the
well-known Dold-Kan construction in simplicial homotopy theory can be
fruitfully applied to convert link homology theories into homotopy theories.
Dold and Kan prove that there is a functor from the category of chain complexes
over a commutative ring with unit to the category of simplicial objects over
that ring such that chain homotopic maps go to homotopic maps in the simplicial
category. Furthermore, this is an equivalence of categories. In this way, given
a link homology theory, we construct a mapping taking link diagrams to a
category of simplicial objects such that up to looping or delooping, link
diagrams related by Reidemeister moves will give rise to homotopy equivalent
simplicial objects, and the homotopy groups of these objects will be equal to
the link homology groups of the original link homology theory. The construction
is independent of the particular link homology theory. A simplifying point in
producing a homotopy simplicial object in relation to a chain complex occurs
when the chain complex is itself derived (via face maps) from a simplicial
object that satisfies the Kan extension condition. Under these circumstances
one can use that simplicial object rather than apply the Dold-Kan functor to
the chain complex. We will give examples of this situation in regard to
Khovanov homology. We will investigate detailed working out of this
correspondence in separate papers. The purpose of this note is to announce the
basic relationships for using simplicial methods in this domain. Thus we do
more than just quote the Dold-Kan Theorem. We give a review of simplicial
theory and we point to specific constructions, particularly in relation to
Khovanov homology, that can be used to make simplicial homotopy types directly.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,349 | Optimised Maintenance of Datalog Materialisations | To efficiently answer queries, datalog systems often materialise all
consequences of a datalog program, so the materialisation must be updated
whenever the input facts change. Several solutions to the materialisation
update problem have been proposed. The Delete/Rederive (DRed) and the
Backward/Forward (B/F) algorithms solve this problem for general datalog, but
both contain steps that evaluate rules 'backwards' by matching their heads to a
fact and evaluating the partially instantiated rule bodies as queries. We show
that this can be a considerable source of overhead even on very small updates.
In contrast, the Counting algorithm does not evaluate the rules 'backwards',
but it can handle only nonrecursive rules. We present two hybrid approaches
that combine DRed and B/F with Counting so as to reduce or even eliminate
'backward' rule evaluation while still handling arbitrary datalog programs. We
show empirically that our hybrid algorithms are usually significantly faster
than existing approaches, sometimes by orders of magnitude.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,350 | Inversion of some curvature operators near a parallel Ricci metric II: Non-compact manifold with bounded geometry | Let (M,g) be a complete noncompact riemannian manifold with bounded geometry
and parallel Ricci curvature. We show that some operators, "affine" relatively
to the Ricci curvature, are locally invertible, in some classical Sobolev
spaces, near the metric g.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,351 | Partisan gerrymandering with geographically compact districts | Bizarrely shaped voting districts are frequently lambasted as likely
instances of gerrymandering. In order to systematically identify such
instances, researchers have devised several tests for so-called geographic
compactness (i.e., shape niceness). We demonstrate that under certain
conditions, a party can gerrymander a competitive state into geographically
compact districts to win an average of over 70% of the districts. Our results
suggest that geometric features alone may fail to adequately combat partisan
gerrymandering.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,352 | Random Scalar Fields and Hyperuniformity | Disordered many-particle hyperuniform systems are exotic amorphous states of
matter that lie between crystals and liquids. Hyperuniform systems have
attracted recent attention because they are endowed with novel transport and
optical properties. Recently, the hyperuniformity concept has been generalized
to characterize scalar fields, two-phase media and random vector fields. In
this paper, we devise methods to explicitly construct hyperuniform scalar
fields. We investigate explicitly spatial patterns generated from Gaussian
random fields, which have been used to model the microwave background radiation
and heterogeneous materials, the Cahn-Hilliard equation for spinodal
decomposition, and Swift-Hohenberg equations that have been used to model
emergent pattern formation, including Rayleigh-B\'enard convection. We show
that the Gaussian random scalar fields can be constructed to be hyperuniform.
We also numerically study the time evolution of spinodal decomposition patterns
and demonstrate that these patterns are hyperuniform in the scaling regime.
Moreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg
equation are effectively hyperuniform. We show that thresholding a hyperuniform
Gaussian random field to produce a two-phase random medium tends to destroy the
hyperuniformity of the progenitor scalar field. We then propose guidelines to
achieve effectively hyperuniform two-phase media derived from thresholded
non-Gaussian fields. Our investigation paves the way for new research
directions to characterize the large-structure spatial patterns that arise in
physics, chemistry, biology and ecology. Moreover, our theoretical results are
expected to guide experimentalists to synthesize new classes of hyperuniform
materials with novel physical properties via coarsening processes and using
state-of-the-art techniques, such as stereolithography and 3D printing.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,353 | Methodological Approach for the Design of a Complex Inclusive Human-Machine System | Modern industrial automatic machines and robotic cells are equipped with
highly complex human-machine interfaces (HMIs) that often prevent human
operators from an effective use of the automatic systems. In particular, this
applies to vulnerable users, such as those with low experience or education
level, the elderly and the disabled. To tackle this issue, it becomes necessary
to design user-oriented HMIs, which adapt to the capabilities and skills of
users, thus compensating their limitations and taking full advantage of their
knowledge. In this paper, we propose a methodological approach to the design of
complex adaptive human-machine systems that might be inclusive of all users, in
particular the vulnerable ones. The proposed approach takes into account both
the technical requirements and the requirements for ethical, legal and social
implications (ELSI) for the design of automatic systems. The technical
requirements derive from a thorough analysis of three use cases taken from the
European project INCLUSIVE. To achieve the ELSI requirements, the MEESTAR
approach is combined with the specific legal issues for occupational systems
and requirements of the target users.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,354 | Deep learning for comprehensive forecasting of Alzheimer's Disease progression | Most approaches to machine learning from electronic health data can only
predict a single endpoint. Here, we present an alternative that uses
unsupervised deep learning to simulate detailed patient trajectories. We use
data comprising 18-month trajectories of 44 clinical variables from 1908
patients with Mild Cognitive Impairment or Alzheimer's Disease to train a model
for personalized forecasting of disease progression. We simulate synthetic
patient data including the evolution of each sub-component of cognitive exams,
laboratory tests, and their associations with baseline clinical
characteristics, generating both predictions and their confidence intervals.
Our unsupervised model predicts changes in total ADAS-Cog scores with the same
accuracy as specifically trained supervised models and identifies
sub-components associated with word recall as predictive of progression. The
ability to simultaneously simulate dozens of patient characteristics is a
crucial step towards personalized medicine for Alzheimer's Disease.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,355 | Experimental constraints on the rheology, eruption and emplacement dynamics of analog lavas comparable to Mercury's northern volcanic plains | We present new viscosity measurements of a synthetic silicate system
considered an analogue for the lava erupted on the surface of Mercury. In
particular, we focus on the northern volcanic plains (NVP), which correspond to
the largest lava flows on Mercury and possibly in the Solar System.
High-temperature viscosity measurements were performed at both superliquidus
(up to 1736 K) and subliquidus conditions (1569-1502 K) to constrain the
viscosity variations as a function of crystallinity (from 0 to 28\%) and shear
rate (from 0.1 to 5 s$^{-1}$). Melt viscosity shows moderate variations (4-16 Pa s)
in the temperature range of 1736-1600 K. Experiments performed below the
liquidus temperature show an increase in viscosity as shear rate decreases from
5 to 0.1 s$^{-1}$, resulting in a shear-thinning behavior, with a decrease in
viscosity of 1 log unit. The low viscosity of the studied composition may
explain the ability of NVP lavas to cover long distances, on the order of
hundreds of kilometers in a turbulent flow regime. Using our experimental data
we estimate that lava flows with thickness of 1, 5, and 10 m are likely to have
velocities of 4.8, 6.5, and 7.2 m/s, respectively, on a 5 degree ground slope.
Numerical modeling incorporating both the heat loss of the lavas and its
possible crystallization during emplacement allows us to infer that high
effusion rates (>10,000 m$^3$/s) are necessary to cover the large distances
indicated by satellite data from the MErcury Surface, Space ENvironment,
GEochemistry, and Ranging spacecraft.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,356 | Empirical Bayes Estimators for High-Dimensional Sparse Vectors | The problem of estimating a high-dimensional sparse vector
$\boldsymbol{\theta} \in \mathbb{R}^n$ from an observation in i.i.d. Gaussian
noise is considered. The performance is measured using squared-error loss. An
empirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior,
is analyzed and compared with the well-known soft-thresholding estimator. We
obtain concentration inequalities for Stein's unbiased risk estimate and
the loss function of both estimators. The results show that for large $n$, both
the risk estimate and the loss function concentrate on deterministic values
close to the true risk.
Depending on the underlying $\boldsymbol{\theta}$, either the proposed
empirical Bayes (eBayes) estimator or soft-thresholding may have smaller loss.
We consider a hybrid estimator that attempts to pick the better of the
soft-thresholding estimator and the eBayes estimator by comparing their risk
estimates. It is shown that: i) the loss of the hybrid estimator concentrates
on the minimum of the losses of the two competing estimators, and ii) the risk
of the hybrid estimator is within order $\frac{1}{\sqrt{n}}$ of the minimum of
the two risks. Simulation results are provided to support the theoretical
results. Finally, we use the eBayes and hybrid estimators as denoisers in the
approximate message passing (AMP) algorithm for compressed sensing, and show
that their performance is superior to the soft-thresholding denoiser in a wide
range of settings.
| 0 | 0 | 1 | 1 | 0 | 0 |
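The soft-thresholding estimator that the abstract above uses as a baseline can be sketched in a few lines. This is an illustrative implementation, not the authors' code; the universal threshold $\sqrt{2\log n}$ and the toy sparse signal are assumptions made here for demonstration.

```python
import numpy as np

def soft_threshold(y, t):
    # Shrink each coordinate of y toward zero by t; coordinates with
    # magnitude below t are set exactly to zero, inducing sparsity.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rng = np.random.default_rng(0)
n = 10_000
theta = np.zeros(n)
theta[:100] = 3.0                      # sparse mean vector: 100 nonzero coordinates
y = theta + rng.standard_normal(n)     # observation in i.i.d. Gaussian noise
t = np.sqrt(2 * np.log(n))             # universal threshold (assumed choice)
theta_hat = soft_threshold(y, t)
risk = np.mean((theta_hat - theta) ** 2)   # per-coordinate squared-error loss
```

Because most coordinates of the noise fall below the threshold, the per-coordinate loss is far below the noise variance of 1.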
16,357 | Self-consistent DFT+U method for real-space time-dependent density functional theory calculations | We implemented various DFT+U schemes, including the ACBN0 self-consistent
density-functional version of the DFT+U method [Phys. Rev. X 5, 011006 (2015)]
within the massively parallel real-space time-dependent density functional
theory (TDDFT) code Octopus. We further extended the method to the case of the
calculation of response functions with real-time TDDFT+U and to the description
of non-collinear spin systems. The implementation is tested by investigating
the ground-state and optical properties of various transition metal oxides,
bulk topological insulators, and molecules. Our results are found to be in good
agreement with previously published results for both the electronic band
structure and structural properties. The self-consistently calculated values of U
and J are also in good agreement with the values commonly used in the
literature. We found that the time-dependent extension of the self-consistent
DFT+U method yields improved optical properties when compared to the empirical
TDDFT+U scheme. This work thus opens a different theoretical framework to
address the non-equilibrium properties of correlated systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,358 | The search for neutron-antineutron oscillations at the Sudbury Neutrino Observatory | Tests on $B-L$ symmetry breaking models are important probes to search for
new physics. One proposed model with $\Delta(B-L)=2$ involves the oscillations
of a neutron to an antineutron. In this paper, a new limit on this process is
derived from the data acquired during all three operational phases of the
Sudbury Neutrino Observatory experiment. The search concentrated on
oscillations occurring within the deuteron, and 23 events are observed against
a background
expectation of 30.5 events. These translate to a lower limit on the nuclear
lifetime of $1.48\times 10^{31}$ years at 90% confidence level (CL) when no
restriction is placed on the signal likelihood space (unbounded).
Alternatively, a lower limit on the nuclear lifetime was found to be
$1.18\times 10^{31}$ years at 90% CL when the signal was forced into a positive
likelihood space (bounded). Values for the free oscillation time derived from
various models are also provided in this article. This is the first search for
neutron-antineutron oscillation with the deuteron as a target.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,359 | Compact Multi-Class Boosted Trees | Gradient boosted decision trees are a popular machine learning technique, in
part because of their ability to give good accuracy with small models. We
describe two extensions to the standard tree boosting algorithm designed to
increase this advantage. The first improvement extends the boosting formalism
from scalar-valued trees to vector-valued trees. This allows individual trees
to be used as multiclass classifiers, rather than requiring one tree per class,
and drastically reduces the model size required for multiclass problems. We
also show that some other popular vector-valued gradient boosted tree
modifications fit into this formulation and can be easily obtained in our
implementation. The second extension, layer-by-layer boosting, takes smaller
steps in function space, which is empirically shown to lead to a faster
convergence and to a more compact ensemble. We have added both improvements to
the open-source TensorFlow Boosted trees (TFBT) package, and we demonstrate
their efficacy on a variety of multiclass datasets. We expect these extensions
will be of particular interest to boosted tree applications that require small
models, such as embedded devices, applications requiring fast inference, or
applications desiring more interpretable models.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,360 | Road Friction Estimation for Connected Vehicles using Supervised Machine Learning | In this paper, the problem of road friction prediction from a fleet of
connected vehicles is investigated. A framework is proposed to predict the road
friction level using both historical friction data from the connected cars and
data from weather stations, and comparative results from different methods are
presented. The problem is formulated as a classification task where the
available data is used to train three machine learning models including
logistic regression, support vector machine, and neural networks to predict the
friction class (slippery or non-slippery) in the future for specific road
segments. In addition to the friction values, which are measured by moving
vehicles, additional parameters such as humidity, temperature, and rainfall are
used to obtain a set of descriptive feature vectors as input to the
classification methods. The proposed prediction models are evaluated for
different prediction horizons (0 to 120 minutes in the future) where the
evaluation shows that the neural networks method leads to more stable results
in different conditions.
| 1 | 0 | 0 | 1 | 0 | 0 |
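The classification setup described above can be sketched with a minimal logistic-regression classifier. The feature names and the four-row toy dataset are assumptions for illustration only, not the paper's data; the paper additionally evaluates SVMs and neural networks.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    # Minimal logistic regression fitted by batch gradient descent.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted slip probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Hypothetical toy features: [air temperature (C), humidity (%), measured friction]
X = np.array([[-5.0, 90.0, 0.20],
              [ 2.0, 60.0, 0.70],
              [-3.0, 85.0, 0.25],
              [10.0, 40.0, 0.80]])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each feature
y = np.array([1.0, 0.0, 1.0, 0.0])         # 1 = slippery, 0 = non-slippery
w, b = train_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

In practice one would predict the class for a future horizon from current measurements, as the abstract describes, rather than refit on each road segment.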
16,361 | A Decentralized Mobile Computing Network for Multi-Robot Systems Operations | Collective animal behaviors are paradigmatic examples of fully decentralized
operations involving complex collective computations such as collective turns
in flocks of birds or collective harvesting by ants. These systems offer a
unique source of inspiration for the development of fault-tolerant and
self-healing multi-robot systems capable of operating in dynamic environments.
Specifically, swarm robotics emerged and is significantly growing on these
premises. However, to date, most swarm robotics systems reported in the
literature involve basic computational tasks---averages and other algebraic
operations. In this paper, we introduce a novel Collective computing framework
based on the swarming paradigm, which exhibits the key innate features of
swarms: robustness, scalability and flexibility. Unlike Edge computing, the
proposed Collective computing framework is truly decentralized and does not
require user intervention or additional servers to sustain its operations. This
Collective computing framework is applied to the complex task of collective
mapping, in which multiple robots aim to cooperatively map a large area. Our
results confirm the effectiveness of the cooperative strategy, its robustness
to the loss of multiple units, as well as its scalability. Furthermore, the
topology of the interconnecting network is found to greatly influence the
performance of the collective action.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,362 | Deep Architectures for Neural Machine Translation | It has been shown that increasing model depth improves the quality of neural
machine translation. However, different architectural variants to increase
model depth have been proposed, and so far, there has been no thorough
comparative study.
In this work, we describe and evaluate several existing approaches to
introduce depth in neural machine translation. Additionally, we explore novel
architectural variants, including deep transition RNNs, and we vary how
attention is used in the deep decoder. We introduce a novel "BiDeep" RNN
architecture that combines deep transition RNNs and stacked RNNs.
Our evaluation is carried out on the English to German WMT news translation
dataset, using a single-GPU machine for both training and inference. We find
that several of our proposed architectures improve upon existing approaches in
terms of speed and translation quality. We obtain best improvements with a
BiDeep RNN of combined depth 8, obtaining an average improvement of 1.5 BLEU
over a strong shallow baseline.
We release our code for ease of adoption.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,363 | Reinforcement Learning of Speech Recognition System Based on Policy Gradient and Hypothesis Selection | Speech recognition systems have achieved high recognition performance for
several tasks. However, the performance of such systems is dependent on the
tremendously costly development work of preparing vast amounts of task-matched
transcribed speech data for supervised training. The key problem here is the
cost of transcribing speech data. The cost is repeatedly required to support
new languages and new tasks. Assuming broad network services for transcribing
speech data for many users, a system would become more self-sufficient and more
useful if it possessed the ability to learn from very light feedback from the
users without annoying them. In this paper, we propose a general reinforcement
learning framework for speech recognition systems based on the policy gradient
method. As a particular instance of the framework, we also propose a hypothesis
selection-based reinforcement learning method. The proposed framework provides
a new view for several existing training and adaptation methods. The
experimental results show that the proposed method improves the recognition
performance compared to unsupervised adaptation.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,364 | Profile-Based Ad Hoc Social Networking Using Wi-Fi Direct on the Top of Android | Ad-hoc Social Networks have become popular to support novel applications
related to location-based mobile services that are of great importance to users
and businesses. Unlike traditional social services using a centralized server
to fetch location, ad-hoc social network services support infrastructure-less
real-time social networking. They allow users to collaborate and share views
anytime, anywhere. However, current ad-hoc social network applications are
either not available without rooting the mobile phones or don't filter the
nearby users based on common interests without a centralized server. This paper
presents an architecture and implementation of social networks on commercially
available mobile devices that allow broadcasting name and a limited number of
keywords representing users' interests without any connection in a nearby
region to facilitate matching of interests. The broadcasting region creates a
digital aura and is limited by WiFi region that is around 200 meters. The
application connects users to form a group based on their profile or interests
using peer-to-peer communication mode without using any centralized networking
or profile matching infrastructure. The peer-to-peer group can be used for
private communication when the network is not available.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,365 | An a posteriori error analysis for a coupled continuum pipe-flow/Darcy model in Karst aquifers: anisotropic and isotropic discretizations | This paper presents an a posteriori error analysis for a coupled continuum
pipe-flow/Darcy model in karst aquifers. We consider a unified anisotropic
finite element discretization (i.e. elements with very large aspect ratio). Our
analysis covers two-dimensional domains, conforming and nonconforming
discretizations as well as different elements. Many examples of finite elements
that are covered by the analysis are presented. From the finite element
solution, the error estimators are constructed based on the residual of the
model equations. Lower and upper error bounds form the main result with minimal
assumptions on the elements. The lower error bound is uniform with respect to
the mesh anisotropy in the entire domain. The upper error bound depends on a
proper alignment of the anisotropy of the mesh which is a common feature of
anisotropic error estimation. In the special case of isotropic meshes, the
results simplify, and upper and lower error bounds hold unconditionally.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,366 | A Hybrid DOS-Tolerant PKC-Based Key Management System for WSNs | Security is a critical and vital task in wireless sensor networks, therefore
different key management systems have been proposed, many of which are based on
symmetric cryptography. Such systems are very energy efficient, but they lack
some other desirable characteristics. On the other hand, systems based on
public key cryptography have those desirable characteristics, but they consume
more energy. Recently, a new PKC-based key agreement protocol based on
authenticated messages from the base station was proposed. We show that this
method is susceptible to a form of denial-of-service attack in which resources of the
network can be exhausted with bogus messages. Then, we propose two different
improvements to solve this vulnerability. Simulation results show that these
new protocols retain desirable characteristics of the basic method and solve
its deficiencies.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,367 | Enceladus's crust as a non-uniform thin shell: I Tidal deformations | The geologic activity at Enceladus's south pole remains unexplained, though
tidal deformations are probably the ultimate cause. Recent gravity and
libration data indicate that Enceladus's icy crust floats on a global ocean, is
rather thin, and has a strongly non-uniform thickness. Tidal effects are
enhanced by crustal thinning at the south pole, so that realistic models of
tidal tectonics and dissipation should take into account the lateral variations
of shell structure. I construct here the theory of non-uniform viscoelastic
thin shells, allowing for depth-dependent rheology and large lateral variations
of shell thickness and rheology. Coupling to tides yields two 2D linear partial
differential equations of the 4th order on the sphere which take into account
self-gravity, density stratification below the shell, and core viscoelasticity.
If the shell is laterally uniform, the solution agrees with analytical formulas
for tidal Love numbers; errors on displacements and stresses are less than 5%
and 15%, respectively, if the thickness is less than 10% of the radius. If the
shell is non-uniform, the tidal thin shell equations are solved as a system of
coupled linear equations in a spherical harmonic basis. Compared to finite
element models, thin shell predictions are similar for the deformations due to
Enceladus's pressurized ocean, but differ for the tides of Ganymede. If
Enceladus's shell is conductive with isostatic thickness variations, surface
stresses are approximately inversely proportional to the local shell thickness.
The radial tide is only moderately enhanced at the south pole. The combination
of crustal thinning and convection below the poles can amplify south polar
stresses by a factor of 10, but it cannot explain the apparent time lag between
the maximum plume brightness and the opening of tiger stripes. In a second
paper, I will study tidal dissipation in a non-uniform crust.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,368 | Minimax Rates and Efficient Algorithms for Noisy Sorting | There has been a recent surge of interest in studying permutation-based
models for ranking from pairwise comparison data. Despite being structurally
richer and more robust than parametric ranking models, permutation-based models
are less well understood statistically and generally lack efficient learning
algorithms. In this work, we study a prototype of permutation-based ranking
models, namely, the noisy sorting model. We establish the optimal rates of
learning the model under two sampling procedures. Furthermore, we provide a
fast algorithm to achieve near-optimal rates if the observations are sampled
independently. Along the way, we discover properties of the symmetric group
which are of theoretical interest.
| 1 | 0 | 1 | 1 | 0 | 0 |
16,369 | Automated Synthesis of Secure Platform Mappings | System development often involves decisions about how a high-level design is
to be implemented using primitives from a low-level platform. Certain
decisions, however, may introduce undesirable behavior into the resulting
implementation, possibly leading to a violation of a desired property that has
already been established at the design level. In this paper, we introduce the
problem of synthesizing a property-preserving platform mapping: A set of
implementation decisions ensuring that a desired property is preserved from a
high-level design into a low-level platform implementation. We provide a
formalization of the synthesis problem and propose a technique for synthesizing
a mapping based on symbolic constraint search. We describe our prototype
implementation, and a real-world case study demonstrating the application of
our technique to synthesizing secure mappings for the popular web authorization
protocols OAuth 1.0 and 2.0.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,370 | Timely Feedback in Unstructured Cybersecurity Exercises | Cyber defence exercises are intensive, hands-on learning events for teams of
professionals who gain or develop their skills to successfully prevent and
respond to cyber attacks. The exercises mimic the real-life, routine operation
of an organization which is being attacked by an unknown offender. Teams of
learners receive very limited immediate feedback from the instructors during
the exercise; they can usually see only a scoreboard showing the aggregated
gain or loss of points for particular tasks. An in-depth analysis of learners'
actions requires considerable human effort, which results in days or weeks of
delay. The intensive experience is thus not followed by proper feedback
facilitating actual learning, and this diminishes the effect of the exercise.
In this initial work, we investigate how to provide valuable feedback to
learners right after the exercise without any unnecessary delay. Based on the
scoring system of a cyber defence exercise, we have developed a new feedback
tool that presents an interactive, personalized timeline of exercise events. We
deployed this tool during an international exercise, where we monitored
participants' interactions and gathered their reflections. The results show
that learners did use the new tool and rated it positively. Since this new
feature is not bound to a particular defence exercise, it can be applied to all
exercises that employ scoring based on the evaluation of individual exercise
objectives. As a result, it enables the learner to immediately reflect on the
experience gained.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,371 | An $ω$-Algebra for Real-Time Energy Problems | We develop a $^*$-continuous Kleene $\omega$-algebra of real-time energy
functions. Together with corresponding automata, these can be used to model
systems which can consume and regain energy (or other types of resources)
depending on available time. Using recent results on $^*$-continuous Kleene
$\omega$-algebras and computability of certain manipulations on real-time
energy functions, it follows that reachability and Büchi acceptance in
real-time energy automata can be decided in a static way which only involves
manipulations of real-time energy functions.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,372 | Trajectory Normalized Gradients for Distributed Optimization | Recently, researchers have proposed various low-precision gradient compression
schemes for efficient communication in large-scale distributed optimization.
Based on these works, we try to reduce the communication complexity from a new
direction.
We pursue an ideal bijective mapping between two spaces of gradient
distribution, so that the mapped gradient carries greater information entropy
after the compression. In our setting, all servers should share a reference
gradient in advance, and they communicate via the normalized gradients, which
are the subtraction or quotient, between current gradients and the reference.
To obtain a reference vector that yields a stronger signal-to-noise ratio,
dynamically in each iteration, we extract and fuse information from the past
trajectory in hindsight, and search for an optimal reference for compression.
We name this the trajectory-based normalized gradients (TNG). It bridges
research from different communities, such as coding, optimization, systems, and
learning. It is easy to implement and can be universally combined with existing
algorithms. Our experiments on hard non-convex benchmark functions and convex
problems such as logistic regression demonstrate that TNG is more
compression-efficient for communication in distributed optimization of general
functions.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,373 | On the convergence of mirror descent beyond stochastic convex programming | In this paper, we examine the convergence of mirror descent in a class of
stochastic optimization problems that are not necessarily convex (or even
quasi-convex), and which we call variationally coherent. Since the standard
technique of "ergodic averaging" offers no tangible benefits beyond convex
programming, we focus directly on the algorithm's last generated sample (its
"last iterate"), and we show that it converges with probability $1$ if the
underlying problem is coherent. We further consider a localized version of
variational coherence which ensures local convergence of stochastic mirror
descent (SMD) with high probability. These results contribute to the landscape
of non-convex stochastic optimization by showing that (quasi-)convexity is not
essential for convergence to a global minimum: rather, variational coherence, a
much weaker requirement, suffices. Finally, building on the above, we reveal an
interesting insight regarding the convergence speed of SMD: in problems with
sharp minima (such as generic linear programs or concave minimization
problems), SMD reaches a minimum point in a finite number of steps (a.s.), even
in the presence of persistent gradient noise. This result is to be contrasted
with existing black-box convergence rate estimates that are only asymptotic.
| 1 | 0 | 1 | 0 | 0 | 0 |
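The sharp-minimum phenomenon mentioned above can be illustrated with stochastic mirror descent under the entropic mirror map on the probability simplex (exponentiated-gradient updates). The linear objective and the Gaussian gradient noise below are assumptions chosen for illustration, since a linear program over the simplex has exactly the kind of sharp minimum the abstract discusses.

```python
import numpy as np

def smd_entropic(noisy_grad, x0, steps, lr):
    # Stochastic mirror descent with the entropic mirror map on the simplex
    # (multiplicative-weights / exponentiated-gradient updates).
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-lr * noisy_grad(x))
        x = x / x.sum()                  # Bregman projection back to the simplex
    return x

# Sharp-minimum example: minimize <c, x> over the probability simplex,
# whose unique minimizer is the vertex at argmin(c).
c = np.array([0.3, 0.1, 0.5])
rng = np.random.default_rng(2)
noisy_grad = lambda x: c + 0.1 * rng.standard_normal(3)   # persistent gradient noise
x_star = smd_entropic(noisy_grad, np.ones(3) / 3, steps=2000, lr=0.1)
```

Despite noise that never vanishes, the iterate concentrates on the minimizing vertex, in line with the finite-step convergence the abstract reports for sharp minima.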
16,374 | Can MPTCP Secure Internet Communications from Man-in-the-Middle Attacks? | Multipath communications at the Internet scale have been a myth for a long
time, with no actual protocol being deployed so that multiple paths could be
taken by the same connection on the way towards an Internet destination.
Recently, the Multipath Transport Control Protocol (MPTCP) extension was
standardized and is undergoing a quick adoption in many use-cases, from mobile
to fixed access networks, from data-centers to core networks. Among its major
benefits -- i.e., reliability thanks to backup path rerouting; throughput
increase thanks to link aggregation; and confidentiality thanks to harder
capacity to intercept a full connection -- the latter has attracted lower
attention. Whether using MPTCP to exploit multiple Internet-scale paths can
decrease the probability of man-in-the-middle (MITM) attacks is a question we
try to answer. By analyzing the
Autonomous System (AS) level graph, we identify which countries and regions
show a higher level of robustness against MITM AS-level attacks, for example
due to core cable tapping or route hijacking practices.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,375 | Network Structure of Two-Dimensional Decaying Isotropic Turbulence | The present paper reports on our effort to characterize vortical interactions
in complex fluid flows through the use of network analysis. In particular, we
examine the vortex interactions in two-dimensional decaying isotropic
turbulence and find that the vortical interaction network can be characterized
by a weighted scale-free network. It is found that the turbulent flow network
retains its scale-free behavior until the characteristic value of circulation
reaches a critical value. Furthermore, we show that the two-dimensional
turbulence network is resilient against random perturbations but can be greatly
influenced when forcing is focused towards the vortical structures that are
categorized as network hubs. These findings can serve as a network-analytic
foundation to examine complex geophysical and thin-film flows and take
advantage of the rapidly growing field of network theory, which complements
ongoing turbulence research based on vortex dynamics, hydrodynamic stability,
and statistics. While additional work is essential to extend the mathematical
tools from network analysis to extract deeper physical insights of turbulence,
an understanding of turbulence based on the interaction-based network-theoretic
framework presents a promising alternative in turbulence modeling and control
efforts.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,376 | Pattern Generation for Walking on Slippery Terrains | In this paper, we extend state of the art Model Predictive Control (MPC)
approaches to generate safe bipedal walking on slippery surfaces. In this
setting, we formulate walking as a trade off between realizing a desired
walking velocity and preserving robust foot-ground contact. Exploiting this
formulation inside MPC, we show that safe walking on various flat terrains can
be achieved by trading off three main attributes, i.e., walking velocity
tracking, the Zero Moment Point (ZMP) modulation, and the Required Coefficient
of Friction (RCoF) regulation. Simulation results show that increasing the
walking velocity increases the possibility of slippage, while reducing the
slippage possibility conflicts with reducing the tip-over possibility of the
contact and vice versa.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,377 | Fisher-Rao Metric, Geometry, and Complexity of Neural Networks | We study the relationship between geometry and capacity measures for deep
neural networks from an invariance viewpoint. We introduce a new notion of
capacity --- the Fisher-Rao norm --- that possesses desirable invariance
properties and is motivated by Information Geometry. We discover an analytical
characterization of the new capacity measure, through which we establish
norm-comparison inequalities and further show that the new measure serves as an
umbrella for several existing norm-based complexity measures. We discuss upper
bounds on the generalization error induced by the proposed measure. Extensive
numerical experiments on CIFAR-10 support our theoretical findings. Our
theoretical analysis rests on a key structural lemma about partial derivatives
of multi-layer rectifier networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,378 | YUI and HANA: Control and Visualization Programs for HRC in J-PARC | We developed control and visualization programs, YUI and HANA, for the
High-Resolution Chopper spectrometer (HRC) installed at BL12 in MLF, J-PARC.
YUI is
a comprehensive program to control DAQ-middleware, the accessories, and sample
environment devices. HANA is a program for the data transformation and
visualization of inelastic neutron scattering spectra. In this paper, we
describe the basic system structures and unique functions of these programs
from the viewpoint of users.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,379 | A generalization of a theorem of Hurewicz for quasi-Polish spaces | We identify four countable topological spaces $S_2$, $S_1$, $S_D$, and $S_0$
which serve as canonical examples of topological spaces which fail to be
quasi-Polish. These four spaces respectively correspond to the $T_2$, $T_1$,
$T_D$, and $T_0$-separation axioms. $S_2$ is the space of rationals, $S_1$ is
the natural numbers with the cofinite topology, $S_D$ is an infinite chain
without a top element, and $S_0$ is the set of finite sequences of natural
numbers with the lower topology induced by the prefix ordering. Our main result
is a generalization of Hurewicz's theorem showing that a co-analytic subset of
a quasi-Polish space is either quasi-Polish or else contains a countable
$\Pi^0_2$-subset homeomorphic to one of these four spaces.
| 1 | 0 | 1 | 0 | 0 | 0 |
16,380 | Reconstruction of stochastic 3-D signals with symmetric statistics from 2-D projection images motivated by cryo-electron microscopy | Cryo-electron microscopy provides 2-D projection images of the 3-D electron
scattering intensity of many instances of the particle under study (e.g., a
virus). Both symmetry (rotational point groups) and heterogeneity are important
aspects of biological particles and both aspects can be combined by describing
the electron scattering intensity of the particle as a stochastic process with
a symmetric probability law and therefore symmetric moments. A maximum
likelihood estimator implemented by an expectation-maximization algorithm is
described which estimates the unknown statistics of the electron scattering
intensity stochastic process from images of instances of the particle. The
algorithm is demonstrated on the bacteriophage HK97 and the virus N$\omega$V.
The results are contrasted with existing algorithms which assume that each
instance of the particle has the symmetry rather than the less restrictive
assumption that the probability law has the symmetry.
| 0 | 1 | 0 | 1 | 0 | 0 |
16,381 | Gradient estimates for singular quasilinear elliptic equations with measure data | In this paper, we prove $L^q$-estimates for gradients of solutions to
singular quasilinear elliptic equations with measure data
$$-\operatorname{div}(A(x,\nabla u))=\mu,$$ in a bounded domain
$\Omega\subset\mathbb{R}^{N}$, where $A(x,\nabla u)\nabla u \asymp |\nabla
u|^p$, $p\in (1,2-\frac{1}{n}]$ and $\mu$ is a Radon measure in $\Omega$
| 0 | 0 | 1 | 0 | 0 | 0 |
16,382 | Learning Feature Nonlinearities with Non-Convex Regularized Binned Regression | For various applications, the relations between the dependent and independent
variables are highly nonlinear. Consequently, for large scale complex problems,
neural networks and regression trees are commonly preferred over linear models
such as Lasso. This work proposes learning the feature nonlinearities by
binning feature values and finding the best fit in each quantile using
non-convex regularized linear regression. The algorithm first captures the
dependence between neighboring quantiles by enforcing smoothness via
piecewise-constant/linear approximation and then selects a sparse subset of
good features. We prove that the proposed algorithm is statistically and
computationally efficient. In particular, it achieves linear rate of
convergence while requiring near-minimal number of samples. Evaluations on
synthetic and real datasets demonstrate that the algorithm is competitive with
current state-of-the-art and accurately learns feature nonlinearities. Finally,
we explore an interesting connection between the binning stage of our algorithm
and sparse Johnson-Lindenstrauss matrices.
| 1 | 0 | 1 | 1 | 0 | 0 |
16,383 | Empirical distributions of the robustified $t$-test statistics | Based on the median and the median absolute deviation estimators, and the
Hodges-Lehmann and Shamos estimators, robustified analogues of the conventional
$t$-test statistic are proposed. The asymptotic distributions of these
statistics are recently provided. However, when the sample size is small, it is
not appropriate to use the asymptotic distribution of the robustified $t$-test
statistics for making a statistical inference including hypothesis testing,
confidence interval, p-value, etc.
In this article, through extensive Monte Carlo simulations, we obtain the
empirical distributions of the robustified $t$-test statistics and their
quantile values. Then these quantile values can be used for making a
statistical inference.
| 0 | 0 | 0 | 1 | 0 | 0 |
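The median/MAD analogue of the $t$-statistic and its Monte Carlo quantiles, as described above, can be sketched as follows. The 1.4826 consistency factor and the simulation sizes are standard choices assumed here for illustration; the paper also considers Hodges-Lehmann/Shamos variants.

```python
import numpy as np

def robust_t(x):
    # Robustified t-statistic: median / (consistent MAD scale / sqrt(n)).
    n = len(x)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # 1.4826 makes MAD consistent for sigma
    return med / (mad / np.sqrt(n))

# Monte Carlo approximation of the null (standard normal) small-sample
# distribution of the statistic, from which empirical quantiles are read off.
rng = np.random.default_rng(1)
n, reps = 10, 20000
stats = np.array([robust_t(rng.standard_normal(n)) for _ in range(reps)])
q = np.quantile(stats, [0.025, 0.975])   # empirical two-sided 95% critical values
```

The empirical quantiles `q` would then play the role of critical values for small-sample hypothesis tests and confidence intervals, as the abstract proposes.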
16,384 | Exponential error rates of SDP for block models: Beyond Grothendieck's inequality | In this paper we consider the cluster estimation problem under the Stochastic
Block Model. We show that the semidefinite programming (SDP) formulation for
this problem achieves an error rate that decays exponentially in the
signal-to-noise ratio. The error bound implies weak recovery in the sparse
graph regime with bounded expected degrees, as well as exact recovery in the
dense regime. An immediate corollary of our results yields error bounds under
the Censored Block Model. Moreover, these error bounds are robust, continuing
to hold under heterogeneous edge probabilities and a form of the so-called
monotone attack.
Significantly, this error rate is achieved by the SDP solution itself without
any further pre- or post-processing, and improves upon existing
polynomially-decaying error bounds proved using Grothendieck's inequality. Our
analysis has two key ingredients: (i) showing that the graph
has a well-behaved spectrum, even in the sparse regime, after discounting an
exponentially small number of edges, and (ii) an order-statistics argument that
governs the final error rate. Both arguments highlight the implicit
regularization effect of the SDP formulation.
| 1 | 0 | 1 | 1 | 0 | 0 |
16,385 | Large-Margin Classification in Hyperbolic Space | Representing data in hyperbolic space can effectively capture latent
hierarchical relationships. With the goal of enabling accurate classification
of points in hyperbolic space while respecting their hyperbolic geometry, we
introduce hyperbolic SVM, a hyperbolic formulation of support vector machine
classifiers, and elucidate through new theoretical work its connection to the
Euclidean counterpart. We demonstrate the performance improvement of hyperbolic
SVM for multi-class prediction tasks on real-world complex networks as well as
simulated datasets. Our work allows analytic pipelines that take the inherent
hyperbolic geometry of the data into account in an end-to-end fashion without
resorting to ill-fitting tools developed for Euclidean space.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,386 | The airglow layer emission altitude cannot be determined unambiguously from temperature comparison with lidars | I investigate the nightly mean emission height and width of the OH*(3-1)
layer by comparing nightly mean temperatures measured by the ground-based
spectrometer GRIPS 9 and the Na lidar at ALOMAR. The data set contains 42
coincident measurements between November 2010 and February 2014, when GRIPS 9
was in operation at the ALOMAR observatory (69.3$^\circ$N, 16.0$^\circ$E) in
northern Norway. To closely resemble the mean temperature measured by GRIPS 9,
I weight each nightly mean temperature profile measured by the lidar using
Gaussian distributions with 40 different centre altitudes and 40 different full
widths at half maximum. In principle, one can thus determine the altitude and
width of an airglow layer by finding the minimum temperature difference between
the two instruments. On most nights, several combinations of centre altitude
and width yield a temperature difference of $\pm$2 K. The generally assumed
altitude of 87 km and width of 8 km is never an unambiguous, good solution for
any of the measurements. Even for a fixed width of $\sim$8.4 km, one can
sometimes find several centre altitudes that yield equally good temperature
agreement. Weighted temperatures measured by lidar are not suitable to
determine unambiguously the emission height and width of an airglow layer.
However, when actual altitude and width data are lacking, a comparison with
lidars can provide an estimate of how representative a measured rotational
temperature is of an assumed altitude and width. I found the rotational
temperature to represent the temperature at the commonly assumed altitude of
87.4 km and width of 8.4 km to within $\pm$16 K, on average. This is not a
measurement uncertainty.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,387 | Microscopic origin of the mobility enhancement at a spinel/perovskite oxide heterointerface revealed by photoemission spectroscopy | The spinel/perovskite heterointerface $\gamma$-Al$_2$O$_3$/SrTiO$_3$ hosts a
two-dimensional electron system (2DES) with electron mobilities exceeding those
in its all-perovskite counterpart LaAlO$_3$/SrTiO$_3$ by more than an order of
magnitude despite the abundance of oxygen vacancies which act as electron
donors as well as scattering sites. By means of resonant soft x-ray
photoemission spectroscopy and \textit{ab initio} calculations we reveal the
presence of a sharply localized type of oxygen vacancies at the very interface
due to the local breaking of the perovskite symmetry. We explain the
extraordinarily high mobilities by reduced scattering resulting from the
preferential formation of interfacial oxygen vacancies and spatial separation
of the resulting 2DES in deeper SrTiO$_3$ layers. Our findings comply with
transport studies and pave the way towards defect engineering at interfaces of
oxides with different crystal structures.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,388 | Ordered Monoids: Languages and Relations | We give a finite axiomatization for the variety generated by relational,
integral ordered monoids. As a corollary we get a finite axiomatization for the
language interpretation as well.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,389 | On the secrecy gain of $\ell$-modular lattices | We show that for every $\ell>1$, there is a counterexample to the
$\ell$-modular secrecy function conjecture by Oggier, Solé and Belfiore.
These counterexamples all satisfy the modified conjecture by Ernvall-Hytönen
and Sethuraman. Furthermore, we provide a method to prove or disprove the
modified conjecture for any given $\ell$-modular lattice rationally equivalent
to a suitable amount of copies of $\mathbb{Z}\oplus \sqrt{\ell}\,\mathbb{Z}$
with $\ell \in \{3,5,7,11,23\}$. We also provide a variant of the method for
strongly $\ell$-modular lattices when $\ell\in \{6,14,15\}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,390 | Extremal invariant polynomials not satisfying the Riemann hypothesis | Zeta functions for linear codes were defined by Iwan Duursma in 1999. They
were generalized to the case of some invariant polynomials by the present
author. One of the most important problems is whether extremal weight
enumerators satisfy the Riemann hypothesis. In this article, we show there
exist extremal polynomials of the weight enumerator type which are invariant
under the MacWilliams transform and do not satisfy the Riemann hypothesis.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,391 | Fourier analysis of serial dependence measures | Classical spectral analysis is based on the discrete Fourier transform of the
auto-covariances. In this paper we investigate the asymptotic properties of new
frequency domain methods where the auto-covariances in the spectral density are
replaced by alternative dependence measures which can be estimated by
U-statistics. An interesting example is given by Kendall{'}s $\tau$ , for which
the limiting variance exhibits a surprising behavior.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,392 | HVACKer: Bridging the Air-Gap by Attacking the Air Conditioning System | Modern corporations physically separate their sensitive computational
infrastructure from public or other accessible networks in order to prevent
cyber-attacks. However, attackers still manage to infect these networks, either
by means of an insider or by infiltrating the supply chain. Therefore, an
attacker's main challenge is to determine a way to command and control the
compromised hosts that are isolated from an accessible network (e.g., the
Internet).
In this paper, we propose a new adversarial model that shows how an air
gapped network can receive communications over a covert thermal channel.
Concretely, we show how attackers may use a compromised air-conditioning system
(connected to the Internet) to send commands to infected hosts within an
air-gapped network. Since thermal communication protocols are a rather
unexplored domain, we propose a novel line-encoding and protocol suitable for
this type of channel. Moreover, we provide experimental results to demonstrate
the covert channel's feasibility, and to calculate the channel's bandwidth.
Lastly, we offer a forensic analysis and propose various ways this channel can
be detected and prevented.
We believe that this study details a previously unseen vector of attack that
security experts should be aware of.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,393 | Koszul binomial edge ideals of pairs of graphs | We study the Koszul property of a standard graded $K$-algebra $R$ defined by
the binomial edge ideal of a pair of graphs $(G_1,G_2)$. We show that the
following statements are equivalent: (i) $R$ is Koszul; (ii) the defining ideal
$J_{G_1,G_2}$ of $R$ has a quadratic Gröbner basis; (iii) the graded maximal
ideal of $R$ has linear quotients with respect to a suitable order of its
generators.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,394 | On Loss Functions for Deep Neural Networks in Classification | Deep neural networks are currently among the most commonly used classifiers.
Despite easily achieving very good performance, one of the best selling points
of these models is their modular design - one can conveniently adapt their
architecture to specific needs, change connectivity patterns, attach
specialised layers, and experiment with a large number of activation functions,
normalisation schemes, and many others. While one can find an impressively wide
spread of configurations for almost every aspect of deep nets, one element is,
in the authors' opinion, underrepresented: when solving classification
problems, the vast majority of papers and applications simply use the log loss.
In this paper we investigate how particular choices of loss functions affect
deep models and their learning dynamics, as well as the resulting classifiers'
robustness to various effects. We perform experiments on classical
datasets, as well as provide some additional, theoretical insights into the
problem. In particular we show that L1 and L2 losses are, quite surprisingly,
justified classification objectives for deep nets, by providing probabilistic
interpretation in terms of expected misclassification. We also introduce two
losses which are not typically used as deep nets objectives and show that they
are viable alternatives to the existing ones.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,395 | Are theoretical results 'Results'? | Yes.
| 0 | 0 | 0 | 0 | 1 | 0 |
16,396 | Self-supervised learning of visual features through embedding images into text topic spaces | End-to-end training from scratch of current deep architectures for new
computer vision problems would require Imagenet-scale datasets, and this is not
always possible. In this paper we present a method that is able to take
advantage of freely available multi-modal content to train computer vision
algorithms without human supervision. We put forward the idea of performing
self-supervised learning of visual features by mining a large scale corpus of
multi-modal (text and image) documents. We show that discriminative visual
features can be learnt efficiently by training a CNN to predict the semantic
context in which a particular image is more probable to appear as an
illustration. For this we leverage the hidden semantic structures discovered in
the text corpus with a well-known topic modeling technique. Our experiments
demonstrate state-of-the-art performance in image classification, object
detection, and multi-modal retrieval compared to recent self-supervised or
natural-supervised approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,397 | Detector sampling of optical/IR spectra: how many pixels per FWHM? | Most optical and IR spectra are now acquired using detectors with
finite-width pixels in a square array. This paper examines the effects of such
pixellation, using computed simulations to illustrate the effects which most
concern the astronomer end-user. Coarse sampling increases the random noise
errors in wavelength by typically 10 - 20% at 2 pixels/FWHM, but with wide
variation depending on the functional form of the instrumental Line Spread
Function (LSF) and on the pixel phase. Line widths are even more strongly
affected at low sampling frequencies. However, the noise in fitted peak
amplitudes is minimally affected. Pixellation has a substantial but complex
effect on the ability to see a relative minimum between two closely-spaced
peaks (or relative maximum between two absorption lines). The consistent scale
of resolving power presented by Robertson (2013) is extended to cover
pixellated spectra. The systematic bias errors in wavelength introduced by
pixellation are examined. While they may be negligible for smooth well-sampled
symmetric LSFs, they are very sensitive to asymmetry and high spatial frequency
substructure. The Modulation Transfer Function for sampled data is shown to
give a useful indication of the extent of improperly sampled signal in an LSF.
The common maxim that 2 pixels/FWHM is the Nyquist limit is incorrect and most
LSFs will exhibit some aliasing at this sample frequency. While 2 pixels/FWHM
is often an acceptable minimum for moderate signal/noise work, it is preferable
to carry out simulations for any actual or proposed LSF to find the effects of
sampling frequency. Where end-users have a choice of sampling frequencies,
through on-chip binning and/or spectrograph configurations, the instrument user
manual should include an examination of their effects. (Abridged)
| 0 | 1 | 0 | 0 | 0 | 0 |
16,398 | Accurate Single Stage Detector Using Recurrent Rolling Convolution | Most of the recent successful methods in accurate object detection and
localization used some variants of R-CNN style two stage Convolutional Neural
Networks (CNN) where plausible regions were proposed in the first stage then
followed by a second stage for decision refinement. Despite the simplicity of
training and the efficiency in deployment, single stage detection methods have
not been as competitive when evaluated on benchmarks that consider mAP at high
IoU thresholds. In this paper, we proposed a novel single stage end-to-end
trainable object detection network to overcome this limitation. We achieved
this by introducing Recurrent Rolling Convolution (RRC) architecture over
multi-scale feature maps to construct object classifiers and bounding box
regressors which are "deep in context". We evaluated our method in the
challenging KITTI dataset which measures methods under IoU threshold of 0.7. We
showed that with RRC, a single reduced VGG-16 based model already significantly
outperformed all the previously published results. At the time this paper was
written our models ranked the first in KITTI car detection (the hard level),
the first in cyclist detection and the second in pedestrian detection. These
results were not reached by the previous single stage methods. The code is
publicly available.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,399 | On separable higher Gauss maps | We study the $m$-th Gauss map in the sense of F.~L.~Zak of a projective
variety $X \subset \mathbb{P}^N$ over an algebraically closed field in any
characteristic. For all integer $m$ with $n:=\dim(X) \leq m < N$, we show that
the contact locus on $X$ of a general tangent $m$-plane is a linear variety if
the $m$-th Gauss map is separable. We also show that for smooth $X$ with $n <
N-2$, the $(n+1)$-th Gauss map is birational if it is separable, unless $X$ is
the Segre embedding $\mathbb{P}^1 \times \mathbb{P}^n \subset
\mathbb{P}^{2n-1}$. This is related to L. Ein's classification of varieties
with small dual varieties in characteristic zero.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,400 | Diffeomorphisms of the closed unit disc converging to the identity | If $\mathcal{G}$ is the group (under composition) of diffeomorphisms $f :
{\bar{D}}(0;1) \rightarrow {\bar{D}}(0;1)$ of the closed unit disc
${\bar{D}}(0;1)$ which are the identity map $id : {\bar{D}}(0;1) \rightarrow
{\bar{D}}(0;1)$ on the closed unit circle and satisfy the condition $det(J(f))
> 0$, where $J(f)$ is the Jacobian matrix of $f$ or (equivalently) the
Fréchet derivative of $f$, then $\mathcal{G}$ equipped with the metric
$d_{\mathcal{G}}(f,g) = \Vert f-g \Vert_{\infty } + \Vert J(f) - J(g)
\Vert_{\infty }$, where $f$, $g$ range over $\mathcal{G}$, is a metric space in
which $d_{\mathcal{G}} \left( f_{t} , id \right) \rightarrow 0$ as $t
\rightarrow 1^{+}$, where $f_{t}(z) = \frac{ tz }{ 1 + (t-1) \vert z \vert }$,
whenever $z \in {\bar{D}}(0;1)$ and $t \geq 1$.
| 0 | 0 | 1 | 0 | 0 | 0 |