ID (int64, 1-21k) | TITLE (string, 7-239 chars) | ABSTRACT (string, 7-2.76k chars) | Computer Science (int64, 0-1) | Physics (int64, 0-1) | Mathematics (int64, 0-1) | Statistics (int64, 0-1) | Quantitative Biology (int64, 0-1) | Quantitative Finance (int64, 0-1)
---|---|---|---|---|---|---|---|---
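The rows below follow the schema above: an integer ID, a paper title, its abstract, and six binary topic indicators. For readers who want to work with such records programmatically, here is a minimal sketch of loading and inspecting a table with this schema using pandas. The filename `arxiv_topic_dataset.csv` is a placeholder assumption, not an official artifact of this dataset; only the column names are taken from the header above.

```python
import pandas as pd

# Placeholder filename; assumes the table has been exported as CSV
# with exactly the columns listed in the header above.
df = pd.read_csv("arxiv_topic_dataset.csv")

# The six topic columns are binary (0/1) indicators; a paper may carry
# several topics at once (multi-label classification).
label_cols = [
    "Computer Science", "Physics", "Mathematics",
    "Statistics", "Quantitative Biology", "Quantitative Finance",
]

print(df[label_cols].sum())                     # papers per topic
print((df[label_cols].sum(axis=1) > 1).mean())  # fraction with multiple labels

# Example lookup: titles tagged both Computer Science and Statistics.
cs_stat = df[(df["Computer Science"] == 1) & (df["Statistics"] == 1)]
print(cs_stat[["ID", "TITLE"]].head())
```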
18,301 | Delooping the functor calculus tower | We study a connection between mapping spaces of bimodules and of
infinitesimal bimodules over an operad. As the main application and motivation of
our work, we produce an explicit delooping of the manifold calculus tower
associated to the space of smooth maps $D^{m}\rightarrow D^{n}$ of discs,
$n\geq m$, avoiding any given multisingularity and coinciding with the standard
inclusion near $\partial D^{m}$. In particular, we give a new proof of the
delooping of the space of disc embeddings in terms of little discs operads maps
with the advantage that it can be applied to more general mapping spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,302 | Area Law Violations and Quantum Phase Transitions in Modified Motzkin Walk Spin Chains | Area law violations for entanglement entropy in the form of a square root have
recently been studied for one-dimensional frustration-free quantum systems
based on the Motzkin walks and their variations. Here we consider a Motzkin
walk with a different Hilbert space on each step of the walk spanned by
elements of a {\it Symmetric Inverse Semigroup} with the direction of each step
governed by its algebraic structure. This change alters the number of paths
allowed in the Motzkin walk and introduces a ground state degeneracy sensitive
to boundary perturbations. We study the frustration-free spin chains based on
three symmetric inverse semigroups, $\mathcal{S}^3_1$, $\mathcal{S}^3_2$ and $\mathcal{S}^2_1$. The
systems based on $\mathcal{S}^3_1$ and $\mathcal{S}^3_2$ provide examples of quantum phase
transitions in one dimension, with the former exhibiting a transition between
the area law and a logarithmic violation of the area law and the latter
providing an example of a transition from logarithmic scaling to square-root
scaling in the system size, mimicking a colored $\mathcal{S}^3_1$ system. The system
with $\mathcal{S}^2_1$ is much simpler and produces states that continue to obey the
area law.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,303 | Diffusion and confusion of chaotic iteration based hash functions | To guarantee the integrity and security of data transmitted through the
Internet, hash functions are fundamental tools. However, recent research has
shown that security flaws exist in the most widely used hash functions, so a
new way to improve their security is urgently needed. In this
article, we propose new hash functions based on chaotic iterations, which have
chaotic properties as defined by Devaney. The corresponding diffusion and
confusion analyses are provided, and a comparative study between the proposed
hash functions is carried out to assess their applicability in various
security contexts.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,304 | L1188: a promising candidate of cloud-cloud collision triggering the formation of the low- and intermediate-mass stars | We present a new large-scale (4 square degrees) simultaneous $^{12}$CO,
$^{13}$CO, and C$^{18}$O ($J$=1$-$0) mapping of L1188 with the PMO 13.7-m
telescope. Our observations have revealed that L1188 consists of two nearly
orthogonal filamentary molecular clouds at two clearly separated velocities.
Toward the intersection showing large velocity spreads, we find several
bridging features connecting the two clouds in velocity, and an open arc
structure which exhibits high excitation temperatures, enhanced $^{12}$CO and
$^{13}$CO emission, and broad $^{12}$CO line wings. This agrees with the
scenario that the two clouds are colliding with each other. The distribution of
young stellar object (YSO) candidates implies an enhancement of star formation
in the intersection of the two clouds. We suggest that a cloud-cloud collision
happened in L1188 about 1~Myr ago, possibly triggering the formation of low-
and intermediate-mass YSOs in the intersection.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,305 | Asymmetric Matrix-Valued Covariances for Multivariate Random Fields on Spheres | Matrix-valued covariance functions are crucial to geostatistical modeling of
multivariate spatial data. The classical assumption of symmetry of a
multivariate covariance function is overly restrictive and has been considered
unrealistic for most real data applications. Despite this, the
literature on asymmetric covariance functions has been very sparse. In
particular, there is some work related to asymmetric covariances on Euclidean
spaces, depending on the Euclidean distance. However, for data collected over
large portions of planet Earth, the most natural spatial domain is a sphere,
with the corresponding geodesic distance being the natural metric. In this
work, we propose a strategy based on spatial rotations to generate asymmetric
covariances for multivariate random fields on the $d$-dimensional unit sphere.
We illustrate through simulations as well as real data analysis that our
proposal allows us to achieve improvements in predictive performance in
comparison to the symmetric counterpart.
| 0 | 0 | 1 | 1 | 0 | 0 |
18,306 | A dynamic stochastic blockmodel for interaction lengths | We propose a new dynamic stochastic blockmodel that focuses on the analysis
of interaction lengths in networks. The model does not rely on a discretization
of the time dimension and may be used to analyze networks that evolve
continuously over time. The framework relies on a clustering structure on the
nodes, whereby two nodes belonging to the same latent group tend to create
interactions and non-interactions of similar lengths. We introduce a fast
variational expectation-maximization algorithm to perform inference, and adapt
a widely used clustering criterion to perform model choice. Finally, we test
our methodology on artificial data, and present a demonstration on a dataset
concerning face-to-face interactions between students in a high school.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,307 | Do we agree on user interface aesthetics of Android apps? | Context: Visual aesthetics is increasingly seen as an essential factor in
perceived usability, interaction, and overall appraisal of user interfaces
especially with respect to mobile applications. Yet, a question that remains is
how to assess, and to what extent users agree on, visual aesthetics. Objective:
This paper analyzes the inter-rater agreement on visual aesthetics of user
interfaces of Android apps as a basis for guidelines and evaluation models.
Method: We systematically collected ratings on the visual aesthetics of 100
user interfaces of Android apps from 10 participants and analyzed the frequency
distribution, reliability and influencing design aspects. Results: In general,
user interfaces of Android apps are perceived as more ugly than beautiful. Yet,
raters only moderately agree on the visual aesthetics. Disagreements seem to be
related to subtle differences with respect to layout, shapes, colors,
typography, and background images. Conclusion: Visual aesthetics is a key
factor for the success of apps. However, the considerable disagreement of
raters on the perceived visual aesthetics indicates the need for a better
understanding of this software quality with respect to mobile apps.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,308 | A Solution for Large-scale Multi-object Tracking | A large-scale multi-object tracker based on the generalised labeled
multi-Bernoulli (GLMB) filter is proposed. The algorithm is capable of tracking
a very large, unknown and time-varying number of objects simultaneously, in the
presence of a high number of false alarms, as well as misdetections and
measurement origin uncertainty due to closely spaced objects. The algorithm is
demonstrated on a simulated large-scale tracking scenario, where the peak
number of objects appearing simultaneously exceeds one million. To evaluate the
performance of the proposed tracker, we also introduce a new method of applying
the optimal sub-pattern assignment (OSPA) metric, and an efficient strategy for
its evaluation in large-scale scenarios.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,309 | Empirical Survival Jensen-Shannon Divergence as a Goodness-of-Fit Measure for Maximum Likelihood Estimation and Curve Fitting | The coefficient of determination, known as $R^2$, is commonly used as a
goodness-of-fit criterion for fitting linear models. $R^2$ is somewhat
controversial when fitting nonlinear models, although it may be generalised on
a case-by-case basis to deal with specific models such as the logistic model.
Assume we are fitting a parametric distribution to a data set using, say, the
maximum likelihood estimation method. A general approach to measure the
goodness-of-fit of the fitted parameters, which we advocate herein, is to use a
nonparametric measure for model comparison between the raw data and the fitted
model. In particular, for this purpose we put forward the {\em Survival
Jensen-Shannon divergence} ($SJS$) and its empirical counterpart (${\cal
E}SJS$) as a metric which is bounded, and is a natural generalisation of the
Jensen-Shannon divergence. We demonstrate, via a straightforward procedure
making use of the ${\cal E}SJS$, that it can be used as part of maximum
likelihood estimation or curve fitting as a measure of goodness-of-fit,
including the construction of a confidence interval for the fitted parametric
distribution. Furthermore, we show the validity of the proposed method with
simulated data, and three empirical data sets of interest to researchers in
sociophysics and econophysics.
| 0 | 0 | 0 | 0 | 0 | 1 |
18,310 | Cooperative Localisation of a GPS-Denied UAV using Direction of Arrival Measurements | A GPS-denied UAV (Agent B) is localised through INS alignment with the aid of
a nearby GPS-equipped UAV (Agent A), which broadcasts its position at several
time instants. Agent B measures the signals' direction of arrival with respect
to Agent B's inertial navigation frame. Semidefinite programming and the
Orthogonal Procrustes algorithm are employed, and accuracy is improved through
maximum likelihood estimation. The method is validated using flight data and
simulations. A three-agent extension is explored.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,311 | Optical Surface Properties and their RF Limitations of European XFEL Cavities | The inner surface of superconducting cavities plays a crucial role in achieving
the highest accelerating fields and low losses. The industrial fabrication of
cavities for the European X-Ray Free Electron Laser (XFEL) and the
International Linear Collider (ILC) HiGrade Research Project allowed for an
investigation of this interplay. For the serial inspection of the inner
surface, the optical inspection robot OBACHT was constructed, and, to analyze the
large amount of data represented in the images of the inner surface, an image
processing and analysis code was developed and new variables to describe the
cavity surface were obtained. This quantitative analysis identified
vendor-specific surface properties which allow quality control and
assurance during production. In addition, a strong negative correlation of
$\rho= -0.93$ with a significance of $6\,\sigma$ between the integrated grain
boundary area $\sum{\mathrm{A}}$ and the maximal achievable accelerating
field $\mathrm{E_{acc,max}}$ has been found.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,312 | A comment on `An improved macroscale model for gas slip flow in porous media' | In a recent paper by Lasseux, Valdés-Parada and Porter (J.~Fluid~Mech.
\textbf{805} (2016) 118-146), it is found that the apparent gas permeability of
the porous medium is a nonlinear function of the Knudsen number. However, this
result is highly questionable, because the adopted Navier-Stokes equations and
the first-order velocity-slip boundary condition are first-order (in terms of
the Knudsen number) approximations of the Boltzmann equation and the kinetic
boundary condition for rarefied gas flows. Our numerical simulations based on
the Bhatnagar-Gross-Krook kinetic equation and regularized 20-moment equations
prove that the Navier-Stokes equations with the first-order velocity-slip
boundary condition are only accurate in the limit of very small Knudsen number,
where the apparent gas permeability is a linear function of the Knudsen number.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,313 | AdaComp : Adaptive Residual Gradient Compression for Data-Parallel Distributed Training | Highly distributed training of Deep Neural Networks (DNNs) on future compute
platforms (offering 100s of TeraOps/s of computational capacity) is expected to
be severely communication constrained. To overcome this limitation, new
gradient compression techniques are needed that are computationally friendly,
applicable to a wide variety of layers seen in Deep Neural Networks and
adaptable to variations in network architectures as well as their
hyper-parameters. In this paper we introduce a novel technique - the Adaptive
Residual Gradient Compression (AdaComp) scheme. AdaComp is based on localized
selection of gradient residues and automatically tunes the compression rate
depending on local activity. We show excellent results on a wide spectrum of
state-of-the-art Deep Learning models in multiple domains (vision, speech,
language), datasets (MNIST, CIFAR10, ImageNet, BN50, Shakespeare), optimizers
(SGD with momentum, Adam) and network parameters (number of learners,
minibatch-size etc.). Exploiting both sparsity and quantization, we demonstrate
end-to-end compression rates of ~200X for fully-connected and recurrent layers,
and ~40X for convolutional layers, without any noticeable degradation in model
accuracies.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,314 | Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case | We present a halo-independent determination of the unmodulated signal
corresponding to the DAMA modulation if interpreted as due to dark matter
weakly interacting massive particles (WIMPs). First we show how a modulated
signal gives information on the WIMP velocity distribution function in the
Galactic rest frame, from which the unmodulated signal descends. Then we
perform a mathematically-sound profile likelihood analysis in which we profile
the likelihood over a continuum of nuisance parameters (namely, the WIMP
velocity distribution). As a first application of the method, which is very
general and valid for any class of velocity distributions, we restrict the
analysis to velocity distributions that are isotropic in the Galactic frame. In
this way we obtain halo-independent maximum-likelihood estimates and confidence
intervals for the DAMA unmodulated signal. We find that the estimated
unmodulated signal is in line with expectations for a WIMP-induced modulation
and is compatible with the DAMA background+signal rate. Specifically, for the
isotropic case we find that the modulated amplitude ranges between a few
percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,315 | Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games? | Deep reinforcement learning has achieved many recent successes, but our
understanding of its strengths and limitations is hampered by the lack of rich
environments in which we can fully characterize optimal behavior, and
correspondingly diagnose individual actions against such a characterization.
Here we consider a family of combinatorial games, arising from work of Erdos,
Selfridge, and Spencer, and we propose their use as environments for evaluating
and comparing different approaches to reinforcement learning. These games have
a number of appealing features: they are challenging for current learning
approaches, but they form (i) a low-dimensional, simply parametrized
environment where (ii) there is a linear closed form solution for optimal
behavior from any state, and (iii) the difficulty of the game can be tuned by
changing environment parameters in an interpretable way. We use these
Erdos-Selfridge-Spencer games not only to compare different algorithms, but
also to test for generalization, make comparisons to supervised learning, analyse
multi-agent play, and even develop a self-play algorithm. Code can be found at:
this https URL
| 1 | 0 | 0 | 1 | 0 | 0 |
18,316 | Memristor equations: incomplete physics and undefined passivity/activity | In his seminal paper, Chua presented a fundamental physical claim by
introducing the memristor, "The missing circuit element". The memristor
equations were originally supposed to represent a passive circuit element
because, with active circuitry, arbitrary elements can be realized without
limitations. Therefore, if the memristor equations do not guarantee that the
circuit element can be realized by a passive system, the fundamental physics
claim about the memristor as "missing circuit element" loses all its weight.
The question of passivity/activity belongs to physics, thus we incorporate
thermodynamics into the study of this problem. We show that the memristor
equations are physically incomplete regarding the problem of
passivity/activity. As a consequence, the claim that the present memristor
functions describe a passive device leads to unphysical results, such as
violating the Second Law of thermodynamics, in an infinitely large number of
cases. The seminal memristor equations cannot introduce a new physical circuit
element without making the model more physical, such as by providing the
Fluctuation-Dissipation Theory of memristors.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,317 | Astrophysical uncertainties on the local dark matter distribution and direct detection experiments | The differential event rate in Weakly Interacting Massive Particle (WIMP)
direct detection experiments depends on the local dark matter density and
velocity distribution. Accurate modelling of the local dark matter distribution
is therefore required to obtain reliable constraints on the WIMP particle
physics properties. Data analyses typically use a simple Standard Halo Model
which might not be a good approximation to the real Milky Way (MW) halo. We
review observational determinations of the local dark matter density, circular
speed and escape speed and also studies of the local dark matter distribution
in simulated MW-like galaxies. We discuss the effects of the uncertainties in
these quantities on the energy spectrum and its time and direction dependence.
Finally we conclude with an overview of various methods for handling these
astrophysical uncertainties.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,318 | On Reliability-Aware Server Consolidation in Cloud Datacenters | In the past few years, datacenter (DC) energy consumption has become an
important issue in the technology world. Server consolidation using virtualization
and virtual machine (VM) live migration allows cloud DCs to improve resource
utilization and hence energy efficiency. In order to save energy, consolidation
techniques try to turn off idle servers, but, because of workload
fluctuations, these offline servers must be turned back on to support the
increased resource demands. These repeated on-off cycles could affect the
hardware reliability and wear-and-tear of servers and as a result, increase the
maintenance and replacement costs. In this paper we propose a holistic
mathematical model for reliability-aware server consolidation with the
objective of minimizing total DC costs including energy and reliability costs.
In fact, we try to minimize the number of active PMs and racks, in a
reliability-aware manner. We formulate the problem as a Mixed Integer Linear
Programming (MILP) model, which is NP-complete. Finally, we evaluate
the performance of our approach in different scenarios using extensive
numerical MATLAB simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,319 | A novel quantum dynamical approach in electron microscopy combining wave-packet propagation with Bohmian trajectories | The numerical analysis of the diffraction features rendered by transmission
electron microscopy (TEM) typically relies either on classical approximations
(Monte Carlo simulations) or quantum paraxial tomography (the multislice method
and any of its variants). Although numerically advantageous (relatively
simple implementations and low computational costs), they involve important
approximations and thus their range of applicability is limited. To overcome
such limitations, an alternative, more general approach is proposed, based on
an optimal combination of wave-packet propagation with the on-the-fly
computation of associated Bohmian trajectories. For the sake of clarity, but
without loss of generality, the approach is used to analyze the diffraction of
an electron beam by a thin aluminum slab as a function of three different
incidence (work) conditions which are of interest in electron microscopy: the
probe width, the tilting angle, and the beam energy. Specifically, it is shown
that, because there is a dependence on particular thresholds of the beam
energy, this approach provides a clear description of the diffraction process
at any energy, revealing at the same time any diversion of the beam inside the
material towards directions that cannot be accounted for by other conventional
methods, which is of much interest when dealing with relatively low energies
and/or relatively large tilting angles.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,320 | Convex equipartitions of colored point sets | We show that any $d$-colored set of points in general position in
$\mathbb{R}^d$ can be partitioned into $n$ subsets with disjoint convex hulls
such that the set of points and all color classes are partitioned as evenly as
possible. This extends results by Holmsen, Kynčl & Valculescu (2017) and
establishes a special case of their general conjecture. Our proof utilizes a
result obtained independently by Soberón and by Karasev in 2010, on
simultaneous equipartitions of $d$ continuous measures in $\mathbb{R}^d$ by $n$
convex regions. This gives a convex partition of $\mathbb{R}^d$ with the
desired properties, except that points may lie on the boundaries of the
regions. In order to resolve the ambiguous assignment of these points, we set
up a network flow problem. The equipartition of the continuous measures gives a
fractional flow. The existence of an integer flow then yields the desired
partition of the point set.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,321 | Logic Programming Petri Nets | With the purpose of modeling, specifying and reasoning in an integrated
fashion with procedural and declarative aspects (both commonly present in cases
or scenarios), the paper introduces Logic Programming Petri Nets (LPPN), an
extension to the Petri Net notation providing an interface to logic programming
constructs. Two semantics are presented. First, a hybrid operational semantics
that separates the process component, treated with Petri nets, from the
constraint/terminological component, treated with Answer Set Programming (ASP).
Second, a denotational semantics that maps the notation fully to ASP, via Event
Calculus. These two alternative specifications enable a preliminary evaluation
in terms of reasoning efficiency.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,322 | Unique Continuation through Hyperplane for Higher Order Parabolic and Schrödinger Equations | We consider the higher order parabolic operator $\partial_t+(-\Delta_x)^m$ and
the higher order Schrödinger operator $i^{-1}\partial_t+(-\Delta_x)^m$ in
$X=\{(t,x)\in\mathbb{R}^{1+n};~|t|<A,|x_n|<B\}$ where $m$ is any positive
integer. Under certain lower order and regularity assumptions, we prove that if
the solution for linear problem vanishes when $x_n>0$, then the solution
vanishes in $X$. Such results are given globally, and we also prove some
related local results.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,323 | Fashion Conversation Data on Instagram | The fashion industry is establishing its presence on a number of
visual-centric social media like Instagram. This creates an interesting clash
as fashion brands that have traditionally practiced highly creative and
editorialized image marketing now have to engage with people on the platform
that epitomizes impromptu, real-time conversation. What kinds of fashion images
do brands and individuals share and what are the types of visual features that
attract likes and comments? In this research, we take both quantitative and
qualitative approaches to answer these questions. We analyze visual features of
fashion posts first via manual tagging and then via training on convolutional
neural networks. The classified images were examined across four types of
fashion brands: mega couture, small couture, designers, and high street. We
find that while product-only images make up the majority of fashion
conversation in terms of volume, body snaps and face images that portray
fashion items more naturally tend to receive a larger number of likes and
comments by the audience. Our findings bring insights into building an
automated tool for classifying or generating influential fashion information.
We make our novel dataset of 24,752 labeled images on fashion conversations,
containing visual and textual cues, available for the research community.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,324 | Viscous flow in a soft valve | Fluid-structure interactions are ubiquitous in nature and technology.
However, the systems are often so complex that numerical simulations or ad hoc
assumptions must be used to gain insight into the details of the complex
interactions between the fluid and solid mechanics. In this paper, we present
experiments and theory on viscous flow in a simple bioinspired soft valve which
illustrate essential features of interactions between hydrodynamic and elastic
forces at low Reynolds numbers. The setup comprises a sphere connected to a
spring located inside a tapering cylindrical channel. The spring is aligned
with the central axis of the channel and a pressure drop is applied across the
sphere, thus forcing the liquid through the narrow gap between the sphere and
the channel walls. The sphere's equilibrium position is determined by a balance
between spring and hydrodynamic forces. Since the gap thickness changes with
the sphere's position, the system has a pressure-dependent hydraulic
resistance. This leads to a non-linear relation between applied pressure and
flow rate: flow initially increases with pressure, but decreases when the
pressure exceeds a certain critical value as the gap closes. To rationalize
these observations, we propose a mathematical model that reduces the complexity
of the flow to a two-dimensional lubrication approximation. A closed-form
expression for the pressure-drop/flow-rate relation is obtained which reveals that the
flow rate $Q$ depends on the pressure drop $\Delta p$, sphere radius $a$, gap
thickness $h_0$, and viscosity $\eta$ as $Q\sim \eta^{-1}
a^{1/2}h_0^{5/2}\left(\Delta p_c-\Delta p\right)^{5/2}\Delta p$, where the
critical pressure $\Delta p_c$ scales with the spring constant $k$ and sphere
radius $a$ as $\Delta p_c\sim k a^{-2}$. These predictions compared favorably
to the results of our experiments with no free parameters.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,325 | Superheavy Thermal Dark Matter and Primordial Asymmetries | The early universe could feature multiple reheating events, leading to jumps
in the visible sector entropy density that dilute both particle asymmetries and
the number density of frozen-out states. In fact, late time entropy jumps are
usually required in models of Affleck-Dine baryogenesis, which typically
produces an initial particle-antiparticle asymmetry that is much too large. An
important consequence of late time dilution is that a smaller dark matter
annihilation cross section is needed to obtain the observed dark matter relic
density. For cosmologies with high scale baryogenesis, followed by
radiation-dominated dark matter freeze-out, we show that the perturbative
unitarity mass bound on thermal relic dark matter is relaxed to $10^{10}$ GeV.
We proceed to study superheavy asymmetric dark matter models, made possible by
a sizable entropy injection after dark matter freeze-out, and identify how the
Affleck-Dine mechanism would generate the baryon and dark asymmetries.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,326 | Time-dependent population imaging for solid high harmonic generation | We propose an intuitive method, called time-dependent population imaging
(TDPI), to map the dynamical processes of high harmonic generation (HHG) in
solids by solving the time-dependent Schrödinger equation (TDSE). It is
shown that the real-time dynamical characteristics of HHG in solids, such as
the instantaneous photon energies of emitted harmonics, can be read directly
from the energy-resolved population oscillations of electrons in the TDPIs.
Meanwhile, the short and long trajectories of solid HHG are illustrated clearly
from TDPI. By using the TDPI, we also investigate the effects of
carrier-envelope phase (CEP) in few-cycle pulses and intuitively demonstrate
the HHG dynamics driven by two-color fields. Our results show that the TDPI
provides a powerful tool to study the ultrafast dynamics in strong fields for
various laser-solid configurations and to gain insight into HHG processes in
solids.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,327 | Group Sparse Bayesian Learning for Active Surveillance on Epidemic Dynamics | Predicting epidemic dynamics is of great value in understanding and
controlling diffusion processes, such as infectious disease spread and
information propagation. This task is intractable, especially when surveillance
resources are very limited. To address the challenge, we study the problem of
active surveillance, i.e., how to identify a small portion of system components
as sentinels to effect monitoring, such that the epidemic dynamics of an entire
system can be readily predicted from the partial data collected by such
sentinels. We propose a novel measure, the gamma value, to identify the
sentinels by modeling a sentinel network with row sparsity structure. We design
a flexible group sparse Bayesian learning algorithm to mine the sentinel
network suitable for handling both linear and non-linear dynamical systems by
using the expectation maximization method and variational approximation. The
efficacy of the proposed algorithm is theoretically analyzed and empirically
validated using both synthetic and real-world data.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,328 | FNS: an event-driven spiking neural network framework for efficient simulations of large-scale brain models | Limitations in processing capabilities and memory of today's computers make
spiking neuron-based (human) whole-brain simulations inevitably characterized
by a compromise between bio-plausibility and computational cost. This translates
into brain models composed of a reduced number of neurons and a simplified
mathematical model of the neuron. Taking advantage of the sparse character of
brain-like computation, the event-driven technique allows us to carry out efficient
simulations of large-scale Spiking Neural Networks (SNN). The recent Leaky
Integrate-and-Fire with Latency (LIFL) spiking neuron model is event-driven
compatible and exhibits some realistic neuronal features, opening new horizons
in whole-brain modelling. In this paper we present FNS, a LIFL-based exact
event-driven spiking neural network framework implemented in Java and oriented
to whole-brain simulations. FNS combines spiking/synaptic whole-brain modelling
with the event-driven approach, allowing us to define heterogeneous modules and
multi-scale connectivity with delayed connections and plastic synapses,
providing fast simulations at the same time. A novel parallelization strategy
is also implemented in order to further speed up simulations. This paper
presents mathematical models, software implementation and simulation routines
on which FNS is based. Finally, a reduced brain network model (1400 neurons and
45000 synapses) is synthesized on the basis of real brain structural data, and
the resulting model activity is compared with associated brain functional
(source-space MEG) data. The conducted test shows a good match between the
activity of the model and that of the emulated subject, with outstanding simulation
times (about 20 s to simulate 4 s of activity on a normal PC). Dedicated
sections of stimuli editing and output synthesis allow the neuroscientist to
introduce and extract brain-like signals, respectively...
| 0 | 0 | 0 | 0 | 1 | 0 |
18,329 | Synchronization, phase slips and coherent structures in area-preserving maps | The problem of synchronization of coupled Hamiltonian systems exhibits
interesting features due to the non-uniform or mixed nature (regular and
chaotic) of the phase space. We study these features by investigating the
synchronization of unidirectionally coupled area-preserving maps coupled by the
Pecora-Carroll method. We find that coupled standard maps show complete
synchronization for values of the nonlinearity parameter at which regular
structures are still present in phase space. The distribution of
synchronization times has a power law tail indicating long synchronization
times for at least some of the synchronizing trajectories. With the
introduction of coherent structures using parameter perturbation in the system,
this distribution crosses over to exponential behavior, indicating shorter
synchronization times, and the number of initial conditions which synchronize
increases significantly, indicating an enhancement in the basin of
synchronization. On the other hand, coupled blinking vortex maps display both
phase synchronization and phase slips, depending on the location of the initial
conditions. We discuss the implications of our results.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,330 | Design of Real-time Semantic Segmentation Decoder for Automated Driving | Semantic segmentation remains a computationally intensive algorithm for
embedded deployment even with the rapid growth of computation power. Thus,
efficient network design is a critical aspect, especially for applications like
automated driving which require real-time performance. Recently, there has
been a lot of research on designing efficient encoders that are mostly task
agnostic. Unlike image classification and bounding box object detection tasks,
decoders are computationally expensive as well for the semantic segmentation task.
In this work, we focus on efficient design of the segmentation decoder and
assume that an efficient encoder is already designed to provide shared features
for a multi-task learning system. We design a novel efficient non-bottleneck
layer and a family of decoders which fit into a small run-time budget using
VGG10 as an efficient encoder. We demonstrate on our dataset that experimentation
with various design choices led to an improvement of 10\% from a baseline
performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,331 | Nonparametric covariance estimation for mixed longitudinal studies, with applications in midlife women's health | Motivated by applications of mixed longitudinal studies, where a group of
subjects entering the study at different ages (cross-sectional) are followed
for successive years (longitudinal), we consider nonparametric covariance
estimation with samples of noisy and partially-observed functional
trajectories. To ensure model identifiability and estimation consistency, we
introduce and carefully discuss the reduced rank and neighboring incoherence
condition. The proposed algorithm is based on a sequential-aggregation scheme,
which is non-iterative, with only basic matrix operations and closed-form
solutions in each step. The good performance of the proposed method is
supported by both theory and numerical experiments. We also apply the proposed
procedure to a midlife women's working memory study based on the data from the
Study of Women's Health Across the Nation (SWAN).
| 0 | 0 | 1 | 1 | 0 | 0 |
18,332 | Deep Learning for Real-time Gravitational Wave Detection and Parameter Estimation with LIGO Data | The recent Nobel-prize-winning detections of gravitational waves from merging
black holes and the subsequent detection of the collision of two neutron stars
in coincidence with electromagnetic observations have inaugurated a new era of
multimessenger astrophysics. To enhance the scope of this emergent science, we
proposed the use of deep convolutional neural networks for the detection and
characterization of gravitational wave signals in real-time. This method, Deep
Filtering, was initially demonstrated using simulated LIGO noise. In this
article, we present the extension of Deep Filtering using real data from the
first observing run of LIGO, for both detection and parameter estimation of
gravitational waves from binary black hole mergers with continuous data streams
from multiple LIGO detectors. We show for the first time that machine learning
can detect and estimate the true parameters of a real GW event observed by
LIGO. Our comparisons show that Deep Filtering is far more computationally
efficient than matched-filtering, while retaining similar sensitivity and lower
errors, allowing real-time processing of weak time-series signals in
non-stationary non-Gaussian noise, with minimal resources, and also enables the
detection of new classes of gravitational wave sources that may go unnoticed
with existing detection algorithms. This approach is uniquely suited to enable
coincident detection campaigns of gravitational waves and their multimessenger
counterparts in real-time.
| 1 | 1 | 0 | 0 | 0 | 0 |
18,333 | Note on local coisotropic Floer homology and leafwise fixed points | I outline a construction of a local Floer homology for a coisotropic
submanifold of a symplectic manifold and explain how it can be used to show
that leafwise fixed points of Hamiltonian diffeomorphisms exist.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,334 | Geometric Surface-Based Tracking Control of a Quadrotor UAV under Actuator Constraints | This paper presents contributions on nonlinear tracking control systems for a
quadrotor unmanned micro aerial vehicle. New controllers are proposed based on
nonlinear surfaces composed by tracking errors that evolve directly on the
nonlinear configuration manifold thus inherently including in the control
design the nonlinear characteristics of the SE(3) configuration space. In
particular, geometric surface-based controllers are developed, and through
rigorous stability proofs they are shown to have desirable closed loop
properties that are almost global. A region of attraction, independent of the
position error, is produced and its effects are analyzed. A strategy allowing
the quadrotor to achieve precise attitude tracking while simultaneously
following a desired position command and complying with actuator constraints in a
computationally inexpensive manner is derived. This important contribution
differentiates this work from existing Geometric Nonlinear Control System
solutions (GNCSs) since the commanded thrusts can be realized by the majority
of quadrotors produced by the industry. The new features of the proposed GNCSs
are illustrated by numerical simulations of aggressive maneuvers and a
comparison with a GNCSs from the bibliography.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,335 | Estimating the rate of defects under imperfect sampling inspection - a new approach | We consider the problem of estimating the rate of defects (mean number of
defects per item), given counts of defects detected by two independent
imperfect inspectors on a sample of items. In contrast with the well-known
method of Capture-Recapture, here we {\it{do not}} have information regarding
the number of defects jointly detected by {\it{both}} inspectors. We solve this
problem by constructing two types of estimators - a simple moment-type
estimator, and a more complicated maximum-likelihood estimator. The performance
of these estimators is studied analytically and by means of simulations. It is
shown that the maximum-likelihood estimator is superior to the moment-type
estimator. A systematic comparison with the Capture-Recapture method is also
made.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,336 | Measuring heavy-tailedness of distributions | Different questions related to the analysis of extreme values and outliers
arise frequently in practice. Excluding extremal observations and outliers is
not a good decision because they contain important information about the
observed distribution. The difficulties with their usage are usually related to
the estimation of the tail index in case it exists. There are many measures for
the center of the distribution, e.g. mean, mode, median. There are many
measures of the variance, asymmetry, and kurtosis, but there is no easy
characteristic for heavy-tailedness of the observed distribution. Here we
propose such a measure, give some examples and explore some of its properties.
This allows us to introduce a classification of the distributions, with respect
to their heavy-tailedness. The idea is to help and navigate practitioners for
accurate and easier work in the field of probability distributions.
Using the properties of the defined characteristics some distribution
sensitive extremal index estimators are proposed and their properties are
partially investigated.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,337 | Inductive $k$-independent graphs and $c$-colorable subgraphs in scheduling: A review | Inductive $k$-independent graphs generalize chordal graphs and have recently
been advocated in the context of interference-avoiding wireless communication
scheduling. The NP-hard problem of finding maximum-weight induced $c$-colorable
subgraphs, which is a generalization of finding maximum independent sets,
naturally occurs when selecting $c$ sets of pairwise non-conflicting jobs
(modeled as graph vertices). We investigate the parameterized complexity of
this problem on inductive $k$-independent graphs. We show that the Independent
Set problem is W[1]-hard even on 2-simplicial 3-minoes---a subclass of
inductive 2-independent graphs. In contrast, we prove that the more general
Maximum $c$-Colorable Subgraph problem is fixed-parameter tractable on
edge-wise unions of cluster and chordal graphs, which are 2-simplicial. In both
cases, the parameter is the solution size. Aside from this, we survey other
graph classes between inductive 1-independent and inductive 2-independent graphs
with applications in scheduling.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,338 | The word and order problems for self-similar and automata groups | We prove that the word problem is undecidable in functionally recursive
groups, and that the order problem is undecidable in automata groups, even
under the assumption that they are contracting.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,339 | Machine Teaching: A New Paradigm for Building Machine Learning Systems | The current processes for building machine learning systems require
practitioners with deep knowledge of machine learning. This significantly
limits the number of machine learning systems that can be created and has led
to a mismatch between the demand for machine learning systems and the ability
for organizations to build them. We believe that in order to meet this growing
demand for machine learning systems we must significantly increase the number
of individuals that can teach machines. We postulate that we can achieve this
goal by making the process of teaching machines easy, fast and above all,
universally accessible.
While machine learning focuses on creating new algorithms and improving the
accuracy of "learners", the machine teaching discipline focuses on the efficacy
of the "teachers". Machine teaching as a discipline is a paradigm shift that
follows and extends principles of software engineering and programming
languages. We put a strong emphasis on the teacher and the teacher's
interaction with data, as well as crucial components such as techniques and
design principles of interaction and visualization.
In this paper, we present our position regarding the discipline of machine
teaching and articulate fundamental machine teaching principles. We also
describe how, by decoupling knowledge about machine learning algorithms from
the process of teaching, we can accelerate innovation and empower millions of
new uses for machine learning models.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,340 | Totally geodesic submanifolds of Teichmuller space | We show that any totally geodesic submanifold of Teichmuller space of
dimension greater than one covers a totally geodesic subvariety, and only
finitely many totally geodesic subvarieties of dimension greater than one exist
in each moduli space.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,341 | Towards a splitting of the $K(2)$-local string bordism spectrum | We show that $K(2)$-locally, the smash product of the string bordism spectrum
and the spectrum $T_2$ splits into copies of Morava $E$-theories. Here, $T_2$
is related to the Thom spectrum of the canonical bundle over $\Omega SU(4)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,342 | Osmotic and diffusio-osmotic flow generation at high solute concentration. I. Mechanical approaches | In this paper, we explore various forms of osmotic transport in the regime of
high solute concentration. We consider both the osmosis across membranes and
diffusio-osmosis at solid interfaces, driven by solute concentration gradients.
We follow a mechanical point of view of osmotic transport, which allows us to
gain much insight into the local mechanical balance underlying osmosis. We
demonstrate in particular how the general expression of the osmotic pressure
for mixtures, as obtained classically from the thermodynamic framework, emerges
from the mechanical balance controlling non-equilibrium transport under solute
gradients. Expressions for the rejection coefficient of osmosis and the
diffusio-osmotic mobilities are accordingly obtained. These results generalize
existing ones in the dilute solute regime to mixtures with arbitrary
concentrations.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,343 | Accurate and Efficient Estimation of Small P-values with the Cross-Entropy Method: Applications in Genomic Data Analysis | Small $p$-values are often required to be accurately estimated in large scale
genomic studies for the adjustment of multiple hypothesis tests and the ranking
of genomic features based on their statistical significance. For those
complicated test statistics whose cumulative distribution functions are
analytically intractable, existing methods usually do not work well with small
$p$-values due to lack of accuracy or computational restrictions. We propose a
general approach for accurately and efficiently calculating small $p$-values
for a broad range of complicated test statistics based on the principle of the
cross-entropy method and Markov chain Monte Carlo sampling techniques. We
evaluate the performance of the proposed algorithm through simulations and
demonstrate its application to three real examples in genomic studies. The
results show that our approach can accurately evaluate small to extremely small
$p$-values (e.g. $10^{-6}$ to $10^{-100}$). The proposed algorithm is helpful
to the improvement of existing test procedures and the development of new test
procedures in genomic studies.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,344 | Does the Testing Level affect the Prevalence of Coincidental Correctness? | Researchers have previously shown that Coincidental Correctness (CC) is
prevalent; however, the benchmarks they used are considered inadequate
nowadays. They have also recognized the negative impact of CC on the
effectiveness of fault localization and testing. The aim of this paper is to
study Coincidental Correctness, using more realistic code, mainly from the
perspective of unit testing. This stems from the fact that the practice of unit
testing has grown tremendously in recent years due to the wide adoption of
software development processes, such as Test-Driven Development. We quantified
the presence of CC in unit testing using the Defects4J benchmark. This entailed
manually injecting two code checkers for each of the 395 defects in Defects4J:
1) a weak checker that detects weak CC tests by monitoring whether the defect
was reached; and 2) a strong checker that detects strong CC tests by monitoring
whether the defect was reached and the program has transitioned into an
infectious state. We also conducted preliminary experiments (using Defects4J,
NanoXML and JTidy) to assess the pervasiveness of CC at the unit testing level
in comparison to that at the integration and system levels. Our study showed
that unit testing is not immune to CC, as it exhibited 7.2x more strong CC
tests than failing tests and 8.3x more weak CC tests than failing tests.
However, our preliminary results suggested that it might be less prone to CC
than integration testing and system testing.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,345 | How spread changes affect the order book: Comparing the price responses of order deletions and placements to trades | We observe the effects of the three different events that cause spread
changes in the order book, namely trades, deletions and placement of limit
orders. By looking at the frequencies of the relative amounts of price changing
events, we discover that deletions of orders open the bid-ask spread of a stock
more often than trades do. We see that once the amount of spread changes due to
deletions exceeds the amount due to trades, other observables in
the order book change as well. We then look at how these spread changing events
affect the prices of stocks, by means of the price response. We not only see
that the self-response of stocks is positive for both spread changing trades
and deletions and negative for order placements, but also cross-response to
other stocks and therefore the market as a whole. In addition, the
self-response function of spread-changing trades is similar to that of all
trades. This leads to the conclusion that spread changing deletions and order
placements have a similar effect on the order book and stock prices over time
as trades.
| 0 | 0 | 0 | 0 | 0 | 1 |
18,346 | High-Dimensional Dependency Structure Learning for Physical Processes | In this paper, we consider the use of structure learning methods for
probabilistic graphical models to identify statistical dependencies in
high-dimensional physical processes. Such processes are often synthetically
characterized using PDEs (partial differential equations) and are observed in a
variety of natural phenomena, including geoscience data capturing atmospheric
and hydrological phenomena. Classical structure learning approaches such as the
PC algorithm and variants are challenging to apply due to their high
computational and sample requirements. Modern approaches, often based on sparse
regression and variants, do come with finite sample guarantees, but are usually
highly sensitive to the choice of hyper-parameters, e.g., parameter $\lambda$
for sparsity inducing constraint or regularization. In this paper, we present
ACLIME-ADMM, an efficient two-step algorithm for adaptive structure learning,
which estimates an edge specific parameter $\lambda_{ij}$ in the first step,
and uses these parameters to learn the structure in the second step. Both steps
of our algorithm use (inexact) ADMM to solve suitable linear programs, and all
iterations can be done in closed form in an efficient block parallel manner. We
compare ACLIME-ADMM with baselines on both synthetic data simulated by partial
differential equations (PDEs) that model advection-diffusion processes, and
real data (50 years) of daily global geopotential heights to study information
flow in the atmosphere. ACLIME-ADMM is shown to be efficient, stable, and
competitive, usually better than the baselines especially on difficult
problems. On real data, ACLIME-ADMM recovers the underlying structure of global
atmospheric circulation, including switches in wind directions at the equator
and tropics entirely from the data.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,347 | Lax orthogonal factorisations in monad-quantale-enriched categories | We show that, for a quantale $V$ and a $\mathsf{Set}$-monad $\mathbb{T}$
laxly extended to $V$-$\mathsf{Rel}$, the presheaf monad on the category of
$(\mathbb{T},V)$-categories is simple, giving rise to a lax orthogonal
factorisation system (LOFS) whose corresponding weak factorisation system has
embeddings as left part. In addition, we present presheaf submonads and study
the LOFSs they define. This provides a method of constructing weak
factorisation systems on some well-known examples of topological categories
over $\mathsf{Set}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,348 | Radiation reaction for spinning bodies in effective field theory I: Spin-orbit effects | We compute the leading Post-Newtonian (PN) contributions at linear order in
the spin to the radiation-reaction acceleration and spin evolution for binary
systems, which enter at fourth PN order. The calculation is carried out, from
first principles, using the effective field theory framework for spinning
compact objects, in both the Newton-Wigner and covariant spin supplementary
conditions. A non-trivial consistency check is performed on our results by
showing that the energy loss induced by the resulting radiation-reaction force
is equivalent to the total emitted power in the far zone, up to so-called
"Schott terms." We also find that, at this order, the radiation reaction has no
net effect on the evolution of the spins. The spin-spin contributions to
radiation reaction are reported in a companion paper.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,349 | On the geometric notion of connection and its expression in tangent categories | Tangent categories provide an axiomatic approach to key structural aspects of
differential geometry that exist not only in the classical category of smooth
manifolds but also in algebraic geometry, homological algebra, computer
science, and combinatorics. Generalizing the notion of (linear) connection on a
smooth vector bundle, Cockett and Cruttwell have defined a notion of connection
on a differential bundle in an arbitrary tangent category. Herein, we establish
equivalent formulations of this notion of connection that reduce the amount of
specified structure required. Further, one of our equivalent formulations
substantially reduces the number of axioms imposed, and others provide useful
abstract conceptualizations of connections. In particular, we show that a
connection on a differential bundle E over M is equivalently given by a single
morphism K that induces a suitable decomposition of TE as a biproduct. We also
show that a connection is equivalently given by a vertical connection K for
which a certain associated diagram is a limit diagram.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,350 | Objective priors for the number of degrees of freedom of a multivariate t distribution and the t-copula | An objective Bayesian approach to estimate the number of degrees of freedom
$(\nu)$ for the multivariate $t$ distribution and for the $t$-copula, when the
parameter is considered discrete, is proposed. Inference on this parameter has
been problematic for the multivariate $t$ and, for the absence of any method,
for the $t$-copula. An objective criterion based on loss functions which allows
to overcome the issue of defining objective probabilities directly is employed.
The support of the prior for $\nu$ is truncated, which derives from the
property of both the multivariate $t$ and the $t$-copula of convergence to
normality for a sufficiently large number of degrees of freedom. The
performance of the priors is tested on simulated scenarios and on real data:
daily logarithmic returns of IBM and of the Center for Research in Security
Prices Database. The R codes and the replication material are available as
supplementary material in the electronic version of the paper.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,351 | PROBE: Predictive Robust Estimation for Visual-Inertial Navigation | Navigation in unknown, chaotic environments continues to present a
significant challenge for the robotics community. Lighting changes,
self-similar textures, motion blur, and moving objects are all considerable
stumbling blocks for state-of-the-art vision-based navigation algorithms. In
this paper we present a novel technique for improving localization accuracy
within a visual-inertial navigation system (VINS). We make use of training data
to learn a model for the quality of visual features with respect to
localization error in a given environment. This model maps each visual
observation from a predefined prediction space of visual-inertial predictors
onto a scalar weight, which is then used to scale the observation covariance
matrix. In this way, our model can adjust the influence of each observation
according to its quality. We discuss our choice of predictors and report
substantial reductions in localization error on 4 km of data from the KITTI
dataset, as well as on experimental datasets consisting of 700 m of indoor and
outdoor driving on a small ground rover equipped with a Skybotix VI-Sensor.
| 1 | 0 | 0 | 0 | 0 | 0 |
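As a concrete illustration of the covariance-scaling step described in the PROBE abstract above, the following Python sketch inflates a nominal observation covariance by a learned scalar quality weight. The mapping `predict_weight`, its coefficients, and the example predictor values are illustrative assumptions, not the authors' trained model or prediction space.

```python
import numpy as np

# Illustrative sketch (not the authors' code): scale a visual observation's
# covariance by a learned scalar quality weight before the filter update.

def predict_weight(predictors, coeffs, bias=0.0):
    """Hypothetical learned mapping from visual-inertial predictors
    (e.g. blur or texture scores) to a scalar weight >= 1."""
    return 1.0 + np.exp(np.dot(coeffs, predictors) + bias)

def scaled_observation_covariance(R, predictors, coeffs):
    """Inflate the nominal observation covariance R for low-quality features."""
    w = predict_weight(predictors, coeffs)
    return w * R

# Example: a nominal 2x2 pixel-noise covariance inflated for a blurry feature.
R = np.diag([1.0, 1.0])
predictors = np.array([0.8, 0.3])   # hypothetical blur / texture scores
coeffs = np.array([1.5, -0.5])      # hypothetical learned coefficients
print(scaled_observation_covariance(R, predictors, coeffs))
```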
18,352 | Ethical Artificial Intelligence - An Open Question | Artificial Intelligence (AI) is an effective science which employs strong
enough approaches, methods, and techniques to solve otherwise intractable real-world problems. Because of its unstoppable rise towards the future, there are also
some discussions about its ethics and safety. Shaping an AI friendly
environment for people and a people friendly environment for AI can be a
possible answer for finding a shared context of values for both humans and
robots. In this context, objective of this paper is to address the ethical
issues of AI and explore the moral dilemmas that arise from ethical algorithms,
whether from pre-set or acquired values. In addition, the paper will also focus on the subject of AI safety. In general, the paper briefly analyzes the concerns and potential solutions to the ethical issues presented and aims to increase readers' awareness of AI safety as another related research interest.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,353 | Self-Attentive Model for Headline Generation | Headline generation is a special type of text summarization task. While the
amount of available training data for this task is almost unlimited, it still
remains challenging, as learning to generate headlines for news articles
implies that the model has strong reasoning about natural language. To overcome
this issue, we applied recent Universal Transformer architecture paired with
byte-pair encoding technique and achieved new state-of-the-art results on the
New York Times Annotated corpus with ROUGE-L F1-score 24.84 and ROUGE-2
F1-score 13.48. We also present the new RIA corpus and reach ROUGE-L F1-score
36.81 and ROUGE-2 F1-score 22.15 on it.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,354 | Localized Recombining Plasma in G166.0+4.3: A Supernova Remnant with an Unusual Morphology | We observed the Galactic mixed-morphology supernova remnant G166.0+4.3 with
Suzaku. The X-ray spectrum in the western part of the remnant is well
represented by a one-component ionizing plasma model. The spectrum in the
northeastern region can be explained by two components. One is the Fe-rich
component with the electron temperature $kT_e = 0.87_{-0.03}^{+0.02}$ keV. The
other is the recombining plasma component of lighter elements with $kT_e =
0.46\pm0.03$ keV, the initial temperature $kT_{init} = 3$ keV (fixed) and the
ionization parameter $n_et = (6.1_{-0.4}^{+0.5}) \times 10^{11} \rm cm^{-3} s$.
As the formation process of the recombining plasma, two scenarios, the
rarefaction and thermal conduction, are considered. The former would not be
favored since we found the recombining plasma only in the northeastern region
whereas the latter would explain the origin of the RP. In the latter scenario,
an RP is anticipated in a part of the remnant where blast waves are in contact
with cool dense gas. The emission measure suggests higher ambient gas density
in the northeastern region. The morphology of the radio shell and a GeV
gamma-ray emission also suggest a molecular cloud in the region.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,355 | Visualizing Design Erosion: How Big Balls of Mud are Made | Software systems are not static, they have to undergo frequent changes to
stay fit for purpose, and in the process of doing so, their complexity
increases. It has been observed that this process often leads to the erosion of
the system's design and architecture and, with it, the decline of many desirable quality attributes, such as maintainability. This process can be captured in terms of antipatterns: atomic violations of widely accepted design principles.
We present a visualisation that exposes the design of evolving Java programs,
highlighting instances of selected antipatterns including their emergence and
cancerous growth. This visualisation assists software engineers and architects
in assessing, tracing and therefore combating design erosion. We evaluated the
effectiveness of the visualisation in four case studies with ten participants.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,356 | Hierarchical Representations for Efficient Architecture Search | We explore efficient neural architecture search methods and show that a
simple yet powerful evolutionary algorithm can discover new architectures with
excellent performance. Our approach combines a novel hierarchical genetic
representation scheme that imitates the modularized design pattern commonly
adopted by human experts, and an expressive search space that supports complex
topologies. Our algorithm efficiently discovers architectures that outperform a
large number of manually designed models for image classification, obtaining
top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which
is competitive with the best existing neural architecture search approaches. We
also present results using random search, achieving 0.3% less top-1 accuracy on
CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36
hours down to 1 hour.
| 1 | 0 | 0 | 1 | 0 | 0 |
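The search procedure summarized above can be made concrete with a minimal evolutionary loop. The sketch below uses tournament selection and single-edge mutation over a flat list of operation choices; the flat genotype, the operation set, and the placeholder fitness function are simplifying assumptions and do not reproduce the paper's hierarchical representation or actual architecture training.

```python
import random

# Minimal evolutionary-search sketch (assumptions: flat genotype of operation
# choices and a placeholder fitness; real training/evaluation is omitted).

OPS = ["conv3x3", "conv1x1", "maxpool", "identity", "sep_conv"]

def random_genotype(n_edges=6):
    return [random.choice(OPS) for _ in range(n_edges)]

def mutate(genotype):
    child = list(genotype)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def fitness(genotype):
    # Placeholder: stands in for validation accuracy after training.
    return sum(op != "identity" for op in genotype) + random.random()

def evolve(pop_size=20, steps=200, tournament=5):
    population = [random_genotype() for _ in range(pop_size)]
    for _ in range(steps):
        sample = random.sample(population, tournament)
        parent = max(sample, key=fitness)    # tournament selection
        population.append(mutate(parent))    # add a mutated child
        population.pop(0)                    # age out the oldest genotype
    return max(population, key=fitness)

print(evolve())
```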
18,357 | A return to eddy viscosity model for epistemic UQ in RANS closures | For the purpose of Uncertainty Quantification (UQ) of Reynolds-Averaged
Navier-Stokes closures, we introduce a framework in which perturbations in the
eigenvalues of the anisotropy tensor are made in order to bound a
Quantity-of-Interest based on limiting states of turbulence. To make the
perturbations representative of local flow features, we introduce two
additional transport equations for linear combinations of these aforementioned
eigenvalues. The location, magnitude and direction of the eigenvalue
perturbations are now governed by the model transport equations. The general
behavior of our discrepancy model is determined by two coefficients, resulting
in a low-dimensional UQ problem. We will furthermore show that the behavior of
the model is intuitive and rooted in the physical interpretation of
misalignment between the mean strain and Reynolds stresses.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,358 | A Neural Network model with Bidirectional Whitening | We present here a new model and algorithm which performs an efficient Natural
gradient descent for Multilayer Perceptrons. Natural gradient descent was
originally proposed from a point of view of information geometry, and it
performs the steepest descent updates on manifolds in a Riemannian space. In
particular, we extend an approach taken by the "Whitened neural networks"
model. We apply the whitening process not only in the feed-forward direction, as in the original model, but also in the back-propagation phase. Its efficacy is shown by an application of this "Bidirectional whitened neural networks" model to handwritten character recognition data (MNIST).
| 0 | 0 | 0 | 1 | 0 | 0 |
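To make the whitening step above concrete, the following numpy sketch applies a ZCA-style whitening transform to a batch of activations. ZCA whitening is a standard choice assumed here for illustration; the paper's bidirectional application inside back-propagation is not reproduced.

```python
import numpy as np

# Sketch of the whitening step only (assumption: ZCA-style whitening of a
# batch of layer activations).

def whiten(activations, eps=1e-5):
    """Zero-mean, identity-covariance transform of a (batch, features) array."""
    mean = activations.mean(axis=0)
    centered = activations - mean
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return centered @ W

x = np.random.randn(128, 16) * np.array([5.0] * 8 + [0.1] * 8)
x_white = whiten(x)
print(np.round(np.cov(x_white, rowvar=False)[:3, :3], 2))  # approx identity
```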
18,359 | Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations | This paper studies the Sobolev regularity estimates of weak solutions of a
class of singular quasi-linear parabolic problems of the form $u_t -
\mbox{div}[\mathbb{A}(x,t,u,\nabla u)]= \mbox{div}[{\mathbf F}]$ with
homogeneous Dirichlet boundary conditions over bounded spatial domains. Our
main focus is on the case that the vector coefficients $\mathbb{A}$ are
discontinuous and singular in $(x,t)$-variables, and dependent on the solution
$u$. Global and interior weighted $W^{1,p}(\Omega, \omega)$-regularity
estimates are established for weak solutions of these equations, where $\omega$
is a weight function in some Muckenhoupt class of weights. The results obtained
are even new for linear equations, and for $\omega =1$, because of the
singularity of the coefficients in $(x,t)$-variables.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,360 | Learning to Avoid Errors in GANs by Manipulating Input Spaces | Despite recent advances, large scale visual artifacts are still a common
occurrence in images generated by GANs. Previous work has focused on improving
the generator's capability to accurately imitate the data distribution
$p_{data}$. In this paper, we instead explore methods that enable GANs to
actively avoid errors by manipulating the input space. The core idea is to
apply small changes to each noise vector in order to shift them away from areas
in the input space that tend to result in errors. We derive three different
architectures from that idea. The main one of these consists of a simple
residual module that leads to significantly less visual artifacts, while only
slightly decreasing diversity. The module is trivial to add to existing GANs
and costs almost zero computation and memory.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,361 | Improving 6D Pose Estimation of Objects in Clutter via Physics-aware Monte Carlo Tree Search | This work proposes a process for efficiently searching over combinations of
individual object 6D pose hypotheses in cluttered scenes, especially in cases
involving occlusions and objects resting on each other. The initial set of
candidate object poses is generated from state-of-the-art object detection and
global point cloud registration techniques. The best-scored pose per object by
using these techniques may not be accurate due to overlaps and occlusions.
Nevertheless, experimental indications provided in this work show that object
poses with lower ranks may be closer to the real poses than ones with high
ranks according to registration techniques. This motivates a global
optimization process for improving these poses by taking into account
scene-level physical interactions between objects. It also implies that the
Cartesian product of candidate poses for interacting objects must be searched
so as to identify the best scene-level hypothesis. To perform the search
efficiently, the candidate poses for each object are clustered so as to reduce
their number but still keep a sufficient diversity. Then, searching over the
combinations of candidate object poses is performed through a Monte Carlo Tree
Search (MCTS) process that uses the similarity between the observed depth image
of the scene and a rendering of the scene given the hypothesized pose as a
score that guides the search procedure. MCTS handles in a principled way the
tradeoff between fine-tuning the most promising poses and exploring new ones,
by using the Upper Confidence Bound (UCB) technique. Experimental results
indicate that this process is able to quickly identify in cluttered scenes
physically-consistent object poses that are significantly closer to ground
truth compared to poses found by point cloud registration methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
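The UCB rule mentioned in the abstract above can be written down directly. The sketch below scores each child node (a cluster of candidate poses) by its mean value plus an exploration bonus; the node statistics and the exploration constant `c` are illustrative, and the rendering-based scene score is replaced by stored values.

```python
import math

# Sketch of the UCB rule used to balance exploiting promising pose hypotheses
# and exploring new ones.

def ucb_score(child_value_sum, child_visits, parent_visits, c=1.4):
    """Upper Confidence Bound: mean score plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")   # always try unvisited hypotheses first
    mean = child_value_sum / child_visits
    bonus = c * math.sqrt(math.log(parent_visits) / child_visits)
    return mean + bonus

def select_child(children, parent_visits):
    """Pick the child (candidate pose cluster) maximising the UCB score."""
    return max(children, key=lambda ch: ucb_score(ch["value"], ch["visits"],
                                                  parent_visits))

children = [{"value": 3.2, "visits": 4}, {"value": 0.9, "visits": 1},
            {"value": 0.0, "visits": 0}]
print(select_child(children, parent_visits=5))
```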
18,362 | On the benefits of output sparsity for multi-label classification | The multi-label classification framework, where each observation can be
associated with a set of labels, has generated a tremendous amount of attention
over recent years. Modern multi-label problems are typically large-scale in terms of the number of observations, features and labels, and the number of labels can even be comparable with the number of observations. In this context,
different remedies have been proposed to overcome the curse of dimensionality.
In this work, we aim at exploiting the output sparsity by introducing a new
loss, called the sparse weighted Hamming loss. This proposed loss can be seen
as a weighted version of classical ones, where active and inactive labels are
weighted separately. Leveraging the influence of sparsity in the loss function,
we provide improved generalization bounds for the empirical risk minimizer, a
suitable property for large-scale problems. For this new loss, we derive rates
of convergence linear in the underlying output-sparsity rather than linear in
the number of labels. In practice, minimizing the associated risk can be
performed efficiently by using convex surrogates and modern convex optimization
algorithms. We provide experiments on various real-world datasets demonstrating
the pertinence of our approach when compared to non-weighted techniques.
| 1 | 0 | 1 | 1 | 0 | 0 |
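A minimal version of a weighted Hamming-type loss, in the spirit of the abstract above, can be written as follows. The particular weights `w_pos` and `w_neg`, and the normalization by the total number of labels, are illustrative assumptions rather than the paper's exact definition.

```python
import numpy as np

# Sketch of a weighted Hamming-type loss where errors on active (1) and
# inactive (0) labels are weighted separately.

def weighted_hamming_loss(y_true, y_pred, w_pos=1.0, w_neg=0.1):
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    missed_active = (y_true & ~y_pred).sum()   # active labels predicted 0
    false_active = (~y_true & y_pred).sum()    # inactive labels predicted 1
    return (w_pos * missed_active + w_neg * false_active) / y_true.size

y_true = [[1, 0, 0, 0], [0, 1, 0, 0]]
y_pred = [[1, 1, 0, 0], [0, 0, 0, 0]]
print(weighted_hamming_loss(y_true, y_pred))
```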
18,363 | On wrapping number, adequacy and the crossing number of satellite knots | In this work we establish the tightest lower bound up-to-date for the minimal
crossing number of a satellite knot based on the minimal crossing number of the
companion used to build the satellite. If $M$ is the wrapping number of the
pattern knot, we essentially show that $c(Sat(P,C))>\frac{M^2}{2}c(C)$. The
existence of this bound will be proven when the companion knot is adequate, and
it will be further tuned in the case of the companion being alternating.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,364 | On the Applicability of Delicious for Temporal Search on Web Archives | Web archives are large longitudinal collections that store webpages from the
past, which might be missing on the current live Web. Consequently, temporal
search over such collections is essential for finding prominent missing
webpages and tasks like historical analysis. However, this has been challenging
due to the lack of popularity information and proper ground truth to evaluate
temporal retrieval models. In this paper we investigate the applicability of
external longitudinal resources to identify important and popular websites in
the past and analyze the social bookmarking service Delicious for this purpose.
The timestamped bookmarks on Delicious provide explicit cues about popular
time periods in the past along with relevant descriptors. These are valuable to
identify important documents in the past for a given temporal query. Focusing
purely on recall, we analyzed more than 12,000 queries and found that using Delicious yields average recall values from 46% up to 100% when limiting ourselves to the best represented queries in the considered dataset. This constitutes an attractive and low-overhead approach for quick access into Web archives without dealing with the actual contents.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,365 | The Geometry and Topology of Data and Information for Analytics of Processes and Behaviours: Building on Bourdieu and Addressing New Societal Challenges | We begin by summarizing the relevance and importance of inductive analytics
based on the geometry and topology of data and information. Contemporary issues
are then discussed. These include how sampling data for representativity is
increasingly to be questioned. While we can always avail of analytics from a
"bag of tools and techniques", in the application of machine learning and
predictive analytics, nonetheless we present the case for Bourdieu and
Benzécri-based science of data, as follows. This is to construct bridges
between data sources and position-taking, and decision-making. There is summary
presentation of a few case studies, illustrating and exemplifying application
domains.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,366 | Conformal Twists, Yang-Baxter $σ$-models & Holographic Noncommutativity | Expanding upon earlier results [arXiv:1702.02861], we present a compendium of
$\sigma$-models associated with integrable deformations of AdS$_5$ generated by
solutions to the homogeneous classical Yang-Baxter equation. Each example we study
from four viewpoints: conformal (Drinfeld) twists, closed string gravity
backgrounds, open string parameters and proposed dual noncommutative (NC) gauge
theory. Irrespective of whether the deformed background is a solution to
supergravity or generalized supergravity, we show that the open string metric
associated with each gravity background is undeformed AdS$_5$ with constant
open string coupling and the NC structure $\Theta$ is directly related to the
conformal twist. One novel feature is that $\Theta$ exhibits "holographic
noncommutativity": while it may exhibit non-trivial dependence on the
holographic direction, its value everywhere in the bulk is uniquely determined
by its value at the boundary, thus facilitating introduction of a dual NC gauge
theory. We show that the divergence of the NC structure $\Theta$ is directly
related to the unimodularity of the twist. We discuss the implementation of an
outer automorphism of the conformal algebra as a coordinate transformation in
the AdS bulk and discuss its implications for Yang-Baxter $\sigma$-models and
self-T-duality based on fermionic T-duality. Finally, we comment on
implications of our results for the integrability of associated open strings
and planar integrability of dual NC gauge theories.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,367 | AWEsome: An open-source test platform for airborne wind energy systems | In this paper we present AWEsome (Airborne Wind Energy Standardized
Open-source Model Environment), a test platform for airborne wind energy
systems that consists of low-cost hardware and is entirely based on open-source
software. It can hence be used without the need of large financial investments,
in particular by research groups and startups to acquire first experiences in
their flight operations, to test novel control strategies or technical designs,
or for usage in public relations. Our system consists of a modified
off-the-shelf model aircraft that is controlled by the pixhawk autopilot
hardware and the ardupilot software for fixed wing aircraft. The aircraft is
attached to the ground by a tether. We have implemented new flight modes for
the autonomous tethered flight of the aircraft along periodic patterns. We
present the principal functionality of our algorithms. We report on first
successful tests of these modes in real flights.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,368 | A one-dimensional mathematical model of collecting lymphatics coupled with an electro-fluid-mechanical contraction model and valve dynamics | We propose a one-dimensional model for collecting lymphatics coupled with a
novel Electro-Fluid-Mechanical Contraction (EFMC) model for dynamical
contractions, based on a modified FitzHugh-Nagumo model for action potentials.
The one-dimensional model for a compliant lymphatic vessel is a set of
hyperbolic Partial Differential Equations (PDEs). The EFMC model combines the
electrical activity of lymphangions (action potentials) with fluid-mechanical
feedback (stretch of the lymphatic wall and wall shear stress) and the
mechanical variation of the lymphatic wall properties (contractions). The EFMC
model is governed by four Ordinary Differential Equations (ODEs) and
phenomenologically relies on: (1) environmental calcium influx, (2)
stretch-activated calcium influx, and (3) contraction inhibitions induced by
wall shear stresses. We carried out a complete mathematical analysis of the
stability of the stationary state of the EFMC model. Overall, the EFMC model
allows imitating the influence of pressure and wall shear stress on the
frequency of contractions observed experimentally. Lymphatic valves are
modelled using a well-established lumped-parameter model which allows
simulating stenotic and regurgitant valves. We analysed several lymphodynamical
indexes of a single lymphangion for a wide range of upstream and downstream
pressure combinations. Stenotic and regurgitant valves were modelled, and their
effects are here quantified. Results for stenotic valves showed in the
downstream lymphangion that for low frequencies of contractions the Calculated
Pump Flow (CPF) index remained almost unaltered, while for high frequencies the
CPF dramatically decreased depending on the severity of the stenosis (up to 93%
for a severe stenosis). Results for incompetent valves showed that the net flow
during a lymphatic cycle tends to zero as the degree of incompetence increases.
| 0 | 1 | 1 | 0 | 0 | 0 |
18,369 | $A_{n}$-type surface singularity and nondisplaceable Lagrangian tori | We prove the existence of a one-parameter family of nondisplaceable
Lagrangian tori near a linear chain of Lagrangian 2-spheres in a symplectic
4-manifold. When the symplectic structure is rational we prove that the
deformed Floer cohomology groups of these tori are nontrivial. The proof uses
the idea of toric degeneration to analyze the full potential functions with
bulk deformations of these tori.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,370 | Cascade LSTM Based Visual-Inertial Navigation for Magnetic Levitation Haptic Interaction | Haptic feedback is essential to acquire immersive experience when interacting
in virtual or augmented reality. Although the existing promising magnetic
levitation (maglev) haptic system has the advantage of no mechanical friction,
its performance is limited by its navigation method, which mainly results from
the challenge that it is difficult to obtain high precision, high frame rate
and good stability with a lightweight design at the same time. In this study, we
propose to perform the visual-inertial fusion navigation based on
sequence-to-sequence learning for the maglev haptic interaction. A cascade LSTM-based increment learning method is first presented to progressively learn the
increments of the target variables. Then, two cascade LSTM networks are
separately trained for accomplishing the visual-inertial fusion navigation in a
loosely-coupled mode. Additionally, we set up a maglev haptic platform as the
system testbed. Experimental results show that the proposed cascade LSTM-based increment learning method can achieve high-precision prediction, and our cascade LSTM-based visual-inertial fusion navigation method can reach 200 Hz while maintaining high-precision (the mean absolute error of the position and orientation is respectively less than 1 mm and 0.02°) navigation for the
maglev haptic interaction application.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,371 | Local Feature Descriptor Learning with Adaptive Siamese Network | Although the recent progress in the deep neural network has led to the
development of learnable local feature descriptors, there is no explicit answer
for estimation of the necessary size of a neural network. Specifically, the
local feature is represented in a low-dimensional space, so the neural network should have a more compact structure. The small networks required for local
feature descriptor learning may be sensitive to initial conditions and learning
parameters and more likely to become trapped in local minima. In order to
address the above problem, we introduce an adaptive pruning Siamese
Architecture based on neuron activation to learn local feature descriptors,
making the network more computationally efficient with an improved recognition
rate over more complex networks. Our experiments demonstrate that our learned
local feature descriptors outperform the state-of-the-art methods in patch
matching.
| 1 | 0 | 0 | 1 | 0 | 0 |
18,372 | Unbounded cache model for online language modeling with open vocabulary | Recently, continuous cache models were proposed as extensions to recurrent
neural network language models, to adapt their predictions to local changes in
the data distribution. These models only capture the local context, of up to a
few thousands tokens. In this paper, we propose an extension of continuous
cache models, which can scale to larger contexts. In particular, we use a large
scale non-parametric memory component that stores all the hidden activations
seen in the past. We leverage recent advances in approximate nearest neighbor
search and quantization algorithms to store millions of representations while
searching them efficiently. We conduct extensive experiments showing that our
approach significantly improves the perplexity of pre-trained language models
on new distributions, and can scale efficiently to much larger contexts than
previously proposed local cache models.
| 1 | 0 | 0 | 0 | 0 | 0 |
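The retrieval-and-interpolation idea above can be sketched with a brute-force nearest-neighbour search standing in for the approximate index used at scale. The memory contents, the kernel on distances, the interpolation weight `lam`, and the uniform base model are all placeholder assumptions.

```python
import numpy as np

# Minimal cache language model sketch: store past hidden states with the word
# that followed each of them, retrieve nearest neighbours of the current
# hidden state, and interpolate the cache distribution with the base model.

def cache_distribution(h, memory_keys, memory_words, vocab_size, k=3):
    dists = np.linalg.norm(memory_keys - h, axis=1)
    nearest = np.argsort(dists)[:k]
    probs = np.zeros(vocab_size)
    weights = np.exp(-dists[nearest])
    for idx, w in zip(nearest, weights):
        probs[memory_words[idx]] += w
    return probs / probs.sum()

def mix(p_model, p_cache, lam=0.3):
    return (1 - lam) * p_model + lam * p_cache

rng = np.random.default_rng(0)
vocab, dim = 10, 4
memory_keys = rng.normal(size=(50, dim))         # past hidden activations
memory_words = rng.integers(0, vocab, size=50)   # words that followed them
h = rng.normal(size=dim)                         # current hidden state
p_model = np.full(vocab, 1.0 / vocab)            # placeholder base LM
print(mix(p_model, cache_distribution(h, memory_keys, memory_words, vocab)))
```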
18,373 | Finite-Range Coulomb Gas Models of Banded Random Matrices and Quantum Kicked Rotors | Dyson demonstrated an equivalence between infinite-range Coulomb gas models
and classical random matrix ensembles for study of eigenvalue statistics. We
introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process,
and study them analytically and by Monte-Carlo simulations. These models yield
new universality classes, and provide a theoretical framework for study of
banded random matrices (BRM) and quantum kicked rotors (QKR). We demonstrate
that, for a BRM of bandwidth $b$ and a QKR of chaos parameter $\alpha$, the appropriate FRCG model has the effective range $d = b^2/N = \alpha^2/N$ for large matrix dimensionality $N$. As $d$ increases, there is a transition from
Poisson to classical random matrix statistics.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,374 | Dual Loomis-Whitney inequalities via information theory | We establish lower bounds on the volume and the surface area of a geometric
body using the size of its slices along different directions. In the first part
of the paper, we derive volume bounds for convex bodies using generalized
subadditivity properties of entropy combined with entropy bounds for
log-concave random variables. In the second part, we investigate a new notion
of Fisher information which we call the $L_1$-Fisher information, and show that
certain superadditivity properties of the $L_1$-Fisher information lead to
lower bounds for the surface areas of polyconvex sets in terms of their slices.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,375 | Cubic lead perovskite PbMoO3 with anomalous metallic behavior | A previously unreported Pb-based perovskite PbMoO$_3$ is obtained by
high-pressure and high-temperature synthesis. This material crystallizes in the
$Pm\bar{3}m$ cubic structure at room temperature, making it distinct from
typical Pb-based perovskite oxides with a structural distortion. PbMoO$_3$
exhibits a metallic behavior down to 0.1 K with an unusual sub-linear $T$
dependence of the electrical resistivity. Moreover, a large specific heat is
observed at low temperatures accompanied by a peak in $C_P/T^3$ around 10 K, in
marked contrast to the isostructural metallic system SrMoO$_3$. These transport
and thermal properties for PbMoO$_3$, taking into account anomalously large Pb
atomic displacements detected through diffraction experiments, are attributed
to a low-energy vibrational mode, associated with incoherent off-centering of
lone pair Pb$^{2+}$ cations. We discuss the unusual behavior of the electrical
resistivity in terms of a polaron-like conduction, mediated by the strong
coupling between conduction electrons and optical phonons of the local
low-energy vibrational mode.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,376 | On Long Memory Origins and Forecast Horizons | Most long memory forecasting studies assume that the memory is generated by
the fractional difference operator. We argue that the most cited theoretical
arguments for the presence of long memory do not imply the fractional
difference operator, and assess the performance of the autoregressive
fractionally integrated moving average $(ARFIMA)$ model when forecasting series
with long memory generated by nonfractional processes. We find that high-order
autoregressive $(AR)$ models produce forecast performance similar or superior to that of $ARFIMA$ models at short horizons. Nonetheless, as the forecast horizon
increases, the $ARFIMA$ models tend to dominate in forecast performance. Hence,
$ARFIMA$ models are well suited for forecasts of long memory processes
regardless of the long memory generating mechanism, particularly for medium and
long forecast horizons. Additionally, we analyse the forecasting performance of
the heterogeneous autoregressive ($HAR$) model which imposes restrictions on
high-order $AR$ models. We find that the structure imposed by the $HAR$ model
produces better long horizon forecasts than $AR$ models of the same order, at
the price of inferior short horizon forecasts in some cases. Our results have
implications for, among others, Climate Econometrics and Financial Econometrics
models dealing with long memory series at different forecast horizons. We show
in an example that while a short memory autoregressive moving average $(ARMA)$
model gives the best performance when forecasting the Realized Variance of the
S\&P 500 up to a month ahead, the $ARFIMA$ model gives the best performance for
longer forecast horizons.
| 0 | 0 | 1 | 0 | 0 | 0 |
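For readers unfamiliar with the HAR restriction mentioned above, a minimal sketch of the standard HAR-RV regression (daily, weekly and monthly averages of past realized variance as regressors) is given below. The synthetic series and the conventional 1/5/22-day windows are illustrative choices, not the paper's data or exact specification.

```python
import numpy as np

# Sketch of the HAR-RV regression: tomorrow's realized variance regressed on
# daily, weekly (5-day) and monthly (22-day) averages of past realized
# variance. Restricting to these three averages is what distinguishes HAR
# from an unrestricted high-order AR model.

def har_design(rv):
    rows, targets = [], []
    for t in range(22, len(rv) - 1):
        daily = rv[t]
        weekly = rv[t - 4:t + 1].mean()
        monthly = rv[t - 21:t + 1].mean()
        rows.append([1.0, daily, weekly, monthly])
        targets.append(rv[t + 1])
    return np.array(rows), np.array(targets)

def fit_har(rv):
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta   # intercept, daily, weekly, monthly coefficients

rng = np.random.default_rng(1)
rv = np.abs(rng.normal(size=500)) + 0.01 * np.abs(rng.normal(size=500)).cumsum()
print(fit_har(rv))
```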
18,377 | Detecting Disguised Plagiarism | Source code plagiarism detection is a problem that has been addressed several
times before; and several tools have been developed for that purpose. In this
research project we investigated a set of possible disguises that can be
mechanically applied to plagiarized source code to defeat plagiarism detection
tools. We propose a preprocessor to be used with existing plagiarism detection
tools to "normalize" source code before checking it, thus making such disguises
ineffective.
| 1 | 0 | 0 | 0 | 0 | 0 |
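A minimal example of the kind of normalization such a preprocessor could perform is sketched below using Python's tokenize module: comments are stripped and user-defined identifiers are renamed to canonical placeholders. The renaming scheme and the restriction to Python source are assumptions for illustration, not the project's actual tool.

```python
import io
import keyword
import tokenize

# Sketch of a "normalizing" preprocessor: drop comments and rename every
# user-defined identifier to a canonical name before handing the code to an
# existing plagiarism detector. A real tool would also normalize literals
# and formatting.

def normalize(source):
    mapping = {}
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            continue                                   # drop comments
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            name = mapping.setdefault(tok.string, f"ID{len(mapping)}")
            out.append((tokenize.NAME, name))
        else:
            out.append((tok.type, tok.string))
    return tokenize.untokenize(out)

code = "def total(prices):  # sum of prices\n    s = 0\n    for p in prices:\n        s += p\n    return s\n"
print(normalize(code))
```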
18,378 | Stellar population synthesis based modelling of the Milky Way using asteroseismology of dwarfs and subgiants from Kepler | Early attempts to apply asteroseismology to study the Galaxy have already
shown unexpected discrepancies for the mass distribution of stars between the
Galactic models and the data; a result that is still unexplained. Here, we
revisit the analysis of the asteroseismic sample of dwarf and subgiant stars
observed by Kepler and investigate in detail the possible causes for the
reported discrepancy. We investigate two models of the Milky Way based on
stellar population synthesis, Galaxia and TRILEGAL. In agreement with previous
results, we find that TRILEGAL predicts more massive stars compared to Galaxia,
and that TRILEGAL predicts too many blue stars compared to 2MASS observations.
Both models fail to match the distribution of the stellar sample in $(\log
g,T_{\rm eff})$ space, pointing to inaccuracies in the models and/or the
assumed selection function. When corrected for this mismatch in $(\log g,T_{\rm
eff})$ space, the mass distribution calculated by Galaxia is broader and the
mean is shifted toward lower masses compared to that of the observed stars.
This behaviour is similar to what has been reported for the Kepler red giant
sample. The shift between the mass distributions is equivalent to a change of
2\% in $\nu_{\rm max}$, which is within the current uncertainty in the
$\nu_{\rm max}$ scaling relation. Applying corrections to the $\Delta \nu$
scaling relation predicted by the stellar models makes the observed mass
distribution significantly narrower, but there is no change to the mean.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,379 | EmbNum: Semantic labeling for numerical values with deep metric learning | Semantic labeling for numerical values is a task of assigning semantic labels
to unknown numerical attributes. The semantic labels could be numerical
properties in ontologies, instances in knowledge bases, or labeled data that
are manually annotated by domain experts. In this paper, we refer to semantic
labeling as a retrieval setting where the label of an unknown attribute is
assigned by the label of the most relevant attribute in labeled data. One of
the greatest challenges is that an unknown attribute rarely has the same set of
values as a similar one in the labeled data. To overcome this issue, statistical interpretation of the value distribution is taken into account. However, existing studies assume a specific form of distribution, which is particularly inappropriate for open data, where there is no prior knowledge of the data. To address these problems, we propose a neural numerical
embedding model (EmbNum) to learn useful representation vectors for numerical
attributes without prior assumptions on the distribution of data. Then, the
"semantic similarities" between the attributes are measured on these
representation vectors by the Euclidean distance. Our empirical experiments on
City Data and Open Data show that EmbNum significantly outperforms
state-of-the-art methods for the task of numerical attribute semantic labeling
regarding effectiveness and efficiency.
| 0 | 0 | 0 | 1 | 0 | 0 |
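The retrieval setting described above reduces, at prediction time, to a nearest-neighbour lookup in the embedding space. The sketch below assigns an unknown attribute the label of its nearest labeled attribute under Euclidean distance; the random embeddings stand in for EmbNum's learned representations.

```python
import numpy as np

# Sketch of the retrieval step only: each labeled numerical attribute is
# represented by an embedding vector, and an unknown attribute receives the
# label of the nearest labeled attribute under Euclidean distance.

def nearest_label(query_vec, labeled_vecs, labels):
    dists = np.linalg.norm(labeled_vecs - query_vec, axis=1)
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(2)
labels = ["population", "latitude", "year", "price"]
labeled_vecs = rng.normal(size=(4, 8))   # placeholder attribute embeddings
query_vec = labeled_vecs[2] + 0.05 * rng.normal(size=8)
print(nearest_label(query_vec, labeled_vecs, labels))   # -> "year"
```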
18,380 | A Network Epidemic Model for Online Community Commissioning Data | A statistical model assuming a preferential attachment network, which is
generated by adding nodes sequentially according to a few simple rules, usually
describes real-life networks better than a model assuming, for example, a
Bernoulli random graph, in which any two nodes have the same probability of
being connected, does. Therefore, to study the propagation of "infection"
across a social network, we propose a network epidemic model by combining a
stochastic epidemic model and a preferential attachment model. A simulation
study based on the subsequent Markov Chain Monte Carlo algorithm reveals an
identifiability issue with the model parameters. Finally, the network epidemic
model is applied to a set of online commissioning data.
| 1 | 0 | 0 | 1 | 0 | 0 |
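A sketch of the forward model combined in the abstract above (preferential attachment plus a stochastic epidemic) is given below using networkx. The infection and recovery probabilities and the discrete-time SIR-style dynamics are illustrative assumptions; the paper's MCMC-based inference is not reproduced.

```python
import random
import networkx as nx

# Forward-model sketch: grow a preferential-attachment network and run a
# discrete-time stochastic SIR-type epidemic on it.

def simulate_epidemic(n=200, m=2, beta=0.2, gamma=0.1, seed=0):
    random.seed(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    infected = {0}
    recovered = set()
    history = []
    while infected:
        new_inf, new_rec = set(), set()
        for node in infected:
            for nb in g.neighbors(node):
                if nb not in infected and nb not in recovered and random.random() < beta:
                    new_inf.add(nb)
            if random.random() < gamma:
                new_rec.add(node)
        infected = (infected | new_inf) - new_rec
        recovered |= new_rec
        history.append(len(infected))
    return history

print(simulate_epidemic()[:10])   # number infected over the first time steps
```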
18,381 | Descent and Galois theory for Hopf categories | Descent theory for linear categories is developed. Given a linear category as
an extension of a diagonal category, we introduce descent data, and the
category of descent data is isomorphic to the category of representations of
the diagonal category, if some flatness assumptions are satisfied. Then
Hopf-Galois descent theory for linear Hopf categories, the Hopf algebra version
of a linear category, is developed. This leads to the notion of Hopf-Galois
category extension. We have a dual theory, where actions by dual linear Hopf
categories on linear categories are considered. Hopf-Galois category extensions
over groupoid algebras correspond to strongly graded linear categories.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,382 | On winning strategies for Banach-Mazur games | We give topological and game-theoretic definitions and theorems necessary
for defining a Banach-Mazur game, and apply these definitions to formalize the
game. We then state and prove two theorems which give necessary conditions for
existence of winning strategies for players in a Banach-Mazur game.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,383 | Dynamical instability of the electric transport in strongly fluctuating superconductors | Theory of the influence of the thermal fluctuations on the electric transport
beyond linear response in superconductors is developed within the framework of
the time-dependent Ginzburg-Landau approach. The I-V curve is calculated using the dynamical self-consistent Gaussian approximation. Under certain conditions it exhibits a reentrant behaviour, acquiring an S-shaped form. The
unstable region below a critical temperature $T^{\ast }$ is determined for
arbitrary dimensionality ($D=1,2,3$) of the thermal fluctuations. The results
are applied to analyse the transport data on nanowires and several classes of
2D superconductors: metallic thin films, layered and atomically thick novel
materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,384 | Controlling Multimode Optomechanical Interactions via Interference | We demonstrate optomechanical interference in a multimode system, in which an
optical mode couples to two mechanical modes. A phase-dependent
excitation-coupling approach is developed, which enables the observation of
constructive and destructive optomechanical interferences. The destructive
interference prevents the coupling of the mechanical system to the optical
mode, suppressing optically-induced mechanical damping. These studies establish
optomechanical interference as an essential tool for controlling the
interactions between light and mechanical oscillators.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,385 | Online Adaptive Principal Component Analysis and Its extensions | We propose algorithms for online principal component analysis (PCA) and
variance minimization for adaptive settings. Previous literature has focused on
upper bounding the static adversarial regret, whose comparator is the optimal
fixed action in hindsight. However, static regret is not an appropriate metric
when the underlying environment is changing. Instead, we adopt the adaptive
regret metric from the previous literature and propose online adaptive
algorithms for PCA and variance minimization, that have sub-linear adaptive
regret guarantees. We demonstrate both theoretically and experimentally that
the proposed algorithms can adapt to the changing environments.
| 1 | 0 | 0 | 1 | 0 | 0 |
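To make the online PCA setting concrete, the sketch below tracks the leading principal direction from a data stream using Oja's rule, a standard online estimator. This is an illustrative baseline only and is not the adaptive-regret algorithm proposed in the paper.

```python
import numpy as np

# Illustrative online PCA update (Oja's rule) for tracking the top principal
# direction from a stream of samples.

def oja_update(w, x, lr=0.01):
    y = w @ x
    w = w + lr * y * (x - y * w)   # move toward x, then renormalise
    return w / np.linalg.norm(w)

rng = np.random.default_rng(3)
true_dir = np.array([0.8, 0.6])
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(2000):
    x = rng.normal() * true_dir + 0.1 * rng.normal(size=2)   # streaming sample
    w = oja_update(w, x)
print(np.abs(w @ true_dir))   # close to 1: w aligned with the leading direction
```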
18,386 | Diffusion MRI measurements in challenging head and brain regions via cross-term spatiotemporally encoding | Cross-term spatiotemporal encoding (xSPEN) is a recently introduced imaging
approach delivering single-scan 2D NMR images with unprecedented resilience to
field inhomogeneities. The method relies on performing a pre-acquisition
encoding and a subsequent image read out while using the disturbing frequency
inhomogeneities as part of the image formation processes, rather than as
artifacts to be overwhelmed by the application of external gradients. This
study introduces the use of this new single-shot MRI technique as a
diffusion-monitoring tool, for accessing regions that have hitherto been
unapproachable by diffusion-weighted imaging (DWI) methods. In order to achieve
this, xSPEN MRI's intrinsic diffusion-weighting effects are formulated using a
customized, spatially-localized b-matrix analysis; with this, we devise a novel
diffusion-weighting scheme that both exploits and overcomes xSPEN's strong
intrinsic weighting effects. The ability to provide reliable and robust
diffusion maps in challenging head and brain regions, including the eyes and
the optic nerves, is thus demonstrated in humans at 3T; new avenues for imaging
other body regions are also briefly discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
18,387 | Game-Theoretic Capital Asset Pricing in Continuous Time | We derive formulas for the performance of capital assets in continuous time
from an efficient market hypothesis, with no stochastic assumptions and no
assumptions about the beliefs or preferences of investors. Our efficient market
hypothesis says that a speculator with limited means cannot beat a particular
index by a substantial factor. Our results include a formula that resembles the
classical CAPM formula for the expected simple return of a security or
portfolio.
This version of the article was essentially written in December 2001 but
remains a working paper.
| 0 | 0 | 0 | 0 | 0 | 1 |
18,388 | Full Workspace Generation of Serial-link Manipulators by Deep Learning based Jacobian Estimation | Apart from solving complicated problems that require a certain level of
intelligence, fine-tuned deep neural networks can also create fast algorithms
for slow, numerical tasks. In this paper, we introduce an improved version of
[1]'s work, a fast, deep-learning framework capable of generating the full
workspace of serial-link manipulators. The architecture consists of two neural
networks: an estimation net that approximates the manipulator Jacobian, and a
confidence net that measures the confidence of the approximation. We also
introduce M3 (Manipulability Maps of Manipulators), a MATLAB robotics library
based on [2](RTB), the datasets generated by which are used by this work.
Results have shown that not only are the neural networks significantly faster
than numerical inverse kinematics, they also offer superior accuracy when
compared to other machine learning alternatives. Implementations of the
algorithm (based on Keras[3]), including benchmark evaluation script, are
available at this https URL . The M3
Library APIs and datasets are also available at
this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
18,389 | Linear regression without correspondence | This article considers algorithmic and statistical aspects of linear
regression when the correspondence between the covariates and the responses is
unknown. First, a fully polynomial-time approximation scheme is given for the
natural least squares optimization problem in any constant dimension. Next, in
an average-case and noise-free setting where the responses exactly correspond
to a linear function of i.i.d. draws from a standard multivariate normal
distribution, an efficient algorithm based on lattice basis reduction is shown
to exactly recover the unknown linear function in arbitrary dimension. Finally,
lower bounds on the signal-to-noise ratio are established for approximate
recovery of the unknown linear function by any estimator.
| 1 | 0 | 1 | 1 | 0 | 0 |
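The problem statement above can be illustrated by the brute-force baseline that the paper's algorithms improve upon: enumerate permutations of the responses, fit least squares for each, and keep the best. This is feasible only for very small n and is shown purely to make the setting concrete, not as the paper's method.

```python
import itertools
import numpy as np

# Brute-force illustration of regression without correspondence: try every
# permutation of the responses, fit least squares, keep the best fit.

def fit_without_correspondence(X, y):
    best = (np.inf, None, None)
    for perm in itertools.permutations(range(len(y))):
        y_perm = y[list(perm)]
        beta, *_ = np.linalg.lstsq(X, y_perm, rcond=None)
        sse = np.sum((X @ beta - y_perm) ** 2)
        if sse < best[0]:
            best = (sse, beta, perm)
    return best

rng = np.random.default_rng(4)
X = rng.normal(size=(6, 2))
beta_true = np.array([1.5, -2.0])
y = X @ beta_true                    # noise-free responses
y_shuffled = y[rng.permutation(6)]   # correspondence destroyed
sse, beta_hat, perm = fit_without_correspondence(X, y_shuffled)
print(np.round(beta_hat, 3), sse)    # recovers beta_true in the noise-free case
```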
18,390 | Toward sensitive document release with privacy guarantees | Privacy has become a serious concern for modern Information Societies. The
sensitive nature of much of the data that are daily exchanged or released to
untrusted parties requires that responsible organizations undertake appropriate
privacy protection measures. Nowadays, much of these data are texts (e.g.,
emails, messages posted in social media, healthcare outcomes, etc.) that,
because of their unstructured and semantic nature, constitute a challenge for
automatic data protection methods. In fact, textual documents are usually
protected manually, in a process known as document redaction or sanitization.
To do so, human experts identify sensitive terms (i.e., terms that may reveal
identities and/or confidential information) and protect them accordingly (e.g.,
via removal or, preferably, generalization). To relieve experts from this
burdensome task, in a previous work we introduced the theoretical basis of
C-sanitization, an inherently semantic privacy model that provides the basis to
the development of automatic document redaction/sanitization algorithms and
offers clear and a priori privacy guarantees on data protection; despite its potential benefits, C-sanitization still presents some limitations when applied in practice (mainly regarding flexibility, efficiency and accuracy). In
this paper, we propose a new more flexible model, named (C, g(C))-sanitization,
which enables an intuitive configuration of the trade-off between the desired
level of protection (i.e., controlled information disclosure) and the
preservation of the utility of the protected data (i.e., amount of semantics to
be preserved). Moreover, we also present a set of technical solutions and
algorithms that provide an efficient and scalable implementation of the model
and improve its practical accuracy, as we also illustrate through empirical
experiments.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,391 | Remarks on the Birch-Swinnerton-Dyer conjecture | We give a brief description of the Birch-Swinnerton-Dyer conjecture and
present related conjectures. We describe the relation between the nilpotent
orbits of SL(2,R) and CM points.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,392 | Trajectory Optimization for Cooperative Dual-band UAV Swarms | Unmanned aerial vehicles (UAVs) have gained a lot of popularity in diverse
wireless communication fields. They can act as high-altitude flying relays to
support communications between ground nodes due to their ability to provide
line-of-sight links. With the flourishing Internet of Things, several types of
new applications are emerging. In this paper, we focus on bandwidth hungry and
delay-tolerant applications where multiple pairs of transceivers require the
support of UAVs to complete their transmissions. To do so, the UAVs have the
possibility to employ two different bands namely the typical microwave and the
high-rate millimeter wave bands. In this paper, we develop a generic framework
to assign UAVs to supported transceivers and optimize their trajectories such
that a weighted function of the total service time is minimized. Taking into
account both the communication time needed to relay the message and the flying
time of the UAVs, a mixed non-linear programming problem aiming at finding the
stops at which the UAVs hover to forward the data to the receivers is
formulated. An iterative approach is then developed to solve the problem.
First, a mixed linear programming problem is optimally solved to determine the
path of each available UAV. Then, a hierarchical iterative search is executed
to enhance the UAV stops' locations and reduce the service time. The behavior
of the UAVs and the benefits of the proposed framework are showcased for
selected scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,393 | A monad for full ground reference cells | We present a denotational account of dynamic allocation of potentially cyclic
memory cells using a monad on a functor category. We identify the collection of
heaps as an object in a different functor category equipped with a monad for
adding hiding/encapsulation capabilities to the heaps. We derive a monad for
full ground references supporting effect masking by applying a state monad
transformer to the encapsulation monad. To evaluate the monad, we present a
denotational semantics for a call-by-value calculus with full ground
references, and validate associated code transformations.
| 1 | 0 | 1 | 0 | 0 | 0 |
18,394 | Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work | Deep networks thrive when trained on large scale data collections. This has
given ImageNet a central role in the development of deep architectures for
visual object classification. However, ImageNet was created during a specific
period in time, and as such it is prone to aging, as well as dataset bias
issues. Moving beyond fixed training datasets will lead to more robust visual
systems, especially when deployed on robots in new environments which must
train on the objects they encounter there. To make this possible, it is
important to break free from the need for manual annotators. Recent work has
begun to investigate how to use the massive amount of images available on the
Web in place of manual image annotations. We contribute to this research thread
with two findings: (1) a study correlating a given level of noisy labels to
the expected drop in accuracy, for two deep architectures, on two different
types of noise, that clearly identifies GoogLeNet as a suitable architecture
for learning from Web data; (2) a recipe for the creation of Web datasets with
minimal noise and maximum visual variability, based on a visual and natural
language processing concept expansion strategy. By combining these two results,
we obtain a method for learning powerful deep object models automatically from
the Web. We confirm the effectiveness of our approach through object
categorization experiments using our Web-derived version of ImageNet on a
popular robot vision benchmark database, and on a lifelong object discovery
task on a mobile robot.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,395 | Existence of either a periodic collisional orbit or infinitely many consecutive collision orbits in the planar circular restricted three-body problem | In the restricted three-body problem, consecutive collision orbits are those
orbits which start and end at collisions with one of the primaries. Interest in such orbits arises not only from mathematics but also from various
engineering problems. In this article, using Floer homology, we show that there
is either a periodic collisional orbit or there are infinitely many consecutive collision orbits in the planar circular restricted three-body problem on each
bounded component of the energy hypersurface for Jacobi energy below the first
critical value.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,396 | Finding AND-OR Hierarchies in Workflow Nets | This paper presents the notion of AND-OR reduction, which reduces a WF net to
a smaller net by iteratively contracting certain well-formed subnets into
single nodes until no more such contractions are possible. This reduction can
reveal the hierarchical structure of a WF net, and since it preserves certain
semantical properties such as soundness, it can help with analysing and
understanding why a WF net is sound or not. The reduction can also be used to
verify if a WF net is an AND-OR net. This class of WF nets was introduced in
earlier work, and arguably describes nets that follow good hierarchical design
principles. It is shown that the AND-OR reduction is confluent up to
isomorphism, which means that despite the inherent non-determinism that comes
from the choice of subnets that are contracted, the final result of the
reduction is always the same up to the choice of the identity of the nodes.
Based on this result, a polynomial-time algorithm is presented that computes
this unique result of the AND-OR reduction. Finally, it is shown how this
algorithm can be used to verify if a WF net is an AND-OR net.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,397 | Fusarium Damaged Kernels Detection Using Transfer Learning on Deep Neural Network Architecture | The present work shows the application of transfer learning for a pre-trained
deep neural network (DNN), using a small image dataset ($\approx$ 12,000) on a
single workstation with enabled NVIDIA GPU card that takes up to 1 hour to
complete the training task and achieve an overall average accuracy of $94.7\%$. The DNN yields a misclassification rate of $20\%$ on an external test dataset. The accuracy of the proposed methodology is equivalent to that obtained using
HSI methodology $(81\%-91\%)$ used for the same task, but with the advantage of
being independent of special equipment to classify wheat kernels for FHB
symptoms.
| 0 | 0 | 0 | 1 | 0 | 0 |
18,398 | About simple variational splines from the Hamiltonian viewpoint | In this paper, we study simple splines on a Riemannian manifold $Q$ from the
point of view of the Pontryagin maximum principle (PMP) in optimal control
theory. The control problem consists in finding smooth curves matching two
given tangent vectors with the control being the curve's acceleration, while
minimizing a given cost functional. We focus on cubic splines (quadratic cost
function) and on time-minimal splines (constant cost function) under bounded
acceleration. We present a general strategy to solve for the optimal
hamiltonian within the PMP framework based on splitting the variables by means
of a linear connection. We write down the corresponding hamiltonian equations
in intrinsic form and study the corresponding hamiltonian dynamics in the case
$Q$ is the $2$-sphere. We also elaborate on possible applications, including
landmark cometrics in computational anatomy.
| 0 | 0 | 1 | 0 | 0 | 0 |
18,399 | Adaptive Grasp Control through Multi-Modal Interactions for Assistive Prosthetic Devices | The hand is one of the most complex and important parts of the human body.
The dexterity provided by its multiple degrees of freedom enables us to perform
many of the tasks of daily living which involve grasping and manipulating
objects of interest. Contemporary prosthetic devices for people with
transradial amputations or wrist disarticulation vary in complexity, from
passive prosthetics to complex devices that are body or electrically driven.
One of the important challenges in developing smart prosthetic hands is to
create devices which are able to mimic all activities that a person might
perform and address the needs of a wide variety of users. The approach explored
here is to develop algorithms that permit a device to adapt its behavior to the
preferences of the operator through interactions with the wearer. This device
uses multiple sensing modalities including muscle activity from a myoelectric
armband, visual information from an on-board camera, tactile input through a
touchscreen interface, and speech input from an embedded microphone. Presented
within this paper are the design, software and controls of a platform used to
evaluate this architecture, as well as results from experiments designed to
quantify the performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
18,400 | Generalization of Effective Conductance Centrality for Egonetworks | We study the popular centrality measure known as effective conductance or in
some circles as information centrality. This is an important notion of
centrality for undirected networks, with many applications, e.g., for random
walks, electrical resistor networks, epidemic spreading, etc. In this paper, we
first reinterpret this measure in terms of modulus (energy) of families of
walks on the network. This modulus centrality measure coincides with the
effective conductance measure on simple undirected networks, and extends it to
much more general situations, e.g., directed networks as well. Secondly, we
study a variation of this modulus approach in the egocentric network paradigm.
Egonetworks are networks formed around a focal node (ego) with a specific order
of neighborhoods. We propose efficient analytical and approximate methods for
computing these measures on both undirected and directed networks. Finally, we
describe a simple method inspired by the modulus point-of-view, called shell
degree, which proved to be a useful tool for network science.
| 1 | 1 | 0 | 0 | 0 | 0 |
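For readers who want a concrete baseline, the snippet below computes the classical information (current-flow) centrality on a small undirected graph with NetworkX; this is the standard measure that, per the abstract above, the modulus reinterpretation coincides with on simple undirected networks. The modulus and egonetwork extensions themselves are not implemented here.

```python
import networkx as nx

# Baseline sketch: classical information centrality (current-flow closeness)
# on a small undirected network, as a point of comparison for the modulus-
# based reinterpretation described above.

g = nx.karate_club_graph()
centrality = nx.current_flow_closeness_centrality(g)   # information centrality
top5 = sorted(centrality, key=centrality.get, reverse=True)[:5]
print([(node, round(centrality[node], 3)) for node in top5])
```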