title (string, lengths 7-239) | abstract (string, lengths 7-2.76k) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Linear-Size Hopsets with Small Hopbound, and Distributed Routing with Low Memory | For a positive parameter $\beta$, the $\beta$-bounded distance between a pair
of vertices $u,v$ in a weighted undirected graph $G = (V,E,\omega)$ is the
length of the shortest $u-v$ path in $G$ with at most $\beta$ edges, aka {\em
hops}. For $\beta$ as above and $\epsilon>0$, a {\em $(\beta,\epsilon)$-hopset}
of $G = (V,E,\omega)$ is a graph $G' =(V,H,\omega_H)$ on the same vertex set,
such that all distances in $G$ are $(1+\epsilon)$-approximated by
$\beta$-bounded distances in $G\cup G'$.
Hopsets are a fundamental graph-theoretic and graph-algorithmic construct,
and they are widely used for distance-related problems in a variety of
computational settings. Currently existing constructions of hopsets produce
hopsets either with $\Omega(n \log n)$ edges, or with a hopbound
$n^{\Omega(1)}$. In this paper we devise a construction of {\em linear-size}
hopsets with hopbound $(\log n)^{\log^{(3)}n+O(1)}$. This improves the previous
bound almost exponentially.
We also devise efficient implementations of our construction in PRAM and
distributed settings. The only existing PRAM algorithm \cite{EN16} for
computing hopsets with a constant (i.e., independent of $n$) hopbound requires
$n^{\Omega(1)}$ time. We devise a PRAM algorithm with polylogarithmic running
time for computing hopsets with a constant hopbound, i.e., our running time is
exponentially better than the previous one. Moreover, these hopsets are also
significantly sparser than their counterparts from \cite{EN16}.
We use our hopsets to devise a distributed routing scheme that exhibits
near-optimal tradeoff between individual memory requirement
$\tilde{O}(n^{1/k})$ of vertices throughout preprocessing and routing phases of
the algorithm, and stretch $O(k)$, along with a near-optimal construction time
$\approx D + n^{1/2 + 1/k}$, where $D$ is the hop-diameter of the input graph.
| 1 | 0 | 0 | 0 | 0 | 0 |
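The abstract above defines the β-bounded distance that hopsets are built around. Below is a minimal, hedged illustration of that definition only (not a construction from the paper): a Bellman-Ford relaxation limited to β rounds computes shortest-path distances restricted to at most β hops. The graph, function name, and parameters are hypothetical examples.

```python
# Minimal sketch (not from the paper): beta-bounded distances via a
# hop-limited Bellman-Ford relaxation. dist[v] ends up as the length of the
# shortest source-v path that uses at most beta edges ("hops").
import math

def beta_bounded_distances(n, edges, source, beta):
    """edges: list of (u, v, weight) for an undirected weighted graph on n vertices."""
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(beta):                        # at most beta relaxation rounds
        new_dist = dist[:]
        for u, v, w in edges:
            new_dist[v] = min(new_dist[v], dist[u] + w)
            new_dist[u] = min(new_dist[u], dist[v] + w)
        dist = new_dist
    return dist

# A (beta, eps)-hopset H is then a set of weighted edges such that, for all u, v,
# the beta-bounded distance in G union H is at most (1 + eps) times the true distance in G.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 5.0)]
print(beta_bounded_distances(4, edges, source=0, beta=2))
# With beta=2 only the heavy direct edge reaches vertex 3 (length 5.0);
# beta=3 would find the 3-hop path of length 3.0.
```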
The Openpipeflow Navier--Stokes Solver | Pipelines are used in a huge range of industrial processes involving fluids,
and the ability to accurately predict properties of the flow through a pipe is
of fundamental engineering importance. Armed with parallel MPI, Arnoldi and
Newton--Krylov solvers, the Openpipeflow code can be used in a range of
settings, from large-scale simulation of highly turbulent flow, to the detailed
analysis of nonlinear invariant solutions (equilibria and periodic orbits) and
their influence on the dynamics of the flow.
| 0 | 1 | 0 | 0 | 0 | 0 |
Logical and Inequality Implications for Reducing the Size and Complexity of Quadratic Unconstrained Binary Optimization Problems | The quadratic unconstrained binary optimization (QUBO) problem arises in
diverse optimization applications ranging from Ising spin problems to classical
problems in graph theory and binary discrete optimization. The use of
preprocessing to transform the graph representing the QUBO problem into a
smaller equivalent graph is important for improving solution quality and time
for both exact and metaheuristic algorithms and is a step towards mapping
large-scale QUBO problems to hardware graphs used in quantum annealing computers. In an
earlier paper (Lewis and Glover, 2016) a set of rules was introduced that
achieved significant QUBO reductions as verified through computational testing.
Here this work is extended with additional rules that provide further
reductions that succeed in exactly solving 10% of the benchmark QUBO problems.
An algorithm and associated data structures to efficiently implement the entire
set of rules is detailed and computational experiments are reported that
demonstrate their efficacy.
| 1 | 0 | 0 | 0 | 0 | 0 |
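As a hedged illustration of the kind of preprocessing described above (a generic, conservative rule for exposition, not one of the rules of Lewis and Glover, 2016): a QUBO variable can be fixed whenever its diagonal coefficient dominates the total magnitude of its off-diagonal couplings.

```python
# Illustrative sketch only: minimize x^T Q x over binary x. A variable is fixed
# when its diagonal term dominates all of its couplings, a conservative version
# of classic QUBO reduction rules.
import numpy as np

def qubo_value(Q, x):
    return float(x @ Q @ x)

def fix_dominated_variables(Q):
    """Return {index: forced value} for variables whose optimal value is forced."""
    n = Q.shape[0]
    fixed = {}
    for i in range(n):
        coupling = sum(abs(Q[i, j]) + abs(Q[j, i]) for j in range(n) if j != i)
        if Q[i, i] > coupling:      # setting x_i = 1 can only increase the objective
            fixed[i] = 0
        elif Q[i, i] < -coupling:   # setting x_i = 1 can only decrease the objective
            fixed[i] = 1
    return fixed

Q = np.array([[ 5.0,  1.0, -1.0],
              [ 1.0, -4.0,  0.5],
              [-1.0,  0.5,  0.2]])
print(fix_dominated_variables(Q))   # {0: 0, 1: 1}: two variables eliminated
```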
Optimal Decentralized Economical-sharing Criterion and Scheme for Microgrid | In order to address the economical dispatch problem in islanded microgrid,
this letter proposes an optimal criterion and two decentralized
economical-sharing schemes. The criterion is to judge whether global optimal
economical-sharing can be realized via a decentralized manner. On the one hand,
if the system cost functions meet this criterion, the corresponding
decentralized droop method is proposed to achieve the global optimal dispatch.
Otherwise, if the system does not meet this criterion, a modified method to
achieve suboptimal dispatch is presented. These methods are convenient,
effective, and require no communication.
| 1 | 0 | 0 | 0 | 0 | 0 |
Unbiased inference for discretely observed hidden Markov model diffusions | We develop an importance sampling (IS) type estimator for Bayesian joint
inference on the model parameters and latent states of a class of hidden Markov
models. The hidden state dynamics is a diffusion process and noisy observations
are obtained at discrete points in time. We suppose that the diffusion dynamics
cannot be simulated exactly and hence one must time-discretise the diffusion.
Our approach is based on particle marginal Metropolis--Hastings, particle
filters, and multilevel Monte Carlo. The resulting IS type estimator leads to
inference without a bias from the time-discretisation. We give convergence
results and recommend allocations for algorithm inputs. In contrast to existing
unbiased methods requiring strong conditions on the diffusion and tailored
solutions, our method relies on standard Euler approximations of the diffusion.
Our method is parallelisable, and can be computationally efficient. The
user-friendly approach is illustrated with two examples.
| 0 | 0 | 0 | 1 | 0 | 0 |
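Since the approach above rests on standard Euler approximations of the diffusion, here is a minimal, hedged sketch of that building block (an Euler-Maruyama step; not the authors' estimator, and the drift, diffusion coefficient, and parameter values are placeholders).

```python
# Minimal sketch (illustrative, not the authors' estimator): an Euler--Maruyama
# discretisation of a diffusion dX_t = mu(X_t) dt + sigma(X_t) dW_t, the
# standard Euler approximation that multilevel constructions build on.
import numpy as np

def euler_maruyama(x0, mu, sigma, T, n_steps, rng):
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))      # Brownian increment
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dw
    return x

rng = np.random.default_rng(0)
path = euler_maruyama(1.0, mu=lambda x: -x, sigma=lambda x: 0.5,
                      T=1.0, n_steps=100, rng=rng)
print(path[-1])
```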
Electric propulsion reliability: statistical analysis of on-orbit anomalies and comparative analysis of electric versus chemical propulsion failure rates | With a few hundred spacecraft launched to date with electric propulsion (EP),
it is possible to conduct an epidemiological study of EP on-orbit reliability.
The first objective of the present work was to undertake such a study and
analyze the EP track record of on-orbit anomalies and failures by different
covariates. The second objective was to provide a comparative analysis of EP
failure rates with those of chemical propulsion. After a thorough data
collection, 162 EP-equipped satellites launched between January 1997 and
December 2015 were included in our dataset for analysis. Several statistical
analyses were conducted, at the aggregate level and then with the data
stratified by severity of the anomaly, by orbit type, and by EP technology.
Mean Time To Anomaly (MTTA) and the distribution of the time to anomaly were
investigated, as well as anomaly rates. The important findings in this work
include the following: (1) Post-2005, EP reliability has outperformed that of
chemical propulsion; (2) Hall thrusters have robustly outperformed chemical
propulsion, and they maintain a small but shrinking reliability advantage over
gridded ion engines. Other results were also provided, for example the
differentials in MTTA of minor and major anomalies for gridded ion engines and
Hall thrusters. It was shown that: (3) Hall thrusters exhibit minor anomalies
very early on orbit, which might be indicative of infant anomalies, and thus
would benefit from better ground testing and acceptance procedures; (4) Strong
evidence exists that EP anomalies (onset and likelihood) and orbit type are
dependent, a dependence likely mediated by either the space environment or
differences in thruster duty cycles; (5) Gridded ion thrusters exhibit both
infant and wear-out failures, and thus would benefit from a reliability growth
program that addresses both these types of problems.
| 0 | 1 | 0 | 1 | 0 | 0 |
A Bayesian Framework for Cosmic String Searches in CMB Maps | There exist various proposals to detect cosmic strings from Cosmic Microwave
Background (CMB) or 21 cm temperature maps. Current proposals do not aim to
find the location of strings on sky maps; rather, all of these approaches can be
thought of as a statistic on a sky map. We propose a Bayesian interpretation of
cosmic string detection and within that framework, we derive a connection
between estimates of cosmic string locations and cosmic string tension $G\mu$.
We use this Bayesian framework to develop a machine learning framework for
detecting strings from sky maps and outline how to implement this framework
with neural networks. The neural network we trained was able to detect and
locate cosmic strings on a noiseless CMB temperature map down to a string tension
of $G\mu=5 \times10^{-9}$ and when analyzing a CMB temperature map that does
not contain strings, the neural network gives a 0.95 probability that
$G\mu\leq2.3\times10^{-9}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Constraining the giant planets' initial configuration from their evolution: implications for the timing of the planetary instability | Recent works on planetary migration show that the orbital structure of the
Kuiper belt can be very well reproduced if before the onset of the planetary
instability Neptune underwent a long-range planetesimal-driven migration up to
$\sim$28 au. However, considering that all giant planets should have been
captured in mean motion resonances among themselves during the gas-disk phase,
it is not clear whether such a very specific evolution for Neptune is possible,
nor whether the instability could have happened at late times. Here, we first
investigate which initial resonant configuration of the giant planets can be
compatible with Neptune being extracted from the resonant chain and migrating
to $\sim$28 au before the planetary instability happened. We address the
late instability issue by investigating the conditions where the planets can
stay in resonance for about 400 My. Our results indicate that this can happen
only in the case where the planetesimal disk is beyond a specific minimum
distance $\delta_{stab}$ from Neptune. Then, if there is a sufficient amount of
dust produced in the planetesimal disk, that drifts inwards, Neptune can enter
in a slow dust-driven migration phase for hundreds of Mys until it reaches a
critical distance $\delta_{mig}$ from the disk. From that point, faster
planetesimal-driven migration takes over and Neptune continues migrating
outward until the instability happens. We conclude that, although an early
instability reproduces more easily the evolution of Neptune required to explain
the structure of the Kuiper belt, such evolution is also compatible with a late
instability.
| 0 | 1 | 0 | 0 | 0 | 0 |
General AI Challenge - Round One: Gradual Learning | The General AI Challenge is an initiative to encourage the wider artificial
intelligence community to focus on important problems in building intelligent
machines with more general scope than is currently possible. The challenge
comprises multiple rounds, with the first round focusing on gradual
learning, i.e. the ability to re-use already learned knowledge for efficiently
learning to solve subsequent problems. In this article, we will present details
of the first round of the challenge, its inspiration and aims. We also outline
a more formal description of the challenge and present a preliminary analysis
of its curriculum, based on ideas from computational mechanics. We believe
that such a formalism will allow for a more principled approach towards
investigating tasks in the challenge, building new curricula, and
potentially improving subsequent challenge rounds.
| 1 | 0 | 0 | 0 | 0 | 0 |
DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks | A key enabler for optimizing business processes is accurately estimating the
probability distribution of a time series' future given its past. Such
probabilistic forecasts are crucial for example for reducing excess inventory
in supply chains. In this paper we propose DeepAR, a novel methodology for
producing accurate probabilistic forecasts, based on training an
auto-regressive recurrent network model on a large number of related time
series. We show through extensive empirical evaluation on several real-world
forecasting data sets that our methodology is more accurate than
state-of-the-art models, while requiring minimal feature engineering.
| 1 | 0 | 0 | 1 | 0 | 0 |
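A hedged, minimal sketch of the idea described above (not the authors' implementation; PyTorch is assumed here only for illustration): an autoregressive RNN that emits the parameters of a predictive distribution at each time step and is trained by maximum likelihood over many related series.

```python
# Minimal sketch (not the authors' implementation): an autoregressive RNN that
# outputs the parameters of a Gaussian predictive distribution at each step.
import torch
import torch.nn as nn

class TinyDeepAR(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.mu = nn.Linear(hidden, 1)
        self.log_sigma = nn.Linear(hidden, 1)

    def forward(self, z_lagged):
        # z_lagged: (batch, time, 1), the previous observation at each step
        h, _ = self.rnn(z_lagged)
        return self.mu(h), torch.exp(self.log_sigma(h))

model = TinyDeepAR()
z = torch.randn(8, 24, 1)                        # 8 related series, 24 steps each
mu, sigma = model(z[:, :-1, :])                  # condition on lagged values
nll = -torch.distributions.Normal(mu, sigma).log_prob(z[:, 1:, :]).mean()
nll.backward()                                   # gradient for one training step
```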
Algebraic and logistic investigations on free lattices | Lorenzen's "Algebraische und logistische Untersuchungen über freie
Verbände" appeared in 1951 in The journal of symbolic logic. These
"Investigations" have immediately been recognised as a landmark in the history
of infinitary proof theory, but their approach and method of proof have not
been incorporated into the corpus of proof theory. More precisely, Lorenzen
proves the admissibility of cut by double induction, on the cut formula and on
the complexity of the derivations, without using any ordinal assignment,
contrary to the presentation of cut elimination in most standard texts on proof
theory. This translation has the intent of giving a new impetus to their
reception.
The "Investigations" are best known for providing a constructive proof of
consistency for ramified type theory without the axiom of reducibility. They do so
by showing that it is a part of a trivially consistent "inductive calculus"
that describes our knowledge of arithmetic without detour. The proof resorts
only to the inductive definition of formulas and theorems.
They propose furthermore a definition of a semilattice, of a distributive
lattice, of a pseudocomplemented semilattice, and of a countably complete
Boolean lattice as deductive calculi, and show how to present them for
constructing the respective free object over a given preordered set.
This translation is published with the kind permission of Lorenzen's
daughter, Jutta Reinhardt.
| 0 | 0 | 1 | 0 | 0 | 0 |
Online Interactive Collaborative Filtering Using Multi-Armed Bandit with Dependent Arms | Online interactive recommender systems strive to promptly suggest to
consumers appropriate items (e.g., movies, news articles) according to the
current context including both the consumer and item content information.
However, such context information is often unavailable in practice for the
recommendation, where only the users' interaction data on items can be
utilized. Moreover, the lack of interaction records, especially for new users
and items, worsens the performance of recommendation further. To address these
issues, collaborative filtering (CF), one of the recommendation techniques
relying on the interaction data only, as well as the online multi-armed bandit
mechanisms, capable of achieving the balance between exploitation and
exploration, are adopted in the online interactive recommendation settings, by
assuming independent items (i.e., arms). Nonetheless, the assumption rarely
holds in reality, since the real-world items tend to be correlated with each
other (e.g., two articles with similar topics). In this paper, we study online
interactive collaborative filtering problems by considering the dependencies
among items. We explicitly formulate the item dependencies as the clusters on
arms, where the arms within a single cluster share the similar latent topics.
In light of the topic modeling techniques, we come up with a generative model
to generate the items from their underlying topics. Furthermore, an efficient
online algorithm based on particle learning is developed for inferring both
latent parameters and states of our model. Additionally, our inferred model can
be naturally integrated with existing multi-armed selection strategies in the
online interactive collaborative filtering setting. Empirical studies on two real-world
applications, online recommendations of movies and news, demonstrate both the
effectiveness and efficiency of the proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Latent Features with Pairwise Penalties in Matrix Completion | Low-rank matrix completion (MC) has achieved great success in many real-world
data applications. A latent feature model formulation is usually employed and,
to improve prediction performance, the similarities between latent variables
can be exploited by pairwise learning, e.g., the graph regularized matrix
factorization (GRMF) method. However, existing GRMF approaches often use a
squared L2 norm to measure the pairwise difference, which may be overly
influenced by dissimilar pairs and lead to inferior prediction. To fully
empower pairwise learning for matrix completion, we propose a general
optimization framework that allows a rich class of (non-)convex pairwise
penalty functions. A new and efficient algorithm is further developed to
uniformly solve the optimization problem, with a theoretical convergence
guarantee. In an important situation where the latent variables form a small
number of subgroups, its statistical guarantee is also fully characterized. In
particular, we theoretically characterize the complexity-regularized maximum
likelihood estimator, as a special case of our framework. It has a better error
bound when compared to the standard trace-norm regularized matrix completion.
We conduct extensive experiments on both synthetic and real datasets to
demonstrate the superior performance of this general framework.
| 0 | 0 | 0 | 1 | 0 | 0 |
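The sketch below only illustrates the objective described above, not the paper's algorithm or theory: matrix factorization on the observed entries plus a generic pairwise penalty on the latent row factors, optimized by plain gradient descent. The penalty gradient `rho_grad` defaults to a squared-L2 (GRMF-style) term and could be swapped for a non-convex alternative; all names and parameter values are illustrative.

```python
# Minimal sketch (illustrative, not the paper's algorithm): matrix factorization
# with a pairwise penalty on latent row factors; rho_grad is the penalty gradient.
import numpy as np

def pairwise_mf(M, mask, rank=5, lam=0.1, lr=0.01, iters=300,
                rho_grad=lambda d: d):           # squared-L2 penalty by default
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.normal(size=(m, rank)) * 0.1
    V = rng.normal(size=(n, rank)) * 0.1
    for _ in range(iters):
        R = mask * (U @ V.T - M)                 # residual on observed entries only
        gU, gV = R @ V, R.T @ U
        for i in range(m):                       # pairwise penalty over row factors
            for j in range(m):
                if i != j:
                    gU[i] += lam * rho_grad(U[i] - U[j])
        U -= lr * gU
        V -= lr * gV
    return U, V

rng = np.random.default_rng(1)
M_true = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 15))
mask = (rng.uniform(size=M_true.shape) < 0.4).astype(float)
U, V = pairwise_mf(M_true * mask, mask, rank=3)
print(np.abs((U @ V.T - M_true) * (1 - mask)).mean())   # error on held-out entries
```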
Renewal theorems and mixing for non Markov flows with infinite measure | We obtain results on mixing for a large class of (not necessarily Markov)
infinite measure semiflows and flows. Erickson proved, amongst other things, a
strong renewal theorem in the corresponding i.i.d. setting. Using operator
renewal theory, we extend Erickson's methods to the deterministic (i.e.
non-i.i.d.) continuous time setting and obtain results on mixing as a
consequence.
Our results apply to intermittent semiflows and flows of Pomeau-Manneville
type (both Markov and non-Markov), and to semiflows and flows over
Collet-Eckmann maps with nonintegrable roof function.
| 0 | 0 | 1 | 0 | 0 | 0 |
Post-hoc labeling of arbitrary EEG recordings for data-efficient evaluation of neural decoding methods | Many cognitive, sensory and motor processes have correlates in oscillatory
neural sources, which are embedded as a subspace into the recorded brain
signals. Decoding such processes from noisy
magnetoencephalogram/electroencephalogram (M/EEG) signals usually requires the
use of data-driven analysis methods. The objective evaluation of such decoding
algorithms on experimental raw signals, however, is a challenge: the amount of
available M/EEG data typically is limited, labels can be unreliable, and raw
signals often are contaminated with artifacts. The latter is specifically
problematic, if the artifacts stem from behavioral confounds of the oscillatory
neural processes of interest.
To overcome some of these problems, simulation frameworks have been
introduced for benchmarking decoding methods. Generating artificial brain
signals, however, most simulation frameworks make strong and partially
unrealistic assumptions about brain activity, which limits the generalization
of obtained results to real-world conditions.
In the present contribution, we strive to remove many shortcomings of current
simulation frameworks and propose a versatile alternative that allows for
objective evaluation and benchmarking of novel data-driven decoding methods for
neural signals. Its central idea is to utilize post-hoc labelings of arbitrary
M/EEG recordings. This strategy makes it paradigm-agnostic and allows us to
generate comparatively large datasets with noiseless labels. Source code and
data of the novel simulation approach are made available for facilitating its
adoption.
| 1 | 0 | 0 | 1 | 0 | 0 |
Sums of two cubes as twisted perfect powers, revisited | In this paper, we sharpen earlier work of the first author, Luca and
Mulholland, showing that the Diophantine equation $$ A^3+B^3 = q^\alpha C^p, \,
\, ABC \neq 0, \, \, \gcd (A,B) =1, $$ has, for "most" primes $q$ and suitably
large prime exponents $p$, no solutions. We handle a number of (presumably
infinite) families where no such conclusion was hitherto known. Through further
application of certain {\it symplectic criteria}, we are able to make some
conditional statements about still more values of $q$; a sample result is
that, for all but $O(\sqrt{x}/\log x)$ primes $q$ up to $x$, the equation $$
A^3 + B^3 = q C^p $$ has no solutions in coprime, nonzero integers $A, B$
and $C$, for a positive proportion of prime exponents $p$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications | Bilevel optimization is defined as a mathematical program, where an
optimization problem contains another optimization problem as a constraint.
These problems have received significant attention from the mathematical
programming community. Only limited work exists on bilevel problems using
evolutionary computation techniques; however, recently there has been an
increasing interest due to the proliferation of practical applications and the
potential of evolutionary algorithms in tackling these problems. This paper
provides a comprehensive review on bilevel optimization from the basic
principles to solution strategies, both classical and evolutionary. A number of
potential application problems are also discussed. To offer the readers
insights on the prominent developments in the field of bilevel optimization, we
have performed an automated text-analysis of an extended list of papers
published on bilevel optimization to date. This paper should motivate
evolutionary computation researchers to pay more attention to this practical
yet challenging area.
| 1 | 0 | 1 | 0 | 0 | 0 |
Speaker Recognition with Cough, Laugh and "Wei" | This paper proposes a speaker recognition (SRE) task with trivial speech
events, such as cough and laugh. These trivial events are ubiquitous in
conversations and less subjected to intentional change, therefore offering
valuable particularities to discover the genuine speaker from disguised speech.
However, trivial events are often short and idiosyncratic in spectral patterns,
making SRE extremely difficult. Fortunately, we found a very powerful deep
feature learning structure that can extract highly speaker-sensitive features.
By employing this tool, we studied the SRE performance on three types of
trivial events: cough, laugh and "Wei" (a short Chinese "Hello"). The results
show that there is rich speaker information within these trivial events, even
for cough, which is intuitively less speaker-distinguishable. With the deep
feature approach, the EER can reach 10%-14% with the three trivial events,
despite their extremely short durations (0.2-1.0 seconds).
| 1 | 0 | 0 | 0 | 0 | 0 |
Explicit equations for two-dimensional water waves with constant vorticity | Governing equations for two-dimensional inviscid free-surface flows with
constant vorticity over arbitrary non-uniform bottom profile are presented in
exact and compact form using conformal variables. An efficient and very
accurate numerical method for this problem is developed.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Global Limiting Absorption Principle for Massless Dirac Operators | We prove a global limiting absorption principle on the entire real line for
free, massless Dirac operators $H_0 = \alpha \cdot (-i \nabla)$ for all space
dimensions $n \in \mathbb{N}$, $n \geq 2$. This is a new result for all
dimensions other than three; in particular, it applies to the two-dimensional
case which is known to be of some relevance in applications to graphene.
We also prove an essential self-adjointness result for first-order
matrix-valued differential operators with Lipschitz coefficients.
| 0 | 0 | 1 | 0 | 0 | 0 |
Rotation Averaging and Strong Duality | In this paper we explore the role of duality principles within the problem of
rotation averaging, a fundamental task in a wide range of computer vision
applications. In its conventional form, rotation averaging is stated as a
minimization over multiple rotation constraints. As these constraints are
non-convex, this problem is generally considered challenging to solve globally.
We show how to circumvent this difficulty through the use of Lagrangian
duality. While such an approach is well-known it is normally not guaranteed to
provide a tight relaxation. Based on spectral graph theory, we analytically
prove that in many cases there is no duality gap unless the noise levels are
severe. This allows us to obtain certifiably global solutions to a class of
important non-convex problems in polynomial time.
We also propose an efficient, scalable algorithm that out-performs general
purpose numerical solvers and is able to handle the large problem instances
commonly occurring in structure from motion settings. The potential of this
proposed method is demonstrated on a number of different problems, consisting
of both synthetic and real-world data.
| 1 | 0 | 0 | 0 | 0 | 0 |
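As a small, hedged illustration of the underlying problem (a simple special case for exposition, not the paper's duality-based method): single rotation averaging under the chordal metric, where the Euclidean mean of the input rotation matrices is projected back onto SO(3) with an SVD.

```python
# Minimal sketch (special case, not the paper's method): chordal mean of rotations.
import numpy as np

def chordal_mean(rotations):
    """rotations: array of shape (k, 3, 3) of rotation matrices."""
    M = np.mean(rotations, axis=0)               # arithmetic mean of the matrices
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))           # keep determinant = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt       # closest rotation in Frobenius norm

Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0, 0, 1]])
print(chordal_mean(np.stack([Rz(0.1), Rz(0.2), Rz(0.3)])))   # close to Rz(0.2)
```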
Active Learning with Gaussian Processes for High Throughput Phenotyping | A looming question that must be solved before robotic plant phenotyping
capabilities can have significant impact to crop improvement programs is
scalability. High Throughput Phenotyping (HTP) uses robotic technologies to
analyze crops in order to determine species with favorable traits; however,
current practices rely on exhaustive coverage and data collection from the
entire crop field being monitored under the breeding experiment. This works
well in relatively small agricultural fields but cannot be scaled to
larger ones, thus limiting the progress of genetics research. In this work, we
propose an active learning algorithm to enable an autonomous system to collect
the most informative samples in order to accurately learn the distribution of
phenotypes in the field with the help of a Gaussian Process model. We
demonstrate the superior performance of our proposed algorithm compared to the
current practices on sorghum phenotype data collection.
| 1 | 0 | 0 | 1 | 0 | 0 |
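A minimal, hedged sketch of the acquisition idea above (not the authors' algorithm; scikit-learn is assumed here only for illustration): fit a Gaussian Process to the phenotype measurements gathered so far and sample next at the candidate location with the largest predictive uncertainty.

```python
# Minimal sketch (illustrative): GP-based active sampling by maximum predictive variance.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def next_sample(X_observed, y_observed, X_candidates):
    gp = GaussianProcessRegressor().fit(X_observed, y_observed)
    _, std = gp.predict(X_candidates, return_std=True)
    return X_candidates[np.argmax(std)]          # most informative location to visit

rng = np.random.default_rng(0)
X_obs = rng.uniform(size=(10, 2))                # field coordinates sampled so far
y_obs = np.sin(X_obs[:, 0]) + rng.normal(scale=0.05, size=10)   # phenotype proxy
X_cand = rng.uniform(size=(200, 2))              # candidate locations
print(next_sample(X_obs, y_obs, X_cand))
```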
Comparative Autoignition Trends in the Butanol Isomers at Elevated Pressure | Autoignition experiments of stoichiometric mixtures of s-, t-, and i-butanol
in air have been performed using a heated rapid compression machine (RCM). At
compressed pressures of 15 and 30 bar and for compressed temperatures in the
range of 715-910 K, no evidence of a negative temperature coefficient region in
terms of ignition delay response is found. The present experimental results are
also compared with previously reported RCM data of n-butanol in air. The order
of reactivity of the butanols is
n-butanol>s-butanol$\approx$i-butanol>t-butanol at the lower pressure, but
changes to n-butanol>t-butanol>s-butanol>i-butanol at higher pressure. In
addition, t-butanol shows pre-ignition heat release behavior, which is
especially evident at higher pressures. To help identify the controlling
chemistry leading to this pre-ignition heat release, off-stoichiometric
experiments are further performed at 30 bar compressed pressure, for t-butanol
at $\phi$ = 0.5 and $\phi$ = 2.0 in air. For these experiments, higher fuel
loading (i.e. $\phi$ = 2.0) causes greater pre-ignition heat release (as
indicated by greater pressure rise) than the stoichiometric or $\phi$ = 0.5
cases. Comparison of the experimental ignition delays with the simulated
results using two literature kinetic mechanisms shows generally good agreement,
and one mechanism is further used to explore and compare the fuel decomposition
pathways of the butanol isomers. Using this mechanism, the importance of peroxy
chemistry in the autoignition of the butanol isomers is highlighted and
discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
A cross-vendor and cross-state analysis of the GPS-probe data latency | Crowdsourced GPS probe data has become a major data source for real-time traffic
information applications. In addition to traditional traveler advisory systems
such as dynamic message signs (DMS) and 511 systems, probe data is being used
for automatic incident detection, Integrated Corridor Management (ICM), end of
queue warning systems, and mobility-related smartphone applications. Several
private sector vendors offer minute by minute network-wide travel time and
speed probe data. The quality of such data in terms of deviation of the
reported travel time and speeds from ground-truth has been extensively studied
in recent years, and as a result concerns over the accuracy of probe data have
mostly faded away. However, the latency of probe data, defined as the lag
between the time that a disturbance in traffic speed is reported in the
outsourced data feed, and the time that the traffic is perturbed, has become a
subject of interest. The extent of latency of probe data for real-time
applications is critical, so it is important to have a good understanding of
the amount of latency and its influencing factors. This paper uses high-quality
independent Bluetooth/Wi-Fi re-identification data collected on multiple
freeway segments in three different states, to measure the latency of the
vehicle probe data provided by three major vendors. The statistical
distribution of the latency and its sensitivity to speed slowdown and recovery
periods are discussed.
| 0 | 0 | 0 | 1 | 0 | 0 |
Determining Phonon Coherence Using Photon Sideband Detection | Generating and detecting coherent high-frequency heat-carrying phonons has
been a topic of great interest in recent years. While there have been
successful attempts in generating and observing coherent phonons, rigorous
techniques to characterize and detect phonon coherence in a crystalline
material have been lagging compared to what has been achieved for photons. One
main challenge is a lack of detailed understanding of how detection signals for
phonons can be related to coherence. The quantum theory of photoelectric
detection has greatly advanced the ability to characterize photon coherence in
the last century and a similar theory for phonon detection is necessary. Here,
we re-examine the optical sideband fluorescence technique that has been used
to detect high-frequency phonons in materials with optically active defects. We
apply the quantum theory of photodetection to the sideband technique and
propose signatures in sideband photon-counting statistics and second-order
correlation measurements of sideband signals that indicate the degree of phonon
coherence. Our theory can be applied to recently performed experiments to
bring the determination of phonon coherence on par with that of
photons.
| 0 | 1 | 0 | 0 | 0 | 0 |
Landscape of Configurational Density of States for Discrete Large Systems | For classical many-body systems, our recent study reveals that expectation
value of internal energy, structure, and free energy can be well characterized
by a single specially-selected microscopic structure. This finding relies on
the fact that the configurational density of states (CDOS) for a typical classical
system, before applying interatomic interactions, can be well characterized by a
multidimensional Gaussian distribution. Although the Gaussian distribution is a
well-known and widely used function in diverse fields, it is quantitatively
unclear why the CDOS becomes Gaussian when the system size gets large, even for
the CDOS projected onto a single chosen coordination. Here we demonstrate that, for
an equiatomic binary system, the one-dimensional CDOS along the coordination of a pair
correlation can be reasonably described by a Gaussian distribution under an
appropriate condition, whose deviation from the real CDOS mainly reflects the
existence of triplet closed links containing the pair figure considered. The
present result thus represents a significant advance in the analytic determination of
the special microscopic states that characterize macroscopic physical properties
in the equilibrium state.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the domain of Dirac and Laplace type operators on stratified spaces | We consider a generalized Dirac operator on a compact stratified space with
an iterated cone-edge metric. Assuming a spectral Witt condition, we prove its
essential self-adjointness and identify its domain and the domain of its square
with weighted edge Sobolev spaces. This sharpens previous results where the
minimal domain is shown only to be a subset of an intersection of weighted edge
Sobolev spaces. Our argument does not rely on microlocal techniques and is very
explicit. The novelty of our approach is the use of an abstract functional
analytic notion of interpolation scales. Our results hold for the Gauss-Bonnet
and spin Dirac operators satisfying a spectral Witt condition.
| 0 | 0 | 1 | 0 | 0 | 0 |
Modeling Influence with Semantics in Social Networks: a Survey | The discovery of influential entities in all kinds of networks (e.g. social,
digital, or computer) has always been an important field of study. In recent
years, Online Social Networks (OSNs) have been established as a basic means of
communication and often influencers and opinion makers promote politics,
events, brands or products through viral content. In this work, we present a
systematic review across i) online social influence metrics, properties, and
applications and ii) the role of semantics in modeling OSN information. We end
up with the conclusion that both areas can jointly provide useful insights
towards the qualitative assessment of viral user-generated content, as well as
for modeling the dynamic properties of influential content and its flow
dynamics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Differentiable Submodular Maximization | We consider learning of submodular functions from data. These functions are
important in machine learning and have a wide range of applications, e.g. data
summarization, feature selection and active learning. Despite their
combinatorial nature, submodular functions can be maximized approximately with
strong theoretical guarantees in polynomial time. Typically, learning the
submodular function and optimization of that function are treated separately,
i.e. the function is first learned using a proxy objective and subsequently
maximized. In contrast, we show how to perform learning and optimization
jointly. By interpreting the output of greedy maximization algorithms as
distributions over sequences of items and smoothening these distributions, we
obtain a differentiable objective. In this way, we can differentiate through
the maximization algorithms and optimize the model to work well with the
optimization algorithm. We theoretically characterize the error made by our
approach, yielding insights into the tradeoff of smoothness and accuracy. We
demonstrate the effectiveness of our approach for jointly learning and
optimizing on synthetic maximum cut data, and on real world applications such
as product recommendation and image collection summarization.
| 0 | 0 | 0 | 1 | 0 | 0 |
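The sketch below is only a rough illustration of the smoothing idea described above, not the paper's estimator: the greedy argmax over marginal gains is replaced by a softmax distribution over items, so each selection step becomes a probability distribution that can be smoothed and, in a differentiable framework, propagated through. All names and the toy coverage function are hypothetical.

```python
# Illustrative sketch: "soft" greedy selection for submodular maximization,
# where the argmax over marginal gains becomes a softmax distribution.
import numpy as np

def soft_greedy(marginal_gain, items, k, temperature=0.1, seed=0):
    rng = np.random.default_rng(seed)
    selected = []
    for _ in range(k):
        remaining = [i for i in items if i not in selected]
        gains = np.array([marginal_gain(i, selected) for i in remaining], dtype=float)
        probs = np.exp(gains / temperature)
        probs /= probs.sum()                     # softmax: smoothed argmax over gains
        selected.append(int(rng.choice(remaining, p=probs)))
    return selected

# Example: a toy coverage function over sets of covered elements.
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
def coverage_gain(i, selected):
    covered = set().union(*[cover[j] for j in selected]) if selected else set()
    return len(cover[i] - covered)

print(soft_greedy(coverage_gain, items=list(cover), k=2))
```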
On the Support of Weight Modules for Affine Kac-Moody-Algebras | An irreducible weight module of an affine Kac-Moody algebra $\mathfrak{g}$ is
called dense if its support is equal to a coset in $\mathfrak{h}^{*}/Q$.
Following a conjecture of V. Futorny about affine Kac-Moody algebras
$\mathfrak{g}$, an irreducible weight $\mathfrak{g}$-module is dense if and
only if it is cuspidal (i.e. not a quotient of an induced module). The
conjecture is confirmed for $\mathfrak{g}=A_{2}^{\left(1\right)}$,
$A_{3}^{\left(1\right)}$ and $A_{4}^{\left(1\right)}$, and a classification of
the supports of the irreducible weight $\mathfrak{g}$-modules is obtained. For all
$A_{n}^{\left(1\right)}$ the problem is reduced to finding primitive elements
for only finitely many cases, all lying below a certain bound. For the
left-over finitely many cases an algorithm is proposed, which leads to the
solution of Futorny's conjecture for the cases $A_{2}^{\left(1\right)}$ and
$A_{3}^{\left(1\right)}$. Yet, the solution of the case
$A_{4}^{\left(1\right)}$ required additional combinatorics.
For the proofs, a new category of hypoabelian Lie subalgebras,
pre-prosolvable subalgebras, and a subclass thereof, quasicone subalgebras, is
introduced and its tropical matrix algebra structure outlined.
| 0 | 0 | 1 | 0 | 0 | 0 |
The middle-scale asymptotics of Wishart matrices | We study the behavior of a real $p$-dimensional Wishart random matrix with
$n$ degrees of freedom when $n,p\rightarrow\infty$ but $p/n\rightarrow 0$. We
establish the existence of phase transitions when $p$ grows at the order
$n^{(k+1)/(k+3)}$ for every $k\in\mathbb{N}$, and derive expressions for
approximating densities between every two phase transitions. To do this, we
make use of a novel tool we call the G-transform of a distribution, which is
closely related to the characteristic function. We also derive an extension of
the $t$-distribution to the real symmetric matrices, which naturally appears as
the conjugate distribution to the Wishart under a G-transformation, and show
its empirical spectral distribution obeys a semicircle law when $p/n\rightarrow
0$. Finally, we discuss how the phase transitions of the Wishart distribution
might originate from changes in rates of convergence of symmetric $t$
statistics.
| 0 | 0 | 1 | 1 | 0 | 0 |
Online Spatial Concept and Lexical Acquisition with Simultaneous Localization and Mapping | In this paper, we propose an online learning algorithm based on a
Rao-Blackwellized particle filter for spatial concept acquisition and mapping.
We previously proposed a nonparametric Bayesian spatial concept acquisition model
(SpCoA). Here, we propose a novel method (SpCoSLAM) integrating SpCoA and FastSLAM in
the theoretical framework of the Bayesian generative model. The proposed method
can simultaneously learn place categories and lexicons while incrementally
generating an environmental map. Furthermore, the proposed method has scene
image features and a language model added to SpCoA. In the experiments, we
tested online learning of spatial concepts and environmental maps in a novel
environment of which the robot did not have a map. Then, we evaluated the
results of online learning of spatial concepts and lexical acquisition. The
experimental results demonstrated that the robot was able to more accurately
learn the relationships between words and the place in the environmental map
incrementally by using the proposed method.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Method of Arbitrarily Large Moments to Calculate Single Scale Processes in Quantum Field Theory | We devise a new method to calculate a large number of Mellin moments of
single scale quantities using the systems of differential and/or difference
equations obtained by integration-by-parts identities between the corresponding
Feynman integrals of loop corrections to physical quantities. These scalar
quantities have a much simpler mathematical structure than the complete
quantity. A sufficiently large set of moments may even allow the analytic
reconstruction of the whole quantity considered, which holds in the case of first-order
factorizing systems. In any case, one may derive highly precise numerical
representations in general using this method, which is otherwise completely
analytic.
| 1 | 0 | 1 | 0 | 0 | 0 |
Unraveling the escape dynamics and the nature of the normally hyperbolic invariant manifolds in tidally limited star clusters | The escape mechanism of orbits in a star cluster rotating around its parent
galaxy in a circular orbit is investigated. A three degrees of freedom model is
used for describing the dynamical properties of the Hamiltonian system. The
gravitational field of the star cluster is represented by a smooth and
spherically symmetric Plummer potential. We distinguish between ordered and
chaotic orbits as well as between trapped and escaping orbits, considering only
unbounded motion for several energy levels. The Smaller Alignment Index (SALI)
method is used for determining the regular or chaotic nature of the orbits. The
basins of escape are located and they are also correlated with the
corresponding escape time of the orbits. Areas of bounded regular or chaotic
motion and basins of escape were found to coexist in the $(x,z)$ plane. The
properties of the normally hyperbolic invariant manifolds (NHIMs), located in
the vicinity of the index-1 Lagrange points $L_1$ and $L_2$, are also explored.
These manifolds are of paramount importance as they control the flow of stars
over the saddle points, while they also trigger the formation of tidal tails
observed in star clusters. Bifurcation diagrams of the Lyapunov periodic orbits
as well as restrictions of the Poincaré map to the NHIMs are deployed for
elucidating the dynamics in the neighbourhood of the saddle points. The
extended tidal tails, or tidal arms, formed by stars with low velocity which
escape through the Lagrange points are monitored. The numerical results of this
work are also compared with previous related work.
| 0 | 1 | 0 | 0 | 0 | 0 |
The ALMA Early Science View of FUor/EXor objects. III. The Slow and Wide Outflow of V883 Ori | We present Atacama Large Millimeter/ sub-millimeter Array (ALMA) observations
of V883 Ori, an FU Ori object. We describe the molecular outflow and envelope
of the system based on the $^{12}$CO and $^{13}$CO emissions, which together
trace a bipolar molecular outflow. The C$^{18}$O emission traces the rotational
motion of the circumstellar disk. From the $^{12}$CO blue-shifted emission, we
estimate a wide opening angle of $\sim$150$^{\circ}$ for the outflow
cavities. Also, we find that the outflow is very slow (characteristic velocity
of only 0.65 km~s$^{-1}$), which is unique for an FU Ori object. We calculate
the kinematic properties of the outflow in the standard manner using the
$^{12}$CO and $^{13}$CO emissions. In addition, we present a P Cygni profile
observed in the high-resolution optical spectrum, evidence of a wind driven by
accretion, which may cause the particular morphology of the
outflows. We discuss the implications of our findings and the rise of these
slow outflows during and/or after the formation of a rotationally supported
disk.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hydrodynamical models of cometary HII regions | We have modelled the evolution of cometary HII regions produced by zero-age
main-sequence stars of O and B spectral types, which are driving strong winds
and are born off-centre from spherically symmetric cores with power-law
($\alpha = 2$) density slopes. A model parameter grid was produced that spans
stellar mass, age and core density. Exploring this parameter space we
investigated limb-brightening, a feature commonly seen in cometary HII regions.
We found that stars with mass $M_\star \geq 12\, \mathrm{M}_\odot$ produce this
feature. Our models have a cavity bounded by a contact discontinuity separating
hot shocked wind and ionised ambient gas that is similar in size to the
surrounding HII region. Due to early pressure confinement we did not see shocks
outside of the contact discontinuity for stars with $M_\star \leq 40\,
\mathrm{M}_\odot$, but the cavities were found to continue to grow. The cavity
size in each model plateaus as the HII region stagnates. The spectral energy
distributions of our models are similar to those from identical stars evolving
in uniform density fields. The turn-over frequency is slightly lower in our
power-law models due to a higher proportion of low density gas covered by the
HII regions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Feature Selection based on the Local Lift Dependence Scale | This paper uses a classical approach to feature selection: minimization of a
cost function applied on estimated joint distributions. However, the search
space in which such minimization is performed is extended. In the original
formulation, the search space is the Boolean lattice of features sets (BLFS),
while, in the present formulation, it is a collection of Boolean lattices of
ordered pairs (features, associated value) (CBLOP), indexed by the elements of
the BLFS. In this approach, we may not only select the features that are most
related to a variable Y, but also select the values of the features that most
influence the variable or that are most prone to have a specific value of Y. A
local formulation of Shannon's mutual information is applied on a CBLOP to
select features, namely the Local Lift Dependence Scale, a scale for
measuring variable dependence at multiple resolutions. The main contribution of
this paper is to define and apply this local measure, which permits the analysis of
local properties of joint distributions that are neglected by Shannon's
classical global measure. The proposed approach is applied to a dataset
consisting of student performances on a university entrance exam, as well as on
undergraduate courses. The approach is also applied to two datasets of the UCI
Machine Learning Repository.
| 1 | 0 | 0 | 1 | 0 | 0 |
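As a hedged illustration of the local quantity underlying the scale discussed above (a generic lift computation, not the paper's full scale): the lift between a feature value x and a target value y is P(x, y) / (P(x) P(y)), whose logarithm is the pointwise mutual information whose expectation gives Shannon's mutual information. The data below are made up.

```python
# Illustrative sketch: empirical lift between a feature value and a target value.
import numpy as np

def lift(feature, target, x, y):
    feature, target = np.asarray(feature), np.asarray(target)
    p_xy = np.mean((feature == x) & (target == y))
    p_x, p_y = np.mean(feature == x), np.mean(target == y)
    return p_xy / (p_x * p_y)

feature = [0, 0, 1, 1, 1, 0, 1, 0]
target  = [0, 0, 1, 1, 0, 0, 1, 1]
print(lift(feature, target, x=1, y=1))   # > 1 indicates positive local dependence
```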
Weyl Rings and enhanced susceptibilities in Pyrochlore Iridates: $k\cdot p$ Analysis of Cluster Dynamical Mean-Field Theory Results | We match analytic results to numerical calculations to provide a detailed
picture of the metal-insulator and topological transitions found in density
functional plus cluster dynamical mean-field calculations of pyrochlore
iridates. We discuss the transition from Weyl metal to Weyl semimetal regimes,
and then analyse in detail the properties of the Weyl semimetal phase and its
evolution into the topologically trivial insulator. The energy scales in the
Weyl semimetal phase are found to be very small, as are the anisotropy
parameters. The electronic structure can to a good approximation be described
as `Weyl rings' and one of the two branches that contributes to the Weyl bands
is essentially flat, leading to enhanced susceptibilities. The optical
longitudinal and Hall conductivities are determined; the frequency dependence
includes pronounced features that reveal the basic energy scales of the Weyl
semimetal phase.
| 0 | 1 | 0 | 0 | 0 | 0 |
Inconsistency of Template Estimation by Minimizing of the Variance/Pre-Variance in the Quotient Space | We tackle the problem of template estimation when data have been randomly
deformed under a group action in the presence of noise. In order to estimate
the template, one often minimizes the variance when the influence of the
transformations has been removed (computation of the Fréchet mean in the
quotient space). The consistency bias is defined as the distance (possibly
zero) between the orbit of the template and the orbit of one element which
minimizes the variance. In the first part, we restrict ourselves to isometric
group action, in this case the Hilbertian distance is invariant under the group
action. We establish an asymptotic behavior of the consistency bias which is
linear with respect to the noise level. As a result the inconsistency is
unavoidable as soon as the noise level is large enough. In practice, template estimation
with a finite sample is often done with an algorithm called "max-max". In the
second part, also in the case of a finite group acting isometrically, we show the
convergence of this algorithm to an empirical Karcher mean. Our numerical
experiments show that the bias observed in practice cannot be attributed to
the small sample size or to a convergence problem but is indeed due to the
previously studied inconsistency. In a third part, we also present some
insights into the case of a non-invariant distance with respect to the group
action. We will see that the inconsistency still holds as soon as the noise
level is large enough. Moreover we prove the inconsistency even when a
regularization term is added.
| 0 | 0 | 1 | 1 | 0 | 0 |
Surrogate-Based Bayesian Inverse Modeling of the Hydrological System: An Adaptive Approach Considering Surrogate Approximation Error | Bayesian inverse modeling is important for a better understanding of
hydrological processes. However, this approach can be computationally demanding
as it usually requires a large number of model evaluations. To address this
issue, one can take advantage of surrogate modeling techniques. Nevertheless,
when the approximation error of the surrogate model is neglected in inverse
modeling, the inversion result will be biased. In this paper, we develop a
surrogate-based Bayesian inversion framework that explicitly quantifies and
gradually reduces the approximation error of the surrogate. Specifically, two
strategies are proposed and compared. The first strategy works by obtaining an
ensemble of sparse polynomial chaos expansion (PCE) surrogates with Markov
chain Monte Carlo sampling, while the second one uses Gaussian process (GP) to
simulate the approximation error of a single sparse PCE surrogate. The two
strategies can also be applied with other surrogates, thus they have general
applicability. By adaptively refining the surrogate over the posterior
distribution, we can gradually reduce the surrogate approximation error to a
small level. Demonstrated with three case studies involving
high-dimensionality, multi-modality and a real-world application, respectively,
it is found that both strategies can reduce the bias introduced by surrogate
modeling, while the second strategy has a better performance as it integrates
two methods (i.e., sparse PCE and GP) that complement each other.
| 0 | 0 | 0 | 1 | 0 | 0 |
On The Construction of Extreme Learning Machine for Online and Offline One-Class Classification - An Expanded Toolbox | One-Class Classification (OCC) has been a prime concern for researchers and is
effectively employed in various disciplines. However, traditional
one-class classifiers are very time-consuming due to their iterative processes and
the tuning of various parameters. In this paper, we present six OCC methods based on
extreme learning machine (ELM) and Online Sequential ELM (OSELM). Our proposed
classifiers mainly lie in two categories: reconstruction based and boundary
based, which support both types of learning, viz. online and offline learning.
Out of the various proposed methods, four are offline and the remaining two are online
methods. Out of four offline methods, two methods perform random feature
mapping and two methods perform kernel feature mapping. Kernel feature mapping
based approaches have been tested with the RBF kernel, and online versions of
one-class classifiers are tested with both types of nodes viz., additive and
RBF. It is a well-known fact that the threshold decision is a crucial factor in
OCC, so three different threshold-deciding criteria have been employed, and the
effectiveness of one threshold-deciding criterion over another is analysed.
Further, these methods are tested on two artificial datasets to check
their boundary construction capability and on eight benchmark datasets from
different disciplines to evaluate the performance of the classifiers. Our
proposed classifiers exhibit better performance compared to ten traditional
one-class classifiers and two ELM-based one-class classifiers. Through the proposed
one-class classifiers, we intend to expand the functionality of the most used
toolbox for OCC, i.e., the DD toolbox. All of our methods are fully compatible with
all the present features of the toolbox.
| 1 | 0 | 0 | 1 | 0 | 0 |
2nd order PDEs: geometric and functional considerations | INTRODUCTION
This paper deals with second-order linear partial differential equations,
with constant and non-constant coefficients, in two variables, which
admit real characteristics. I approach the study of PDEs with the mentality of an
applied physicist, but with a weakness for formalization: looking inside the black
box of the formulas, trying to compact them (for example, proceeding from an
inverse transformation of coordinates) and make them smart (in context,
reformulating the theory by means of differential operators and related
invariants), applying them with awareness, and then connecting them to geometry
or to spatial categories, which in mathematics are what is closest to
sensible reality. Finally, I propose examples that exercise and
corroborate the theory.
TOPICS
The geometric meaning of an invariant of a differential operator. Operator
Principal Part and its factorization: commutativity and product with and
without residues (first-order terms). Related conditions by operators and
invariants derivatives. Coordinate transformation by invariants and expression
of the hyperbolic and parabolic operators in the new coordinates. Properties of
the Jacobian Matrix and relations between invariants derivatives and inverse
coordinates transformation or the initial variables derivatives. Commutativity
conditions and product without residues in terms of inverse coordinate
transformations that allow to build commutative differential operators or whose
product is without residues (or both). Diffeomorphisms and plane
transformations: new operators and invariants in the new coordinate space which
lead to the chain rule in compact form. Concluding considerations and examples
that compare different methods of solution.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adversarial Perturbation Intensity Achieving Chosen Intra-Technique Transferability Level for Logistic Regression | Machine Learning models have been shown to be vulnerable to adversarial
examples, i.e., the manipulation of data by an attacker to defeat a defender's
classifier at test time. We present a novel probabilistic definition of
adversarial examples in perfect or limited knowledge setting using prior
probability distributions on the defender's classifier. Using the asymptotic
properties of the logistic regression, we derive a closed-form expression of
the intensity of any adversarial perturbation, in order to achieve a given
expected misclassification rate. This technique is relevant in a threat model
of known model specifications and unknown training data. To our knowledge, this
is the first method that allows an attacker to directly choose the probability
of attack success. We evaluate our approach on two real-world datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
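As a hedged illustration of the kind of quantity discussed above (not the paper's closed-form expression, which relies on the asymptotics of logistic regression): for a known logistic model (w, b), the smallest L2 perturbation that moves a point across the decision boundary by a chosen margin lies along w, and its norm plays the role of the perturbation intensity. All values below are placeholders.

```python
# Illustrative sketch: minimal-norm perturbation for a known logistic regression.
import numpy as np

def adversarial_perturbation(w, b, x, margin=0.1):
    score = w @ x + b                            # signed distance times ||w||
    delta = -(score + np.sign(score) * margin) * w / (w @ w)
    return delta                                 # x + delta lands just across the boundary

w, b = np.array([2.0, -1.0]), 0.5
x = np.array([1.0, 0.3])
x_adv = x + adversarial_perturbation(w, b, x)
print(w @ x + b, w @ x_adv + b)                  # the sign of the score flips
```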
Data Aggregation and Packet Bundling of Uplink Small Packets for Monitoring Applications in LTE | In cellular massive Machine-Type Communications (MTC), a device can transmit
directly to the base station (BS) or through an aggregator (intermediate node).
While direct device-BS communication has recently been in the focus of 5G/3GPP
research and standardization efforts, the use of aggregators remains a less
explored topic. In this paper we analyze the deployment scenarios in which
aggregators can perform cellular access on behalf of multiple MTC devices. We
study the effect of packet bundling at the aggregator, which alleviates
overhead and resource waste when sending small packets. The aggregators give
rise to a tradeoff between access congestion and resource starvation and we
show that packet bundling can minimize resource starvation, especially for
smaller numbers of aggregators. Under the limitations of the considered model,
we investigate the optimal settings of the network parameters, in terms of
number of aggregators and packet-bundle size. Our results show that, in
general, data aggregation can benefit the uplink massive MTC in LTE, by
reducing the signalling overhead.
| 1 | 0 | 0 | 0 | 0 | 0 |
The GTC exoplanet transit spectroscopy survey. VII. Detection of sodium in WASP-52b's cloudy atmosphere | We report the first detection of sodium absorption in the atmosphere of the
hot Jupiter WASP-52b. We observed one transit of WASP-52b with the
low-resolution Optical System for Imaging and low-Intermediate-Resolution
Integrated Spectroscopy (OSIRIS) at the 10.4 m Gran Telescopio Canarias (GTC).
The resulting transmission spectrum, covering the wavelength range from 522 nm
to 903 nm, is flat and featureless, except for the significant narrow
absorption signature at the sodium doublet, which can be explained by an
atmosphere in solar composition with clouds at 1 mbar. A cloud-free atmosphere
is stringently ruled out. By assessing the absorption depths of sodium in
various bin widths, we find that temperature increases towards lower
atmospheric pressure levels, with a positive temperature gradient of 0.88 +/-
0.65 K/km, possibly indicative of upper atmospheric heating and a temperature
inversion.
| 0 | 1 | 0 | 0 | 0 | 0 |
That's Enough: Asynchrony with Standard Choreography Primitives | Choreographies are widely used for the specification of concurrent and
distributed software architectures. Since asynchronous communications are
ubiquitous in real-world systems, previous works have proposed different
approaches for the formal modelling of asynchrony in choreographies. Such
approaches typically rely on ad-hoc syntactic terms or semantics for capturing
the concept of messages in transit, yielding different formalisms that have to
be studied separately.
In this work, we take a different approach, and show that such extensions are
not needed to reason about asynchronous communications in choreographies.
Rather, we demonstrate how a standard choreography calculus already has all the
needed expressive power to encode messages in transit (and thus asynchronous
communications) through the primitives of process spawning and name mobility.
The practical consequence of our results is that we can reason about real-world
systems within a choreography formalism that is simpler than those hitherto
proposed.
| 1 | 0 | 0 | 0 | 0 | 0 |
Doing Things Twice (Or Differently): Strategies to Identify Studies for Targeted Validation | The "reproducibility crisis" has been a highly visible source of scientific
controversy and dispute. Here, I propose and review several avenues for
identifying and prioritizing research studies for the purpose of targeted
validation. Of the various proposals discussed, I identify scientific data
science as being a strategy that merits greater attention among those
interested in reproducibility. I argue that the tremendous potential of
scientific data science for uncovering high-value research studies is a
significant and rarely discussed benefit of the transition to a fully
open-access publishing model.
| 1 | 1 | 0 | 0 | 0 | 0 |
SciSports: Learning football kinematics through two-dimensional tracking data | SciSports is a Dutch startup company specializing in football analytics. This
paper describes a joint research effort with SciSports, during the Study Group
Mathematics with Industry 2018 at Eindhoven, the Netherlands. The main
challenge that we addressed was to automatically process empirical football
players' trajectories, in order to extract useful information from them. The
data provided to us was two-dimensional positional data during entire matches.
We developed methods based on Newtonian mechanics and the Kalman filter,
Generative Adversarial Nets and Variational Autoencoders. In addition, we
trained a discriminator network to recognize and discern different movement
patterns of players. The Kalman-filter approach yields an interpretable model,
in which a small number of player-dependent parameters can be fit; in theory
this could be used to distinguish among players. The
Generative-Adversarial-Nets approach appears promising in theory, and some
initial tests showed an improvement with respect to the baseline, but the
limits in time and computational power meant that we could not fully explore
it. We also trained a Discriminator network to distinguish between two players
based on their trajectories; after training, the network managed to distinguish
between some pairs of players, but not between others. After training, the
Variational Autoencoders generated trajectories that are difficult to
distinguish, visually, from the data. These experiments provide an indication
that deep generative models can learn the underlying structure and statistics
of football players' trajectories. This can serve as a starting point for
determining player qualities based on such trajectory data.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Model-based Projection Technique for Segmenting Customers | We consider the problem of segmenting a large population of customers into
non-overlapping groups with similar preferences, using diverse preference
observations such as purchases, ratings, clicks, etc. over subsets of items. We
focus on the setting where the universe of items is large (ranging from
thousands to millions) and unstructured (lacking well-defined attributes) and
each customer provides observations for only a few items. These data
characteristics limit the applicability of existing techniques in marketing and
machine learning. To overcome these limitations, we propose a model-based
projection technique, which transforms the diverse set of observations into a
more comparable scale and deals with missing data by projecting the transformed
data onto a low-dimensional space. We then cluster the projected data to obtain
the customer segments. Theoretically, we derive precise necessary and
sufficient conditions that guarantee asymptotic recovery of the true customer
segments. Empirically, we demonstrate the speed and performance of our method
in two real-world case studies: (a) 84% improvement in the accuracy of new
movie recommendations on the MovieLens data set and (b) 6% improvement in the
performance of a similar-item recommendation algorithm on an offline dataset at
eBay. We show that our method outperforms standard latent-class and
demographic-based techniques.
| 1 | 0 | 0 | 1 | 0 | 0 |
Distinct Effects of Cr Bulk Doping and Surface Deposition on the Chemical Environment and Electronic Structure of the Topological Insulator Bi2Se3 | In this report, it is shown that Cr doped into the bulk and Cr deposited on
the surface of Bi2Se3 films produced by molecular beam epitaxy (MBE) have
strikingly different effects on both the electronic structure and chemical
environment.
| 0 | 1 | 0 | 0 | 0 | 0 |
High-Performance Code Generation through Fusion and Vectorization | We present a technique for automatically transforming kernel-based
computations in disparate, nested loops into a fused, vectorized form that can
reduce intermediate storage needs and lead to improved performance on
contemporary hardware.
We introduce representations for the abstract relationships and data
dependencies of kernels in loop nests and algorithms for manipulating them into
more efficient form; we similarly introduce techniques for determining data
access patterns for stencil-like array accesses and show how this can be used
to elide storage and improve vectorization.
We discuss our prototype implementation of these ideas---named HFAV---and its
use of a declarative, inference-based front-end to drive transformations, and
we present results for some prominent codes in HPC.
| 1 | 0 | 0 | 0 | 0 | 0 |
Language Model Pre-training for Hierarchical Document Representations | Hierarchical neural architectures are often used to capture long-distance
dependencies and have been applied to many document-level tasks such as
summarization, document segmentation, and sentiment analysis. However,
effective usage of such a large context can be difficult to learn, especially
in the case where there is limited labeled data available. Building on the
recent success of language model pretraining methods for learning flat
representations of text, we propose algorithms for pre-training hierarchical
document representations from unlabeled data. Unlike prior work, which has
focused on pre-training contextual token representations or context-independent
{sentence/paragraph} representations, our hierarchical document representations
include fixed-length sentence/paragraph representations which integrate
contextual information from the entire document. Experiments on document
segmentation, document-level question answering, and extractive document
summarization demonstrate the effectiveness of the proposed pre-training
algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Interpretation of Semantic Tweet Representations | Research in analysis of microblogging platforms is experiencing a renewed
surge with a large number of works applying representation learning models for
applications like sentiment analysis, semantic textual similarity computation,
hashtag prediction, etc. Although the performance of the representation
learning models has been better than the traditional baselines for such tasks,
little is known about the elementary properties of a tweet encoded within these
representations, or why particular representations work better for certain
tasks. Our work presented here constitutes the first step in opening the
black-box of vector embeddings for tweets. Traditional feature engineering
methods for high-level applications have exploited various elementary
properties of tweets. We believe that a tweet representation is effective for
an application because it meticulously encodes the application-specific
elementary properties of tweets. To understand the elementary properties
encoded in a tweet representation, we evaluate the representations on the
accuracy to which they can model each of those properties such as tweet length,
presence of particular words, hashtags, mentions, capitalization, etc. Our
systematic and extensive study of nine supervised and four unsupervised tweet
representations against the eight most popular textual and five social elementary
properties reveals that Bi-directional LSTMs (BLSTMs) and Skip-Thought Vectors
(STV) best encode the textual and social properties of tweets, respectively.
FastText is the best model for low resource settings, providing very little
degradation with reduction in embedding size. Finally, we draw interesting
insights by correlating the model performance obtained for elementary property
prediction tasks with the high-level downstream applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
LoIDE: a web-based IDE for Logic Programming - Preliminary Technical Report | Logic-based paradigms are nowadays widely used in many different fields, also
thanks to the availability of robust tools and systems that allow the
development of real-world and industrial applications.
In this work we present LoIDE, an advanced and modular web-editor for
logic-based languages that also integrates with state-of-the-art solvers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Introduction to finite mixtures | Mixture models have been around for over 150 years, as an intuitively simple
and practical tool for enriching the collection of probability distributions
available for modelling data. In this chapter we describe the basic ideas of
the subject, present several alternative representations and perspectives on
these models, and discuss some of the elements of inference about the unknowns
in the models. Our focus is on the simplest set-up, of finite mixture models,
but we discuss also how various simplifying assumptions can be relaxed to
generate the rich landscape of modelling and inference ideas traversed in the
rest of this book.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the complexity of the projective splitting and Spingarn's methods for the sum of two maximal monotone operators | In this work we study the pointwise and ergodic iteration-complexity of a
family of projective splitting methods proposed by Eckstein and Svaiter, for
finding a zero of the sum of two maximal monotone operators. As a consequence
of the complexity analysis of the projective splitting methods, we obtain
complexity bounds for the two-operator case of Spingarn's partial inverse
method. We also present inexact variants of two specific instances of this
family of algorithms, and derive corresponding convergence rate results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Demystifying Relational Latent Representations | Latent features learned by deep learning approaches have proven to be a
powerful tool for machine learning. They serve as a data abstraction that makes
learning easier by capturing regularities in data explicitly. Their benefits
motivated their adaptation to the relational learning context. In our previous
work, we introduced an approach that learns relational latent features by means
of clustering instances and their relations. The major drawback of latent
representations is that they are often black-box and difficult to interpret.
This work addresses these issues and shows that (1) latent features created by
clustering are interpretable and capture interesting properties of data; (2)
they identify local regions of instances that match well with the label, which
partially explains their benefit; and (3) although the number of latent
features generated by this approach is large, often many of them are highly
redundant and can be removed without hurting performance much.
| 1 | 0 | 0 | 1 | 0 | 0 |
A direct method to compute the galaxy count angular correlation function including redshift-space distortions | In the near future, cosmology will enter the wide and deep galaxy survey area
allowing high-precision studies of the large scale structure of the universe in
three dimensions. To test cosmological models and determine their parameters
accurately, it is natural to confront data with exact theoretical expectations
expressed in the observational parameter space (angles and redshift). The
data-driven galaxy number count fluctuations on redshift shells can be used to
build correlation functions $C(\theta; z_1, z_2)$ on and between shells which
can probe the baryonic acoustic oscillations, the distance-redshift distortions
as well as gravitational lensing and other relativistic effects. Transforming
the model to the data space usually requires the computation of the angular
power spectrum $C_\ell(z_1, z_2)$ but this appears as an artificial and
inefficient step plagued by apodization issues. In this article we show that it
is not necessary and present a compact expression for $C(\theta; z_1, z_2)$
that includes directly the leading density and redshift space distortions terms
from the full linear theory. It can be evaluated using a fast integration
method based on Clenshaw-Curtis quadrature and Chebyshev polynomial series.
This new method to compute the correlation functions without any Limber
approximation allows us to produce and discuss maps of the correlation
function directly in the observable space and is a significant step towards
disentangling the data from the tested models.
| 0 | 1 | 0 | 0 | 0 | 0 |
Contributors profile modelization in crowdsourcing platforms | Crowdsourcing consists in the externalisation of tasks to a crowd of
people remunerated to execute them. The crowd, usually diversified, can
include users without qualification and/or motivation for the tasks. In this
paper we introduce a new method of user expertise modelization in
crowdsourcing platforms, based on the theory of belief functions, in order to
identify serious and qualified users.
| 1 | 0 | 0 | 0 | 0 | 0 |
Discrete versions of the Li-Yau gradient estimate | We study positive solutions to the heat equation on graphs. We prove variants
of the Li-Yau gradient estimate and the differential Harnack inequality. For
some graphs, we can show the estimates to be sharp. We establish new
computation rules for differential operators on discrete spaces and introduce a
relaxation function that governs the time dependency in the differential
Harnack estimate.
| 0 | 0 | 1 | 0 | 0 | 0 |
Efficient Modelling & Forecasting with range based volatility models and application | This paper considers an alternative method for fitting CARR models using
combined estimating functions (CEF) by showing its usefulness in applications
in economics and quantitative finance. The associated information matrix for
corresponding new estimates is derived to calculate the standard errors. A
simulation study is carried out to demonstrate its superiority relative to
two other competitors: linear estimating functions (LEF) and the maximum
likelihood (ML). Results show that CEF estimates are more efficient than LEF
and ML estimates when the error distribution is mis-specified. Taking a real
data set from financial economics, we illustrate the usefulness and
applicability of the CEF method in practice and report reliable forecast values
to minimize the risk in the decision making process.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Constrained Shortest Path Scheme for Virtual Network Service Management | Virtual network services that span multiple data centers are important to
support emerging data-intensive applications in fields such as bioinformatics
and retail analytics. Successful virtual network service composition and
maintenance requires flexible and scalable 'constrained shortest path
management' both in the management plane for virtual network embedding (VNE) or
network function virtualization service chaining (NFV-SC), as well as in the
data plane for traffic engineering (TE). In this paper, we show analytically
and empirically that leveraging constrained shortest paths within recent VNE,
NFV-SC and TE algorithms can lead to network utilization gains (of up to 50%)
and higher energy efficiency. The management of complex VNE, NFV-SC and TE
algorithms can be, however, intractable for large scale substrate networks due
to the NP-hardness of the constrained shortest path problem. To address such
scalability challenges, we propose a novel, exact constrained shortest path
algorithm viz., 'Neighborhoods Method' (NM). Our NM uses novel search space
reduction techniques and has a theoretical quadratic speed-up making it
practically faster (by an order of magnitude) than recent branch-and-bound
exhaustive search solutions. Finally, we detail our NM-based SDN controller
implementation in a real-world testbed to further validate practical NM
benefits for virtual network services.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tunneling of Glashow-Weinberg-Salam model particles from Black Hole Solutions in Rastall Theory | Using the semiclassical WKB approximation and Hamilton-Jacobi method, we
solve an equation of motion for the Glashow-Weinberg-Salam model, which is
important for understanding the unified gauge-theory of weak and
electromagnetic interactions. We calculate the tunneling rate of the massive
charged W-bosons in a background electromagnetic field to investigate the
Hawking temperature of black holes surrounded by perfect fluid in Rastall
theory. Then, we study the quantum gravity effects on the generalized Proca
equation with generalized uncertainty principle (GUP) on this background. We
show that quantum gravity effects leave remnants on the Hawking temperature
and the Hawking radiation becomes nonthermal.
| 0 | 1 | 0 | 0 | 0 | 0 |
Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey | Natural language and symbols are intimately correlated. Recent advances in
machine learning (ML) and in natural language processing (NLP) seem to
contradict the above intuition: symbols are fading away, erased by vectors or
tensors called distributed and distributional representations. However, there
is a strict link between distributed/distributional representations and
symbols, the former being an approximation of the latter. A clearer
understanding of the strict link between distributed/distributional
representations and symbols will certainly lead to radically new deep learning
networks. In this paper we present a survey that aims to draw the link between
symbolic representations and distributed/distributional representations. This
is the right time to revitalize the area of interpreting how symbols are
represented inside neural networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Joint Embedding of Graphs | Feature extraction and dimension reduction for networks is critical in a wide
variety of domains. Efficiently and accurately learning features for multiple
graphs has important applications in statistical inference on graphs. We
propose a method to jointly embed multiple undirected graphs. Given a set of
graphs, the joint embedding method identifies a linear subspace spanned by rank
one symmetric matrices and projects adjacency matrices of graphs into this
subspace. The projection coefficients can be treated as features of the graphs.
We also propose a random graph model which generalizes the classical random graph
model and can be used to model multiple graphs. We show through theory and
numerical experiments that under the model, the joint embedding method produces
estimates of parameters with small errors. Via simulation experiments, we
demonstrate that the joint embedding method produces features which lead to
state-of-the-art performance in classifying graphs. Applying the joint
embedding method to human brain graphs, we find that it extracts interpretable
features that can be used to predict an individual's composite creativity index.
| 1 | 0 | 0 | 1 | 0 | 0 |
Tuples of polynomials over finite fields with pairwise coprimality conditions | Let $q$ be a prime power. We estimate the number of tuples of degree bounded
monic polynomials $(Q_1,\ldots,Q_v) \in (\mathbb{F}_q[z])^v$ that satisfy given
pairwise coprimality conditions. We show how this generalises from monic
polynomials in finite fields to Dedekind domains with finite norms.
| 0 | 0 | 1 | 0 | 0 | 0 |
Pythagorean theorem of Sharpe ratio | In the present paper, using a replica analysis, we examine the portfolio
optimization problem handled in previous work and discuss the minimization of
investment risk under constraints of budget and expected return for the case
that the distribution of the hyperparameters of the mean and variance of the
return rate of each asset is not limited to a specific probability family.
Findings derived using our proposed method are compared with those in previous
work to verify the effectiveness of our proposed method. Further, we derive a
Pythagorean theorem of the Sharpe ratio and macroscopic relations of
opportunity loss. Using numerical experiments, the effectiveness of our
proposed method is demonstrated for a specific situation.
| 0 | 1 | 1 | 0 | 0 | 0 |
Learning to Generate Posters of Scientific Papers by Probabilistic Graphical Models | Researchers often summarize their work in the form of scientific posters.
Posters provide a coherent and efficient way to convey core ideas expressed in
scientific papers. Generating a good scientific poster, however, is a complex
and time-consuming cognitive task, since such posters need to be readable,
informative, and visually aesthetic. In this paper, for the first time, we
study the challenging problem of learning to generate posters from scientific
papers. To this end, a data-driven framework, that utilizes graphical models,
is proposed. Specifically, given content to display, the key elements of a good
poster, including attributes of each panel and arrangements of graphical
elements are learned and inferred from data. During the inference stage, an MAP
inference framework is employed to incorporate some design principles. In order
to bridge the gap between panel attributes and the composition within each
panel, we also propose a recursive page splitting algorithm to generate the
panel layout for a poster. To learn and validate our model, we collect and
release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which
consists of scientific papers and corresponding posters with exhaustively
labelled panels and attributes. Qualitative and quantitative results indicate
the effectiveness of our approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Transformable Biomimetic Liquid Metal Chameleon | Liquid metal (LM) is of current core interest for a wide variety of newly
emerging areas. However, the functional materials thus made so far by LM only
could display a single silver-white appearance. Here in this study, new
conceptual colorful LM marbles working like transformable biomimetic chameleons
are proposed and fabricated from LM droplets by encasing them with
fluorescent nano-particles. We demonstrate that this unique LM marble can be
manipulated into various stable, magnificent appearances as desired, and that it
can also split and merge among different colors. Such a multifunctional LM
chameleon is capable of responding to an external electric stimulus and
realizing shape transformation and discoloration behaviors as well. Furthermore,
the electric stimulus has been shown to be a straightforward way to
trigger the release of nano/micro-particles from the LM. The present
fluorescent biomimetic liquid metal chameleon is expected to offer important
opportunities for diverse unconventional applications, especially in a wide
variety of functional smart material and color changeable soft robot areas.
| 0 | 1 | 0 | 0 | 0 | 0 |
Local Private Hypothesis Testing: Chi-Square Tests | The local model for differential privacy is emerging as the reference model
for practical applications collecting and sharing sensitive information while
satisfying strong privacy guarantees. In the local model, there is no trusted
entity which is allowed to have each individual's raw data as is assumed in the
traditional curator model for differential privacy. So, individuals' data are
usually perturbed before sharing them.
We explore the design of private hypothesis tests in the local model, where
each data entry is perturbed to ensure the privacy of each participant.
Specifically, we analyze locally private chi-square tests for goodness of fit
and independence testing, which have been studied in the traditional, curator
model for differential privacy.
| 1 | 0 | 1 | 1 | 0 | 0 |
Rapid point-of-care Hemoglobin measurement through low-cost optics and Convolutional Neural Network based validation | A low-cost, robust, and simple mechanism to measure hemoglobin would play a
critical role in the modern health infrastructure. Consistent sample
acquisition has been a long-standing technical hurdle for photometer-based
portable hemoglobin detectors which rely on micro cuvettes and dry chemistry.
Any particulates (e.g. intact red blood cells (RBCs), microbubbles, etc.) in a
cuvette's sensing area drastically impact the optical absorption profile, and
commercial hemoglobinometers lack the ability to automatically detect faulty
samples. We present the ground-up development of a portable, low-cost and open
platform with equivalent accuracy to medical-grade devices, with the addition
of CNN-based image processing for rapid sample viability prechecks. The
developed platform has demonstrated precision to the nearest $0.18[g/dL]$ of
hemoglobin, an $R^2 = 0.945$ correlation to hemoglobin absorption curves reported
in the literature, and a 97% detection accuracy for poorly-prepared samples. We see
the developed hemoglobin device/ML platform as having massive implications in
rural medicine, and consider it an excellent springboard for robust deep
learning optical spectroscopy: a currently untapped source of data for
detection of countless analytes.
| 0 | 1 | 0 | 1 | 0 | 0 |
Families of sets with no matchings of sizes 3 and 4 | In this paper, we study the following classical question of extremal set
theory: what is the maximum size of a family of subsets of $[n]$ such that no
$s$ sets from the family are pairwise disjoint? This problem was first posed by
Erd\H os and resolved for $n\equiv 0, -1\ (\mathrm{mod }\ s)$ by Kleitman in
the 60s. Very little progress was made on the problem until recently. The only
result was a very lengthy resolution of the case $s=3,\ n\equiv 1\ (\mathrm{mod
}\ 3)$ by Quinn, which was written in his PhD thesis and never published in a
refereed journal. In this paper, we give another, much shorter proof of Quinn's
result, as well as resolve the case $s=4,\ n\equiv 2\ (\mathrm{mod }\ 4)$. This
complements the results in our recent paper, where, in particular, we answered
the question in the case $n\equiv -2\ (\mathrm{mod }\ s)$ for $s\ge 5$.
| 1 | 0 | 1 | 0 | 0 | 0 |
Light fields in complex media: mesoscopic scattering meets wave control | The newly emerging field of wave front shaping in complex media has recently
seen enormous progress. The driving force behind these advances has been the
experimental accessibility of the information stored in the scattering matrix
of a disordered medium, which can nowadays routinely be exploited to focus
light as well as to image or to transmit information even across highly turbid
scattering samples. We will provide an overview of these new techniques, of
their experimental implementations as well as of the underlying theoretical
concepts following from mesoscopic scattering theory. In particular, we will
highlight the intimate connections between quantum transport phenomena and the
scattering of light fields in disordered media, which can both be described by
the same theoretical concepts. We also put particular emphasis on how the above
topics relate to application-oriented research fields such as optical imaging,
sensing and communication.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tailoring Product Ownership in Large-Scale Agile | In large-scale agile projects, product owners undertake a range of
challenging and varied activities beyond those conventionally associated with
that role. Using in-depth research interviews from 93 practitioners working in
cross-border teams, from 21 organisations, our rich empirical data offers a
unique international perspective into product owner activities. We found that
the leaders of large-scale agile projects create product owner teams. Product
owner team members undertake sponsor, intermediary and release plan master
activities to manage scale. They undertake communicator and traveller
activities to manage distance and technical architect, governor and risk
assessor activities to manage governance. Based on our findings, we describe
product owner behaviors that are valued by experienced product owners and their
line managers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis | We analyse a simple extension of the SM with just an additional scalar
singlet coupled to the Higgs boson. We discuss the possible probes for
electroweak baryogenesis in this model including collider searches,
gravitational wave and direct dark matter detection signals. We show that a
large portion of the model parameter space exists where the observation of
gravitational waves would allow detection while indirect collider searches
would not.
| 0 | 1 | 0 | 0 | 0 | 0 |
Canonical correlation coefficients of high-dimensional Gaussian vectors: finite rank case | Consider a Gaussian vector $\mathbf{z}=(\mathbf{x}',\mathbf{y}')'$,
consisting of two sub-vectors $\mathbf{x}$ and $\mathbf{y}$ with dimensions $p$
and $q$ respectively, where both $p$ and $q$ are proportional to the sample
size $n$. Denote by $\Sigma_{\mathbf{u}\mathbf{v}}$ the population
cross-covariance matrix of random vectors $\mathbf{u}$ and $\mathbf{v}$, and
denote by $S_{\mathbf{u}\mathbf{v}}$ the sample counterpart. The canonical
correlation coefficients between $\mathbf{x}$ and $\mathbf{y}$ are known as the
square roots of the nonzero eigenvalues of the canonical correlation matrix
$\Sigma_{\mathbf{x}\mathbf{x}}^{-1}\Sigma_{\mathbf{x}\mathbf{y}}\Sigma_{\mathbf{y}\mathbf{y}}^{-1}\Sigma_{\mathbf{y}\mathbf{x}}$.
In this paper, we focus on the case that $\Sigma_{\mathbf{x}\mathbf{y}}$ is of
finite rank $k$, i.e. there are $k$ nonzero canonical correlation coefficients,
whose squares are denoted by $r_1\geq\cdots\geq r_k>0$. We study the sample
counterparts of $r_i,i=1,\ldots,k$, i.e. the largest $k$ eigenvalues of the
sample canonical correlation matrix
$S_{\mathbf{x}\mathbf{x}}^{-1}S_{\mathbf{x}\mathbf{y}}S_{\mathbf{y}\mathbf{y}}^{-1}S_{\mathbf{y}\mathbf{x}}$,
denoted by $\lambda_1\geq\cdots\geq \lambda_k$. We show that there exists a
threshold $r_c\in(0,1)$, such that for each $i\in\{1,\ldots,k\}$, when $r_i\leq
r_c$, $\lambda_i$ converges almost surely to the right edge of the limiting
spectral distribution of the sample canonical correlation matrix, denoted by
$d_{+}$. When $r_i>r_c$, $\lambda_i$ possesses an almost sure limit in
$(d_{+},1]$. We also obtain the limiting distribution of $\lambda_i$'s under
appropriate normalization. Specifically, $\lambda_i$ possesses Gaussian type
fluctuation if $r_i>r_c$, and follows Tracy-Widom distribution if $r_i<r_c$.
Some applications of our results are also discussed.
| 0 | 0 | 1 | 1 | 0 | 0 |
Synthesizing Normalized Faces from Facial Identity Features | We present a method for synthesizing a frontal, neutral-expression image of a
person's face given an input face photograph. This is achieved by learning to
generate facial landmarks and textures from features extracted from a
facial-recognition network. Unlike previous approaches, our encoding feature
vector is largely invariant to lighting, pose, and facial expression.
Exploiting this invariance, we train our decoder network using only frontal,
neutral-expression photographs. Since these photographs are well aligned, we
can decompose them into a sparse set of landmark points and aligned texture
maps. The decoder then predicts landmarks and textures independently and
combines them using a differentiable image warping operation. The resulting
images can be used for a number of applications, such as analyzing facial
attributes, exposure and white balance adjustment, or creating a 3-D avatar.
| 1 | 0 | 0 | 1 | 0 | 0 |
One-Shot Coresets: The Case of k-Clustering | Scaling clustering algorithms to massive data sets is a challenging task.
Recently, several successful approaches based on data summarization methods,
such as coresets and sketches, were proposed. While these techniques provide
provably good and small summaries, they are inherently problem dependent - the
practitioner has to commit to a fixed clustering objective before even
exploring the data. However, can one construct small data summaries for a wide
range of clustering problems simultaneously? In this work, we affirmatively
answer this question by proposing an efficient algorithm that constructs such
one-shot summaries for k-clustering problems while retaining strong theoretical
guarantees.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multiple nodal solutions of nonlinear Choquard equations | In this paper, we consider the existence of multiple nodal solutions of the
nonlinear Choquard equation \begin{equation*} \ \ \ \ (P)\ \ \ \ \begin{cases}
-\Delta u+u=(|x|^{-1}\ast|u|^p)|u|^{p-2}u \ \ \ \text{in}\ \mathbb{R}^3, \ \ \
\ \\ u\in H^1(\mathbb{R}^3),\\ \end{cases} \end{equation*} where $p\in
(\frac{5}{2},5)$. We show that for any positive integer $k$, problem $(P)$ has
at least one radially symmetric solution that changes sign exactly $k$ times.
| 0 | 0 | 1 | 0 | 0 | 0 |
Computational Sufficiency, Reflection Groups, and Generalized Lasso Penalties | We study estimators with generalized lasso penalties within the computational
sufficiency framework introduced by Vu (2018, arXiv:1807.05985). By
representing these penalties as support functions of zonotopes and more
generally Minkowski sums of line segments and rays, we show that there is a
natural reflection group associated with the underlying optimization problem. A
consequence of this point of view is that for large classes of estimators
sharing the same penalty, the penalized least squares estimator is
computationally minimal sufficient. This means that all such estimators can be
computed by refining the output of any algorithm for the least squares case. An
interesting technical component is our analysis of coordinate descent on the
dual problem. A key insight is that the iterates are obtained by reflecting and
averaging, so they converge to an element of the dual feasible set that is
minimal with respect to an ordering induced by the group associated with the
penalty. Our main application is fused lasso/total variation denoising and
isotonic regression on arbitrary graphs. In those cases the associated group is
a permutation group.
| 0 | 0 | 0 | 1 | 0 | 0 |
Driving Simulator Platform for Development and Evaluation of Safety and Emergency Systems | According to data from the United Nations, more than 3000 people die
each day in the world due to road traffic collisions. Recent
research suggests that human error is the main cause of these
fatalities. Because of this, researchers seek alternatives that transfer
vehicle control from people to autonomous systems. However, providing this
technological innovation to the public may pose complex challenges in the
legal, economic and technological areas. Consequently, carmakers and
researchers have divided driving automation into safety and emergency systems
that improve the driver's perception of the road, which may reduce human
error. Therefore, the main contribution of this study is to propose a driving
simulator platform to develop and evaluate safety and emergency systems, in the
first design stage. This driving simulator platform has an advantage: a
flexible software structure. This allows the simulation to be adapted for the
development or evaluation of a system. The proposed driving simulator platform
was tested in two applications: cooperative vehicle system development and the
influence evaluation of a Driving Assistance System (\textit{DAS}) on a driver.
In the cooperative vehicle system development, the results obtained show that
the increase of the time delay in the communication among vehicles ($V2V$) is
decisive for the system performance. On the other hand, in the evaluation of the
influence of a \textit{DAS} on a driver, it was possible to conclude that the
\textit{DAS} model does not have the level of influence on the driver necessary
to avoid an accident.
| 1 | 0 | 0 | 0 | 0 | 0 |
Electrical transient laws in neuronal microdomains based on electro-diffusion | The current-voltage (I-V) conversion characterizes the physiology of cellular
microdomains and reflects cellular communication, excitability, and electrical
transduction. Yet deriving such I-V laws remains a major challenge in most
cellular microdomains due to their small sizes and the difficulty of accessing
voltage with a high nanometer precision. We present here novel analytical
relations derived for different numbers of ionic species inside neuronal
micro/nano-domains, such as dendritic spines. When a steady-state current is
injected, we find a large deviation from the classical Ohm's law, showing that
the spine neck resistance is insufficient to characterize electrical properties.
For a constricted spine neck, modeled by a hyperboloid, we obtain a new I-V law
that illustrates the consequences of narrow passages on electrical conduction.
Finally, during a fast current transient, the local voltage is modulated by the
distance between activated voltage-gated channels. To conclude,
electro-diffusion laws can now be used to interpret voltage distribution in
neuronal microdomains.
| 0 | 0 | 0 | 0 | 1 | 0 |
Characterization of a Deuterium-Deuterium Plasma Fusion Neutron Generator | We characterize the neutron output of a deuterium-deuterium plasma fusion
neutron generator, model 35-DD-W-S, manufactured by NSD/Gradel-Fusion. The
measured energy spectrum is found to be dominated by neutron peaks at 2.2 MeV
and 2.7 MeV. A detailed GEANT4 simulation accurately reproduces the measured
energy spectrum and confirms our understanding of the fusion process in this
generator. Additionally, a contribution of 14.1 MeV neutrons from
deuterium-tritium fusion is found at a level of~$3.5\%$, from tritium produced
in previous deuterium-deuterium reactions. We have measured both the absolute
neutron flux as well as its relative variation with the operational parameters of
the generator. We find the flux to be proportional to voltage $V^{3.32 \pm
0.14}$ and current $I^{0.97 \pm 0.01}$. Further, we have measured the angular
dependence of the neutron emission with respect to the polar angle. We conclude
that it is well described by isotropic production of neutrons within the
cathode field cage.
| 0 | 1 | 0 | 0 | 0 | 0 |
Turbulent Mass Inhomogeneities induced by a point-source | We describe how turbulence distributes tracers away from a localized source
of injection, and analyse how the spatial inhomogeneities of the concentration
field depend on the amount of randomness in the injection mechanism. For that
purpose, we contrast the mass correlations induced by purely random injections
with those induced by continuous injections in the environment. Using the
Kraichnan model of turbulent advection, whereby the underlying velocity field
is assumed to be shortly correlated in time, we explicitly identify scaling
regions for the statistics of the mass contained within a shell of radius $r$
and located at a distance $\rho$ away from the source. The two key parameters
are found to be (i) the ratio $s^2$ between the absolute and the relative
timescales of dispersion and (ii) the ratio $\Lambda$ between the size of the
cloud and its distance away from the source. When the injection is random, only
the former is relevant, as previously shown by Celani, Martins-Afonso $\&$
Mazzino, $J. Fluid. Mech$, 2007 in the case of an incompressible fluid. It is
argued that the space partition in terms of $s^2$ and $\Lambda$ is a robust
feature of the injection mechanism itself, which should remain relevant beyond
the Kraichnan model. This is for instance the case in a generalised version of
the model, where the absolute dispersion is prescribed to be ballistic rather
than diffusive.
| 0 | 1 | 0 | 0 | 0 | 0 |
Berezin-Toeplitz quantization and complex Weyl quantization of the torus $\mathbb{T}^2$ | In this paper, we give a correspondence between the Berezin-Toeplitz and the
complex Weyl quantizations of the torus $ \mathbb{T}^2$. To achieve this, we
use the correspondence between the Berezin-Toeplitz and the complex Weyl
quantizations of the complex plane and a relation between the Berezin-Toeplitz
quantization of a periodic symbol on the real phase space $\mathbb{R}^2$ and
the Berezin-Toeplitz quantization of a symbol on the torus $ \mathbb{T}^2 $.
| 0 | 0 | 1 | 0 | 0 | 0 |
Conformal predictive distributions with kernels | This paper reviews the checkered history of predictive distributions in
statistics and discusses two developments, one from recent literature and the
other new. The first development is bringing predictive distributions into
machine learning, whose early development was so deeply influenced by two
remarkable groups at the Institute of Automation and Remote Control. The second
development is combining predictive distributions with kernel methods, which
were originated by one of those groups, including Emmanuel Braverman.
| 1 | 0 | 0 | 1 | 0 | 0 |
Degree bound for toric envelope of a linear algebraic group | Algorithms working with linear algebraic groups often represent them via
defining polynomial equations. One can always choose defining equations for an
algebraic group to be of the degree at most the degree of the group as an
algebraic variety. However, the degree of a linear algebraic group $G \subset
\mathrm{GL}_n(C)$ can be arbitrarily large even for $n = 1$. One of the key
ingredients of Hrushovski's algorithm for computing the Galois group of a
linear differential equation was an idea to `approximate' every algebraic
subgroup of $\mathrm{GL}_n(C)$ by a `similar' group so that the degree of the
latter is bounded uniformly in $n$. Making this uniform bound computationally
feasible is crucial for making the algorithm practical.
In this paper, we derive a single-exponential degree bound for such an
approximation (we call it toric envelope), which is qualitatively optimal. As
an application, we improve the quintuply exponential bound for the first step
of Hrushovski's algorithm due to Feng to a single-exponential bound. For
the cases $n = 2, 3$ often arising in practice, we further refine our general
bound.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Modified Lommel functions: monotonic pattern and inequalities | This article studies the monotonicity, log-convexity of the modified Lommel
functions by using its power series and infinite product representation. Same
properties for the ratio of the modified Lommel functions with the Lommel
function, $\sinh$ and $\cosh$ are also discussed. As a consequence, some
Turán type and reverse Turán type inequalities are given. A Rayleigh type
function for the Lommel functions is derived and, as an application, we obtain
the Redheffer-type inequality.
| 0 | 0 | 1 | 0 | 0 | 0 |
Anticipating epileptic seizures through the analysis of EEG synchronization as a data classification problem | Epilepsy is a neurological disorder arising from anomalies of the electrical
activity in the brain, affecting about 0.5--0.8\% of the world population.
Several studies investigated the relationship between seizures and brainwave
synchronization patterns, pursuing the possibility of identifying interictal,
preictal, ictal and postictal states. In this work, we introduce a graph-based
model of the brain interactions developed to study synchronization patterns in
the electroencephalogram (EEG) signals. The aim is to develop a
patient-specific approach, also suitable for real-time use, for the prediction of
epileptic seizures' occurrences. Different synchronization measures of the EEG
signals and easily computable functions able to capture in real-time the
variations of EEG synchronization have been considered. Both standard and
ad-hoc classification algorithms have been developed and used. Results on scalp
EEG signals show that this simple and computationally viable processing is able
to highlight the changes in the synchronization corresponding to the preictal
state.
| 0 | 0 | 0 | 0 | 1 | 0 |
On distances in lattices from algebraic number fields | In this paper, we study a classical construction of lattices from number
fields and obtain a series of new results about their minimum distance and
other characteristics by introducing a new measure of algebraic numbers. In
particular, we show that when the number fields have few complex embeddings,
the minimum distances of these lattices can be computed exactly.
| 0 | 0 | 1 | 0 | 0 | 0 |
Significance of distinct electron correlation effects in determining the P,T-odd electric dipole moment of $^{171}$Yb | Parity and time-reversal violating electric dipole moment (EDM) of $^{171}$Yb
is calculated accounting for the electron correlation effects over the
Dirac-Hartree-Fock (DHF) method in the relativistic Rayleigh-Schrödinger
many-body perturbation theory, with the second (MBPT(2) method) and third order
(MBPT(3) method) approximations, and two variants of all-order relativistic
many-body approaches, in the random phase approximation (RPA) and
coupled-cluster (CC) method with singles and doubles (CCSD method) framework.
We consider electron-nucleus tensor-pseudotensor (T-PT) and nuclear Schiff
moment (NSM) interactions as the predominant sources that induce EDM in a
diamagnetic atomic system. Our results from the CCSD method to EDM ($d_a$) of
$^{171}$Yb due to the T-PT and NSM interactions are found to be $d_a = 4.85(6)
\times 10^{-20} \langle \sigma \rangle C_T \ |e| \ cm$ and $d_a=2.89(4) \times
10^{-17} {S/(|e|\ fm^3)}$, respectively, where $C_T$ is the T-PT coupling
constant and $S$ is the NSM. These values differ significantly from the earlier
calculations. This difference is attributed to large correlation
effects arising through non-RPA types of interactions among the electrons in
this atom that are observed by analyzing the differences in the RPA and CCSD
results. This has been further scrutinized from the MBPT(2) and MBPT(3) results
and their roles have been demonstrated explicitly.
| 0 | 1 | 0 | 0 | 0 | 0 |
Facilitating information system development with Panoramic view on data | The increasing amount of information and the absence of an effective tool for
assisting users with minimal technical knowledge led us to use the associative
thinking paradigm for the implementation of a software solution, Panorama. In this
study, we present an object recognition process, based on context + focus
information visualization techniques, as a foundation for the realization of
Panorama. We show that the user can easily define the data vocabulary of a selected
domain, which is then used as the application framework. The purpose of the
Panorama approach is to facilitate software development for certain problem
domains by shortening the Software Development Life Cycle and minimizing the
impact of the implementation, review and maintenance phases. Our approach is focused
on letting users without extensive programming skills use and update the data
vocabulary. Panorama therefore facilitates traversing data by following
associations, so the user does not need to be familiar with the query language or
the data structure and does not need to know the problem domain fully. Our
approach has been verified by a detailed comparison to existing approaches and in
an experiment implementing selected use cases. The results confirmed that
Panorama fits problem domains with an emphasis on data-oriented rather than
process-oriented aspects. In such cases the development of the selected
problem domains is shortened by up to 25%, with the emphasis mainly on analysis,
logical design and testing, while physical design and programming are omitted,
being performed automatically by the Panorama tool.
| 1 | 0 | 0 | 0 | 0 | 0 |
Neutron Diffraction and $μ$SR Studies of Two Polymorphs of Nickel Niobate (NiNb$_2$O$_6$) | Neutron diffraction and muon spin relaxation ($\mu$SR) studies are presented
for the newly characterized polymorph of NiNb$_2$O$_6$ ($\beta$-NiNb$_2$O$_6$)
with space group P4$_2$/n and $\mu$SR data only for the previously known
columbite structure polymorph with space group Pbcn. The magnetic structure of
the P4$_2$/n form was determined from neutron diffraction using both powder and
single crystal data. Powder neutron diffraction determined an ordering wave
vector $\vec{k}$ = ($\frac{1}{2},\frac{1}{2},\frac{1}{2}$). Single crystal data
confirmed the same $\vec{k}$-vector and showed that the correct magnetic
structure consists of antiferromagnetically-coupled chains running along the a
or b-axes in adjacent Ni$^{2+}$ layers perpendicular to the c-axis, which is
consistent with the expected exchange interaction hierarchy in this system. The
refined magnetic structure is compared with the known magnetic structures of
the closely related tri-rutile phases, NiSb$_2$O$_6$ and NiTa$_2$O$_6$. $\mu$SR
data finds a transition temperature of $T_N \sim$ 15 K for this system, while
the columbite polymorph exhibits a lower $T_N =$ 5.7(3) K. Our $\mu$SR
measurements also allowed us to estimate the critical exponent of the order
parameter $\beta$ for each polymorph. We found $\beta =$ 0.25(3) and 0.16(2)
for the $\beta$ and columbite polymorphs respectively. The single crystal
neutron scattering data gives a value for the critical exponent $\beta
=$~0.28(3) for $\beta$-NiNb$_2$O$_6$, in agreement with the $\mu$SR value.
While both systems have $\beta$ values less than 0.3, which is indicative of
reduced dimensionality, this effect appears to be much stronger for the
columbite system. In other words, although both systems appear to be
well-described by $S = 1$ spin chains, the interchain interactions in the
$\beta$-polymorph are likely much larger.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semiclassical Propagation: Hilbert Space vs. Wigner Representation | A unified viewpoint on the van Vleck and Herman-Kluk propagators in Hilbert
space and their recently developed counterparts in Wigner representation is
presented. It is shown that the numerical protocol for the Herman-Kluk
propagator, which contains the van Vleck one as a particular case, coincides in
both representations. The flexibility of the Wigner version in choosing the
Gaussians' width for the underlying coherent states, being not bound to minimal
uncertainty, is investigated numerically on prototypical potentials. Exploiting
this flexibility provides neither qualitative nor quantitative improvements.
Thus, the well-established Herman-Kluk propagator in Hilbert space remains the
best choice to date given the large number of semiclassical developments and
applications based on it.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamic analysis and control PID path of a model type gantry crane | This paper presents an alternative approach to the dynamic modelling of a
mechanical system that simulates a real-life gantry crane, using
classical Euler mechanics and the Lagrange formalism, which allow us to find the
equations of motion that describe our model. Moreover, a basic model was
designed using the SolidWorks software, which, based on the material and
dimensions of the model, provides some physical variables necessary for the
modelling. In order to verify the theoretical results obtained, the solutions
obtained by simulation in SimMechanics-Matlab were compared with those of the
Euler-Lagrange equation system, which was solved through Matlab libraries for
equation systems of the type and order obtained. The force is
determined, but not the one exerted by the spring, as this will be the control
variable. The objective is to bring the mass of the pendulum from one point to
another over a specified distance without oscillation, so that the
response is overdamped. This article includes an analysis of PID control in which
the Euler-Lagrange equations of motion are rewritten in state space;
once there, they were implemented in Simulink to obtain the natural response of
the system to a step input in F and then draw the desired trajectories.
| 1 | 1 | 0 | 0 | 0 | 0 |
Dynamic Minimum Spanning Forest with Subpolynomial Worst-case Update Time | We present a Las Vegas algorithm for dynamically maintaining a minimum
spanning forest of an $n$-node graph undergoing edge insertions and deletions.
Our algorithm guarantees an $O(n^{o(1)})$ worst-case update time with high
probability. This significantly improves the two recent Las Vegas algorithms by
Wulff-Nilsen [STOC'17] with update time $O(n^{0.5-\epsilon})$ for some constant
$\epsilon>0$ and, independently, by Nanongkai and Saranurak [STOC'17] with
update time $O(n^{0.494})$ (the latter works only for maintaining a spanning
forest).
Our result is obtained by identifying the common framework that both
previous algorithms rely on, and then improving and combining the ideas from both
works. There are two main algorithmic components of the framework that are
newly improved and critical for obtaining our result. First, we improve the
update time from $O(n^{0.5-\epsilon})$ in Wulff-Nilsen [STOC'17] to
$O(n^{o(1)})$ for decrementally removing all low-conductance cuts in an
expander undergoing edge deletions. Second, by revisiting the "contraction
technique" by Henzinger and King [1997] and Holm et al. [STOC'98], we show a
new approach for maintaining a minimum spanning forest in connected graphs with
very few (at most $(1+o(1))n$) edges. This significantly improves the previous
approach in [Wulff-Nilsen STOC'17] and [Nanongkai and Saranurak STOC'17] which
is based on Frederickson's 2-dimensional topology tree and illustrates a new
application to this old technique.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference | The rising popularity of intelligent mobile devices and the daunting
computational cost of deep learning-based models call for efficient and
accurate on-device inference schemes. We propose a quantization scheme that
allows inference to be carried out using integer-only arithmetic, which can be
implemented more efficiently than floating point inference on commonly
available integer-only hardware. We also co-design a training procedure to
preserve end-to-end model accuracy post quantization. As a result, the proposed
quantization scheme improves the tradeoff between accuracy and on-device
latency. The improvements are significant even on MobileNets, a model family
known for run-time efficiency, and are demonstrated in ImageNet classification
and COCO detection on popular CPUs.
| 1 | 0 | 0 | 1 | 0 | 0 |
airpred: A Flexible R Package Implementing Methods for Predicting Air Pollution | Fine particulate matter (PM$_{2.5}$) is one of the criteria air pollutants
regulated by the Environmental Protection Agency in the United States. There is
strong evidence that ambient exposure to PM$_{2.5}$ increases risk of
mortality and hospitalization. Large scale epidemiological studies on the
health effects of PM$_{2.5}$ provide the necessary evidence base for lowering
the safety standards and inform regulatory policy. However, ambient monitors of
PM$_{2.5}$ (as well as monitors for other pollutants) are sparsely located
across the U.S., and therefore studies based only on the levels of PM$_{2.5}$
measured from the monitors would inevitably exclude large amounts of the
population. One approach to resolving this issue has been developing models to
predict local PM$_{2.5}$, NO$_2$, and ozone based on satellite, meteorological,
and land use data. This process typically relies on developing a prediction model
that requires large amounts of input data and is highly computationally
intensive when predicting levels of air pollution in unmonitored areas. We have
developed a flexible R package that allows for environmental health researchers
to design and train spatio-temporal models capable of predicting multiple
pollutants, including PM$_{2.5}$. We utilize H2O, an open source big data
platform, to achieve both performance and scalability when used in conjunction
with cloud or cluster computing systems.
| 0 | 0 | 0 | 1 | 0 | 0 |
Accurate and Efficient Profile Matching in Knowledge Bases | A profile describes a set of properties, e.g. a set of skills a person may
have, a set of skills required for a particular job, or a set of abilities a
football player may have with respect to a particular team strategy. Profile
matching aims to determine how well a given profile fits to a requested
profile. The approach taken in this article is grounded in a matching theory
that uses filters in lattices to represent profiles, and matching values in the
interval [0,1]: the higher the matching value the better is the fit. Such
lattices can be derived from knowledge bases exploiting description logics to
represent the knowledge about profiles. An interesting first question is how
human expertise concerning the matching can be exploited to obtain the most
accurate matchings. It will be shown that if a set of filters together with
matching values by some human expert is given, then under some mild
plausibility assumptions a matching measure can be determined such that the
computed matching values preserve the rankings given by the expert. A second
question concerns the efficient querying of databases of profile instances. For
matching queries that result in a ranked list of profile instances matching a
given one, it will be shown how corresponding top-k queries can be evaluated on
grounds of pre-computed matching values, which in turn allows the maintenance
of the knowledge base to be decoupled from the maintenance of profile
instances. In addition, it will be shown how the matching queries can be
exploited for gap queries that determine how profile instances need to be
extended in order to improve in the rankings. Finally, the theory of matching
will be extended beyond the filters, which leads to a matching theory that
exploits fuzzy sets or probabilistic logic with maximum entropy semantics. It
will be shown that added fuzzy links can be captured by extending the
underlying lattice.
| 1 | 0 | 0 | 0 | 0 | 0 |
Boötes-HiZELS: an optical to near-infrared survey of emission-line galaxies at $\bf z=0.4-4.7$ | We present a sample of $\sim 1000$ emission line galaxies at $z=0.4-4.7$ from
the $\sim0.7$deg$^2$ High-$z$ Emission Line Survey (HiZELS) in the Boötes
field identified with a suite of six narrow-band filters at $\approx 0.4-2.1$
$\mu$m. These galaxies have been selected on their Ly$\alpha$ (73), {\sc [Oii]}
(285), H$\beta$/{\sc [Oiii]} (387) or H$\alpha$ (362) emission-line, and have
been classified with optical to near-infrared colours. A subsample of 98
sources have reliable redshifts from multiple narrow-band (e.g. [O{\sc
ii}]-H$\alpha$) detections and/or spectroscopy. In this survey paper, we
present the observations, selection and catalogs of emitters. We measure number
densities of Ly$\alpha$, [O{\sc ii}], H$\beta$/{\sc [Oiii]} and H$\alpha$ and
confirm strong luminosity evolution in star-forming galaxies from $z\sim0.4$ to
$\sim 5$, in agreement with previous results. To demonstrate the usefulness of
dual-line emitters, we use the sample of dual [O{\sc ii}]-H$\alpha$ emitters to
measure the observed [O{\sc ii}]/H$\alpha$ ratio at $z=1.47$. The observed
[O{\sc ii}]/H$\alpha$ ratio increases significantly from 0.40$\pm0.01$ at
$z=0.1$ to 0.52$\pm0.05$ at $z=1.47$, which we attribute to either decreasing
dust attenuation with redshift, or due to a bias in the (typically)
fiber-measurements in the local Universe which only measure the central kpc
regions. At the bright end, we find that both the H$\alpha$ and Ly$\alpha$
number densities at $z\approx2.2$ deviate significantly from a Schechter form,
following a power-law. We show that this is driven entirely by an increasing
X-ray/AGN fraction with line-luminosity, which reaches $\approx 100$ \% at
line-luminosities $L\gtrsim3\times10^{44}$ erg s$^{-1}$.
| 0 | 1 | 0 | 0 | 0 | 0 |