title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Spin dynamics of FeGa$_{3-x}$Ge$_x$ studied by Electron Spin Resonance | The intermetallic semiconductor FeGa$_{3}$ acquires itinerant ferromagnetism
upon electron doping by a partial replacement of Ga with Ge. We studied the
electron spin resonance (ESR) of high-quality single crystals of
FeGa$_{3-x}$Ge$_x$ for $x$ from 0 up to 0.162 where ferromagnetic order is
observed. For $x = 0$ we observed a well-defined ESR signal, indicating the
presence of pre-formed magnetic moments in the semiconducting phase. Upon Ge
doping the occurrence of itinerant magnetism clearly affects the ESR properties
below $\approx 40$~K, whereas at higher temperatures an ESR signal as seen in
FeGa$_{3}$ prevails, independent of the Ge content. The present results show
that the ESR of FeGa$_{3-x}$Ge$_x$ is an appropriate and direct tool to
investigate the evolution of 3d-based itinerant magnetism.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topology and edge modes in quantum critical chains | We show that topology can protect exponentially localized, zero energy edge
modes at critical points between one-dimensional symmetry protected topological
phases. This is possible even without gapped degrees of freedom in the bulk
---in contrast to recent work on edge modes in gapless chains. We present an
intuitive picture for the existence of these edge modes in the case of
non-interacting spinless fermions with time reversal symmetry (BDI class of the
tenfold way). The stability of this phenomenon relies on a topological
invariant defined in terms of a complex function, counting its zeros and poles
inside the unit circle. This invariant can prevent two models described by the
\emph{same} conformal field theory (CFT) from being smoothly connected. A full
classification of critical phases in the non-interacting BDI class is obtained:
each phase is labeled by the central charge of the CFT, $c \in
\frac{1}{2}\mathbb N$, and the topological invariant, $\omega \in \mathbb Z$.
Moreover, $c$ is determined by the difference in the number of edge modes
between the phases neighboring the transition. Numerical simulations show that
the topological edge modes of critical chains can be stable in the presence of
interactions and disorder.
| 0 | 1 | 0 | 0 | 0 | 0 |
Incompressible fillings of manifolds | We find boundaries of Borel-Serre compactifications of locally symmetric
spaces, for which any filling is incompressible. We prove this result by
showing that these boundaries have small singular models and using these models
to obstruct compressions. We also show that small singular models of boundaries
obstruct $S^1$-actions (and more generally homotopically trivial $\mathbb
Z/p$-actions) on interiors of aspherical fillings. We use this to bound the
symmetry of complete Riemannian metrics on such interiors in terms of the
fundamental group. We also use small singular models to simplify the proofs of
some already known theorems about moduli spaces (the minimal orbifold theorem
and a topological analogue of Royden's theorem).
| 0 | 0 | 1 | 0 | 0 | 0 |
Multichannel Attention Network for Analyzing Visual Behavior in Public Speaking | Public speaking is an important aspect of human communication and
interaction. The majority of computational work on public speaking concentrates
on analyzing the spoken content, and the verbal behavior of the speakers. While
the success of public speaking largely depends on the content of the talk, and
the verbal behavior, non-verbal (visual) cues, such as gestures and physical
appearance, also play a significant role. This paper investigates the importance
of visual cues by estimating their contribution towards predicting the
popularity of a public lecture. For this purpose, we constructed a large
database of more than $1800$ TED talk videos. As a measure of popularity of the
TED talks, we leverage the corresponding (online) viewers' ratings from
YouTube. Visual cues related to facial and physical appearance, facial
expressions, and pose variations are extracted from the video frames using
convolutional neural network (CNN) models. Thereafter, an attention-based long
short-term memory (LSTM) network is proposed to predict the video popularity
from the sequence of visual features. The proposed network achieves
state-of-the-art prediction accuracy indicating that visual cues alone contain
highly predictive information about the popularity of a talk. Furthermore, our
network learns a human-like attention mechanism, which is particularly useful
for interpretability, i.e., it shows how attention varies over time and across
different visual cues, indicating their relative importance.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Łojasiewicz Exponent via The Valuative Hamburger-Noether Process | Let $k$ be an algebraically closed field of any characteristic. We apply the
Hamburger-Noether process of successive quadratic transformations to show the
equivalence of two definitions of the {\L}ojasiewicz exponent
$\mathfrak{L}(\mathfrak{a})$ of an ideal $\mathfrak{a}\subset k[[x,y]]$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Science and its significant other: Representing the humanities in bibliometric scholarship | Bibliometrics offers a particular representation of science. Through
bibliometric methods a bibliometrician will always highlight particular
elements of publications, and through these elements operationalize particular
representations of science, while obscuring other possible representations from
view. Understanding bibliometrics as representation implies that a bibliometric
analysis is always performative: a bibliometric analysis brings a particular
representation of science into being that potentially influences the science
system itself. In this review we analyze the ways the humanities have been
represented throughout the history of bibliometrics, often in comparison to
other scientific domains or to a general notion of the sciences. Our review
discusses bibliometric scholarship between 1965 and 2016 that studies the
humanities empirically. We distinguish between two periods of bibliometric
scholarship. The first period, between 1965 and 1989, is characterized by a
sociological theoretical framework, the development and use of the Price index,
and small samples of journal publications as data sources. The second period,
from the mid-1980s up until the present day, is characterized by a new
hinterland, that of science policy and research evaluation, in which
bibliometric methods become embedded.
| 1 | 0 | 0 | 0 | 0 | 0 |
Referenceless Quality Estimation for Natural Language Generation | Traditional automatic evaluation measures for natural language generation
(NLG) use costly human-authored references to estimate the quality of a system
output. In this paper, we propose a referenceless quality estimation (QE)
approach based on recurrent neural networks, which predicts a quality score for
an NLG system output by comparing it to the source meaning representation only.
Our method outperforms traditional metrics and a constant baseline in most
respects; we also show that synthetic data helps to increase correlation
results by 21% compared to the base system. Our results are comparable to
results obtained in similar QE tasks despite the more challenging setting.
| 1 | 0 | 0 | 0 | 0 | 0 |
Regularisation of Neural Networks by Enforcing Lipschitz Continuity | We investigate the effect of explicitly enforcing the Lipschitz continuity of
neural networks with respect to their inputs. To this end, we provide a simple
technique for computing an upper bound on the Lipschitz constant of a
feedforward neural network composed of commonly used layer types and demonstrate
inaccuracies in previous work on this topic. Our technique is then used to
formulate training a neural network with a bounded Lipschitz constant as a
constrained optimisation problem that can be solved using projected stochastic
gradient methods. Our evaluation study shows that, in isolation, our method
performs comparably to state-of-the-art regularisation techniques. Moreover,
when combined with existing approaches to regularising neural networks the
performance gains are cumulative. We also provide evidence that the
hyperparameters are intuitive to tune and demonstrate how the choice of norm
for computing the Lipschitz constant impacts the resulting model.
| 0 | 0 | 0 | 1 | 0 | 0 |
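The product of per-layer operator norms is the standard upper bound on the Lipschitz constant of a feedforward network with 1-Lipschitz activations (e.g. ReLU). The sketch below illustrates that bound only; it is not the paper's exact technique, which covers more layer types and choices of norm.

```python
# Standard layer-wise bound: for f = W_L o phi o ... o phi o W_1 with
# 1-Lipschitz activations phi, Lip(f) <= prod_i ||W_i||_2.
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Product of the spectral norms of the weight matrices."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

rng = np.random.default_rng(0)
weights = [rng.normal(size=s) for s in [(8, 4), (8, 8), (2, 8)]]  # toy 4-8-8-2 net
print(lipschitz_upper_bound(weights))
```

Projecting each weight matrix back onto a spectral-norm ball after every gradient step is one way to realize the constrained optimisation the abstract describes with projected stochastic gradient methods.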
HiNet: Hierarchical Classification with Neural Network | Traditionally, classifying large hierarchical label sets with more than 10000
distinct traces can only be achieved with flattened labels. Although flattening
the labels is feasible, it discards the hierarchical information in the labels.
Hierarchical models like the HSVM of \cite{vural2004hierarchical} become
impossible to train because of the sheer number of SVMs in the whole
architecture. We developed a hierarchical architecture based on neural networks
that is simple to train. We also derived an inference algorithm that can
efficiently infer the MAP (maximum a posteriori) trace, as guaranteed by our
theorems. Furthermore, the complexity of the model is only $O(n^2)$, compared to
$O(n^h)$ for a flattened model, where $h$ is the height of the hierarchy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multiscale permutation entropy analysis of laser beam wandering in isotropic turbulence | We have experimentally quantified the temporal structural diversity from the
coordinate fluctuations of a laser beam propagating through isotropic optical
turbulence. The main focus here is on the characterization of the long-range
correlations in the wandering of a thin Gaussian laser beam over a screen after
propagating through a turbulent medium. To fulfill this goal, a
laboratory-controlled experiment was conducted in which coordinate fluctuations
of the laser beam were recorded at a sufficiently high sampling rate for a wide
range of turbulent conditions. Horizontal and vertical displacements of the
laser beam centroid were subsequently analyzed by implementing the symbolic
technique based on ordinal patterns to estimate the well-known permutation
entropy. We show that the permutation entropy estimations at multiple time
scales evidence an interplay between different dynamical behaviors. More
specifically, a crossover between two different scaling regimes is observed. We
confirm a transition from an integrated stochastic process contaminated with
electronic noise to a fractional Brownian motion with a Hurst exponent H = 5/6
as the sampling time increases. In addition, we are able to quantify, from the
estimated entropy, the amount of electronic noise as a function of the
turbulence strength. We have also demonstrated that these experimental
observations are in very good agreement with numerical simulations of noisy
fractional Brownian motions with a well-defined crossover between two different
scaling regimes.
| 0 | 1 | 0 | 0 | 0 | 0 |
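A hedged sketch of the quantifier named in the abstract: Bandt-Pompe permutation entropy evaluated at several time scales. The embedding order, delay, and the decimation-based coarse-graining are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized Shannon entropy of ordinal (Bandt-Pompe) patterns."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i : i + order * delay : delay]
        counts[tuple(np.argsort(window))] += 1  # ordinal pattern of the window
    probs = np.array([c for c in counts.values() if c > 0], float) / n
    return -np.sum(probs * np.log(probs)) / log(factorial(order))

# Multiscale analysis: re-evaluate on increasingly coarse samplings; a
# crossover in these values signals a change of scaling regime.
signal = np.cumsum(np.random.default_rng(1).normal(size=20000))  # toy random walk
for scale in (1, 4, 16, 64):
    print(scale, permutation_entropy(signal[::scale]))
```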
Improving OpenCL Performance by Specializing Compiler Phase Selection and Ordering | Automatic compiler phase selection/ordering has traditionally been focused on
CPUs and, to a lesser extent, FPGAs. We present experiments regarding compiler
phase ordering specialization of OpenCL kernels targeting a GPU. We use
iterative exploration to specialize LLVM phase orders on 15 OpenCL benchmarks
to an NVIDIA GPU. We analyze the generated NVIDIA PTX code for the various
versions to identify the main causes of the most significant improvements and
present results of a set of experiments that demonstrate the importance of
using specific phase orders. Using specialized compiler phase orders, we were
able to achieve geometric mean improvements of 1.54x (up to 5.48x) and 1.65x
(up to 5.7x) over PTX generated by the NVIDIA CUDA compiler from CUDA versions
of the same kernels, and over execution of the OpenCL kernels compiled from
source with the NVIDIA OpenCL driver, respectively. We also evaluate the use of
code features in the OpenCL kernels. More specifically, we evaluate an approach
that achieves geometric mean improvements of 1.49x and 1.56x over the same
OpenCL baseline, by using the compiler sequences of the 1 or 3 most similar
benchmarks, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Shot noise and biased tracers: a new look at the halo model | Shot noise is an important ingredient to any measurement or theoretical
modeling of discrete tracers of the large scale structure. Recent work has
shown that the shot noise in the halo power spectrum becomes increasingly
sub-Poissonian at high mass. Interestingly, while the halo model predicts a
shot noise power spectrum in qualitative agreement with the data, it leads to
an unphysical white noise in the cross halo-matter and matter power spectrum.
In this work, we show that absorbing all the halo model sources of shot noise
into the halo fluctuation field leads to meaningful predictions for the shot
noise contributions to halo clustering statistics and remove the unphysical
white noise from the cross halo-matter statistics. Our prescription
straightforwardly maps onto the general bias expansion, so that the
renormalized shot noise terms can be expressed as combinations of the halo
model shot noises. Furthermore, we demonstrate that non-Poissonian
contributions are related to volume integrals over correlation functions and
their response to long-wavelength density perturbations. This leads to a new
class of consistency relations for discrete tracers, which appear to be
satisfied by our reformulation of the halo model. We test our theoretical
predictions against measurements of halo shot noise bispectra extracted from a
large suite of numerical simulations. Our model reproduces qualitatively the
observed sub-Poissonian noise, although it underestimates the magnitude of this
effect.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the maximal directional Hilbert transform in three dimensions | We establish the sharp growth rate, in terms of cardinality, of the $L^p$
norms of the maximal Hilbert transform $H_\Omega$ along finite subsets of a
finite order lacunary set of directions $\Omega \subset \mathbb R^3$, answering
a question of Parcet and Rogers in dimension $n=3$. Our result is the first
sharp estimate for maximal directional singular integrals in dimensions greater
than 2.
The proof relies on a representation of the maximal directional Hilbert
transform in terms of a model maximal operator associated to compositions of
two-dimensional angular multipliers, as well as on the usage of weighted norm
inequalities, and their extrapolation, in the directional setting.
| 0 | 0 | 1 | 0 | 0 | 0 |
MOLIERE: Automatic Biomedical Hypothesis Generation System | Hypothesis generation is becoming a crucial time-saving technique which
allows biomedical researchers to quickly discover implicit connections between
important concepts. Typically, these systems operate on domain-specific
fractions of public medical data. MOLIERE, in contrast, utilizes information
from over 24.5 million documents. At the heart of our approach lies a
multi-modal and multi-relational network of biomedical objects extracted from
several heterogeneous datasets from the National Center for Biotechnology
Information (NCBI). These objects include but are not limited to scientific
papers, keywords, genes, proteins, diseases, and diagnoses. We model hypotheses
using Latent Dirichlet Allocation applied to abstracts found near shortest
paths discovered within this network, and demonstrate the effectiveness of
MOLIERE by performing hypothesis generation on historical data. Our network,
implementation, and resulting data are all publicly available for the broad
scientific community.
| 1 | 0 | 0 | 1 | 0 | 0 |
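A toy-scale sketch of the pipeline described above: locate a shortest path between two concepts in the heterogeneous network, gather the abstracts near that path, and model the hypothesis with LDA. The graph, abstracts, and whitespace tokenization are all illustrative stand-ins for MOLIERE's NCBI-derived data.

```python
import networkx as nx
from gensim import corpora
from gensim.models import LdaModel

G = nx.Graph()
G.add_edges_from([("geneA", "paper1"), ("paper1", "diseaseX"),
                  ("geneA", "paper2"), ("paper2", "proteinB"),
                  ("proteinB", "paper3"), ("paper3", "diseaseX")])
abstracts = {"paper1": "geneA regulates a pathway linked to diseaseX",
             "paper2": "geneA binds proteinB in a cell line",
             "paper3": "proteinB expression is altered in diseaseX"}

path = nx.shortest_path(G, "geneA", "diseaseX")
near = set(path) | {m for n in path for m in G.neighbors(n)}  # path neighborhood
docs = [abstracts[n].lower().split() for n in sorted(near) if n in abstracts]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
print(lda.print_topics())
```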
Lower bounds for several online variants of bin packing | We consider several previously studied online variants of bin packing and
prove new and improved lower bounds on the asymptotic competitive ratios for
them. For that, we use a method of fully adaptive constructions. In particular,
we improve the lower bound for the asymptotic competitive ratio of online
square packing significantly, raising it from roughly 1.68 to above 1.75.
| 1 | 0 | 0 | 0 | 0 | 0 |
Query-limited Black-box Attacks to Classifiers | We study black-box attacks on machine learning classifiers where each query
to the model incurs some cost or risk of detection to the adversary. We focus
explicitly on minimizing the number of queries as a major objective.
Specifically, we consider the problem of attacking machine learning classifiers
subject to a budget of feature modification cost while minimizing the number of
queries, where each query returns only a class and confidence score. We
describe an approach that uses Bayesian optimization to minimize the number of
queries, and find that the number of queries can be reduced to approximately
one tenth of the number needed by a random strategy for scenarios where
the feature modification cost budget is low.
| 1 | 0 | 0 | 1 | 0 | 0 |
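A hedged sketch of the query-limited setting: Bayesian optimization searches a cost-bounded perturbation space while every call to the black-box model is counted. The toy classifier, budget, and search box are illustrative; the paper's exact formulation may differ.

```python
import numpy as np
from skopt import gp_minimize  # Bayesian optimization over a box

QUERIES = 0

def query_classifier(x):
    """Stand-in black box: returns the confidence of the true class."""
    global QUERIES
    QUERIES += 1
    return float(1.0 / (1.0 + np.exp(-np.sum(x))))  # toy logistic score

x0 = np.zeros(3)   # input to attack
budget = 1.0       # feature-modification cost budget (L1)

def objective(delta):
    delta = np.asarray(delta)
    if np.abs(delta).sum() > budget:      # respect the modification budget
        return 1.0                        # infeasible: worst possible value
    return query_classifier(x0 + delta)   # drive true-class confidence down

res = gp_minimize(objective, [(-1.0, 1.0)] * 3, n_calls=30, random_state=0)
print("best confidence:", res.fun, "queries used:", QUERIES)
```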
Efficient Kinematic Planning for Mobile Manipulators with Non-holonomic Constraints Using Optimal Control | This work addresses the problem of kinematic trajectory planning for mobile
manipulators with non-holonomic constraints, and holonomic operational-space
tracking constraints. We obtain whole-body trajectories and time-varying
kinematic feedback controllers by solving a Constrained Sequential Linear
Quadratic Optimal Control problem. The employed algorithm features high
efficiency through a continuous-time formulation that benefits from adaptive
step-size integrators and through linear complexity in the number of
integration steps. In a first application example, we solve kinematic
trajectory planning problems for a 26 DoF wheeled robot. In a second example,
we apply Constrained SLQ to a real-world mobile manipulator in a
receding-horizon optimal control fashion, where we obtain optimal controllers
and plans at rates up to 100 Hz.
| 1 | 0 | 0 | 0 | 0 | 0 |
Measuring Integrated Information: Comparison of Candidate Measures in Theory and Simulation | Integrated Information Theory (IIT) is a prominent theory of consciousness
that has at its centre measures that quantify the extent to which a system
generates more information than the sum of its parts. While several candidate
measures of integrated information (`$\Phi$') now exist, little is known about
how they compare, especially in terms of their behaviour on non-trivial network
models. In this article we provide clear and intuitive descriptions of six
distinct candidate measures. We then explore the properties of each of these
measures in simulation on networks consisting of eight interacting nodes,
animated with Gaussian linear autoregressive dynamics. We find a striking
diversity in the behaviour of these measures -- no two measures show consistent
agreement across all analyses. Further, only a subset of the measures appear to
genuinely reflect some form of dynamical complexity, in the sense of
simultaneous segregation and integration between system components. Our results
help guide the operationalisation of IIT and advance the development of
measures of integrated information that may have more general applicability.
| 0 | 0 | 0 | 0 | 1 | 0 |
Volumes of $\mathrm{SL}_n\mathbb{C}$-representations of hyperbolic 3-manifolds | Let $M$ be a compact oriented three-manifold whose interior is hyperbolic of
finite volume. We prove a variation formula for the volume on the variety of
representations of $M$ in $\operatorname{SL}_n(\mathbb C)$. Our proof follows
the strategy of Reznikov's rigidity when $M$ is closed, in particular we use
Fuks' approach to variations by means of Lie algebra cohomology. When $n=2$, we
get back Hodgson's formula for variation of volume on the space of hyperbolic
Dehn fillings. Our formula also yields the variation of volume on the space of
decorated triangulations obtained by Bergeron-Falbel-Guillou and
Dimofte-Gabella-Goncharov.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning in Variational Autoencoders with Kullback-Leibler and Renyi Integral Bounds | In this paper we propose two novel bounds for the log-likelihood based on
the Kullback-Leibler and Rényi divergences, which can be used for
variational inference and in particular for the training of Variational
AutoEncoders. Our proposal is motivated by the difficulties encountered in
training VAEs on continuous datasets with high contrast images, such as those
with handwritten digits and characters, where numerical issues often appear
unless noise is added, either to the dataset during training or to the
generative model given by the decoder. The new bounds we propose, which are
obtained from the maximization of the likelihood of an interval for the
observations, allow numerically stable training procedures without the
necessity of adding any extra source of noise to the data.
| 0 | 0 | 0 | 1 | 0 | 0 |
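A minimal sketch of the "likelihood of an interval" idea: instead of the Gaussian density at x, which can blow up as the decoder variance shrinks, score the probability mass of a small interval around x. The half-width (half an 8-bit quantization step for inputs scaled to [0, 1]) is an illustrative choice.

```python
import torch
from torch.distributions import Normal

def interval_log_likelihood(x, mu, sigma, half_width=1.0 / 510):
    """log P(x - h < X < x + h) under the decoder's Normal(mu, sigma)."""
    dist = Normal(mu, sigma)
    mass = dist.cdf(x + half_width) - dist.cdf(x - half_width)
    return torch.log(mass.clamp_min(1e-12)).sum(dim=-1)  # bounded objective

x = torch.rand(4, 784)  # toy batch of flattened images in [0, 1]
mu, sigma = torch.rand(4, 784), torch.full((4, 784), 0.05)
print(interval_log_likelihood(x, mu, sigma))
```

Because the interval mass is at most 1, this objective stays finite even for high-contrast pixels, so no extra source of noise needs to be added to the data or the decoder.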
Design and demonstration of an acoustic right-angle bend | In this paper, we design, fabricate and experimentally characterize a
broadband acoustic right-angle bend device in air. Perforated panels with
various hole-sizes are used to construct the bend structure. Both the simulated
and the experimental results verify that an acoustic beam can be rotated
effectively through the acoustic bend over a wide frequency range. This model may
have potential applications in areas such as sound absorption and acoustic
detection in pipelines.
| 0 | 1 | 0 | 0 | 0 | 0 |
FBG-Based Position Estimation of Highly Deformable Continuum Manipulators: Model-Dependent vs. Data-Driven Approaches | Conventional shape sensing techniques using Fiber Bragg Grating (FBG) involve
finding the curvature at discrete FBG active areas and integrating curvature
over the length of the continuum dexterous manipulator (CDM) for tip position
estimation (TPE). However, due to the limited number of sensing locations and
many geometrical assumptions, these methods are prone to large error
propagation, especially when the CDM undergoes large deflections. In this paper, we study
the complications of using the conventional TPE methods that are dependent on
sensor model and propose a new data-driven method that overcomes these
challenges. The proposed method consists of a regression model that takes FBG
wavelength raw data as input and directly estimates the CDM's tip position.
This model is pre-operatively (off-line) trained on position information from
optical trackers/cameras (as the ground truth) and it intra-operatively
(on-line) estimates CDM tip position using only the FBG wavelength data. The
method's performance is evaluated on a CDM developed for orthopedic
applications, and the results are compared to conventional model-dependent
methods during large deflection bendings. Mean absolute TPE error (and standard
deviation) of 1.52 (0.67) mm and 0.11 (0.1) mm with maximum absolute errors of
3.63 mm and 0.62 mm for the conventional and the proposed data-driven
techniques were obtained, respectively. These results demonstrate that the
proposed data-driven approach significantly outperforms the conventional
estimation technique.
| 1 | 0 | 0 | 0 | 0 | 0 |
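A hedged sketch of the data-driven pipeline: raw FBG wavelengths in, tip position out, trained against tracker ground truth. The regressor type, array shapes, and synthetic data are illustrative; the abstract does not commit to this exact model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavelengths = rng.normal(1545.0, 0.5, size=(2000, 9))  # e.g. 3 fibers x 3 FBGs
tip_xyz = wavelengths @ rng.normal(size=(9, 3)) + 0.1 * rng.normal(size=(2000, 3))

X_tr, X_te, y_tr, y_te = train_test_split(wavelengths, tip_xyz, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)  # pre-operative (off-line) training on tracker data

err = np.linalg.norm(model.predict(X_te) - y_te, axis=1)  # intra-operative TPE
print("mean absolute tip error:", err.mean())
```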
Small-signal Stability Analysis and Performance Evaluation of Microgrids under Distributed Control | Distributed control, as a potential solution to decreasing communication
demands in microgrids, has drawn much attention in recent years. Advantages of
distributed control have been extensively discussed, while its impacts on
microgrid performance and stability, especially in the case of communication
latency, have not been explicitly studied or fully understood yet. This paper
addresses this gap by proposing a generalized theoretical framework for
small-signal stability analysis and performance evaluation for microgrids using
distributed control. The proposed framework synthesizes generator and load
frequency-domain characteristics, primary and secondary control loops, as well
as the communication latency into a frequency-domain representation which is
further evaluated by the generalized Nyquist theorem. In addition, various
parameters and their impacts on microgrid dynamic performance are investigated
and summarized into guidelines to help better design the system. Case studies
demonstrate the effectiveness of the proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Agent Failures in All-Pay Auctions | All-pay auctions, a common mechanism for various human and agent
interactions, suffer, like many other mechanisms, from the possibility of
players' failure to participate in the auction. We model such failures and
fully characterize equilibrium for this class of games: we present a symmetric
equilibrium and show that under some conditions the equilibrium is unique. We
reveal various properties of the equilibrium, such as the lack of influence of
the most-likely-to-participate player on the behavior of the other players. We
perform this analysis with two scenarios: the sum-profit model, where the
auctioneer obtains the sum of all submitted bids, and the max-profit model of
crowdsourcing contests, where the auctioneer can only use the best submissions
and thus obtains only the winning bid.
Furthermore, we examine various methods of influencing the probability of
participation such as the effects of misreporting one's own probability of
participating, and how influencing another player's participation chances
changes the player's strategy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Practical volume computation of structured convex bodies, and an application to modeling portfolio dependencies and financial crises | We examine volume computation of general-dimensional polytopes and more
general convex bodies, defined as the intersection of a simplex with a family of
parallel hyperplanes and with another family of parallel hyperplanes or a family of
concentric ellipsoids. Such convex bodies appear in modeling and predicting
financial crises. The impact of crises on the economy (labor, income, etc.)
makes their detection of prime interest. Certain features of dependencies in the
markets clearly identify times of turmoil. We describe the relationship between
asset characteristics by means of a copula; each characteristic is either a
linear or quadratic form of the portfolio components, hence the copula can be
constructed by computing volumes of convex bodies. We design and implement
practical algorithms in the exact and approximate setting, we experimentally
juxtapose them and study the tradeoff between exactness or accuracy and speed. We
analyze the following methods in order of increasing generality: rejection
sampling relying on uniformly sampling the simplex, which is the fastest
approach, but inaccurate for small volumes; exact formulae based on the
computation of integrals of probability distribution functions; an optimized
Lawrence sign decomposition method, since the polytopes at hand are shown to be
simple; Markov chain Monte Carlo algorithms using random walks based on the
hit-and-run paradigm generalized to nonlinear convex bodies and relying on new
methods for computing an enclosed ball; the latter is experimentally extended to
non-convex bodies with very encouraging results. Our C++ software, based on
CGAL and Eigen and available on GitHub, is shown to be very effective in up to
100 dimensions. Our results offer novel, effective means of computing portfolio
dependencies and an indicator of financial crises, which is shown to correctly
identify past crises.
| 0 | 0 | 0 | 0 | 0 | 1 |
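A minimal sketch of the fastest method listed above: rejection sampling with uniform draws from the simplex, here for a body cut out of the solid simplex by one family of parallel hyperplanes, c.x in [t - w, t + w]. The particular c, t, and w are illustrative.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n, samples = 10, 200_000
c = rng.normal(size=n)   # normal vector of the hyperplane family
t, w = 0.0, 0.05         # slab center and half-width

# Dropping one coordinate of a Dirichlet(1,...,1) draw gives a uniform
# sample of the solid simplex {x >= 0, sum(x) <= 1}, whose volume is 1/n!.
points = rng.dirichlet(np.ones(n + 1), size=samples)[:, :n]
inside = np.abs(points @ c - t) <= w
print("estimated volume:", inside.mean() / factorial(n))
```

As the abstract notes, this estimate degrades when the body occupies a tiny fraction of the simplex, since few samples then land inside the slab.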
Charting the replica symmetric phase | Diluted mean-field models are spin systems whose geometry of interactions is
induced by a sparse random graph or hypergraph. Such models play an eminent
role in the statistical mechanics of disordered systems as well as in
combinatorics and computer science. In a path-breaking paper based on the
non-rigorous `cavity method', physicists predicted not only the existence of a
replica symmetry breaking phase transition in such models but also sketched a
detailed picture of the evolution of the Gibbs measure within the replica
symmetric phase and its impact on important problems in combinatorics, computer
science and physics [Krzakala et al.: PNAS 2007]. In this paper we rigorise
this picture completely for a broad class of models, encompassing the Potts
antiferromagnet on the random graph, the $k$-XORSAT model and the diluted
$k$-spin model for even $k$. We also prove a conjecture about the detection
problem in the stochastic block model that has received considerable attention
[Decelle et al.: Phys. Rev. E 2011].
| 1 | 1 | 0 | 0 | 0 | 0 |
Adaptive Clustering through Semidefinite Programming | We analyze the clustering problem through a flexible probabilistic model that
aims to identify an optimal partition of the sample $X_1, \ldots, X_n$. We perform
exact clustering with high probability using a convex semidefinite estimator
that can be interpreted as a corrected, relaxed version of K-means. The estimator is
analyzed within a non-asymptotic framework and shown to be optimal or
near-optimal in recovering the partition. Furthermore, its performance is
shown to be adaptive to the problem's effective dimension, as well as to $K$, the
unknown number of groups in this partition. We illustrate the method's
performance in comparison to other classical clustering algorithms through
numerical experiments on simulated data.
| 0 | 0 | 1 | 1 | 0 | 0 |
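A hedged sketch of a semidefinite relaxation of K-means in the spirit of the estimator described above (Peng-Wei style); the paper's corrected estimator adds terms not reproduced here.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(10, 2)) for m in ((0, 0), (3, 3))])
n, K = len(X), 2
A = X @ X.T  # Gram matrix of the sample

# Relaxed K-means: maximize within-cluster affinity over a PSD "membership"
# matrix with nonnegative entries, unit row sums and trace K.
Z = cp.Variable((n, n), PSD=True)
constraints = [Z >= 0, cp.sum(Z, axis=1) == 1, cp.trace(Z) == K]
cp.Problem(cp.Maximize(cp.trace(A @ Z)), constraints).solve()

# Rounding: points sharing large entries in a row of Z form one cluster.
labels = (Z.value[0] > Z.value[0].mean()).astype(int)
print(labels)
```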
Improving pairwise comparison models using Empirical Bayes shrinkage | Comparison data arises in many important contexts, e.g. shopping, web clicks,
or sports competitions. Typically we are given a dataset of comparisons and
wish to train a model to make predictions about the outcome of unseen
comparisons. In many cases available datasets have relatively few comparisons
(e.g. there are only so many NFL games per year) or efficiency is important
(e.g. we want to quickly estimate the relative appeal of a product). In such
settings it is well known that shrinkage estimators outperform maximum
likelihood estimators. A complicating matter is that standard comparison models
such as the conditional multinomial logit model are only models of conditional
outcomes (who wins) and not of comparisons themselves (who competes). As such,
different models of the comparison process lead to different shrinkage
estimators. In this work we derive a collection of methods for estimating the
uncertainty of pairwise predictions based on different assumptions
about the comparison process. These uncertainty estimates allow us both to
examine model uncertainty and to perform Empirical Bayes shrinkage
estimation of the model parameters. We demonstrate that our shrunk estimators
outperform standard maximum likelihood methods on real comparison data from
online comparison surveys as well as from several sports contexts.
| 1 | 0 | 0 | 1 | 0 | 0 |
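A hedged illustration of the shrinkage mechanics: fit a Beta prior to all pairwise win rates by the method of moments, then shrink each pair toward it. The paper derives estimators tied to specific models of the comparison process; this generic beta-binomial version only shows why shrinkage helps when comparisons are few.

```python
import numpy as np

wins  = np.array([3, 9, 1, 14, 5])    # wins of i over j, one entry per pair
games = np.array([4, 16, 2, 20, 10])  # comparisons observed per pair

p = wins / games
m, v = p.mean(), p.var()
s = m * (1 - m) / v - 1                 # method-of-moments fit: alpha + beta
alpha, beta = m * s, (1 - m) * s

shrunk = (wins + alpha) / (games + alpha + beta)  # posterior mean win rates
print(np.round(p, 3))       # raw estimates: extreme for small samples
print(np.round(shrunk, 3))  # pulled toward the grand mean
```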
Single Element Nonlinear Chimney Model | We generalize the chimney model by introducing nonlinear restoring and
gravitational forces for the purpose of modeling swaying of trees at high wind
speeds. Here we have restricted ourselves to the simplest case of a single element and
the governing equation we arrive at has not been studied so far. We study the
onset of fractal basin boundary of the two fixed points and also observe the
chaotic solutions. We also examine the need for considering the full sine term
in the gravitational force.
| 0 | 1 | 0 | 0 | 0 | 0 |
VOEvent Standard for Fast Radio Bursts | Fast radio bursts are a new class of transient radio phenomena currently
detected as millisecond radio pulses with very high dispersion measures. As new
radio surveys begin searching for FRBs, a large population is expected to be
detected in real time, triggering a range of multi-wavelength and
multi-messenger telescopes to search for repeating bursts and/or associated
emission. Here we propose a method for disseminating FRB triggers using Virtual
Observatory Events (VOEvents). This format was developed and is used
successfully for transient alerts across the electromagnetic spectrum and for
multi-messenger signals such as gravitational waves. In this paper we outline a
proposed VOEvent standard for FRBs that includes the essential parameters of
the event and where these parameters should be specified within the structure
of the event. An additional advantage to the use of VOEvents for FRBs is that
the events can automatically be ingested into the FRB Catalogue (FRBCAT)
enabling real-time updates for public use. We welcome feedback from the
community on the proposed standard outlined below and encourage those
interested to join the nascent working group forming around this topic.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fine-tuning deep CNN models on specific MS COCO categories | Fine-tuning of a deep convolutional neural network (CNN) is often desired.
This paper provides an overview of our publicly available py-faster-rcnn-ft
software library that can be used to fine-tune the VGG_CNN_M_1024 model on
custom subsets of the Microsoft Common Objects in Context (MS COCO) dataset.
For example, we improved the procedure so that the user no longer has to search
the dataset by hand for suitable image files, which can then be used in the
demo program. Our implementation randomly selects images that contain at least
one object of the categories on which the model is fine-tuned.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Throughput of Channels that Wear Out | This work investigates the fundamental limits of communication over a noisy
discrete memoryless channel that wears out, in the sense of signal-dependent
catastrophic failure. In particular, we consider a channel that starts as a
memoryless binary-input channel and when the number of transmitted ones causes
a sufficient amount of damage, the channel ceases to convey signals. Constant
composition codes are adopted to obtain an achievability bound and the
left-concave right-convex inequality is then refined to obtain a converse bound
on the log-volume throughput for channels that wear out. Since infinite
blocklength codes will always wear out the channel for any finite threshold of
failure and therefore cannot convey information at positive rates, we analyze
the performance of finite blocklength codes to determine the maximum expected
transmission volume at a given level of average error probability. We show that
this maximization problem has a recursive form and can be solved by dynamic
programming. Numerical results demonstrate that a sequence of block codes is
preferred to a single block code for streaming sources.
| 1 | 0 | 1 | 0 | 0 | 0 |
Reply to Marchildon: absorption and non-unitarity remain well-defined in the Relativistic Transactional Interpretation | I rebut some erroneous statements and attempt to clear up some
misunderstandings in a recent set of critical remarks by Marchildon regarding
the Relativistic Transactional Interpretation (RTI), showing that his negative
conclusions regarding the transactional model are ill-founded.
| 0 | 1 | 0 | 0 | 0 | 0 |
Interaction-induced transition in the quantum chaotic dynamics of a disordered metal | We demonstrate that a weakly disordered metal with short-range interactions
exhibits a transition in the quantum chaotic dynamics when changing the
temperature or the interaction strength. For weak interactions, the system
displays exponential growth of the out-of-time-ordered correlator (OTOC) of the
current operator. The Lyapunov exponent of this growth is
temperature-independent in the limit of vanishing interaction. As the
temperature or the interaction strength increases, the system undergoes a transition
to a non-chaotic behaviour, for which the exponential growth of the OTOC is
absent. We conjecture that the transition manifests itself in the quasiparticle
energy-level statistics and also discuss ways to observe it explicitly in
cold-atom setups.
| 0 | 1 | 0 | 0 | 0 | 0 |
Epidemiological impact of waning immunization on a vaccinated population | This study is based on an epidemiological SIRV model designed to analyze
the impact of vaccination in containing infection spread in a 4-tiered
compartmental population comprising susceptible, infected, recovered, and
vaccinated agents. While many models assume a lifelong protection through
vaccination, we focus on the impact of waning immunization due to conversion of
vaccinated and recovered agents back to susceptible ones. Two asymptotic states
exist, the "disease-free equilibrium" and the "endemic equilibrium"; we express
the transitions between these states as functions of the vaccination and
conversion rates using the basic reproduction number as a descriptor. We find
that vaccinating newborns and vaccinating adults have different consequences in
controlling epidemics. We also find that a decaying disease protection within
the recovered sub-population is not sufficient to trigger an epidemic at the
linear level. Our simulations focus on parameter sets that could model a
disease with waning immunization like pertussis. For a diffusively coupled
population, a transition to the endemic state can be initiated via the
propagation of a traveling infection wave, described successfully within a
Fisher-Kolmogorov framework.
| 0 | 0 | 0 | 0 | 1 | 0 |
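The abstract does not spell out its equations, so the sketch below is a generic SIRV system with waning immunization: vaccinated (V) and recovered (R) agents convert back to susceptible (S) at rates phi_v and phi_r. All parameter values, and the form of the vaccination term, are assumptions for illustration.

```python
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1      # transmission and recovery rates (assumed)
nu = 0.05                   # vaccination rate of susceptibles (assumed)
phi_v, phi_r = 0.01, 0.005  # waning rates V -> S and R -> S (assumed)

def sirv(t, y):
    S, I, R, V = y
    dS = -beta * S * I - nu * S + phi_v * V + phi_r * R
    dI = beta * S * I - gamma * I
    dR = gamma * I - phi_r * R
    dV = nu * S - phi_v * V
    return [dS, dI, dR, dV]

sol = solve_ivp(sirv, (0, 500), [0.99, 0.01, 0.0, 0.0])
print("final (S, I, R, V):", sol.y[:, -1])

# Disease-free equilibrium of this system: S* = phi_v / (nu + phi_v); the
# basic reproduction number R0 = beta * S* / gamma separates the
# disease-free state (R0 < 1) from the endemic state (R0 > 1).
S_star = phi_v / (nu + phi_v)
print("R0 =", beta * S_star / gamma)
```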
Waves of seed propagation induced by delayed animal dispersion | We study a model of seed dispersal that considers the inclusion of an animal
disperser moving diffusively, feeding on fruits and transporting the seeds,
which are later deposited and capable of germination. The dynamics depends on
several population parameters of growth, decay, harvesting, transport,
digestion and germination. In particular, the deposition of transported seeds
at places away from their collection sites produces a delay in the dynamics,
whose effects are the focus of this work. Analytical and numerical solutions of
different simplified scenarios show the existence of travelling waves. The
effect of zoochory is apparent in the increase of the velocity of these waves.
The results support the hypothesis that animal-mediated seed
dispersal is relevant for understanding the origin of the high rates of plant
invasion observed in real systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sobolev GAN | We propose a new Integral Probability Metric (IPM) between distributions: the
Sobolev IPM. The Sobolev IPM compares the mean discrepancy of two distributions
for functions (critic) restricted to a Sobolev ball defined with respect to a
dominant measure $\mu$. We show that the Sobolev IPM compares two distributions
in high dimensions based on weighted conditional Cumulative Distribution
Functions (CDF) of each coordinate on a leave-one-out basis. The dominant
measure $\mu$ plays a crucial role as it defines the support on which
conditional CDFs are compared. The Sobolev IPM can be seen as an extension of the
one-dimensional Cramér-von Mises statistic to high-dimensional
distributions. We show how Sobolev IPM can be used to train Generative
Adversarial Networks (GANs). We then exploit the intrinsic conditioning implied
by Sobolev IPM in text generation. Finally we show that a variant of Sobolev
GAN achieves competitive results in semi-supervised learning on CIFAR-10,
thanks to the smoothness enforced on the critic by Sobolev GAN which relates to
Laplacian regularization.
| 1 | 0 | 0 | 1 | 0 | 0 |
Understanding Human Motion and Gestures for Underwater Human-Robot Collaboration | In this paper, we present a number of robust methodologies for an underwater
robot to visually detect, follow, and interact with a diver for collaborative
task execution. We design and develop two autonomous diver-following
algorithms, the first of which utilizes both spatial- and frequency-domain
features pertaining to human swimming patterns in order to visually track a
diver. The second algorithm uses a convolutional neural network-based model for
robust tracking-by-detection. In addition, we propose a hand gesture-based
human-robot communication framework that is syntactically simpler and
computationally more efficient than the existing grammar-based frameworks. In
the proposed interaction framework, deep visual detectors are used to provide
accurate hand gesture recognition; subsequently, a finite-state machine
performs robust and efficient gesture-to-instruction mapping. The
distinguishing feature of this framework is that it can be easily adopted by
divers for communicating with underwater robots without using artificial
markers or requiring memorization of complex language rules. Furthermore, we
validate the performance and effectiveness of the proposed methodologies
through extensive field experiments in closed- and open-water environments.
Finally, we perform a user interaction study to demonstrate the usability
benefits of our proposed interaction framework compared to existing methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
One-loop binding corrections to the electron $g$ factor | We calculate the one-loop electron self-energy correction of order
$\alpha\,(Z\,\alpha)^5$ to the bound electron $g$ factor. Our result is in
agreement with the extrapolated numerical value and paves the way for the
calculation of the analogous, but as yet unknown two-loop correction.
| 0 | 1 | 0 | 0 | 0 | 0 |
Periodic solutions of Euler-Lagrange equations in an anisotropic Orlicz-Sobolev space setting | In this paper we consider the problem of finding periodic solutions of
certain Euler-Lagrange equations, which include, among others, equations
involving the $p$-Laplace and, more generally, the $(p,q)$-Laplace operator.
We employ the direct method of the calculus of variations in the framework of
anisotropic Orlicz-Sobolev spaces. These spaces appear to be useful in
formulating a unified theory of existence for the type of problem considered.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dehn functions of subgroups of right-angled Artin groups | We show that for each positive integer $k$ there exist right-angled Artin
groups containing free-by-cyclic subgroups whose monodromy automorphisms grow
as $n^k$. As a consequence we produce examples of right-angled Artin groups
containing finitely presented subgroups whose Dehn functions grow as $n^{k+2}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Recovering Sparse Nonnegative Signals via Non-convex Fraction Function Penalty | Many real world practical problems can be formulated as
$\ell_{0}$-minimization problems with nonnegativity constraints, which seek the
sparsest nonnegative solutions to underdetermined linear systems. They have been
widely applied in signal and image processing, machine learning, pattern
recognition and computer vision. Unfortunately, this $\ell_{0}$-minimization
problem with a nonnegativity constraint is NP-hard because of
the discrete and discontinuous nature of the $\ell_{0}$-norm. In this paper, we
replace the $\ell_{0}$-norm with a non-convex fraction function, and study the
minimization problem of this non-convex fraction function in recovering the
sparse nonnegative signals from an underdetermined linear system. Firstly, we
discuss the equivalence between $(P_{0}^{\geq})$ and $(FP_{a}^{\geq})$, and the
equivalence between $(FP_{a}^{\geq})$ and $(FP_{a,\lambda}^{\geq})$. It is
proved that the optimal solution of the problem $(P_{0}^{\geq})$ could be
approximately obtained by solving the regularization problem
$(FP_{a,\lambda}^{\geq})$ if some specific conditions are satisfied. Secondly, we
propose a nonnegative iterative thresholding algorithm to solve the
regularization problem $(FP_{a,\lambda}^{\geq})$ for all $a>0$. Finally, some
numerical experiments on sparse nonnegative signal recovery problems show that
our method is effective in finding sparse nonnegative signals compared
with linear programming.
| 0 | 0 | 1 | 0 | 0 | 0 |
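A hedged sketch of a nonnegative iterative thresholding loop for the fraction penalty rho_a(t) = a*t/(1 + a*t), t >= 0. The paper derives a closed-form thresholding operator; here the per-coordinate proximal step is solved numerically instead, which is slower but illustrates the same iteration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def prox_fraction(z, lam, a):
    """argmin_{u >= 0} 0.5*(u - z)**2 + lam * a*u / (1 + a*u), coordinatewise."""
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        f = lambda u: 0.5 * (u - zi) ** 2 + lam * a * u / (1 + a * u)
        out[i] = minimize_scalar(f, bounds=(0.0, max(zi, 0.0) + 1.0),
                                 method="bounded").x
    return out

rng = np.random.default_rng(0)
m, n, a, lam = 40, 100, 2.0, 0.01
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)
b = A @ x_true

x, step = np.zeros(n), 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(300):  # gradient step on the residual, then nonnegative prox
    x = prox_fraction(x - step * A.T @ (A @ x - b), lam * step, a)
print("recovered support:", np.flatnonzero(x > 1e-3))
print("true support:     ", np.flatnonzero(x_true))
```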
Estimating Local Interactions Among Many Agents Who Observe Their Neighbors | In various economic environments, people observe those with whom they
strategically interact. We can model such information-sharing relations as an
information network, and the strategic interactions as a game on the network.
When any two agents in the network are connected either directly or indirectly,
empirical modeling using an equilibrium approach is cumbersome, since the
testable implications from an equilibrium generally involve all the players of
the game, whereas a researcher's data set may contain only a fraction of these
players in practice. This paper develops a tractable empirical model of linear
interactions where each agent, after observing part of his neighbors' types,
not knowing the full information network, uses best responses that are linear
in his and other players' types that he observes, based on simple beliefs about
other players' strategies. We provide conditions on information networks and
beliefs such that best responses take an explicit form with multiple intuitive
features. Furthermore, the best responses reveal how local payoff
interdependence among agents is translated into local stochastic dependence of
their actions, allowing the econometrician to perform asymptotic inference
without having to observe all the players in the game or having to know
precisely the sampling process.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multimodal speech synthesis architecture for unsupervised speaker adaptation | This paper proposes a new architecture for speaker adaptation of
multi-speaker neural-network speech synthesis systems, in which an unseen
speaker's voice can be built using a relatively small amount of speech data
without transcriptions. This is sometimes called "unsupervised speaker
adaptation". More specifically, we concatenate the layers to the audio inputs
when performing unsupervised speaker adaptation while we concatenate them to
the text inputs when synthesizing speech from text. Two new training schemes
for the new architecture are also proposed in this paper. These training
schemes are not limited to speech synthesis; other applications are suggested.
Experimental results show that the proposed model not only enables adaptation
to unseen speakers using untranscribed speech but it also improves the
performance of multi-speaker modeling and speaker adaptation using transcribed
audio files.
| 1 | 0 | 0 | 1 | 0 | 0 |
Analytical history | The purpose of this note is to explain what "analytical history" is: a
modular and testable analysis of historical events introduced in a book
published in 2002 (Roehner and Syme 2002). Broadly speaking, it is a
comparative methodology for the analysis of historical events. Comparison is
the keystone and hallmark of science. For instance, the extrasolar planets are
crucial for understanding our own solar system. Until their discovery,
astronomers could observe only one instance. Single instances can be described
but they cannot be understood in a testable way. In other words, if one accepts
that, as many historians say, "historical events are unique", then no testable
understanding can be developed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Control Synthesis for Multi-Agent Systems under Metric Interval Temporal Logic Specifications | This paper presents a framework for automatic synthesis of a control sequence
for multi-agent systems governed by continuous linear dynamics under timed
constraints. First, the motion of the agents in the workspace is abstracted
into individual Transition Systems (TS). Second, each agent is assigned with an
individual formula given in Metric Interval Temporal Logic (MITL) and in
parallel, the team of agents is assigned with a collaborative team formula. The
proposed method is based on a correct-by-construction control synthesis method,
and hence guarantees that the resulting closed-loop system will satisfy the
specifications. The specifications considers boolean-valued properties under
real-time. Extended simulations has been performed in order to demonstrate the
efficiency of the proposed controllers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Eulerian and Lagrangian solutions to the continuity and Euler equations with $L^1$ vorticity | In the first part of this paper we establish a uniqueness result for
continuity equations with velocity field whose derivative can be represented by
a singular integral operator of an $L^1$ function, extending the Lagrangian
theory in \cite{BouchutCrippa13}. The proof is based on a combination of a
stability estimate via optimal transport techniques developed in \cite{Seis16a}
and some tools from harmonic analysis introduced in \cite{BouchutCrippa13}. In
the second part of the paper, we address a question that arose in
\cite{FilhoMazzucatoNussenzveig06}, namely whether 2D Euler solutions obtained
via vanishing viscosity are renormalized (in the sense of DiPerna and Lions)
when the initial data has low integrability. We show that this is the case even
when the initial vorticity is only in~$L^1$, extending the proof for the $L^p$
case in \cite{CrippaSpirito15}.
| 0 | 1 | 1 | 0 | 0 | 0 |
Star formation, supernovae, iron, and alpha: consistent cosmic and Galactic histories | Recent versions of the observed cosmic star-formation history (SFH) have
resolved an inconsistency with the stellar mass density history. We show that
the revised SFH also scales up the delay-time distribution (DTD) of Type Ia
supernovae (SNe Ia), as determined from the observed volumetric SN Ia rate
history, aligning it with other field-galaxy SN Ia DTD measurements. The
revised-SFH-based DTD has a $t^{-1.1 \pm 0.1}$ form and a
Hubble-time-integrated production efficiency of $N/M_\star=1.3\pm0.1$ SNe Ia
per $1000~{\rm M_\odot}$ of formed stellar mass. Using these revised histories
and updated empirical iron yields of the various SN types, we re-derive the
cosmic iron accumulation history. Core-collapse SNe and SNe Ia have contributed
about equally to the total mass of iron in the Universe today. We find the
track of the average cosmic gas element in the [$\alpha$/Fe] vs. [Fe/H]
abundance-ratio plane. The track is broadly similar to the observed main locus
of Galactic stars in this plane, indicating a Milky Way (MW) SFH similar in
form to the cosmic one. We easily find a simple MW SFH that makes the track
closely match this stellar locus. Galaxy clusters appear to have a
higher-normalization DTD. This cluster DTD, combined with a short-burst MW SFH
peaked at $z=3$, produces a track that matches remarkably well the observed
"high-$\alpha$" locus of MW stars, suggesting the halo/thick-disk population
has had a galaxy-cluster-like formation mode. Thus, a simple two-component SFH,
combined with empirical DTDs and SN iron yields, suffices to closely reproduce
the MW's stellar abundance patterns.
| 0 | 1 | 0 | 0 | 0 | 0 |
ALFABURST: A commensal search for Fast Radio Bursts with Arecibo | ALFABURST has been searching for Fast Radio Bursts (FRBs) commensally with
other projects using the Arecibo L-band Feed Array (ALFA) receiver at the
Arecibo Observatory since July 2015. We describe the observing system and
report on the non-detection of any FRBs from that time until August 2017 for a
total observing time of 518 hours. With current FRB rate models, along with
measurements of telescope sensitivity and beam size, we estimate that this
survey probed redshifts out to about 3.4 with an effective survey volume of
around 600,000 Mpc$^3$. Based on this, we would expect, at the 99% confidence
level, to see at most two FRBs. We discuss the implications of this
non-detection in the context of results from other telescopes and the
limitation of our search pipeline. During the survey, single pulses from 17
known pulsars were detected. We also report the discovery of a Galactic radio
transient with a pulse width of 3 ms and dispersion measure of 281 pc
cm$^{-3}$, which was detected while the telescope was slewing between fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
Mechanical properties and thermal conductivity of graphitic carbon nitride: A molecular dynamics study | Graphitic carbon nitride nanosheets are among the attractive 2D materials due to
their unusual physicochemical properties. Nevertheless, no adequate
information exists about their mechanical and thermal properties. Therefore, we
used classical molecular dynamics simulations to explore the thermal
conductivity and mechanical response of two main structures of single-layer
triazine-based g-C3N4 films. By performing uniaxial tensile modeling, we found
remarkable elastic moduli of 320 and 210 GPa and tensile strengths of 47 GPa
and 30 GPa for the two different structures of g-C3N4 sheets. Using equilibrium
molecular dynamics simulations, the thermal conductivities of free-standing
g-C3N4 structures were also predicted to be around 7.6 W/mK and 3.5 W/mK. Our
study suggests g-C3N4 films as exciting candidates for the mechanical
reinforcement of polymeric materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Green's Functions of Partial Differential Equations with Involutions | In this paper we develop a way of obtaining Green's functions for partial
differential equations with linear involutions by reducing the equation to a
higher-order PDE without involutions. The developed theory is applied to a
model of heat transfer in a conducting plate which is bent in half.
| 0 | 0 | 1 | 0 | 0 | 0 |
Zero-Modified Poisson-Lindley distribution with applications in zero-inflated and zero-deflated count data | The main objective of this article is to present an extension of the
zero-inflated Poisson-Lindley distribution, called the zero-modified
Poisson-Lindley distribution. The additional parameter $\pi$ of the zero-modified
Poisson-Lindley has a natural interpretation in terms of the
zero-deflated/inflated proportion. Inference is dealt with by using the
likelihood approach. In particular the maximum likelihood estimators of the
distribution's parameter are compared in small and large samples. We also
consider an alternative bias-correction mechanism based on Efron's bootstrap
resampling. The model is applied to real data sets and found to perform better
than other competing models.
| 0 | 0 | 0 | 1 | 0 | 0 |
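A hedged sketch of the zero-modified pmf: the baseline Poisson-Lindley pmf used below, p(k; theta) = theta^2 (k + theta + 2) / (theta + 1)^(k + 3), is the standard one-parameter form, and the role of pi follows the abstract's zero-inflation/deflation description.

```python
import numpy as np

def poisson_lindley_pmf(k, theta):
    return theta**2 * (k + theta + 2) / (theta + 1.0) ** (k + 3)

def zmpl_pmf(k, theta, pi):
    """P(0) is shifted by pi; the positive part is rescaled by (1 - pi)."""
    base = poisson_lindley_pmf(k, theta)
    return np.where(k == 0, pi + (1 - pi) * base, (1 - pi) * base)

k = np.arange(50)
print(zmpl_pmf(k, theta=1.5, pi=0.2).sum())   # ~1 up to the truncated tail
# pi > 0 inflates zeros; a suitably bounded negative pi models zero deflation.
```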
Phase-Retrieval as a Regularization Problem | It was recently shown that the phase retrieval imaging of a sample can be
modeled as a simple convolution process. Sometimes, such a convolution depends
on physical parameters of the sample which are difficult to estimate a priori.
In this case, a blind choice for those parameters usually leads to wrong
results, e.g., in posterior image segmentation processing. In this manuscript,
we propose a simple connection between phase-retrieval algorithms and
optimization strategies, which leads us to ways of numerically determining the
physical parameters.
| 0 | 0 | 1 | 0 | 0 | 0 |
Comparing simulations and test data of a radiation-damaged charge-coupled device for the Euclid mission | The VIS instrument on board the Euclid mission is a weak-lensing experiment
that depends on very precise shape measurements of distant galaxies obtained by
a large CCD array. Due to the harsh radiative environment outside the Earth's
atmosphere, it is anticipated that the CCDs over the mission lifetime will be
degraded to an extent that these measurements will only be possible through the
correction of radiation damage effects. We have therefore created a Monte Carlo
model that simulates the physical processes taking place when transferring
signal through a radiation-damaged CCD. The software is based on
Shockley-Read-Hall theory, and is made to mimic the physical properties in the
CCD as closely as possible. The code runs on a single electrode level and takes
three dimensional trap position, potential structure of the pixel, and
multi-level clocking into account. A key element of the model is that it also
takes device-specific simulations of electron density as a direct input,
thereby avoiding any analytical assumptions about the size and density
of the charge cloud. This paper illustrates how test data and simulated data
can be compared in order to further our understanding of the positions and
properties of the individual radiation-induced traps.
| 0 | 1 | 0 | 0 | 0 | 0 |
A generalization of crossing families | For a set of points in the plane, a \emph{crossing family} is a set of line
segments, each joining two of the points, such that any two line segments
cross. We investigate the following generalization of crossing families: a
\emph{spoke set} is a set of lines drawn through a point set such that each
unbounded region of the induced line arrangement contains at least one point of
the point set. We show that every point set has a spoke set of size
$\sqrt{\frac{n}{8}}$. We also characterize the matchings obtained by selecting
exactly one point in each unbounded region and connecting every such point to
the point in the antipodal unbounded region.
| 1 | 0 | 0 | 0 | 0 | 0 |
Uncovering Offshore Financial Centers: Conduits and Sinks in the Global Corporate Ownership Network | Multinational corporations use highly complex structures of parents and
subsidiaries to organize their operations and ownership. Offshore Financial
Centers (OFCs) facilitate these structures through low taxation and lenient
regulation, but are increasingly under scrutiny, for instance for enabling tax
avoidance. Therefore, the identification of OFC jurisdictions has become a
politicized and contested issue. We introduce a novel data-driven approach for
identifying OFCs based on the global corporate ownership network, in which over
98 million firms (nodes) are connected through 71 million ownership relations.
This granular firm-level network data uniquely allows identifying both
sink-OFCs and conduit-OFCs. Sink-OFCs attract and retain foreign capital while
conduit-OFCs are attractive intermediate destinations in the routing of
international investments and enable the transfer of capital without taxation.
We identify 24 sink-OFCs. In addition, a small set of five countries -- the
Netherlands, the United Kingdom, Ireland, Singapore and Switzerland -- canalize
the majority of corporate offshore investment as conduit-OFCs. Each conduit
jurisdiction is specialized in a geographical area and there is significant
specialization based on industrial sectors. Against the idea of OFCs as exotic
small islands that cannot be regulated, we show that many sink and conduit-OFCs
are highly developed countries.
| 0 | 1 | 0 | 0 | 0 | 0 |
Predicting radio emission from the newborn hot Jupiter V830 Tau and its host star | Magnetised exoplanets are expected to emit at radio frequencies analogously
to the radio auroral emission of Earth and Jupiter. We predict the radio
emission from V830 Tau b, the youngest (2 Myr) detected exoplanet to date. We
model the host star wind using 3D MHD simulations that take into account its
surface magnetism. With this, we constrain the local conditions around V830 Tau
b that we use to then compute its radio emission. We estimate average radio
flux densities of 6 to 24 mJy, depending on the assumed radius of the planet
(one or two Rjupiter). These radio fluxes present peaks that are up to
twice the average values. We show here that these fluxes are weakly dependent
(a factor of 1.8) on the assumed polar planetary magnetic field (10 to 100 G),
as opposed to the maximum frequency of the emission, which ranges from 18 to
240 MHz. We also estimate the thermal radio emission from the stellar wind. By
comparing our results with VLA and VLBA observations of the system, we
constrain the stellar mass-loss rate to be <3e-9 Msun/yr, with likely values
between ~1e-12 and 1e-10 Msun/yr. The frequency-dependent extension of the
radio-emitting wind is around ~ 3 to 30 Rstar for frequencies in the range of
275 to 50 MHz, implying that V830 Tau b, at an orbital distance of 6.1 Rstar,
could be embedded in the regions of the host star's wind that are optically
thick to radio wavelengths, but not deeply so. Planetary emission can only
propagate in the stellar wind plasma if the frequency of the cyclotron emission
exceeds the stellar wind plasma frequency. We find that, for planetary radio
emission to propagate through the host star wind, planetary magnetic field
strengths larger than ~1.3 to 13 G are required. The V830 Tau system is thus
very interesting for conducting radio observations, from the perspective of
radio emission from both the planet and the host star's wind.
| 0 | 1 | 0 | 0 | 0 | 0 |
Redshift determination through weighted phase correlation: a linearithmic implementation | We present a new algorithm having a time complexity of O(N log N) and
designed to retrieve the phase at which an input signal and a set of not
necessarily orthogonal templates match best in a weighted chi-squared sense.
The proposed implementation is based on an orthogonalization algorithm and thus
also benefits from high numerical stability. We apply this method successfully
to the redshift determination of quasars from the twelfth Sloan Digital Sky
Survey (SDSS) quasar catalogue and derive the proper spectral reduction and
redshift selection methods. Derivations of the redshift uncertainty and the
associated confidence are also provided. The results of this application are
comparable to the performance of the SDSS pipeline, while not having a
quadratic time dependence.
| 0 | 1 | 0 | 0 | 0 | 0 |
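As a gloss on the abstract above: the O(N log N) complexity comes from performing the template matching in Fourier space. Below is a minimal sketch of that idea for a single template with uniform weights, assuming circular shifts; the paper's actual algorithm additionally handles per-pixel weights and sets of non-orthogonal templates via an orthogonalization step, which is not reproduced here.

```python
# A minimal sketch of shift retrieval via FFT-based cross-correlation.
# Illustrates only the O(N log N) idea; weights and non-orthogonal
# template sets (the paper's contribution) are omitted.
import numpy as np

def best_shift(signal, template):
    """Return the circular shift maximizing the cross-correlation."""
    # Correlation theorem: corr = IFFT(FFT(signal) * conj(FFT(template)))
    corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(template)))
    return int(np.argmax(corr.real))

rng = np.random.default_rng(0)
template = rng.normal(size=4096)
signal = np.roll(template, 137) + 0.1 * rng.normal(size=4096)
print(best_shift(signal, template))  # expected: 137
```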
Nonautonomous Dynamics of Acute Cell Injury | Clinically-relevant forms of acute cell injury, which include stroke and
myocardial infarction, have posed a long-standing challenge for successful
intervention and treatment. Although laboratory studies have shown it is
possible to decrease cell death after such injuries, human clinical trials
based on laboratory therapies have generally failed. We have suggested that
these failures are due, at least in part, to the lack of a quantitative
theoretical framework for acute cell injury. Here we provide a systematic study of a
nonlinear dynamical model of acute cell injury and characterize the global
dynamics of a nonautonomous version of the theory. The nonautonomous model
gives rise to four qualitative types of dynamical patterns that can be mapped
to the behavior of cells after clinical acute injuries. In addition, the
concept of a maximum total intrinsic stress response, $S_{max}^*$, emerges from
the nonautonomous theory. A continuous transition across the four qualitative
patterns has been observed, which sets a natural range for initial conditions.
Under these initial conditions in the parameter space tested, the total induced
stress response can be increased to 2.5-11 times $S_{max}^*$. This result
indicates that cells possess a reserve stress response capacity which provides
a theoretical explanation of how therapies can prevent cell death after lethal
injuries. This nonautonomous theory of acute cell injury thus provides a
quantitative framework for understanding cell death and recovery and developing
effective therapeutics for acute injury.
| 0 | 0 | 0 | 0 | 1 | 0 |
Cavity-enhanced photoionization of an ultracold rubidium beam for application in focused ion beams | A two-step photoionization strategy of an ultracold rubidium beam for
application in a focused ion beam instrument is analyzed and implemented. In
this strategy the atomic beam is partly selected with an aperture after which
the transmitted atoms are ionized in the overlap of a tightly cylindrically
focused excitation laser beam and an ionization laser beam whose power is
enhanced in a build-up cavity. The advantage of this strategy, as compared to
one without a build-up cavity, is that higher ionization degrees can be
reached at higher currents. Optical Bloch equations including the
photoionization process are used to calculate what ionization degree and
ionization position distribution can be reached. Furthermore, the ionization
strategy is tested on an ultracold beam of $^{85}$Rb atoms. The beam current is
measured as a function of the excitation and ionization laser beam intensity
and the selection aperture size. Although details are different, the global
trends of the measurements agree well with the calculation. With a selection
aperture diameter of 52 $\mu$m, a current of $\left(170\pm4\right)$ pA is
measured, which according to calculations is 63% of the current equivalent of
the transmitted atomic flux. Taking into account the ionization degree, the ion
beam peak reduced brightness is estimated at $1\times10^7$ A/(m$^2\,$sr$\,$eV).
| 0 | 1 | 0 | 0 | 0 | 0 |
Fusion rule algebras related to a pair of compact groups | The purpose of the present paper is to investigate a fusion rule algebra
arising from irreducible characters of a compact group $G$ and a closed
subgroup $G_0$ of $G$ with finite index. The convolution of this fusion rule
algebra is introduced by inducing irreducible representations of $G_0$ to $G$
and by restricting irreducible representations of $G$ to $G_0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Matricial Canonical Moments and Parametrization of Matricial Hausdorff Moment Sequences | In this paper we study moment sequences of matrix-valued measures on compact
intervals. A complete parametrization of such sequences is obtained via a
symmetric version of matricial canonical moments. Furthermore, distinguished
extensions of finite moment sequences are characterized in this framework. The
results are applied to the underlying matrix-valued measures, generalizing some
results from the scalar theory of canonical moments.
| 0 | 0 | 1 | 0 | 0 | 0 |
Embedding Deep Networks into Visual Explanations | In this paper, we propose a novel explanation module to explain the
predictions made by a deep network. The explanation module works by embedding a
high-dimensional deep network layer nonlinearly into a low-dimensional
explanation space while retaining faithfulness, so that the original deep
learning predictions can be constructed from the few concepts extracted by the
explanation module. We then visualize such concepts for humans to learn about
the high-level concepts that deep learning is using to make decisions. We
propose an algorithm called Sparse Reconstruction Autoencoder (SRAE) for
learning the embedding to the explanation space. SRAE aims to reconstruct part
of the original feature space while retaining faithfulness. A pull-away term is
applied to SRAE to make the explanation space more orthogonal. A visualization
system is then introduced for human understanding of the features in the
explanation space. The proposed method is applied to explain CNN models in
image classification tasks, and several novel metrics are introduced to
evaluate the performance of explanations quantitatively without human
involvement. Experiments show that the proposed approach generates interesting
explanations of the mechanisms CNNs use for making predictions.
| 1 | 0 | 0 | 0 | 0 | 0 |
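The following is a minimal, hypothetical sketch of the ingredients the abstract describes for SRAE: a low-dimensional embedding of a deep layer trained with a reconstruction (faithfulness) loss plus a pull-away term pushing the explanation dimensions towards orthogonality. The dimensions, loss weighting, and cosine-based pull-away form are illustrative assumptions, not the authors' exact formulation.

```python
# A sketch of an SRAE-style objective: reconstruct deep features from a
# low-dimensional explanation space, with a pull-away penalty between
# distinct concept dimensions. All sizes/weights are illustrative.
import torch
import torch.nn as nn

feat_dim, expl_dim = 512, 5
encoder = nn.Sequential(nn.Linear(feat_dim, expl_dim), nn.Tanh())
decoder = nn.Linear(expl_dim, feat_dim)

features = torch.randn(32, feat_dim)       # a deep layer's activations
z = encoder(features)                      # low-dim explanation space
recon_loss = ((decoder(z) - features) ** 2).mean()

# Pull-away: penalize cosine similarity between distinct concept dims.
zn = z / z.norm(dim=0, keepdim=True)
gram = zn.T @ zn
pull_away = (gram - torch.eye(expl_dim)).pow(2).sum() / (expl_dim * (expl_dim - 1))

loss = recon_loss + 0.1 * pull_away        # 0.1 is an assumed weighting
print(float(loss))
```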
Information Diffusion in Social Networks: Friendship Paradox based Models and Statistical Inference | Dynamic models and statistical inference for the diffusion of information in
social networks is an area which has witnessed remarkable progress in the last
decade due to the proliferation of social networks. Modeling and inference of
diffusion of information has applications in targeted advertising and
marketing, forecasting elections, predicting investor sentiment and identifying
epidemic outbreaks. This chapter discusses three important aspects related to
information diffusion in social networks: (i) How does observation bias named
friendship paradox (a graph theoretic consequence) and monophilic contagion
(influence of friends of friends) affect information diffusion dynamics. (ii)
How can social networks adapt their structural connectivity depending on the
state of information diffusion. (iii) How one can estimate the state of the
network induced by information diffusion. The motivation for all three topics
considered in this chapter stems from recent findings in network science and
social sensing. Further, several directions for future research that arise from
these topics are also discussed.
| 1 | 0 | 0 | 0 | 0 | 0 |
Should we really prune after the model is fully trained? Pruning based on a small amount of training | Pre-training of models in pruning algorithms plays an important role in
pruning decision-making. We find that excessive pre-training is not necessary
for pruning algorithms. Based on this idea, we propose a pruning
algorithm---incremental pruning based on less training (IPLT). Compared with
traditional pruning algorithms that rely on extensive pre-training, IPLT
achieves a competitive compression effect under the same simple pruning
strategy. On the premise of ensuring accuracy, IPLT can achieve 8x-9x
compression for VGG-19 on CIFAR-10 while needing to pre-train for only a few
epochs. For VGG-19 on CIFAR-10, we achieve not only about 10x test
acceleration, but also about 10x training acceleration. At present, research
mainly focuses on compression and acceleration in the deployment stage of a
model, while compression and acceleration in the training stage have received
little attention. We propose a pruning algorithm that can compress and
accelerate in the training stage, and considering the amount of pre-training
required by a pruning algorithm is itself novel. Our results carry an
implication: too much pre-training may not be necessary for pruning algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
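To make the idea concrete, here is a minimal sketch of pruning after only a small amount of training: a brief warm-up followed by magnitude pruning with a persistent mask. This is a generic simplification under stated assumptions, not the authors' exact IPLT procedure.

```python
# A minimal sketch of magnitude pruning applied after little pre-training.
# Illustrative only; the paper's incremental schedule is not reproduced.
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of smallest-magnitude weights and
    return the binary mask so pruned weights can stay frozen at zero."""
    w = layer.weight.data
    k = int(sparsity * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values
    mask = (w.abs() > threshold).float()
    w.mul_(mask)
    return mask

model = nn.Linear(256, 10)
# ... only a few epochs of training would go here (the paper's point is
# that extensive pre-training is unnecessary before pruning) ...
mask = magnitude_prune(model, sparsity=0.9)
print(f"remaining weights: {int(mask.sum())} / {mask.numel()}")
```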
Adversarial Networks for the Detection of Aggressive Prostate Cancer | Semantic segmentation constitutes an integral part of medical image analyses
for which breakthroughs in the field of deep learning were of high relevance.
The large number of trainable parameters of deep neural networks, however,
renders them inherently data-hungry, a characteristic that heavily challenges
the medical imaging community. Interestingly, because the de facto standard
training of fully convolutional networks (FCNs) for semantic segmentation is
agnostic towards the `structure' of the predicted label maps, valuable
complementary information about the global quality of the segmentation lies
idle. In order to tap into this potential, we propose utilizing an adversarial
network which discriminates between expert and generated annotations in order
to train FCNs for semantic segmentation. Because the adversary constitutes a
learned parametrization of what makes a good segmentation at a global level, we
hypothesize that the method holds particular advantages for segmentation tasks
on complex structured, small datasets. This holds true in our experiments: We
learn to segment aggressive prostate cancer utilizing MRI images of 152
patients and show that the proposed scheme is superior to the de facto
standard in terms of the detection sensitivity and the Dice score for
aggressive prostate cancer. The achieved relative gains are shown to be
particularly pronounced in the small dataset limit.
| 1 | 0 | 0 | 0 | 0 | 0 |
A class of singular integrals associated with Zygmund dilations | The main purpose of this paper is to study multi-parameter singular integral
operators which commute with Zygmund dilations. We introduce a class of
singular integral operators associated with Zygmund dilations and show the
boundedness for these operators on $L^p, 1<p<\infty$, which covers those
studied by Ricci--Stein \cite{RS} and Nagel--Wainger \cite{NW}.
| 0 | 0 | 1 | 0 | 0 | 0 |
Surge-like oscillations above sunspot light bridges driven by magnetoacoustic shocks | High-resolution observations of the solar chromosphere and transition region
often reveal surge-like oscillatory activities above sunspot light bridges.
These oscillations are often interpreted as intermittent plasma jets produced
by quasi-periodic magnetic reconnection. We have analyzed the oscillations
above a light bridge in a sunspot using data taken by the Interface Region
Imaging Spectrograph (IRIS). The chromospheric 2796\AA{}~images show surge-like
activities above the entire light bridge at any time, forming an oscillating
wall. Within the wall we often see that the Mg~{\sc{ii}}~k 2796.35\AA{}~line
core first experiences a large blueshift, and then gradually decreases to zero
shift before increasing to a redshift of comparable magnitude. Such behavior
suggests that the oscillations are highly nonlinear and likely related to
shocks. In the 1400\AA{}~passband which samples emission mainly from the
Si~{\sc{iv}}~ion, the most prominent feature is a bright oscillatory front
ahead of the surges. We find a positive correlation between the acceleration
and maximum velocity of the moving front, which is consistent with numerical
simulations of upward propagating slow-mode shock waves. The Si~{\sc{iv}}
1402.77\AA{}~line profile is generally enhanced and broadened in the bright
front, which might be caused by turbulence generated through compression or by
the shocks. These results, together with the fact that the oscillation period
stays almost unchanged over a long duration, lead us to propose that the
surge-like oscillations above light bridges are caused by shocked p-mode waves
leaked from the underlying photosphere.
| 0 | 1 | 0 | 0 | 0 | 0 |
A new Composition-Diamond lemma for dialgebras | Let $Di\langle X\rangle$ be the free dialgebra over a field generated by a
set $X$. Let $S$ be a monic subset of $Di\langle X\rangle$. A
Composition-Diamond lemma for dialgebras was first established by Bokut, Chen
and Liu in 2010 \cite{Di}; it claims that if (i) $S$ is a
Gröbner-Shirshov basis in $Di\langle X\rangle$, then (ii) the set of
$S$-irreducible words is a linear basis of the quotient dialgebra $Di\langle X
\mid S \rangle$, but not conversely. That lemma is based on a fixed ordering on
normal diwords of $Di\langle X\rangle$ and a special definition of compositions
being trivial modulo $S$. In this paper, by introducing an arbitrary
monomial-center ordering and the usual definition of compositions being
trivial modulo $S$, we give a
new Composition-Diamond lemma for dialgebras which makes the conditions (i) and
(ii) equivalent. We show that every ideal of $Di\langle X\rangle$ has a unique
reduced Gröbner-Shirshov basis. The new lemma is more useful and convenient
than the one in \cite{Di}. As applications, we give a method to find normal
forms of elements of an arbitrary disemigroup, in particular, A.V. Zhuchok's
(2010) and Y.V. Zhuchok's (2015) normal forms of the free commutative
disemigroups and the free abelian disemigroups, and normal forms of the free
left (right) commutative disemigroups.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Remote Interface for Live Interaction with OMNeT++ Simulations | Discrete event simulators, such as OMNeT++, provide fast and convenient
methods for the assessment of algorithms and protocols, especially in the
context of wired and wireless networks. Usually, simulation parameters such as
topology and traffic patterns are predefined to observe the behaviour
reproducibly. However, for learning about the dynamic behaviour of a system, a
live interaction that allows changing parameters on the fly is very helpful.
This is especially interesting for providing interactive demonstrations at
conferences and fairs. In this paper, we present a remote interface to OMNeT++
simulations that can be used to control the simulations while visualising
real-time data merged from multiple OMNeT++ instances. We explain the software
architecture behind our framework and how it can be used to build
demonstrations on the foundation of OMNeT++.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamic time warping distance for message propagation classification in Twitter | Social message classification is a research domain that has attracted the
attention of many researchers in recent years. Indeed, a social message
differs from ordinary text because of special characteristics such as its
shortness. The development of new approaches for processing social messages is
therefore essential to make their classification more efficient. In this
paper, we are mainly interested in the classification of social messages based
on their spreading on online social networks (OSN). We propose a new distance
metric based on the Dynamic Time Warping distance and use it with the
probabilistic and the evidential k-Nearest Neighbors (k-NN)
classifiers to classify propagation networks (PrNets) of messages. The
propagation network is a directed acyclic graph (DAG) that is used to record
propagation traces of the message, the traversed links and their types. We
tested the proposed metric with the chosen k-NN classifiers on real-world
propagation traces collected from the Twitter social network and obtained
good classification accuracies.
| 1 | 0 | 0 | 1 | 0 | 0 |
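For reference, the classic Dynamic Time Warping distance underlying the proposed metric can be computed with a simple dynamic program. The sketch below shows vanilla DTW between scalar sequences; the paper's contribution, a DTW-based metric adapted to propagation networks (DAGs), is not reproduced here.

```python
# A minimal dynamic-programming sketch of the classic DTW distance.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match of the two prefixes.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # 0.0: same shape, different pacing
```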
Proceedings of the 3rd International Workshop on Overlay Architectures for FPGAs (OLAF 2017) | The 3rd International Workshop on Overlay Architectures for FPGAs (OLAF 2017)
was held on 22 February 2017 as a co-located workshop at the 25th ACM/SIGDA
International Symposium on Field-Programmable Gate Arrays (FPGA 2017). This
year, the program committee selected 3 papers and 3 extended abstracts to be
presented at the workshop, which are subsequently collected in this online
volume.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simultaneous Modeling of Multiple Complications for Risk Profiling in Diabetes Care | Type 2 diabetes mellitus (T2DM) is a chronic disease that often results in
multiple complications. Risk prediction and profiling of T2DM complications is
critical for healthcare professionals to design personalized treatment plans
for patients in diabetes care for improved outcomes. In this paper, we study
the risk of developing complications after the initial T2DM diagnosis from
longitudinal patient records. We propose a novel multi-task learning approach
to simultaneously model multiple complications where each task corresponds to
the risk modeling of one complication. Specifically, the proposed method
strategically captures the relationships (1) between the risks of multiple T2DM
complications, (2) between the different risk factors, and (3) between the risk
factor selection patterns. The method uses coefficient shrinkage to identify an
informative subset of risk factors from high-dimensional data, and uses a
hierarchical Bayesian framework to allow domain knowledge to be incorporated as
priors. The proposed method is favorable for healthcare applications because,
in addition to improved prediction performance, relationships among the
different risks and risk factors are also identified. Extensive experimental
results on a large electronic medical claims database show that the proposed
method outperforms state-of-the-art models by a significant margin.
Furthermore, we show that the risk associations learned and the risk factors
identified lead to meaningful clinical insights.
| 0 | 0 | 0 | 1 | 0 | 0 |
On a topology property for moduli space of Kapustin-Witten equations | In this article, we study the Kapustin-Witten equations on a closed,
simply-connected four-manifold. We use a compactness theorem due to Taubes
to prove that if $(A,\phi)$ is a solution of the Kapustin-Witten equations and
the connection $A$ is close to a \emph{generic} ASD connection $A_{\infty}$,
then $(A,\phi)$ must be a trivial solution. We also prove that the moduli space
of solutions of the Kapustin-Witten equations is disconnected if the
connections on the compactification of the moduli space of ASD connections are
all \emph{generic}. As an application, we extend the ideas of the
Kapustin-Witten equations to other equations in gauge theory -- the
Hitchin-Simpson equations and the Vafa-Witten equations on a compact Kähler
surface with a Kähler metric $g$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Neural Semantic Parsing over Multiple Knowledge-bases | A fundamental challenge in developing semantic parsers is the paucity of
strong supervision in the form of language utterances annotated with logical
forms. In this paper, we propose to exploit structural regularities in language
in different domains, and train semantic parsers over multiple knowledge-bases
(KBs), while sharing information across datasets. We find that we can
substantially improve parsing accuracy by training a single
sequence-to-sequence model over multiple KBs, when providing an encoding of the
domain at decoding time. Our model achieves state-of-the-art performance on the
Overnight dataset (containing eight domains), improves performance over a
single KB baseline from 75.6% to 79.6%, while obtaining a 7x reduction in the
number of model parameters.
| 1 | 0 | 0 | 0 | 0 | 0 |
Self-Committee Approach for Image Restoration Problems using Convolutional Neural Network | There have been many discriminative learning methods using convolutional
neural networks (CNN) for several image restoration problems, which learn the
mapping function from a degraded input to the clean output. In this letter, we
propose a self-committee method that can find enhanced restoration results from
multiple trials of a trained CNN with different but related inputs.
Specifically, it is noted that the CNN sometimes finds different mapping
functions when the input is transformed by a reversible transform and thus
produces different outputs related to the original. Hence averaging the
outputs for several different transformed inputs can enhance the results as
evidenced by the network committee methods. Unlike the conventional committee
approaches that require several networks, the proposed method needs only a
single network. Experimental results show that adding an additional transform
as a committee member always brings additional gain on image denoising and
single-image super-resolution problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
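A minimal sketch of the self-committee idea follows: run a single trained network on several reversibly transformed versions of the input (here, horizontal/vertical flips), undo each transform on the output, and average. The stand-in convolutional model and the choice of flips as the reversible transforms are illustrative assumptions.

```python
# A minimal sketch of self-committee restoration via transform averaging.
# Assumes an image-to-image model with spatially equivariant behavior.
import torch

def self_committee(model, x):
    """x: (B, C, H, W) degraded input; returns the averaged restoration."""
    outputs = []
    for flip_dims in [(), (2,), (3,), (2, 3)]:  # identity + flips
        x_t = torch.flip(x, dims=flip_dims) if flip_dims else x
        y_t = model(x_t)
        # Invert the transform before averaging.
        y = torch.flip(y_t, dims=flip_dims) if flip_dims else y_t
        outputs.append(y)
    return torch.stack(outputs).mean(dim=0)

model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in network
x = torch.randn(1, 1, 32, 32)
print(self_committee(model, x).shape)  # torch.Size([1, 1, 32, 32])
```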
Quantifying Differential Privacy in Continuous Data Release under Temporal Correlations | Differential Privacy (DP) has received increasing attention as a rigorous
privacy framework. Many existing studies employ traditional DP mechanisms
(e.g., the Laplace mechanism) as primitives to continuously release private
data for protecting privacy at each time point (i.e., event-level privacy),
which assume that the data at different time points are independent, or that
adversaries do not have knowledge of correlation between data. However,
continuously generated data tend to be temporally correlated, and such
correlations can be acquired by adversaries. In this paper, we investigate the
potential privacy loss of a traditional DP mechanism under temporal
correlations. First, we analyze the privacy leakage of a DP mechanism under
temporal correlations that can be modeled using a Markov chain. Our analysis
reveals that the event-level privacy loss of a DP mechanism may
\textit{increase over time}. We call the unexpected privacy loss
\textit{temporal privacy leakage} (TPL). Although TPL may increase over time,
we find that its supremum may exist in some cases. Second, we design efficient
algorithms for calculating TPL. Third, we propose data releasing mechanisms
that convert any existing DP mechanism into one against TPL. Experiments
confirm that our approach is efficient and effective.
| 1 | 0 | 0 | 0 | 0 | 0 |
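For context, the Laplace mechanism mentioned above as a traditional DP primitive can be sketched in a few lines. The paper's contribution, quantifying how event-level privacy loss can grow over time when the released series is temporally correlated via a Markov chain, is an analysis on top of such primitives and is not reproduced here.

```python
# A minimal sketch of the standard Laplace mechanism used as a primitive
# in continuous data release; each release is epsilon-DP in isolation.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with epsilon-DP at a single time point."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
# Releasing a correlated time series point-by-point -- exactly the
# setting whose cumulative (temporal) privacy loss the paper revisits.
series = [10.0, 11.0, 11.5, 12.0]
private = [laplace_mechanism(v, sensitivity=1.0, epsilon=0.5, rng=rng)
           for v in series]
print(private)
```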
A Sufficient Condition for Nilpotency of the Nilpotent Residual of a Finite Group | Let $G$ be a finite group with the property that if $a,b$ are powers of
$\delta_1^*$-commutators such that $(|a|,|b|)=1$, then $|ab|=|a||b|$. We show
that $\gamma_{\infty}(G)$ is nilpotent.
| 0 | 0 | 1 | 0 | 0 | 0 |
Monte Carlo Estimation of the Density of the Sum of Dependent Random Variables | We study an unbiased estimator for the density of a sum of random variables
that are simulated from a computer model. A numerical study on examples with
copula dependence is conducted where the proposed estimator performs favourably
in terms of variance compared to other unbiased estimators. We provide
applications and extensions to the estimation of marginal densities in Bayesian
statistics and to the estimation of the density of sums of random variables
under Gaussian copula dependence.
| 0 | 0 | 1 | 1 | 0 | 0 |
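A minimal sketch of a conditional Monte Carlo density estimator of this flavor, for the sum of two dependent variables: average the conditional density of one summand, given the other, evaluated at the required point. The bivariate normal example below (where the answer is known in closed form) is an illustrative assumption; the paper treats general copula dependence.

```python
# Unbiased density estimate of S = X1 + X2 via conditioning on X2:
# f_S(s) = E[ f_{X1|X2}(s - X2) ], checked against the exact density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho, M, s = 0.6, 100_000, 1.0

x2 = rng.normal(size=M)
# Conditional law of X1 given X2 = x2 is N(rho * x2, 1 - rho**2).
cond = stats.norm(loc=rho * x2, scale=np.sqrt(1 - rho**2))
estimate = cond.pdf(s - x2).mean()          # Monte Carlo density estimate

exact = stats.norm(scale=np.sqrt(2 + 2 * rho)).pdf(s)  # Var(S) = 2 + 2*rho
print(estimate, exact)  # the two should agree to a few decimal places
```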
Transferable neural networks for enhanced sampling of protein dynamics | Variational auto-encoder frameworks have demonstrated success in reducing
complex nonlinear dynamics in molecular simulation to a single nonlinear
embedding. In this work, we illustrate how this nonlinear latent embedding can
be used as a collective variable for enhanced sampling, and present a simple
modification that allows us to rapidly perform sampling in multiple related
systems. We first demonstrate our method is able to describe the effects of
force field changes in capped alanine dipeptide after learning a model using
AMBER99. We further provide a simple extension to variational dynamics encoders
that allows the model to be trained in a more efficient manner on larger
systems by encoding the outputs of a linear transformation using time-structure
based independent component analysis (tICA). Using this technique, we show how
such a model trained for one protein, the WW domain, can efficiently be
transferred to perform enhanced sampling on a related mutant protein, the GTT
mutation. This method shows promise for its ability to rapidly sample related
systems using a single transferable collective variable and is generally
applicable to sets of related simulations, enabling us to probe the effects of
variation in increasingly large systems of biophysical interest.
| 0 | 0 | 0 | 1 | 1 | 0 |
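For reference, the tICA transform mentioned in the abstract reduces to a generalized eigenvalue problem between the instantaneous and time-lagged covariance matrices. The sketch below is a bare-bones version under that standard formulation; regularization, lag selection, and the coupling to the variational dynamics encoder are omitted.

```python
# A minimal sketch of tICA: solve C_tau v = lambda C_0 v for the slowest
# linear modes of a trajectory.
import numpy as np
from scipy.linalg import eigh

def tica(X, lag=10, dim=2):
    """X: (T, n_features) trajectory; returns the top `dim` slow components."""
    X = X - X.mean(axis=0)
    C0 = X.T @ X / len(X)
    Ctau = X[:-lag].T @ X[lag:] / (len(X) - lag)
    Ctau = 0.5 * (Ctau + Ctau.T)        # symmetrize the lagged covariance
    vals, vecs = eigh(Ctau, C0)         # generalized eigenproblem
    order = np.argsort(vals)[::-1]      # slowest processes first
    return X @ vecs[:, order[:dim]]

traj = np.cumsum(np.random.default_rng(2).normal(size=(5000, 10)), axis=0)
print(tica(traj).shape)  # (5000, 2)
```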
Collisional stripping of planetary crusts | Geochemical studies of planetary accretion and evolution have invoked various
degrees of collisional erosion to explain differences in bulk composition
between planets and chondrites. Here we undertake a full, dynamical evaluation
of 'crustal stripping' during accretion and its key geochemical consequences.
We present smoothed particle hydrodynamics simulations of collisions between
differentiated rocky planetesimals and planetary embryos. We find that the
crust is preferentially lost relative to the mantle during impacts, and we have
developed a scaling law that approximates the mass of crust that remains in the
largest remnant. Using this scaling law and a recent set of N-body simulations,
we have estimated the maximum effect of crustal stripping on incompatible
element abundances during the accretion of planetary embryos. We find that on
average one third of the initial crust is stripped from embryos as they
accrete, which leads to a reduction of ~20% in the budgets of the
heat-producing elements if the stripped crust does not reaccrete. Erosion of crusts
can lead to non-chondritic ratios of incompatible elements, but the magnitude
of this effect depends sensitively on the details of the crust-forming melting
process. The Lu/Hf system is fractionated for a wide range of crustal formation
scenarios. Using eucrites (the products of planetesimal silicate melting,
thought to represent the crust of Vesta) as a guide to the Lu/Hf of
planetesimal crust partially lost during accretion, we predict the Earth could
evolve to a superchondritic 176-Hf/177-Hf (3-5 parts per ten thousand) at
present day. Such values are in keeping with compositional estimates of the
bulk Earth. Stripping of planetary crusts during accretion can lead to
detectable changes in bulk composition of lithophile elements, but the
fractionation is relatively subtle, and sensitive to the efficiency of
reaccretion.
| 0 | 1 | 0 | 0 | 0 | 0 |
N-GCN: Multi-scale Graph Convolution for Semi-supervised Node Classification | Graph Convolutional Networks (GCNs) have shown significant improvements in
semi-supervised learning on graph-structured data. Concurrently, unsupervised
learning of graph embeddings has benefited from the information contained in
random walks. In this paper, we propose a model: Network of GCNs (N-GCN), which
marries these two lines of work. At its core, N-GCN trains multiple instances
of GCNs over node pairs discovered at different distances in random walks, and
learns a combination of the instance outputs which optimizes the classification
objective. Our experiments show that our proposed N-GCN model improves
state-of-the-art baselines on all of the challenging node classification tasks
we consider: Cora, Citeseer, Pubmed, and PPI. In addition, our proposed method
has other desirable properties, including generalization to recently proposed
semi-supervised learning methods such as GraphSAGE, allowing us to propose
N-SAGE, and resilience to adversarial input perturbations.
| 1 | 0 | 0 | 1 | 0 | 0 |
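The following is a hedged sketch of the core N-GCN idea: one GCN-style instance per random-walk scale (successive powers of a normalized adjacency matrix), whose outputs are combined by learned weights to optimize classification. The single-linear-layer instances and the softmax combination are illustrative simplifications, not the authors' exact configuration.

```python
# A minimal sketch of multi-scale graph convolution in the N-GCN spirit.
import torch
import torch.nn as nn

class NGCN(nn.Module):
    def __init__(self, in_dim, n_classes, n_scales=3):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(in_dim, n_classes) for _ in range(n_scales))
        self.mix = nn.Parameter(torch.zeros(n_scales))  # learned combination

    def forward(self, A_hat, X):
        """A_hat: (N, N) normalized adjacency; X: (N, in_dim) features."""
        out, prop = 0.0, X
        weights = torch.softmax(self.mix, dim=0)
        for k, head in enumerate(self.heads):
            out = out + weights[k] * head(prop)  # instance at scale k
            prop = A_hat @ prop                  # move to the next power
        return out

A_hat = torch.eye(5)  # stand-in for a renormalized adjacency matrix
X = torch.randn(5, 16)
print(NGCN(16, 3)(A_hat, X).shape)  # torch.Size([5, 3])
```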
Sparse Kneser graphs are Hamiltonian | For integers $k\geq 1$ and $n\geq 2k+1$, the Kneser graph $K(n,k)$ is the
graph whose vertices are the $k$-element subsets of $\{1,\ldots,n\}$ and whose
edges connect pairs of subsets that are disjoint. The Kneser graphs of the form
$K(2k+1,k)$ are also known as the odd graphs. We settle an old problem due to
Meredith, Lloyd, and Biggs from the 1970s, proving that for every $k\geq 3$,
the odd graph $K(2k+1,k)$ has a Hamilton cycle. This and a known conditional
result due to Johnson imply that all Kneser graphs of the form $K(2k+2^a,k)$
with $k\geq 3$ and $a\geq 0$ have a Hamilton cycle. We also prove that
$K(2k+1,k)$ has at least $2^{2^{k-6}}$ distinct Hamilton cycles for $k\geq 6$.
Our proofs are based on a reduction of the Hamiltonicity problem in the odd
graph to the problem of finding a spanning tree in a suitably defined
hypergraph on Dyck words.
| 1 | 0 | 0 | 0 | 0 | 0 |
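For concreteness, the odd graph $K(2k+1,k)$ whose Hamiltonicity is established above can be constructed directly from its definition; the small sketch below builds $K(7,3)$ and confirms its basic parameters (35 vertices, 70 edges, 4-regular). Verifying Hamiltonicity computationally is a much harder search problem and is not attempted here.

```python
# Build the Kneser graph K(n, k) from its definition: vertices are
# k-subsets of {1,...,n}; edges join disjoint subsets.
from itertools import combinations

def kneser_graph(n, k):
    vertices = list(combinations(range(1, n + 1), k))
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if not set(u) & set(v)]
    return vertices, edges

V, E = kneser_graph(7, 3)   # the odd graph for k = 3
print(len(V), len(E))       # 35 vertices, 70 edges (4-regular)
```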
Fixed points of morphisms among binary generalized pseudostandard words | We introduce a class of fixed points of primitive morphisms among aperiodic
binary generalized pseudostandard words. We conjecture that this class contains
all fixed points of primitive morphisms among aperiodic binary generalized
pseudostandard words that are not standard Sturmian words.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dimension of the space of conics on Fano hypersurfaces | R. Beheshti showed that, for a smooth Fano hypersurface $X$ of degree $\leq
8$ over the complex number field $\mathbb{C}$, the dimension of the space of
lines lying in $X$ is equal to the expected dimension. We study the space of
conics on $X$. In this case, if $X$ contains some linear subvariety, then the
dimension of the space can be larger than the expected dimension. In this
paper, we show that, for a smooth Fano hypersurface $X$ of degree $\leq 6$ over
$\mathbb{C}$, and for an irreducible component $R$ of the space of conics lying
in $X$, if the $2$-plane spanned by a general conic of $R$ is not contained in
$X$, then the dimension of $R$ is equal to the expected dimension.
| 0 | 0 | 1 | 0 | 0 | 0 |
Automorphisms of Partially Commutative Groups III: Inversions and Transvections | The structure of a certain subgroup $S$ of the automorphism group of a
partially commutative group (RAAG) $G$ is described in detail: namely the
subgroup generated by inversions and elementary transvections. We define
admissible subsets of the generators of $G$, and show that $S$ is the subgroup
of automorphisms which fix all subgroups $\langle Y\rangle$ of $G$, for all
admissible subsets $Y$. A decomposition of $S$ as an iterated tower of
semi-direct products is given, and the structure of the factors of this
decomposition is described. The construction allows a presentation of $S$ to be
computed, from the commutation graph of $G$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Testing Global Constraints | Every Constraint Programming (CP) solver exposes a library of constraints for
solving combinatorial problems. In order to be useful, CP solvers need to be
bug-free. Therefore the testing of the solver is crucial to make developers and
users confident. We present a Java library allowing any JVM based solver to
test that the implementations of the individual constraints are correct. The
library can be used in a test suite executed in a continuous integration tool
or it can also be used to discover minimal instances violating some
properties (arc consistency, etc.) in order to help the developer identify
the origin of the problem using standard debuggers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Restricted Boltzmann Machines: Introduction and Review | The restricted Boltzmann machine is a network of stochastic units with
undirected interactions between pairs of visible and hidden units. This model
was popularized as a building block of deep learning architectures and has
continued to play an important role in applied and theoretical machine
learning. Restricted Boltzmann machines carry a rich structure, with
connections to geometry, applied algebra, probability, statistics, machine
learning, and other areas. The analysis of these models is attractive in its
own right and also as a platform to combine and generalize mathematical tools
for graphical models with hidden variables. This article gives an introduction
to the mathematical analysis of restricted Boltzmann machines, reviews recent
results on the geometry of the sets of probability distributions representable
by these models, and suggests a few directions for further investigation.
| 0 | 0 | 0 | 1 | 0 | 0 |
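As a concrete companion to this introduction, here is a minimal sketch of a binary RBM: its energy function and one step of block Gibbs sampling, exploiting the conditional independence of each layer given the other. Sizes and initialization are arbitrary illustrative choices.

```python
# A minimal binary RBM: energy function and one block Gibbs step.
import numpy as np

rng = np.random.default_rng(3)
n_v, n_h = 6, 4
W = 0.1 * rng.normal(size=(n_v, n_h))   # pairwise visible-hidden weights
b, c = np.zeros(n_v), np.zeros(n_h)     # visible and hidden biases

def energy(v, h):
    return -(v @ W @ h + b @ v + c @ h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    # Hidden units are conditionally independent given the visibles,
    # and vice versa -- this is what the bipartite restriction buys.
    h = (rng.random(n_h) < sigmoid(c + v @ W)).astype(float)
    v = (rng.random(n_v) < sigmoid(b + W @ h)).astype(float)
    return v, h

v = rng.integers(0, 2, size=n_v).astype(float)
v, h = gibbs_step(v)
print(energy(v, h))
```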
Learning and Visualizing Localized Geometric Features Using 3D-CNN: An Application to Manufacturability Analysis of Drilled Holes | 3D Convolutional Neural Networks (3D-CNN) have been used for object
recognition based on the voxelized shape of an object. However, interpreting
the decision-making process of these 3D-CNNs is still an infeasible task. In
this paper, we present a unique 3D-CNN based Gradient-weighted Class Activation
Mapping method (3D-GradCAM) for visual explanations of the distinct local
geometric features of interest within an object. To enable efficient learning
of 3D geometries, we augment the voxel data with surface normals of the object
boundary. We then train a 3D-CNN with this augmented data and identify the
local features critical for decision-making using 3D GradCAM. An application of
this feature identification framework is to recognize difficult-to-manufacture
drilled hole features in a complex CAD geometry. The framework can be extended
to identify difficult-to-manufacture features at multiple spatial scales
leading to a real-time design for manufacturability decision support system.
| 1 | 0 | 0 | 1 | 0 | 0 |
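A minimal sketch of the Grad-CAM computation that 3D-GradCAM adapts to voxel data: weight each feature map of a chosen convolutional layer by the spatially averaged gradient of the class score, sum, and rectify. The tiny 3D network below is a stand-in assumption, not the paper's architecture.

```python
# A minimal Grad-CAM sketch on a stand-in 3D CNN.
import torch
import torch.nn as nn

conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)
head = nn.Linear(8, 2)

x = torch.randn(1, 1, 16, 16, 16)            # a voxelized part, batch of 1
fmaps = conv(x)                               # (1, 8, 16, 16, 16)
fmaps.retain_grad()
score = head(fmaps.mean(dim=(2, 3, 4)))[0, 1]  # class-1 score
score.backward()

weights = fmaps.grad.mean(dim=(2, 3, 4))      # one weight per feature map
cam = torch.relu((weights[:, :, None, None, None] * fmaps).sum(dim=1))
print(cam.shape)  # torch.Size([1, 16, 16, 16]) -- a 3D localization map
```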
Note on regions containing eigenvalues of a matrix | By excluding some regions, in which each eigenvalue of a matrix is not
contained, from the \alpha\beta-type eigenvalue inclusion region provided by
Huang et al.(Electronic Journal of Linear Algebra, 15 (2006) 215-224), a new
eigenvalue inclusion region is given. And it is proved that the new region is
contained in the \alpha\beta-type eigenvalue inclusion region.
| 0 | 0 | 1 | 0 | 0 | 0 |
Abstract Family-based Model Checking using Modal Featured Transition Systems: Preservation of CTL* (Extended Version) | Variational systems allow effective building of many custom variants by using
features (configuration options) to mark the variable functionality. In many of
the applications, their quality assurance and formal verification are of
paramount importance. Family-based model checking allows simultaneous
verification of all variants of a variational system in a single run by
exploiting the commonalities between the variants. Yet, its computational cost
still greatly depends on the number of variants (often huge).
In this work, we show how to achieve efficient family-based model checking of
CTL* temporal properties using variability abstractions and off-the-shelf
(single-system) tools. We use variability abstractions for deriving abstract
family-based model checking, where the variability model of a variational
system is replaced with an abstract (smaller) version of it, called modal
featured transition system, which preserves the satisfaction of both universal
and existential temporal properties, as expressible in CTL*. Modal featured
transition systems contain two kinds of transitions, termed may and must
transitions, which are defined by the conservative (over-approximating)
abstractions and their dual (under-approximating) abstractions, respectively.
The variability abstractions can be combined with different partitionings of
the set of variants to infer suitable divide-and-conquer verification plans for
the variational system. We illustrate the practicality of this approach for
several variational systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nonparametric Shape-restricted Regression | We consider the problem of nonparametric regression under shape constraints.
The main examples include isotonic regression (with respect to any partial
order), unimodal/convex regression, additive shape-restricted regression, and
constrained single index model. We review some of the theoretical properties of
the least squares estimator (LSE) in these problems, emphasizing the
adaptive nature of the LSE. In particular, we study the behavior of the risk of
the LSE, and its pointwise limiting distribution theory, with special emphasis
on isotonic regression. We survey various methods for constructing pointwise
confidence intervals around these shape-restricted functions. We also briefly
discuss the computation of the LSE and indicate some open research problems and
future directions.
| 0 | 0 | 1 | 1 | 0 | 0 |
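As a concrete companion to the discussion of isotonic regression, the LSE under a monotonicity constraint can be computed exactly by the classical pool adjacent violators algorithm (PAVA), sketched below for the totally ordered case.

```python
# A minimal sketch of the pool adjacent violators algorithm (PAVA),
# which computes the isotonic least squares estimator.
import numpy as np

def pava(y):
    """Return the nondecreasing vector minimizing sum (y - m)^2."""
    # Each block stores [mean, size]; merge while monotonicity is violated.
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(n1 * m1 + n2 * m2) / (n1 + n2), n1 + n2])
    return np.concatenate([np.full(n, m) for m, n in blocks])

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))  # [1. 2.5 2.5 3.75 3.75]
```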
Emergence of Invariance and Disentanglement in Deep Representations | Using established principles from Statistics and Information Theory, we show
that invariance to nuisance factors in a deep neural network is equivalent to
information minimality of the learned representation, and that stacking layers
and injecting noise during training naturally bias the network towards learning
invariant representations. We then decompose the cross-entropy loss used during
training and highlight the presence of an inherent overfitting term. We propose
regularizing the loss by bounding such a term in two equivalent ways: One with
a Kullback-Leibler term, which relates to a PAC-Bayes perspective; the other
using the information in the weights as a measure of complexity of a learned
model, yielding a novel Information Bottleneck for the weights. Finally, we
show that invariance and independence of the components of the representation
learned by the network are bounded above and below by the information in the
weights, and therefore are implicitly optimized during training. The theory
enables us to quantify and predict sharp phase transitions between underfitting
and overfitting of random labels when using our regularized loss, which we
verify in experiments, and sheds light on the relation between the geometry of
the loss function, invariance properties of the learned representation, and
generalization error.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fracton topological order via coupled layers | In this work, we develop a coupled layer construction of fracton topological
orders in $d=3$ spatial dimensions. These topological phases have sub-extensive
topological ground-state degeneracy and possess excitations whose movement is
restricted in interesting ways. Our coupled layer approach is used to construct
several different fracton topological phases, both from stacked layers of
simple $d=2$ topological phases and from stacks of $d=3$ fracton topological
phases. This perspective allows us to shed light on the physics of the X-cube
model recently introduced by Vijay, Haah, and Fu, which we demonstrate can be
obtained as the strong-coupling limit of a coupled three-dimensional stack of
toric codes. We also construct two new models of fracton topological order: a
semionic generalization of the X-cube model, and a model obtained by coupling
together four interpenetrating X-cube models, which we dub the "Four Color Cube
model." The couplings considered lead to fracton topological orders via
mechanisms we dub "p-string condensation" and "p-membrane condensation," in
which strings or membranes built from particle excitations are driven to
condense. This allows the fusion properties, braiding statistics, and
ground-state degeneracy of the phases we construct to be easily studied in
terms of more familiar degrees of freedom. Our work raises the possibility of
studying fracton topological phases from within the framework of topological
quantum field theory, which may be useful for obtaining a more complete
understanding of such phases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Local Asymptotic Normality of Infinite-Dimensional Concave Extended Linear Models | We study local asymptotic normality of M-estimates of convex minimization in
an infinite dimensional parameter space. The objective function of M-estimates
is not necessary differentiable and is possibly subject to convex constraints.
In the above circumstance, narrow convergence with respect to uniform
convergence fails to hold, because of the strength of it's topology. A new
approach we propose to the lack-of-uniform-convergence is based on
Mosco-convergence that is weaker topology than uniform convergence. By applying
narrow convergence with respect to Mosco topology, we develop an
infinite-dimensional version of the convexity argument and provide a proof of a
local asymptotic normality. Our new technique also provides a proof of an
asymptotic distribution of the likelihood ratio test statistic defined on real
separable Hilbert spaces.
| 0 | 0 | 1 | 1 | 0 | 0 |
Polarization exchange of optical eigenmode pair in twisted-nematic Fabry-Pérot resonator | The polarization exchange effect in a twisted-nematic Fabry-Pérot resonator
is experimentally confirmed in the regimes of both uniform and
electric-field-deformed twisted structures. The polarization of output light in
the transmission peaks is shown to be linear rather than elliptical. The
polarization deflection from the nematic director grows from $0^\circ$ to
$90^\circ$ angle and exchanges the longitudinal and transverse directions.
Untwisting of a nematic by a voltage leads to the rotation of the polarization
plane of light passing through the resonator. The polarization exchange effect
allows using the investigated resonator as a spectral-selective linear
polarizer with the voltage-controlled rotation of the polarization plane.
| 0 | 1 | 0 | 0 | 0 | 0 |
Galaxy Protoclusters as Drivers of Cosmic Star-Formation History in the First 2 Gyr | Present-day clusters are massive halos containing mostly quiescent galaxies,
while distant protoclusters are extended structures containing numerous
star-forming galaxies. We investigate the implications of this fundamental
change in a cosmological context using a set of N-body simulations and
semi-analytic models. We find that the fraction of the cosmic volume occupied
by all (proto)clusters increases by nearly three orders of magnitude from z=0
to z=7. We show that (proto)cluster galaxies are an important, and even
dominant population at high redshift, as their expected contribution to the
cosmic star-formation rate density rises (from 1% at z=0) to 20% at z=2 and 50%
at z=10. Protoclusters thus provide a significant fraction of the cosmic
ionizing photons, and may have been crucial in driving the timing and topology
of cosmic reionization. Internally, the average history of cluster formation
can be described by three distinct phases: at z~10-5, galaxy growth in
protoclusters proceeded in an inside-out manner, with centrally dominant halos
that are among the most active regions in the Universe; at z~5-1.5, rapid star
formation occurred within the entire 10-20 Mpc structures, forming most of
their present-day stellar mass; at z<~1.5, violent gravitational collapse drove
these stellar contents into single cluster halos, largely erasing the details
of cluster galaxy formation due to relaxation and virialization. Our results
motivate observations of distant protoclusters in order to understand the
rapid, extended stellar growth during Cosmic Noon, and their connection to
reionization during Cosmic Dawn.
| 0 | 1 | 0 | 0 | 0 | 0 |
Universality and scaling laws in the cascading failure model with healing | Cascading failures may lead to dramatic collapse in interdependent networks,
where the breakdown takes place as a discontinuity of the order parameter. In
the cascading failure (CF) model with healing there is a control parameter
which at some value suppresses the discontinuity of the order parameter.
However, up to this value of the healing parameter the breakdown is a hybrid
transition, meaning that, besides this first order character, the transition
shows scaling too. In this paper we investigate the question of universality
related to the scaling behavior. Recently we showed that the hybrid phase
transition in the original CF model has two sets of exponents describing
respectively the order parameter and the cascade statistics, which are
connected by a scaling law. In the CF model with healing we measure these
exponents as a function of the healing parameter. We find two universality
classes: In the wide range below the critical healing value the exponents agree
with those of the original model, while above this value the model displays
trivial scaling, meaning that fluctuations follow the central limit theorem.
| 1 | 1 | 0 | 0 | 0 | 0 |
Vector-valued Jack Polynomials and Wavefunctions on the Torus | The Hamiltonian of the quantum Calogero-Sutherland model of $N$ identical
particles on the circle with $1/r^{2}$ interactions has eigenfunctions
consisting of Jack polynomials times the base state. By use of the generalized
Jack polynomials taking values in modules of the symmetric group and the matrix
solution of a system of linear differential equations, one constructs novel
eigenfunctions of the Hamiltonian. Like the usual wavefunctions, each
eigenfunction determines a symmetric probability density on the $N$-torus. The
construction applies to any irreducible representation of the symmetric group.
The methods depend on the theory of generalized Jack polynomials due to
Griffeth, and the Yang-Baxter graph approach of Luque and the author.
| 0 | 0 | 1 | 0 | 0 | 0 |
Wirtinger systems of generators of knot groups | We define the {\it Wirtinger number} of a link, an invariant closely related
to the meridional rank. The Wirtinger number is the minimum number of
generators of the fundamental group of the link complement over all meridional
presentations in which every relation is an iterated Wirtinger relation arising
in a diagram. We prove that the Wirtinger number of a link equals its bridge
number. This equality can be viewed as establishing a weak version of Cappell
and Shaneson's Meridional Rank Conjecture, and suggests a new approach to this
conjecture. Our result also leads to a combinatorial technique for obtaining
strong upper bounds on bridge numbers. This technique has so far allowed us to
add the bridge numbers of approximately 50,000 prime knots of up to 14
crossings to the knot table. As another application, we use the Wirtinger
number to show there exists a universal constant $C$ with the property that the
hyperbolic volume of a prime alternating link $L$ is bounded below by $C$ times
the bridge number of $L$.
| 0 | 0 | 1 | 0 | 0 | 0 |