title (string, length 7-239) | abstract (string, length 7-2.76k) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Addressing Item-Cold Start Problem in Recommendation Systems using Model Based Approach and Deep Learning | Traditional recommendation systems rely on past usage data in order to
generate new recommendations. Those approaches fail to generate sensible
recommendations for new users and items in the system due to missing
information about their past interactions. In this paper, we propose a solution
for successfully addressing the item cold-start problem, which uses a
model-based approach and recent advances in deep learning. In particular, we
use a latent factor model for recommendation, and predict the latent factors
from item descriptions using a convolutional neural network when they cannot be
obtained from usage data. Latent factors obtained by applying matrix
factorization to the available usage data are used as ground truth to train the
convolutional neural network. To create latent factor representations for the
new items, the convolutional neural network uses their textual descriptions.
The results from the experiments reveal that the proposed approach
significantly outperforms several baseline estimators.
| 1 | 0 | 0 | 1 | 0 | 0 |
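The pipeline this abstract describes (factorize usage data, then regress content features onto the item factors so unseen items can be scored) can be sketched numerically. The ALS solver and the linear text map below are simplified stand-ins for the paper's matrix factorization and CNN; all sizes and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary interaction matrix (50 users x 30 items); purely illustrative.
R = rng.integers(0, 2, size=(50, 30)).astype(float)

# Plain alternating-least-squares matrix factorization with k latent factors.
k, lam = 8, 0.1
U = rng.normal(scale=0.1, size=(50, k))
V = rng.normal(scale=0.1, size=(30, k))
for _ in range(20):
    U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = R.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

# The item factors V become regression targets for a content model.  The paper
# trains a CNN on item descriptions; here a ridge regression from
# bag-of-words-like features stands in for it.
X = rng.normal(size=(30, 100))                        # stand-in text features
W = np.linalg.solve(X.T @ X + lam * np.eye(100), X.T @ V)

x_new = rng.normal(size=100)   # description features of an unseen (cold) item
v_new = x_new @ W              # predicted latent factors for the new item
scores = U @ v_new             # recommendation scores for all users
```

Because the cold item is placed in the same latent space as the warm items, the usual dot-product scoring needs no modification.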
End-to-End Monaural Multi-speaker ASR System without Pretraining | Recently, end-to-end models have become a popular alternative to
traditional hybrid models in automatic speech recognition (ASR). The
multi-speaker speech separation and recognition task is a central task in the
cocktail party problem. In this paper, we present a state-of-the-art monaural
multi-speaker end-to-end automatic speech recognition model. In contrast to
previous studies on monaural multi-speaker speech recognition, this
end-to-end framework is trained to recognize multiple label sequences
completely from scratch. The system only requires the speech mixture and the
corresponding label sequences, without needing any indeterminate supervision
obtained from non-mixture speech or corresponding labels/alignments. Moreover,
we exploit an individual attention module for each separated speaker and
scheduled sampling to further improve performance. Finally, we evaluate the
proposed model on the 2-speaker mixed speech generated from the WSJ corpus and
on the wsj0-2mix dataset, a speech separation and recognition benchmark. The
experiments demonstrate that the proposed methods can improve the performance
of the end-to-end model in separating the overlapping speech and recognizing
the separated streams. The proposed model yields ~10.0% relative performance
gains in terms of CER and WER.
| 1 | 0 | 0 | 0 | 0 | 0 |
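The central difficulty in recognizing multiple label sequences from one mixture is the output-to-reference permutation ambiguity. A minimal sketch of the standard permutation-invariant criterion used for such systems follows; the 2x2 loss table is invented, and the paper's actual objective may differ in detail.

```python
import itertools
import numpy as np

def pit_loss(pair_losses):
    """Permutation-invariant training criterion: pair_losses[i][j] is the loss
    of matching model output i to reference j; the permutation with the minimal
    total loss resolves the output-label ambiguity."""
    n = len(pair_losses)
    best_total, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        total = sum(pair_losses[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

# 2-speaker example: output 0 really matches reference 1, and vice versa.
L = np.array([[5.0, 1.0],
              [0.5, 4.0]])
total, perm = pit_loss(L)
```

The exhaustive search over permutations is exact but factorial in the number of speakers, which is why it is only used for small speaker counts.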
Characterization of Calabi--Yau variations of Hodge structure over tube domains by characteristic forms | Sheng and Zuo's characteristic forms are invariants of a variation of Hodge
structure. We show that they characterize Gross's canonical variations of Hodge
structure of Calabi-Yau type over (Hermitian symmetric) tube domains.
| 0 | 0 | 1 | 0 | 0 | 0 |
Spectral Theory of Infinite Quantum Graphs | We investigate quantum graphs with infinitely many vertices and edges without
the common restriction on the geometry of the underlying metric graph that
there is a positive lower bound on the lengths of its edges. Our central result
is a close connection between spectral properties of a quantum graph and the
corresponding properties of a certain weighted discrete Laplacian on the
underlying discrete graph. Using this connection together with spectral theory
of (unbounded) discrete Laplacians on infinite graphs, we prove a number of new
results on spectral properties of quantum graphs. Namely, we prove several
self-adjointness results including a Gaffney type theorem. We investigate the
problem of lower semiboundedness, prove several spectral estimates (bounds for
the bottom of spectra and essential spectra of quantum graphs, CLR-type
estimates) and study spectral types.
| 0 | 0 | 1 | 0 | 0 | 0 |
A comment on "A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model", by I. Ciufolini et al | Recently, Ciufolini et al. reported on a test of the general relativistic
gravitomagnetic Lense-Thirring effect by analyzing about 3.5 years of laser
ranging data to the LAGEOS, LAGEOS II, LARES geodetic satellites orbiting the
Earth. By using the GRACE-based GGM05S Earth's global gravity model and a
linear combination of the nodes $\Omega$ of the three satellites designed to
remove the impact of errors in the first two even zonal harmonic coefficients
$J_2,~J_4$ of the multipolar expansion of the Newtonian part of the Earth's
gravitational potential, they claimed an overall accuracy of $5\%$ for the
Lense-Thirring caused node motion. We show that the scatter in the nominal
values of the uncancelled even zonals of degree $\ell = 6,~8,~10$ from some of
the most recent global gravity models does not yet allow one to unambiguously
and univocally reach the expected $\approx 1\%$ level, being as large as
$15\%~(\ell=6),~6\%~(\ell=8),~36\%~(\ell=10)$ for some pairs of models.
| 0 | 1 | 0 | 0 | 0 | 0 |
SWIFT Detection of a 65-Day X-ray Period from the Ultraluminous Pulsar NGC 7793 P13 | NGC 7793 P13 is an ultraluminous X-ray source harboring an accreting pulsar.
We report on the detection of a ~65 d period X-ray modulation with Swift
observations in this system. The modulation period found in the X-ray band is
P=65.05+/-0.10 d and the profile is asymmetric with a fast rise and a slower
decay. On the other hand, the u-band light curve collected by Swift UVOT
confirmed an optical modulation with a period of P=64.24+/-0.13 d. We explored
the phase evolution of the X-ray and optical periodicities and propose two
solutions. A superorbital modulation with a period of ~2,700-4,700 d probably
caused by the precession of a warped accretion disk is necessary to interpret
the phase drift of the optical data. We further discuss the implications if this
~65 d periodicity is caused by the superorbital modulation. Estimated from the
relationship between the spin-orbital and orbital-superorbital periods of known
disk-fed high-mass X-ray binaries, the orbital period of P13 is roughly
estimated as 3-7 d. In this case, an unknown mechanism with a much longer time
scale is needed to interpret the phase drift. Further studies on the stability
of these two periodicities with a long-term monitoring could help us to probe
their physical origins.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pushing Configuration-Interaction to the Limit: Towards Massively Parallel MCSCF Calculations | A new large-scale parallel multiconfigurational self-consistent field (MCSCF)
implementation in the open-source NWChem computational chemistry code is
presented. The generalized active space (GAS) approach is used to partition
large configuration interaction (CI) vectors and generate a sufficient number
of batches that can be distributed to the available nodes. Massively parallel
CI calculations with large active spaces can be treated. The performance of the
new parallel MCSCF implementation is presented for the chromium trimer and for
an active space of 20 electrons in 20 orbitals. Unprecedented CI calculations
with an active space of 22 electrons in 22 orbitals for the pentacene systems
were performed and a single CI iteration calculation with an active space of 24
electrons in 24 orbitals for the chromium tetramer was possible. The chromium
tetramer corresponds to a CI expansion of nearly one trillion Slater
determinants (914 058 513 424) and is the largest conventional CI calculation
attempted to date.
| 0 | 1 | 0 | 0 | 0 | 0 |
Outer Regions of the Milky Way | With the start of the Gaia era, the time has come to address the major
challenge of deriving the star formation history and evolution of the disk of
our Milky Way. Here we review our present knowledge of the outer regions of the
Milky Way disk population. Its stellar content, its structure and its dynamical
and chemical evolution are summarized, focussing on our lack of understanding
from both an observational and a theoretical viewpoint. We describe the
unprecedented data that Gaia and the upcoming ground-based spectroscopic
surveys will provide in the next decade. In more detail, we quantify the
expected accuracy in position, velocity and astrophysical parameters of some of
the key tracers of the stellar populations in the outer Galactic disk. Some
insights on the future capability of these surveys to answer crucial and
fundamental issues are discussed, such as the mechanisms driving the spiral
arms and the warp formation. Our Galaxy, the Milky Way, is our cosmological
laboratory for understanding the process of formation and evolution of disk
galaxies. What we learn in the next decades will be naturally transferred to
the extragalactic domain.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multi-Objective Learning and Mask-Based Post-Processing for Deep Neural Network Based Speech Enhancement | We propose a multi-objective framework to learn both secondary targets not
directly related to the intended task of speech enhancement (SE) and the
primary target of the clean log-power spectra (LPS) features to be used
directly for constructing the enhanced speech signals. In deep neural network
(DNN) based SE we introduce an auxiliary structure to learn secondary
continuous features, such as mel-frequency cepstral coefficients (MFCCs), and
categorical information, such as the ideal binary mask (IBM), and integrate it
into the original DNN architecture for joint optimization of all the
parameters. This joint estimation scheme imposes additional constraints not
available in the direct prediction of LPS, and potentially improves the
learning of the primary target. Furthermore, the learned secondary information
as a byproduct can be used for other purposes, e.g., the IBM-based
post-processing in this work. A series of experiments show that joint LPS and
MFCC learning improves the SE performance, and IBM-based post-processing
further enhances listening quality of the reconstructed speech.
| 1 | 0 | 0 | 0 | 0 | 0 |
Latent variable approach to diarization of audio recordings using ad-hoc randomly placed mobile devices | Diarization of audio recordings from ad-hoc mobile devices using spatial
information is considered in this paper. A two-channel synchronous recording is
assumed for each mobile device, which is used to compute directional statistics
separately at each device in a frame-wise manner. The recordings across the
mobile devices are asynchronous, but a coarse synchronization is performed by
aligning the signals using acoustic events or the real-time clock. Directional
statistics computed for all the devices are then modeled jointly using a
Dirichlet mixture model, and the posterior probability over the mixture
components is used to derive the diarization information. Experiments on
real-life recordings using mobile phones show a diarization error rate of less
than
14%.
| 1 | 0 | 0 | 0 | 0 | 0 |
A multiple attribute model resolves a conflict between additive and multiplicative models of incentive salience | A model of incentive salience as a function of stimulus value and
interoceptive state has been previously proposed. In that model, the function
differs depending on whether the stimulus is appetitive or aversive; it is
multiplicative for appetitive stimuli and additive for aversive stimuli. The
authors argued it was necessary to capture data on how extreme changes in salt
appetite could move evaluation of an extreme salt solution from negative to
positive. We demonstrate that arbitrarily varying this function is unnecessary,
and that a multiplicative function is sufficient if one assumes the incentive
salience function for an incentive (such as salt) comprises multiple
stimulus features and multiple interoceptive signals. We show that it is also
unnecessary when considering the dual-structure approach-aversive nature of the
reward system, which results in separate weighting of appetitive and aversive
stimulus features.
| 0 | 0 | 0 | 0 | 1 | 0 |
Perturbation theory approaches to Anderson and Many-Body Localization: some lecture notes | These are lecture notes based on three lectures given by Antonello
Scardicchio at the December 2016 Topical School on Many-Body-Localization
organized by the Statistical Physics Group of the Institute Jean Lamour in
Nancy. They were compiled and put in a coherent logical form by Thimothée
Thiery.
| 0 | 1 | 0 | 0 | 0 | 0 |
A measurement of the z = 0 UV background from H$α$ fluorescence | We report the detection of extended Halpha emission from the tip of the HI
disk of the nearby edge-on galaxy UGC 7321, observed with the Multi Unit
Spectroscopic Explorer (MUSE) instrument at the Very Large Telescope. The
Halpha surface brightness fades rapidly where the HI column density drops below
N(HI) = 10^19 cm^-2 , consistent with fluorescence arising at the ionisation
front from gas that is photoionized by the extragalactic ultraviolet background
(UVB). The surface brightness measured at this location is (1.2 +/- 0.5)x10^-19
erg/s/cm^2/arcsec^2, where the error is mostly systematic and results from the
proximity of the signal to the edge of the MUSE field of view, and from the
presence of a sky line next to the redshifted Halpha wavelength. By combining
the Halpha and the HI 21 cm maps with a radiative transfer calculation of an
exponential disk illuminated by the UVB, we derive a value for the HI
photoionization rate of Gamma ~ (6-8)x10^-14 1/s . This value is consistent
with transmission statistics of the Lyalpha forest and with recent models of a
UVB which is dominated by quasars.
| 0 | 1 | 0 | 0 | 0 | 0 |
The landscape of the spiked tensor model | We consider the problem of estimating a large rank-one tensor ${\boldsymbol
u}^{\otimes k}\in({\mathbb R}^{n})^{\otimes k}$, $k\ge 3$ in Gaussian noise.
Earlier work characterized a critical signal-to-noise ratio $\lambda_{Bayes}=
O(1)$ above which an ideal estimator achieves strictly positive correlation
with the unknown vector of interest. Remarkably, no polynomial-time algorithm
is known that achieves this goal unless $\lambda\ge C n^{(k-2)/4}$, and even
powerful semidefinite programming relaxations appear to fail for $1\ll
\lambda\ll n^{(k-2)/4}$.
In order to elucidate this behavior, we consider the maximum likelihood
estimator, which requires maximizing a degree-$k$ homogeneous polynomial over
the unit sphere in $n$ dimensions. We compute the expected number of critical
points and local maxima of this objective function and show that it is
exponential in the dimension $n$, and give exact formulas for the exponential
growth rate. We show that (for $\lambda$ larger than a constant) critical
points are either very close to the unknown vector ${\boldsymbol u}$, or are
confined in a band of width $\Theta(\lambda^{-1/(k-1)})$ around the maximum
circle that is orthogonal to ${\boldsymbol u}$. For local maxima, this band
shrinks to be of size $\Theta(\lambda^{-1/(k-2)})$. These `uninformative' local
maxima are likely to cause the failure of optimization algorithms.
| 0 | 0 | 1 | 1 | 0 | 0 |
System Identification of a Multi-timescale Adaptive Threshold Neuronal Model | In this paper, the parameter estimation problem for a multi-timescale
adaptive threshold (MAT) neuronal model is investigated. By manipulating the
system dynamics, which comprise a non-resetting leaky integrator coupled
with an adaptive threshold, the threshold voltage can be obtained as a
realizable model that is linear in the unknown parameters. This linearly
parametrized realizable model is then utilized inside a prediction error based
framework to identify the threshold parameters with the purpose of predicting
single neuron precise firing times. The iterative linear least squares
estimation scheme is evaluated using both synthetic data obtained from an exact
model as well as experimental data obtained from in vitro rat somatosensory
cortical neurons. Results show the ability of this approach to fit the MAT
model to different types of fluctuating reference data. The performance of the
proposed approach is seen to be superior when compared with existing
identification approaches used by the neuronal community.
| 0 | 0 | 0 | 0 | 1 | 0 |
Free deterministic equivalent Z-scores of compound Wishart models: A goodness of fit test of 2DARMA models | We introduce a new method to assess the goodness of fit of parameter
estimation for compound Wishart models. Our method is based on the free
deterministic equivalent Z-score, which we introduce in this paper.
Furthermore, an application to the two-dimensional autoregressive
moving-average model is provided. Our proposed method generalizes a statistical
hypothesis test for the one-dimensional moving-average model based on
fluctuations of real compound Wishart matrices, a recent result by Hasegawa,
Sakuma and Yoshida.
| 0 | 0 | 1 | 1 | 0 | 0 |
Numerical algorithms for mean exit time and escape probability of stochastic systems with asymmetric Lévy motion | For non-Gaussian stochastic dynamical systems, mean exit time and escape
probability are important deterministic quantities, which can be obtained from
integro-differential (nonlocal) equations. We develop an efficient and
convergent numerical method for the mean first exit time and escape probability
for stochastic systems with an asymmetric Lévy motion, and analyze the
properties of the solutions of the nonlocal equations. We also investigate the
effects of different system factors on the mean exit time and escape
probability, including the skewness parameter, the size of the domain, the
drift term and the intensity of Gaussian and non-Gaussian noises. We find that
the behavior of the mean exit time and the escape probability differs
dramatically at the boundary of the domain when the index of stability crosses
the critical value of one.
| 0 | 0 | 1 | 0 | 0 | 0 |
Canonical Models and the Complexity of Modal Team Logic | We study modal team logic MTL, the team-semantical extension of modal logic
ML closed under Boolean negation. Its fragments, such as modal dependence,
independence, and inclusion logic, are well-understood. However, due to the
unrestricted Boolean negation, the satisfiability problem of full MTL has been
notoriously resistant to a complexity-theoretic classification. In our
approach, we introduce the notion of canonical models into the team-semantical
setting. By construction of such a model, we reduce the satisfiability problem
of MTL to simple model checking. Afterwards, we show that this approach is
optimal in the sense that MTL-formulas can efficiently enforce canonicity.
Furthermore, to capture these results in terms of complexity, we introduce a
non-elementary complexity class, TOWER(poly), and prove that it contains
satisfiability and validity of MTL as complete problems. We also prove that the
fragments of MTL with bounded modal depth are complete for the levels of the
elementary hierarchy (with polynomially many alternations). The respective
hardness results hold for both strict and lax semantics of the modal operators
and the splitting disjunction, and also over the class of reflexive and
transitive frames.
| 1 | 0 | 0 | 0 | 0 | 0 |
CoT: Cooperative Training for Generative Modeling of Discrete Data | We propose Cooperative Training (CoT) for training generative models that
measure a tractable density for discrete data. CoT coordinately trains a
generator $G$ and an auxiliary predictive mediator $M$. The training target of
$M$ is to estimate a mixture density of the learned distribution $G$ and the
target distribution $P$, and that of $G$ is to minimize the Jensen-Shannon
divergence estimated through $M$. CoT achieves independent success without the
necessity of pre-training via Maximum Likelihood Estimation or involving
high-variance algorithms like REINFORCE. This low-variance algorithm is
theoretically proved to be unbiased for both generative and predictive tasks.
We also theoretically and empirically show the superiority of CoT over most
previous algorithms in terms of generative quality and diversity, predictive
generalization ability and computational cost.
| 0 | 0 | 0 | 1 | 0 | 0 |
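The training targets can be made concrete on a toy categorical distribution: an ideal mediator $M$ matches the mixture $(P+G)/2$, and the generator's signal is the Jensen-Shannon divergence estimated through that mixture. The distributions below are invented for illustration.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two categorical distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_via_mediator(p, g):
    """Jensen-Shannon divergence computed through the ideal mediator
    m = (p + g) / 2, mirroring how CoT's generator receives its training
    signal from M."""
    m = 0.5 * (p + g)
    return 0.5 * kl(p, m) + 0.5 * kl(g, m)

P = np.array([0.7, 0.2, 0.1])   # target distribution (illustrative)
G = np.array([1/3, 1/3, 1/3])   # current generator distribution
loss = js_via_mediator(P, G)    # what G descends; zero iff G matches P
```

In the actual algorithm $M$ is a learned network rather than the exact mixture, so the divergence above is only estimated, but the zero-loss condition ($G = P$) is the same.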
Iterated function systems consisting of phi-max-contractions have attractors | We associate to each iterated function system consisting of
phi-max-contractions an operator (on the space of continuous functions from the
shift space on the metric space corresponding to the system) having a unique
fixed point whose image turns out to be the attractor of the system. Moreover,
we prove that the unique fixed point of the operator associated to an iterated
function system consisting of convex contractions is the canonical projection
from the shift space on the attractor of the system.
| 0 | 0 | 1 | 0 | 0 | 0 |
Decay Rates of the Solutions to the Thermoelastic Bresse System of Types I and III | In this paper, we study the energy decay for the thermoelastic Bresse system
in the whole line with two different dissipative mechanisms, given by heat
conduction (Types I and III). We prove that the decay rate of the solutions is
very slow. More precisely, we show that the solutions decay at the rate of
$(1+t)^{-\frac{1}{8}}$ in the $L^2$-norm, whenever the initial data belongs to
$L^1(R) \cap H^{s}(R)$ for a suitable $s$. The wave speeds of propagation have
influence on the decay rate with respect to the regularity of the initial data.
This phenomenon is known as \textit{regularity-loss}. The main tool used to
prove our results is the energy method in the Fourier space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Baryogenesis at a Lepton-Number-Breaking Phase Transition | We study a scenario in which the baryon asymmetry of the universe arises from
a cosmological phase transition where lepton-number is spontaneously broken. If
the phase transition is first order, a lepton-number asymmetry can arise at the
bubble wall, through dynamics similar to electroweak baryogenesis, but
involving right-handed neutrinos. In addition to the usual neutrinoless double
beta decay in nuclear experiments, the model may be probed through a variety of
"baryogenesis by-products," which include a stochastic background of
gravitational waves created by the colliding bubbles. Depending on the model,
other aspects may include a network of topological defects that produce their
own gravitational waves, an additional contribution to dark radiation, and a
light pseudo-Goldstone boson (majoron) as a dark matter candidate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Contact resistance between two REBCO tapes under load and load-cycles | No-insulation (NI) REBCO magnets have many advantages. They are
self-protecting and therefore do not need quench detection and protection,
which can be very challenging in a high-Tc superconducting magnet. Moreover, by
removing insulation and allowing thinner copper stabilizer, NI REBCO magnets
have significantly higher engineering current density and higher mechanical
strength. On the other hand, NI REBCO magnets have drawbacks of long magnet
charging time and high field-ramp-loss. In principle, these drawbacks can be
mitigated by managing the turn-to-turn contact resistivity (Rc). Evidently the
first step toward managing Rc is to establish a reliable method of accurate Rc
measurement. In this paper, we present experimental Rc measurements of REBCO
tapes as a function of mechanical load up to 144 MPa and load cycles up to 14
times. We found that Rc is in the range of 26-100 uOhm-cm2; it decreases with
increasing pressure and gradually increases with the number of load cycles.
results are discussed in the framework of Holm's electric contact theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stochastic Backward Euler: An Implicit Gradient Descent Algorithm for $k$-means Clustering | In this paper, we propose an implicit gradient descent algorithm for the
classic $k$-means problem. The implicit gradient step or backward Euler is
solved via stochastic fixed-point iteration, in which we randomly sample a
mini-batch gradient in every iteration. It is the average of the fixed-point
trajectory that is carried over to the next gradient step. We draw connections
between the proposed stochastic backward Euler and the recent entropy
stochastic gradient descent (Entropy-SGD) for improving the training of deep
neural networks. Numerical experiments on various synthetic and real datasets
show that the proposed algorithm provides better clustering results compared
to $k$-means algorithms, in the sense that it decreases the objective function
and is much more robust to initialization.
| 1 | 0 | 0 | 1 | 0 | 0 |
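A minimal sketch of the implicit step, as read from the abstract (mini-batch gradients inside a fixed-point iteration whose trajectory average is carried forward), might look as follows; the step size, batch size, iteration counts and toy data are all invented for illustration.

```python
import numpy as np

def kmeans_grad(C, X):
    """Gradient of (1/n) * sum_i min_j ||x_i - c_j||^2 with respect to the
    cluster centers C."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    G = np.zeros_like(C)
    for j in range(len(C)):
        pts = X[assign == j]
        if len(pts):
            G[j] = 2.0 * (len(pts) * C[j] - pts.sum(axis=0)) / len(X)
    return G

def stochastic_backward_euler_step(C, X, rng, eta=0.5, inner=25, batch=64):
    """One implicit step C_new = C - eta * grad f(C_new): the fixed point is
    approached with mini-batch gradients, and the average of the fixed-point
    trajectory (not its last iterate) is carried over to the next step."""
    Z, avg = C.copy(), np.zeros_like(C)
    for _ in range(inner):
        idx = rng.choice(len(X), size=min(batch, len(X)), replace=False)
        Z = C - eta * kmeans_grad(Z, X[idx])
        avg += Z / inner
    return avg

# Toy run: two well-separated 2-D blobs, two centers.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
C0 = rng.normal(0.0, 1.0, (2, 2))
C = C0.copy()
for _ in range(30):
    C = stochastic_backward_euler_step(C, X, rng)
```

Each outer step moves the centers only a fraction of the way toward the assigned means, which is one intuition for why the implicit step tolerates larger step sizes than explicit gradient descent.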
The Frobenius number for sequences of triangular and tetrahedral numbers | We compute the Frobenius number for sequences of triangular and tetrahedral
numbers. In addition, we study some properties of the numerical semigroups
associated to those sequences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Information-entropic analysis of Korteweg--de Vries solitons in the quark-gluon plasma | The propagation of solitary waves of baryonic density
perturbations, governed by the Korteweg--de Vries equation in a mean-field
quark-gluon plasma model, is investigated from the point of view of information
theory. A recently proposed continuous logarithmic measure of information,
called configurational entropy, is used to derive the soliton width defining
the pulse for which the informational content of the soliton spatial profile is
most compressed, in Shannon's sense.
| 0 | 1 | 0 | 0 | 0 | 0 |
The photon identification loophole in EPRB experiments: computer models with single-wing selection | Recent Einstein-Podolsky-Rosen-Bohm experiments [M. Giustina et al. Phys.
Rev. Lett. 115, 250401 (2015); L. K. Shalm et al. Phys. Rev. Lett. 115, 250402
(2015)] that claim to be loophole free are scrutinized and shown to suffer
from a photon identification loophole. The combination of a digital computer and
discrete-event simulation is used to construct a minimal but faithful model of
the most perfected realization of these laboratory experiments. In contrast to
prior simulations, all photon selections are strictly made, as they are in the
actual experiments, at the local station and no other "post-selection" is
involved. The simulation results demonstrate that a manifestly non-quantum
model that identifies photons in the same local manner as in these experiments
can produce correlations that are in excellent agreement with those of the
quantum theoretical description of the corresponding thought experiment, in
conflict with Bell's theorem. The failure of Bell's theorem is possible because
of our recognition of the photon identification loophole. Such identification
measurement-procedures are necessarily included in all actual experiments but
are not included in the theory of Bell and his followers.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tilings of the plane with unit area triangles of bounded diameter | There exist tilings of the plane with pairwise noncongruent triangles of
equal area and bounded perimeter. Analogously, there exist tilings with
triangles of equal perimeter, the areas of which are bounded from below by a
positive constant. This solves a problem of Nandakumar.
| 1 | 0 | 1 | 0 | 0 | 0 |
The Cauchy problem for the radially symmetric homogeneous Boltzmann equation with Shubin class initial datum and Gelfand-Shilov smoothing effect | In this paper, we study the Cauchy problem for the radially symmetric
homogeneous non-cutoff Boltzmann equation with Maxwellian molecules, where the
initial datum belongs to a Shubin space of negative index, which can be
characterized by the spectral decomposition of the harmonic oscillator. Shubin
spaces of negative index contain measures. Based on this spectral
decomposition, we construct a weak solution with Shubin class initial datum;
we also prove that the Cauchy problem enjoys a Gelfand-Shilov smoothing effect,
meaning that the smoothing properties are the same as the Cauchy problem
defined by the evolution equation associated to a fractional harmonic
oscillator.
| 0 | 0 | 1 | 0 | 0 | 0 |
Network Dissection: Quantifying Interpretability of Deep Visual Representations | We propose a general framework called Network Dissection for quantifying the
interpretability of latent representations of CNNs by evaluating the alignment
between individual hidden units and a set of semantic concepts. Given any CNN
model, the proposed method draws on a broad data set of visual concepts to
score the semantics of hidden units at each intermediate convolutional layer.
The units with semantics are given labels across a range of objects, parts,
scenes, textures, materials, and colors. We use the proposed method to test the
hypothesis that interpretability of units is equivalent to random linear
combinations of units, then we apply our method to compare the latent
representations of various networks when trained to solve different supervised
and self-supervised training tasks. We further analyze the effect of training
iterations, compare networks trained with different initializations, examine
the impact of network depth and width, and measure the effect of dropout and
batch normalization on the interpretability of deep visual representations. We
demonstrate that the proposed method can shed light on characteristics of CNN
models and training methods that go beyond measurements of their discriminative
power.
| 1 | 0 | 0 | 0 | 0 | 0 |
Systole inequalities for arithmetic locally symmetric spaces | In this paper we study the systole growth of arithmetic locally symmetric
spaces along congruence covers and show that this growth is at least logarithmic
in volume. This generalizes previous work of Buser and Sarnak as well as Katz,
Schaps and Vishne where the case of compact hyperbolic 2- and 3-manifolds was
considered.
| 0 | 0 | 1 | 0 | 0 | 0 |
Featured Weighted Automata | A featured transition system is a transition system in which the transitions
are annotated with feature expressions: Boolean expressions on a finite number
of given features. Depending on its feature expression, each individual
transition can be enabled when some features are present, and disabled for
other sets of features. The behavior of a featured transition system hence
depends on a given set of features. There are algorithms for featured
transition systems which can check their properties for all sets of features at
once, for example for LTL or CTL properties.
Here we introduce a model of featured weighted automata which combines
featured transition systems and (semiring-) weighted automata. We show that
methods and techniques from weighted automata extend to featured weighted
automata and devise algorithms to compute quantitative properties of featured
weighted automata for all sets of features at once. We show applications to
minimum reachability and to energy properties.
| 1 | 0 | 0 | 0 | 0 | 0 |
Schrödinger model and Stratonovich-Weyl correspondence for Heisenberg motion groups | We introduce a Schrödinger model for the unitary irreducible
representations of a Heisenberg motion group and we show that the usual Weyl
quantization then provides a Stratonovich-Weyl correspondence.
| 0 | 0 | 1 | 0 | 0 | 0 |
Estimation in emerging epidemics: biases and remedies | When analysing new emerging infectious disease outbreaks one typically has
observational data over a limited period of time and several parameters to
estimate, such as growth rate, R0, serial or generation interval distribution,
latent and incubation times or case fatality rates. Also parameters describing
the temporal relations between appearance of symptoms, notification, death and
recovery/discharge will be of interest. These parameters form the basis for
predicting the future outbreak, planning preventive measures and monitoring the
progress of the disease. We study the problem of making inference during the
emerging phase of an outbreak and point out potential sources of bias related
to contact tracing, replacing generation times by serial intervals, multiple
potential infectors or truncation effects amplified by exponential growth.
These biases directly affect the estimation of e.g. the generation time
distribution and the case fatality rate, but can then propagate to other
estimates, e.g. of R0 and growth rate. Many of the traditionally used
estimation methods in disease epidemiology may suffer from these biases when
applied to the emerging disease outbreak situation. We show how to avoid these
biases based on proper statistical modelling. We illustrate the theory by
numerical examples and simulations based on the recent 2014-15 Ebola outbreak
to quantify possible estimation biases, which may be up to 20% underestimation
of R0 if the epidemic growth rate is fitted to observed data or, conversely,
up to 62% overestimation of the growth rate if the correct R0 is used in
conjunction with the Euler-Lotka equation.
| 0 | 0 | 0 | 1 | 0 | 0 |
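The Euler-Lotka equation invoked in the abstract above is the standard identity linking the epidemic growth rate $r$ and $R_0$ through the generation-time density $g$ (a textbook relation, not a result specific to the paper's estimators):

```latex
\frac{1}{R_0} = \int_0^\infty e^{-rt}\, g(t)\, dt
```

Biases in the estimated generation-time distribution $g$ therefore propagate directly to $R_0$ (for fixed $r$) or to $r$ (for fixed $R_0$), which is the mechanism behind the 20% and 62% figures quoted.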
Long-Term Evolution of Genetic Programming Populations | We evolve binary mux-6 trees for up to 100000 generations evolving some
programs with more than a hundred million nodes. Our unbounded Long-Term
Evolution Experiment LTEE GP appears not to evolve building blocks but does
suggests a limit to bloat. We do see periods of tens even hundreds of
generations where the population is 100 percent functionally converged. The
distribution of tree sizes is not as predicted by theory.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Multi-Scale Analysis of 27,000 Urban Street Networks: Every US City, Town, Urbanized Area, and Zillow Neighborhood | OpenStreetMap offers a valuable source of worldwide geospatial data useful to
urban researchers. This study uses the OSMnx software to automatically download
and analyze 27,000 US street networks from OpenStreetMap at metropolitan,
municipal, and neighborhood scales - namely, every US city and town, census
urbanized area, and Zillow-defined neighborhood. It presents empirical findings
on US urban form and street network characteristics, emphasizing measures
relevant to graph theory, transportation, urban design, and morphology such as
structure, connectedness, density, centrality, and resilience. In the past,
street network data acquisition and processing have been challenging and ad
hoc. This study illustrates the use of OSMnx and OpenStreetMap to consistently
conduct street network analysis with extremely large sample sizes, with clearly
defined network definitions and extents for reproducibility, and using
nonplanar, directed graphs. These street networks and measures data have been
shared in a public repository for other researchers to use.
| 1 | 1 | 0 | 0 | 0 | 0 |
Towards Detection of Exoplanetary Rings Via Transit Photometry: Methodology and a Possible Candidate | Detection of a planetary ring of exoplanets remains as one of the most
attractive but challenging goals in the field. We present a methodology of a
systematic search for exoplanetary rings via transit photometry of long-period
planets. The methodology relies on a precise integration scheme we develop to
compute a transit light curve of a ringed planet. We apply the methodology to
89 long-period planet candidates from the Kepler data so as to estimate, and/or
set upper limits on, the parameters of possible rings. While a majority of our
samples do not have a sufficiently good signal-to-noise ratio for meaningful
constraints on ring parameters, we find that six systems with a higher
signal-to-noise ratio are inconsistent with the presence of a ring larger than
1.5 times the planetary radius assuming a grazing orbit and a tilted ring.
Furthermore, we identify five preliminary candidate systems whose light curves
exhibit ring-like features. After removing four false positives due to the
contamination from nearby stars, we identify KIC 10403228 as a reasonable
candidate for a ringed planet. A systematic parameter fit of its light curve
with a ringed planet model indicates two possible solutions corresponding to a
Saturn-like planet with a tilted ring. There also remain other two possible
scenarios accounting for the data; a circumstellar disk and a hierarchical
triple. Due to large uncertain factors, we cannot choose one specific model
among the three.
| 0 | 1 | 0 | 0 | 0 | 0 |
SiMon: Simulation Monitor for Computational Astrophysics | Scientific discovery via numerical simulations is important in modern
astrophysics. This relatively new branch of astrophysics has become possible
due to the development of reliable numerical algorithms and the high
performance of modern computing technologies. These enable the analysis of
large collections of observational data and the acquisition of new data via
simulations at unprecedented accuracy and resolution. Ideally, simulations run
until they reach some pre-determined termination condition, but often other
factors cause extensive numerical approaches to break down at an earlier stage.
In those cases, processes tend to be interrupted due to unexpected events in
the software or the hardware. In those cases, the scientist handles the
interrupt manually, which is time-consuming and prone to errors. We present the
Simulation Monitor (SiMon) to automatize the farming of large and extensive
simulation processes. Our method is light-weight, it fully automates the entire
workflow management, operates concurrently across multiple platforms and can be
installed in user space. Inspired by the process of crop farming, we perceive
each simulation as a crop in the field and running simulation becomes analogous
to growing crops. With the development of SiMon we relax the technical aspects
of simulation management. The initial package was developed for extensive
parameter searchers in numerical simulations, but it turns out to work equally
well for automating the computational processing and reduction of observational
data reduction.
| 0 | 1 | 0 | 0 | 0 | 0 |
Asymptotic and bootstrap tests for the dimension of the non-Gaussian subspace | Dimension reduction is often a preliminary step in the analysis of large data
sets. The so-called non-Gaussian component analysis searches for a projection
onto the non-Gaussian part of the data, and it is then important to know the
correct dimension of the non-Gaussian signal subspace. In this paper we develop
asymptotic as well as bootstrap tests for the dimension based on the popular
fourth order blind identification (FOBI) method.
| 0 | 0 | 1 | 1 | 0 | 0 |
Pseudoconcavity of flag domains: The method of supporting cycles | A flag domain of a real form $G_0$ of a complex semisimple Lie group $G$ is
an open $G_0$-orbit $D$ in a (compact) $G$-flag manifold. In the usual way one
reduces to the case where $G_0$ is simple. It is known that if $D$ possesses
non-constant holomorphic functions, then it is the product of a compact flag
manifold and a Hermitian symmetric bounded domain. This pseudoconvex case is
rare in the geography of flag domains. Here it is shown that otherwise, i.e.,
when $\mathcal{O}(D)\cong\mathbb{C}$, the flag domain $D$ is pseudoconcave. In
a rather general setting the degree of the pseudoconcavity is estimated in
terms of root invariants. This estimate is explicitly computed for domains in
certain Grassmannians.
| 0 | 0 | 1 | 0 | 0 | 0 |
Rule Formats for Nominal Process Calculi | The nominal transition systems (NTSs) of Parrow et al. describe the
operational semantics of nominal process calculi. We study NTSs in terms of the
nominal residual transition systems (NRTSs) that we introduce. We provide rule
formats for the specifications of NRTSs that ensure that the associated NRTS is
an NTS and apply them to the operational specifications of the early and late
pi-calculus. We also explore alternative specifications of the NTSs in which we
allow residuals of abstraction sort, and introduce translations between the
systems with and without residuals of abstraction sort. Our study stems from
the Nominal SOS of Cimini et al. and from earlier works in nominal sets and
nominal logic by Gabbay, Pitts and their collaborators.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tilings with noncongruent triangles | We solve a problem of R. Nandakumar by proving that there is no tiling of the
plane with pairwise noncongruent triangles of equal area and equal perimeter.
We also show that no convex polygon with more than three sides can be tiled
with finitely many triangles such that no pair of them share a full side.
| 1 | 0 | 1 | 0 | 0 | 0 |
Phase limitations of Zames-Falb multipliers | Phase limitations of both continuous-time and discrete-time Zames-Falb
multipliers and their relation with the Kalman conjecture are analysed. A phase
limitation for continuous-time multipliers given by Megretski is generalised
and its applicability is clarified; its relation to the Kalman conjecture is
illustrated with a classical example from the literature. It is demonstrated
that there exist fourth-order plants where the existence of a suitable
Zames-Falb multiplier can be discarded and for which simulations show unstable
behavior. A novel phase-limitation for discrete-time Zames-Falb multipliers is
developed. Its application is demonstrated with a second-order counterexample
to the Kalman conjecture. Finally, the discrete-time limitation is used to show
that there can be no direct counterpart of the off-axis circle criterion in the
discrete-time domain.
| 1 | 0 | 1 | 0 | 0 | 0 |
Fragmentation of vertically stratified gaseous layers: monolithic or coalescence-driven collapse | We investigate, using 3D hydrodynamic simulations, the fragmentation of
pressure-confined, vertically stratified, self-gravitating gaseous layers. The
confining pressure is either thermal pressure acting on both surfaces, or
thermal pressure acting on one surface and ram-pressure on the other. In the
linear regime of fragmentation, the dispersion relation we obtain agrees well
with that derived by Elmegreen & Elmegreen (1978), and consequently deviates
from the dispersion relations based on the thin shell approximation (Vishniac
1983) or pressure assisted gravitational instability (Wünsch et al. 2010). In
the non-linear regime, the relative importance of the confining pressure to the
self-gravity is a crucial parameter controlling the qualitative course of
fragmentation. When confinement of the layer is dominated by external pressure,
self- gravitating condensations are delivered by a two-stage process: first the
layer fragments into gravitationally bound but stable clumps, and then these
clumps coalesce until they assemble enough mass to collapse. In contrast, when
external pressure makes a small contribution to confinement of the layer, the
layer fragments monolithically into gravitationally unstable clumps and there
is no coalescence. This dichotomy persists whether the external pressure is
thermal or ram. We apply these results to fragments forming in a shell swept up
by an expanding H II region, and find that, unless the swept up gas is quite
hot or the surrounding medium has low density, the fragments have low-mass ( ~<
3 M_Sun ), and therefore they are unlikely to spawn stars that are sufficiently
massive to promote sequential self-propagating star formation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs | We propose a simple and generic layer formulation that extends the properties
of convolutional layers to any domain that can be described by a graph. Namely,
we use the support of its adjacency matrix to design learnable weight sharing
filters able to exploit the underlying structure of signals in the same fashion
as for images. The proposed formulation makes it possible to learn the weights
of the filter as well as a scheme that controls how they are shared across the
graph. We perform validation experiments with image datasets and show that
these filters offer performances comparable with convolutional ones.
| 1 | 0 | 0 | 0 | 0 | 0 |
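The core idea of the abstract above, learnable filter weights masked by the adjacency support together with a scheme controlling how weights are shared across edges, can be sketched in a few lines of NumPy. Everything here (the path-graph adjacency, the random sharing scheme `S`, the array names) is an illustrative assumption, not the authors' exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3                      # number of nodes, number of shared weights

# Path graph with self-loops: the support of A determines which
# neighbors each node's filter is allowed to combine.
A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

S = rng.integers(0, k, size=(n, n))   # sharing scheme: which of the k
                                      # shared weights each edge uses
w = rng.normal(size=k)                # the k learnable shared weights

W = w[S] * A                          # expand the scheme into a full filter,
                                      # masked by the adjacency support
x = rng.normal(size=n)                # a signal living on the graph's nodes
y = W @ x                             # one linear graph-filtering step
```

In a learned setting both `w` and the scheme `S` would be optimized; on a grid graph with a translation-invariant `S`, this construction reduces to an ordinary convolution, which is the sense in which it extends convolutional layers to arbitrary graphs.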
Galaxies with Shells in the Illustris Simulation: Metallicity Signatures | Stellar shells are low surface brightness arcs of overdense stellar regions,
extending to large galactocentric distances. In a companion study, we
identified 39 shell galaxies in a sample of 220 massive ellipticals
($\mathrm{M}_{\mathrm{200crit}}>6\times10^{12}\,\mathrm{M}_\odot$) from the
Illustris cosmological simulation. We used stellar history catalogs to trace
the history of each individual star particle inside the shell substructures,
and we found that shells in high-mass galaxies form through mergers with
massive satellites (stellar mass ratios $\mu_{\mathrm{stars}}\gtrsim1:10$).
Using the same sample of shell galaxies, the current study extends the stellar
history catalogs in order to investigate the metallicity of stellar shells
around massive galaxies. Our results indicate that outer shells are often
more metal-rich than the surrounding stellar material in a galaxy's halo. For a
galaxy with two different satellites forming $z=0$ shells, we find a
significant difference in the metallicity of the shells produced by each
progenitor. We also find that shell galaxies have higher mass-weighted
logarithmic metallicities ([Z/H]) at $2$-$4\,\mathrm{R}_{\mathrm{eff}}$
compared to galaxies without shells. Our results indicate that observations
comparing the metallicities of stars in tidal features, such as shells, to the
average metallicities in the stellar halo can provide information about the
assembly histories of galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Runtime Verification of Temporal Properties over Out-of-order Data Streams | We present a monitoring approach for verifying systems at runtime. Our
approach targets systems whose components communicate with the monitors over
unreliable channels, where messages can be delayed or lost. In contrast to
prior works, whose property specification languages are limited to
propositional temporal logics, our approach handles an extension of the
real-time logic MTL with freeze quantifiers for reasoning about data values. We
present its underlying theory based on a new three-valued semantics that is
well suited to soundly and completely reason online about event streams in the
presence of message delay or loss. We also evaluate our approach
experimentally. Our prototype implementation processes hundreds of events per
second in settings where messages are received out of order.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning | In recent years, deep learning algorithms have become increasingly more
prominent for their unparalleled ability to automatically learn discriminant
features from large amounts of data. However, within the field of
electromyography-based gesture recognition, deep learning algorithms are seldom
employed as they require an unreasonable amount of effort from a single person
to generate tens of thousands of examples.
This work's hypothesis is that general, informative features can be learned
from the large amounts of data generated by aggregating the signals of multiple
users, thus reducing the recording burden while enhancing gesture recognition.
Consequently, this paper proposes applying transfer learning on aggregated data
from multiple users, while leveraging the capacity of deep learning algorithms
to learn discriminant features from large datasets. Two datasets comprised of
19 and 17 able-bodied participants respectively (the first one is employed for
pre-training) were recorded for this work, using the Myo Armband. A third Myo
Armband dataset was taken from the NinaPro database and is comprised of 10
able-bodied participants. Three different deep learning networks employing
three different modalities as input (raw EMG, Spectrograms and Continuous
Wavelet Transform (CWT)) are tested on the second and third dataset. The
proposed transfer learning scheme is shown to systematically and significantly
enhance the performance for all three networks on the two datasets, achieving
an offline accuracy of 98.31% for 7 gestures over 17 participants for the
CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw
EMG-based ConvNet. Finally, a use-case study employing eight able-bodied
participants suggests that real-time feedback allows users to adapt their
muscle activation strategy which reduces the degradation in accuracy normally
experienced over time.
| 0 | 0 | 0 | 1 | 0 | 0 |
Adaptive Non-uniform Compressive Sampling for Time-varying Signals | In this paper, adaptive non-uniform compressive sampling (ANCS) of
time-varying signals, which are sparse in a proper basis, is introduced. ANCS
employs the measurements of previous time steps to distribute the sensing
energy among coefficients more intelligently. To this aim, a Bayesian inference
method is proposed that does not require any prior knowledge of importance
levels of coefficients or sparsity of the signal. Our numerical simulations
show that ANCS is able to achieve the desired non-uniform recovery of the
signal. Moreover, if the signal is sparse in canonical basis, ANCS can reduce
the number of required measurements significantly.
| 1 | 0 | 0 | 1 | 0 | 0 |
Enabling Reasoning with LegalRuleML | In order to automate the verification process, regulatory rules written in
natural language need to be translated into a format that machines can
understand. However, none of the existing formalisms can fully represent the
elements that appear in legal norms. For instance, most of these formalisms do
not provide features to capture the behavior of deontic effects, which is an
important aspect in automated compliance checking. This paper presents an
approach for transforming legal norms represented using LegalRuleML to a
variant of Modal Defeasible Logic (and vice versa) such that a legal statement
represented using LegalRuleML can be transformed into a machine-readable format
that can be understood and reasoned about depending upon the client's
preferences.
| 1 | 0 | 0 | 0 | 0 | 0 |
Combinatorial distance geometry in normed spaces | We survey problems and results from combinatorial geometry in normed spaces,
concentrating on problems that involve distances. These include various
properties of unit-distance graphs, minimum-distance graphs, diameter graphs,
as well as minimum spanning trees and Steiner minimum trees. In particular, we
discuss translative kissing (or Hadwiger) numbers, equilateral sets, and the
Borsuk problem in normed spaces. We show how to use the angular measure of
Peter Brass to prove various statements about Hadwiger and blocking numbers of
convex bodies in the plane, including some new results. We also include some
new results on thin cones and their application to distinct distances and other
combinatorial problems for normed spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
When is Network Lasso Accurate: The Vector Case | A recently proposed learning algorithm for massive network-structured data
sets (big data over networks) is the network Lasso (nLasso), which extends the
well-known Lasso estimator from sparse models to network-structured datasets.
Efficient implementations of the nLasso have been presented using modern convex
optimization methods. In this paper, we provide sufficient conditions on the
network structure and available label information such that nLasso accurately
learns a vector-valued graph signal (representing label information) from the
information provided by the labels of a few data points.
| 1 | 0 | 0 | 0 | 0 | 0 |
Surrogate Aided Unsupervised Recovery of Sparse Signals in Single Index Models for Binary Outcomes | We consider the recovery of regression coefficients, denoted by
$\boldsymbol{\beta}_0$, for a single index model (SIM) relating a binary
outcome $Y$ to a set of possibly high dimensional covariates $\boldsymbol{X}$,
based on a large but 'unlabeled' dataset $\mathcal{U}$, with $Y$ never
observed. On $\mathcal{U}$, we fully observe $\boldsymbol{X}$ and additionally,
a surrogate $S$ which, while not being strongly predictive of $Y$ throughout
the entirety of its support, can forecast it with high accuracy when it assumes
extreme values. Such datasets arise naturally in modern studies involving large
databases such as electronic medical records (EMR) where $Y$, unlike
$(\boldsymbol{X}, S)$, is difficult and/or expensive to obtain. In EMR studies,
an example of $Y$ and $S$ would be the true disease phenotype and the count of
the associated diagnostic codes respectively. Assuming another SIM for $S$
given $\boldsymbol{X}$, we show that under sparsity assumptions, we can recover
$\boldsymbol{\beta}_0$ up to a scalar multiple by simply fitting a least squares LASSO
estimator to the subset of the observed data on $(\boldsymbol{X}, S)$
restricted to the extreme sets of $S$, with $Y$ imputed using the surrogacy of
$S$. We obtain sharp finite sample performance bounds for our estimator,
including deterministic deviation bounds and probabilistic guarantees. We
demonstrate the effectiveness of our approach through multiple simulation
studies, as well as by application to real data from an EMR study conducted at
the Partners HealthCare Systems.
| 0 | 0 | 1 | 1 | 0 | 0 |
Superior lattice thermal conductance of single layer borophene | By way of the nonequilibrium Green's function simulations and first
principles calculations, we report that borophene, a single layer of boron
atoms that was fabricated recently, possesses an extraordinarily high lattice
thermal conductance in the ballistic transport regime, which even exceeds
graphene. In addition to the obvious reasons of light mass and strong bonding
of boron atoms, the superior thermal conductance is mainly rooted in its strong
structural anisotropy and unusual phonon transmission. For low-frequency
phonons, the phonon transmission within borophene is nearly isotropic, similar
to that of graphene. For high frequency phonons, however, the transmission is
one dimensional, that is, all the phonons travel in one direction, giving rise
to its ultrahigh thermal conductance. The present study suggests that borophene
is promising for applications in efficient heat dissipation and thermal
management, and also an ideal material for revealing fundamentals of
dimensionality effects on phonon transport in the ballistic regime.
| 0 | 1 | 0 | 0 | 0 | 0 |
The liar paradox is a real problem | The liar paradox is widely seen as not a serious problem. I try to explain
why this view is mistaken.
| 0 | 0 | 1 | 0 | 0 | 0 |
High-dimensional ABC | This Chapter, "High-dimensional ABC", is to appear in the forthcoming
Handbook of Approximate Bayesian Computation (2018). It details the main ideas
and concepts behind extending ABC methods to higher dimensions, with supporting
examples and illustrations.
| 0 | 0 | 0 | 1 | 0 | 0 |
Automating Release of Deep Link APIs for Android Applications | Unlike the Web where each web page has a global URL to reach, a specific
"content page" inside a mobile app cannot be opened unless the user explores
the app with several operations from the landing page. Recently, deep links
have been advocated by major companies to enable targeting and opening a
specific page of an app externally with an accessible uniform resource
identifier (URI). To empirically investigate the state of the practice on
adopting deep links, in this article, we present the largest empirical study of
deep links over 20,000 Android apps, and find that deep links are not widely
adopted among current Android apps and that non-trivial manual efforts are
required for app developers to support deep links. To address such an issue, we
propose the Aladdin approach and supporting tool to release deep links to
access arbitrary locations in existing apps. Aladdin instantiates our novel
cooperative framework to synergically combine static analysis and dynamic
analysis while minimally engaging developers to provide inputs to the framework
for automation, without requiring any coding efforts or additional deployment
efforts. We evaluate Aladdin with popular apps and demonstrate its
effectiveness and performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Refractive index measurements of single, spherical cells using digital holographic microscopy | In this chapter, we introduce digital holographic microscopy (DHM) as a
marker-free method to determine the refractive index of single, spherical cells
in suspension. The refractive index is a conclusive measure in a biological
context. Cell conditions, such as differentiation or infection, are known to
yield significant changes in the refractive index. Furthermore, the refractive
index of biological tissue determines the way it interacts with light. Besides
the biological relevance of this interaction in the retina, a lot of methods
used in biology, including microscopy, rely on light-tissue or light-cell
interactions. Hence, determining the refractive index of cells using DHM is
valuable in many biological applications. This chapter covers the main topics
which are important for the implementation of DHM: setup, sample preparation
and analysis. First, the optical setup is described in detail including notes
and suggestions for the implementation. Following that, a protocol for the
sample and measurement preparation is explained. In the analysis section, an
algorithm for the determination of the quantitative phase map is described.
Subsequently, all intermediate steps for the calculation of the refractive
index of suspended cells are presented, exploiting their spherical shape. In
the last section, a discussion of possible extensions to the setup, further
measurement configurations, and additional analysis methods is given.
Throughout this chapter, we describe a simple, robust, and thus easily
reproducible implementation of DHM. The different possibilities for extensions
show the diverse fields of application for this technique.
| 0 | 0 | 0 | 0 | 1 | 0 |
Tuple-oriented Compression for Large-scale Mini-batch Stochastic Gradient Descent | Data compression is a popular technique for improving the efficiency of data
processing workloads such as SQL queries and more recently, machine learning
(ML) with classical batch gradient methods. But the efficacy of such ideas for
mini-batch stochastic gradient descent (MGD), arguably the workhorse algorithm
of modern ML, is an open question. MGD's unique data access pattern renders
prior art, including those designed for batch gradient methods, less effective.
We fill this crucial research gap by proposing a new lossless compression
scheme we call tuple-oriented compression (TOC) that is inspired by an unlikely
source, the string/text compression scheme Lempel-Ziv-Welch, but tailored to
MGD in a way that preserves tuple boundaries within mini-batches. We then
present a suite of novel compressed matrix operation execution techniques
tailored to the TOC compression scheme that operate directly over the
compressed data representation and avoid decompression overheads. An extensive
empirical evaluation with real-world datasets shows that TOC consistently
achieves substantial compression ratios of up to 51x and reduces runtimes for
MGD workloads by up to 10.2x in popular ML systems.
| 1 | 0 | 0 | 1 | 0 | 0 |
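The TOC scheme above is described as inspired by Lempel-Ziv-Welch. The generic LZW encoder it builds on can be sketched in a few lines (a textbook LZW, not the tuple-boundary-preserving TOC scheme itself):

```python
def lzw_encode(data: bytes) -> list[int]:
    """Textbook LZW: emit dictionary indices of the longest known prefixes,
    growing the phrase dictionary as the input is scanned."""
    table = {bytes([i]): i for i in range(256)}  # start with all single bytes
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                     # keep extending the current phrase
        else:
            out.append(table[w])       # emit the longest known prefix
            table[wc] = len(table)     # register the new phrase
            w = bytes([b])
    if w:
        out.append(table[w])
    return out
```

For example, `lzw_encode(b"aaaa")` yields `[97, 256, 97]`: the second output symbol reuses the phrase `aa` learned from the first two bytes. TOC's contribution is adapting this kind of phrase dictionary so that compressed mini-batches can be operated on directly, without decompression.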
Program Language Translation Using a Grammar-Driven Tree-to-Tree Model | The task of translating between programming languages differs from the
challenge of translating natural languages in that programming languages are
designed with a far more rigid set of structural and grammatical rules.
Previous work has used a tree-to-tree encoder/decoder model to take advantage
of the inherent tree structure of programs during translation. Neural decoders,
however, by default do not exploit known grammar rules of the target language.
In this paper, we describe a tree decoder that leverages knowledge of a
language's grammar rules to exclusively generate syntactically correct
programs. We find that this grammar-based tree-to-tree model outperforms the
state of the art tree-to-tree model in translating between two programming
languages on a previously used synthetic task.
| 1 | 0 | 0 | 1 | 0 | 0 |
In-Place Initializable Arrays | Initializing all elements of an array to a specified value is a basic
operation that frequently appears in numerous algorithms and programs.
Initializable arrays are abstract arrays that support initialization as well as
reading and writing of any element of the array in less than linear time
proportional to the length of the array. On the word RAM model with $w$ bits
word size, we propose an in-place algorithm using only 1 extra bit which
implements an initializable array of length $N$, each of whose elements can
store an $\ell \in O(w)$ bit value, and supports all operations in constant worst
case time. We also show that our algorithm is not only time optimal but also
space optimal. Our algorithm significantly improves upon the previous best
algorithm [Navarro, CSUR 2014], which uses $N + \ell + o(N)$ extra bits to support
all operations in constant worst case time.
Moreover, for the special case that $\ell \ge 2 \lceil \log N \rceil$ and $\ell
\in O(w)$, we also propose an algorithm such that each element of the
initializable array can store $2^\ell$ normal states and one optional state, which uses
$\ell + \lceil \log N \rceil + 1$ extra bits and supports all operations in
constant worst case time.
| 1 | 0 | 0 | 0 | 0 | 0 |
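The classic folklore way to get O(1) fill/read/write, at the cost of about 3N extra words rather than the single extra bit achieved in the abstract above, is the "stack of certificates" trick. This is our own illustrative Python sketch of that folklore structure, not the paper's in-place algorithm:

```python
class InitArray:
    """Initializable array: fill(), read(), write() all O(1) worst case.
    Uses ~3N extra words, unlike the paper's 1-extra-bit scheme."""

    def __init__(self, n, init_val=0):
        self.init_val = init_val
        self.val = [0] * n    # stored values (may be stale after a fill)
        self.when = [0] * n   # position in `stack` certifying val[i] is fresh
        self.stack = []       # indices actually written since the last fill

    def fill(self, v):
        self.init_val = v
        self.stack = []       # O(1): invalidates every certificate at once

    def _fresh(self, i):
        k = self.when[i]
        return k < len(self.stack) and self.stack[k] == i

    def read(self, i):
        return self.val[i] if self._fresh(i) else self.init_val

    def write(self, i, v):
        if not self._fresh(i):
            self.when[i] = len(self.stack)
            self.stack.append(i)
        self.val[i] = v
```

A slot's stored value counts only if its `when` pointer and the `stack` vouch for each other; emptying the stack therefore "re-initializes" every slot without touching `val`.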
A Simple Exponential Family Framework for Zero-Shot Learning | We present a simple generative framework for learning to predict previously
unseen classes, based on estimating class-attribute-gated class-conditional
distributions. We model each class-conditional distribution as an exponential
family distribution and the parameters of the distribution of each seen/unseen
class are defined as functions of the respective observed class attributes.
These functions can be learned using only the seen class data and can be used
to predict the parameters of the class-conditional distribution of each unseen
class. Unlike most existing methods for zero-shot learning that represent
classes as fixed embeddings in some vector space, our generative model
naturally represents each class as a probability distribution. It is simple to
implement and also allows leveraging additional unlabeled data from unseen
classes to improve the estimates of their class-conditional distributions using
transductive/semi-supervised learning. Moreover, it extends seamlessly to
few-shot learning by easily updating these distributions when provided with a
small number of additional labelled examples from unseen classes. Through a
comprehensive set of experiments on several benchmark data sets, we demonstrate
the efficacy of our framework.
| 1 | 0 | 0 | 1 | 0 | 0 |
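The gist of the abstract above, with Gaussians as the exponential-family member, fits in a few lines: class-conditional parameters are functions of class attributes, so an unseen class gets a density without a single training example. Everything below (the toy attribute vectors, the hand-fixed linear map `V`, unit variance) is an illustrative assumption, not the paper's exact parameterization:

```python
import numpy as np

# Map from attribute space to Gaussian means; in the paper this kind of
# function is learned from seen-class data only (here it is fixed by hand).
V = np.array([[2.0, 0.0],
              [0.0, 2.0],
              [1.0, -1.0]])

# Class attributes: "cat" plays a seen class, "zebra" an unseen one.
attrs = {"cat": np.array([1.0, 0.0]), "zebra": np.array([0.0, 1.0])}
means = {c: V @ a for c, a in attrs.items()}   # parameters from attributes

def log_density(x, mu):
    """Isotropic unit-variance Gaussian log-density, up to a constant."""
    return -0.5 * np.sum((x - mu) ** 2)

# Classify a point near the *unseen* class mean: no zebra examples were
# ever needed to build zebra's class-conditional distribution.
x = np.array([0.1, 1.9, -0.9])
pred = max(means, key=lambda c: log_density(x, means[c]))
```

Because each class is a full distribution rather than a fixed embedding, unlabeled unseen-class points can also be folded back in to re-estimate these parameters, which is the transductive extension the abstract mentions.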
Modeling Hormesis Using a Non-Monotonic Copula Method | This paper presents a probabilistic method for capturing non-monotonic
behavior under the biphasic dose-response regime observed in many biological
systems experiencing different types of stress. The proposed method is based on
the rolling-pin method introduced earlier to estimate highly nonlinear and
non-monotonic joint probability distributions from continuous domain data. We
show that the proposed method outperforms the conventional parametric methods
in terms of the error (namely RMSE) and it needs fewer parameters to be
estimated a priori, while offering high flexibility. The application and
performance of the proposed method are shown through an example.
| 0 | 0 | 0 | 1 | 0 | 0 |
Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks | Mobile-Edge Computing (MEC) is an emerging paradigm that provides a capillary
distribution of cloud computing capabilities to the edge of the wireless access
network, enabling rich services and applications in close proximity to the end
users. In this article, a MEC-enabled multi-cell wireless network is considered
where each Base Station (BS) is equipped with a MEC server that can assist
mobile users in executing computation-intensive tasks via task offloading. The
problem of Joint Task Offloading and Resource Allocation (JTORA) is studied in
order to maximize the users' task offloading gains, which are measured by the
reduction in task completion time and energy consumption. The considered
problem is formulated as a Mixed Integer Non-linear Program (MINLP) that
involves jointly optimizing the task offloading decision, uplink transmission
power of mobile users, and computing resource allocation at the MEC servers.
Due to the NP-hardness of this problem, solving for optimal solution is
difficult and impractical for a large-scale network. To overcome this drawback,
our approach is to decompose the original problem into (i) a Resource
Allocation (RA) problem with fixed task offloading decision and (ii) a Task
Offloading (TO) problem that optimizes the optimal-value function corresponding
to the RA problem. We address the RA problem using convex and quasi-convex
optimization techniques, and propose a novel heuristic algorithm for the TO
problem that achieves a suboptimal solution in polynomial time. Numerical
simulation results show that our algorithm performs closely to the optimal
solution and that it significantly improves the users' offloading utility over
traditional approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Sum Over Topological Sectors and $θ$ in the 2+1-Dimensional $\mathbb{C}\mathbb{P}^1$ $σ$-Model | We discuss the three spacetime dimensional $\mathbb{C}\mathbb{P}^N$ model and
specialize to the $\mathbb{C}\mathbb{P}^1$ model. Because of the Hopf map
$\pi_3(\mathbb{C}\mathbb{P}^1)=\mathbb{Z}$ one might try to couple the model to
a periodic $\theta$ parameter. However, we argue that only the values
$\theta=0$ and $\theta=\pi$ are consistent. For these values the Skyrmions in
the model are bosons and fermions respectively, rather than being anyons. We
also extend the model by coupling it to a topological quantum field theory,
such that the Skyrmions are anyons. We use techniques from geometry and
topology to construct the $\theta =\pi $ theory on arbitrary 3-manifolds, and
use recent results about invertible field theories to prove that no other
values of $\theta $ satisfy the necessary locality.
| 0 | 1 | 1 | 0 | 0 | 0 |
Continued Kinematic and Photometric Investigations of Hierarchical Solar-Type Multiple Star Systems | We observed 15 of the solar-type binaries within 67 pc of the Sun previously
observed by the Robo-AO system in the visible, with the PHARO near-IR camera
and the PALM-3000 adaptive optics system on the 5 m Hale telescope. The
physical status of the binaries is confirmed through common proper motion and
detection of orbital motion. In the process we detected a new candidate
companion to HIP 95309. We also resolved the primary of HIP 110626 into a close
binary making that system a triple. These detections increase the completeness
of the multiplicity survey of the solar-type stars within 67 pc of the Sun.
Combining our observations of HIP 103455 with archival astrometric measurements
and RV measurements, we are able to compute the first orbit of HIP 103455
showing that the binary has a 68 yr period. We place the components on a
color-magnitude diagram and discuss each multiple system individually.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Note on Multiparty Communication Complexity and the Hales-Jewett Theorem | For integers $n$ and $k$, the density Hales-Jewett number $c_{n,k}$ is
defined as the maximal size of a subset of $[k]^n$ that contains no
combinatorial line. We show that for $k \ge 3$ the density Hales-Jewett number
$c_{n,k}$ is equal to the maximal size of a cylinder intersection in the
problem $Part_{n,k}$ of testing whether $k$ subsets of $[n]$ form a partition.
It follows that the communication complexity, in the Number On the Forehead
(NOF) model, of $Part_{n,k}$, is equal to the minimal size of a partition of
$[k]^n$ into subsets that do not contain a combinatorial line. Thus, the bound
in \cite{chattopadhyay2007languages} on $Part_{n,k}$ using the Hales-Jewett
theorem is in fact tight, and the density Hales-Jewett number can be thought of
as a quantity in communication complexity. This gives a new angle to this well
studied quantity.
As a simple application we prove a lower bound on $c_{n,k}$, similar to the
lower bound in \cite{polymath2010moser} which is roughly $c_{n,k}/k^n \ge
\exp(-O(\log n)^{1/\lceil \log_2 k\rceil})$. This lower bound follows from a
protocol for $Part_{n,k}$. It is interesting to better understand the
communication complexity of $Part_{n,k}$ as this will also lead to the better
understanding of the Hales-Jewett number. The main purpose of this note is to
motivate this study.
| 1 | 0 | 0 | 0 | 0 | 0 |
First Results from Using Game Refinement Measure and Learning Coefficient in Scrabble | This paper explores the entertainment experience and learning experience in
Scrabble. It proposes a new measure from the educational point of view, which
we call learning coefficient, based on the balance between the learner's skill
and the challenge in Scrabble. Scrabble variants, generated using different
board and dictionary sizes, are analyzed with two measures: game refinement
and learning coefficient. The results show that 13x13 Scrabble yields the best
entertainment experience and 15x15 (standard) Scrabble with 4% of original
dictionary size yields the most effective environment for language learners.
Moreover, 15x15 Scrabble with 10% of original dictionary size has a good
balance between entertainment and learning experience.
| 1 | 0 | 0 | 0 | 0 | 0 |
Structured Local Optima in Sparse Blind Deconvolution | Blind deconvolution is a ubiquitous problem of recovering two unknown signals
from their convolution. Unfortunately, this is an ill-posed problem in general.
This paper focuses on the {\em short and sparse} blind deconvolution problem,
where one unknown signal is short and the other is sparsely and
randomly supported. This variant captures the structure of the unknown signals
in several important applications. We assume the short signal to have unit
$\ell^2$ norm and cast the blind deconvolution problem as a nonconvex
optimization problem over the sphere. We demonstrate that (i) in a certain
region of the sphere, every local optimum is close to some shift truncation of
the ground truth, and (ii) for a generic short signal of length $k$, when the
sparsity of activation signal $\theta\lesssim k^{-2/3}$ and number of
measurements $m \gtrsim \mathrm{poly}(k)$, a simple initialization method together with a
descent algorithm which escapes strict saddle points recovers a near shift
truncation of the ground truth kernel.
| 0 | 0 | 0 | 1 | 0 | 0 |
The empirical Christoffel function with applications in data analysis | We illustrate the potential applications in machine learning of the
Christoffel function, or more precisely, its empirical counterpart associated
with a counting measure uniformly supported on a finite set of points. Firstly,
we provide a thresholding scheme which allows one to approximate the support of
a measure from a finite subset of its moments with strong asymptotic
guarantees.
Secondly, we provide a consistency result which relates the empirical
Christoffel function and its population counterpart in the limit of large
samples. Finally, we illustrate the relevance of our results on simulated and
real world datasets for several applications in statistics and machine
learning: (a) density and support estimation from finite samples, (b) outlier
and novelty detection and (c) affine matching.
| 1 | 0 | 0 | 0 | 0 | 0 |
Around power law for PageRank components in Buckley-Osthus model of web graph | In this paper we investigate the power law for PageRank components in the
Buckley-Osthus model of the web graph. We compare different numerical methods
for PageRank calculation and, with the best of them, carry out extensive
numerical experiments. These experiments confirm the power-law hypothesis.
Finally, we discuss a realistic model of web ranking based on the classical
PageRank approach.
| 1 | 0 | 1 | 0 | 0 | 0 |
Cash-settled options for wholesale electricity markets | Wholesale electricity market designs in practice do not provide the market
participants with adequate mechanisms to hedge their financial risks. Demanders
and suppliers will likely face even greater risks with the deepening
penetration of variable renewable resources like wind and solar. This paper
explores the design of a centralized cash-settled call option market to
mitigate such risks. A cash-settled call option is a financial instrument that
allows its holder the right to claim a monetary reward equal to the positive
difference between the real-time price of an underlying commodity and a
pre-negotiated strike price for an upfront fee. Through an example, we
illustrate that a bilateral call option can reduce the payment volatility of
market participants. Then, we design a centralized clearing mechanism for call
options that generalizes the bilateral trade. We illustrate through an example
how the centralized clearing mechanism generalizes the bilateral trade.
Finally, the effect of risk preference of the market participants, as well as
some generalizations, are discussed.
| 1 | 0 | 1 | 0 | 0 | 0 |
Power in High-Dimensional Testing Problems | Fan et al. (2015) recently introduced a remarkable method for increasing
asymptotic power of tests in high-dimensional testing problems. If applicable
to a given test, their power enhancement principle leads to an improved test
that has the same asymptotic size, uniformly non-inferior asymptotic power, and
is consistent against a strictly broader range of alternatives than the
initially given test. We study under which conditions this method can be
applied and show the following: In asymptotic regimes where the dimensionality
of the parameter space is fixed as sample size increases, there often exist
tests that cannot be further improved with the power enhancement principle.
However, when the dimensionality of the parameter space increases sufficiently
slowly with sample size and a marginal local asymptotic normality (LAN)
condition is satisfied, every test with asymptotic size smaller than one can be
improved with the power enhancement principle. While the marginal LAN condition
alone does not allow one to extend the latter statement to all rates at which
the dimensionality increases with sample size, we give sufficient conditions
under which this is the case.
| 0 | 0 | 1 | 1 | 0 | 0 |
Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model | With the goal of making high-resolution forecasts of regional rainfall,
precipitation nowcasting has become an important and fundamental technology
underlying various public services ranging from rainstorm warnings to flight
safety. Recently, the Convolutional LSTM (ConvLSTM) model has been shown to
outperform traditional optical flow based methods for precipitation nowcasting,
suggesting that deep learning models have a huge potential for solving the
problem. However, the convolutional recurrence structure in ConvLSTM-based
models is location-invariant while natural motion and transformation (e.g.,
rotation) are location-variant in general. Furthermore, since
deep-learning-based precipitation nowcasting is a newly emerging area, clear
evaluation protocols have not yet been established. To address these problems,
we propose both a new model and a benchmark for precipitation nowcasting.
Specifically, we go beyond ConvLSTM and propose the Trajectory GRU (TrajGRU)
model that can actively learn the location-variant structure for recurrent
connections. Besides, we provide a benchmark that includes a real-world
large-scale dataset from the Hong Kong Observatory, a new training loss, and a
comprehensive evaluation protocol to facilitate future research and gauge the
state of the art.
| 1 | 0 | 0 | 0 | 0 | 0 |
PAWS: A Tool for the Analysis of Weighted Systems | PAWS is a tool to analyse the behaviour of weighted automata and conditional
transition systems. At its core PAWS is based on a generic implementation of
algorithms for checking language equivalence in weighted automata and
bisimulation in conditional transition systems. This architecture allows for
the use of arbitrary user-defined semirings. New semirings can be generated
during run-time and the user can rely on numerous automatisation techniques to
create new semiring structures for PAWS' algorithms. Basic semirings such as
distributive complete lattices and fields of fractions can be defined by
specifying few parameters, more exotic semirings can be generated from other
semirings or defined from scratch using a built-in semiring generator. In the
most general case, users can define new semirings by programming (in C#) the
base operations of the semiring and a procedure to solve linear equations and
use their newly generated semiring in the analysis tools that PAWS offers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantum-optical spectroscopy for plasma electric field measurements and diagnostics | Measurements of plasma electric fields are essential to the advancement of
plasma science and applications. Methods for non-invasive in situ measurements
of plasma fields on sub-millimeter length scales with high sensitivity over a
large field range remain an outstanding challenge. Here, we introduce and
demonstrate a new method for plasma electric field measurement that employs
electromagnetically induced transparency as a high-resolution quantum-optical
probe for the Stark energy level shifts of plasma-embedded Rydberg atoms, which
serve as highly-sensitive field sensors with a large dynamic range. The method
is applied in diagnostics of plasmas photo-excited out of a cesium vapor. The
plasma electric fields are extracted from spatially-resolved measurements of
field-induced shape changes and shifts of Rydberg resonances in rubidium tracer
atoms. Measurement capabilities over a range of plasma densities and
temperatures are exploited to characterize plasmas in applied magnetic fields
and to image electric-field distributions in cyclotron-heated plasmas.
| 0 | 1 | 0 | 0 | 0 | 0 |
ROSA: R Optimizations with Static Analysis | R is a popular language and programming environment for data scientists. It
is increasingly co-packaged with both relational and Hadoop-based data
platforms and can often be the most dominant computational component in data
analytics pipelines. Recent work has highlighted inefficiencies in executing R
programs, both in terms of execution time and memory requirements, which in
practice limit the size of data that can be analyzed by R. This paper presents
ROSA, a static analysis framework to improve the performance and space
efficiency of R programs. ROSA analyzes input programs to determine program
properties such as reaching definitions, live variables, aliased variables, and
types of variables. These inferred properties enable program transformations
such as C++ code translation, strength reduction, vectorization, code motion,
in addition to interpretive optimizations such as avoiding redundant object
copies and performing in-place evaluations. An empirical evaluation shows
substantial reductions by ROSA in execution time and memory consumption over
both CRAN R and Microsoft R Open.
| 1 | 0 | 0 | 0 | 0 | 0 |
Human Perception of Performance | Humans are routinely asked to evaluate the performance of other individuals,
separating success from failure and affecting outcomes from science to
education and sports. Yet, in many contexts, the metrics driving the human
evaluation process remain unclear. Here we analyse a massive dataset capturing
players' evaluations by human judges to explore human perception of performance
in soccer, the world's most popular sport. We use machine learning to design an
artificial judge which accurately reproduces human evaluation, allowing us to
demonstrate how human observers are biased towards diverse contextual features.
By investigating the structure of the artificial judge, we uncover the aspects
of the players' behavior which attract the attention of human judges,
demonstrating that human evaluation is based on a noticeability heuristic where
only feature values far from the norm are considered to rate an individual's
performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
Practical Distance Functions for Path-Planning in Planar Domains | Path planning is an important problem in robotics. One way to plan a path
between two points $x,y$ within a (not necessarily simply-connected) planar
domain $\Omega$, is to define a non-negative distance function $d(x,y)$ on
$\Omega\times\Omega$ such that following the (descending) gradient of this
distance function traces such a path. This presents two equally important
challenges: a mathematical challenge -- to define $d$ such that $d(x,y)$ has a
single minimum for any fixed $y$ (attained when $x=y$), since a local minimum
is in effect a "dead end"; and a computational challenge -- to define $d$ such
that it may be computed efficiently. In this paper, given a description of
$\Omega$, we show how to assign coordinates to each point of $\Omega$ and
define a family of distance functions between points using these coordinates,
such that both the mathematical and the computational challenges are met. This
is done using the concepts of \emph{harmonic measure} and
\emph{$f$-divergences}.
In practice, path planning is done on a discrete network defined on a finite
set of \emph{sites} sampled from $\Omega$, so any method that works well on the
continuous domain must be adapted so that it still works well on the discrete
domain. Given a set of sites sampled from $\Omega$, we show how to define a
network connecting these sites such that a \emph{greedy routing} algorithm
(which is the discrete equivalent of continuous gradient descent) based on the
distance function mentioned above is guaranteed to generate a path in the
network between any two such sites. In many cases, this network is close to a
(desirable) planar graph, especially if the set of sites is dense.
| 1 | 0 | 0 | 0 | 0 | 0 |
FPGA-Based CNN Inference Accelerator Synthesized from Multi-Threaded C Software | A deep-learning inference accelerator is synthesized from a C-language
software program parallelized with Pthreads. The software implementation uses
the well-known producer/consumer model with parallel threads interconnected by
FIFO queues. The LegUp high-level synthesis (HLS) tool synthesizes threads into
parallel FPGA hardware, translating software parallelism into spatial
parallelism. A complete system is generated where convolution, pooling and
padding are realized in the synthesized accelerator, with remaining tasks
executing on an embedded ARM processor. The accelerator incorporates reduced
precision, and a novel approach for zero-weight-skipping in convolution. On a
mid-sized Intel Arria 10 SoC FPGA, peak performance on VGG-16 is 138 effective
GOPS.
| 1 | 0 | 0 | 1 | 0 | 0 |
The set of forces that ideal trusses, or wire webs, under tension can support | The problem of determining those multiplets of forces, or sets of force
multiplets, acting at a set of points, such that there exists a truss
structure, or wire web, that can support these force multiplets with all the
elements of the truss or wire web being under tension, is considered. The
two-dimensional problem where the points are at the vertices of a convex
polygon is essentially solved: each multiplet of forces must be such that the
net anticlockwise torque around any vertex of the forces summed over any number
of consecutive points clockwise past the vertex must be non-negative; and one
can find a truss structure that supports under tension, and only supports,
those force multiplets in a convex polyhedron of force multiplets that is
generated by a finite number of force multiplets each satisfying the torque
condition. Progress is also made on the problem where only a subset of the
points are at the vertices of a convex polygon, and the other points are
inside. In particular, in the case where only one point is inside, an explicit
procedure is described for constructing a suitable truss, if one exists. An
alternative recipe to that provided by Guevara-Vasquez, Milton, and Onofrei
(2011), based on earlier work of Camar Eddine and Seppecher (2003), is given
for constructing a truss structure, with elements under either compression or
tension, that supports an arbitrary collection of balanced forces at the
vertices of a convex polygon. Finally some constraints are given on the forces
that a three-dimensional truss, or wire web, under tension must satisfy.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Thematic Study of Requirements Modeling and Analysis for Self-Adaptive Systems | Over the last decade, researchers and engineers have developed a vast body of
methodologies and technologies in requirements engineering for self-adaptive
systems. Although existing studies have explored various aspects of this topic,
few of them have categorized and summarized these areas of research in
requirements modeling and analysis. This study aims to investigate the
research themes based on the utilized modeling methods and RE activities. We
conduct a thematic study in the systematic literature review. The results are
derived by synthesizing the extracted data with statistical methods. This paper
provides an updated review of the research literature, enabling researchers and
practitioners to better understand the research themes in these areas and
identify research gaps which need to be further studied.
| 1 | 0 | 0 | 0 | 0 | 0 |
On predictive density estimation with additional information | Based on independently distributed $X_1 \sim N_p(\theta_1, \sigma^2_1 I_p)$
and $X_2 \sim N_p(\theta_2, \sigma^2_2 I_p)$, we consider the efficiency of
various predictive density estimators for $Y_1 \sim N_p(\theta_1, \sigma^2_Y
I_p)$, with the additional information $\theta_1 - \theta_2 \in A$ and known
$\sigma^2_1, \sigma^2_2, \sigma^2_Y$. We provide improvements on benchmark
predictive densities such as plug-in, the maximum likelihood, and the minimum
risk equivariant predictive densities. Dominance results are obtained for
$\alpha-$divergence losses and include Bayesian improvements for reverse
Kullback-Leibler loss, and Kullback-Leibler (KL) loss in the univariate case
($p=1$). An ensemble of techniques are exploited, including variance expansion
(for KL loss), point estimation duality, and concave inequalities.
Representations for Bayesian predictive densities, and in particular for
$\hat{q}_{\pi_{U,A}}$ associated with a uniform prior for $\theta=(\theta_1,
\theta_2)$ truncated to $\{\theta \in \mathbb{R}^{2p}: \theta_1 - \theta_2 \in
A \}$, are established and are used for the Bayesian dominance findings.
Finally and interestingly, these Bayesian predictive densities also relate to
skew-normal distributions, as well as new forms of such distributions.
| 0 | 0 | 1 | 1 | 0 | 0 |
Optimizing Channel Selection for Seizure Detection | Interpretation of electroencephalogram (EEG) signals can be complicated by
obfuscating artifacts. Artifact detection plays an important role in the
observation and analysis of EEG signals. Spatial information contained in the
placement of the electrodes can be exploited to accurately detect artifacts.
However, when fewer electrodes are used, less spatial information is available,
making it harder to detect artifacts. In this study, we investigate the
performance of a deep learning algorithm, CNN-LSTM, on several channel
configurations. Each configuration was designed to minimize the amount of
spatial information lost compared to a standard 22-channel EEG. Systems using a
reduced number of channels ranging from 8 to 20 achieved sensitivities between
33% and 37% with false alarms in the range of [38, 50] per 24 hours. False
alarms increased dramatically (e.g., over 300 per 24 hours) when the number of
channels was further reduced. Baseline performance of a system that used all 22
channels was 39% sensitivity with 23 false alarms. Since the 22-channel system
was the only system that included referential channels, the rapid increase in
the false alarm rate as the number of channels was reduced underscores the
importance of retaining referential channels for artifact reduction. This
cautionary result is important because one of the biggest differences between
various types of EEGs administered is the type of referential channel used.
| 0 | 0 | 0 | 1 | 1 | 0 |
Bases of standard modules for affine Lie algebras of type $C_\ell^{(1)}$ | Feigin-Stoyanovsky's type subspaces for affine Lie algebras of type
$C_\ell^{(1)}$ have monomial bases with a nice combinatorial description. We
describe bases of whole standard modules in terms of semi-infinite monomials
obtained as "a limit of translations" of bases for Feigin-Stoyanovsky's type
subspaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Connectivity-Driven Brain Parcellation via Consensus Clustering | We present two related methods for deriving connectivity-based brain atlases
from individual connectomes. The proposed methods exploit a previously proposed
dense connectivity representation, termed continuous connectivity, by first
performing graph-based hierarchical clustering of individual brains, and
subsequently aggregating the individual parcellations into a consensus
parcellation. The search for consensus minimizes the sum of cluster membership
distances, effectively estimating a pseudo-Karcher mean of individual
parcellations. We assess the quality of our parcellations using (1)
Kullback-Leibler and Jensen-Shannon divergence with respect to the dense
connectome representation, (2) inter-hemispheric symmetry, and (3) performance
of the simplified connectome in a biological sex classification task. We find
that the parcellation-based atlas computed using a greedy search at
hierarchical depth 3 outperforms all other parcellation-based atlases as well
as the standard Desikan-Killiany anatomical atlas in all three assessments.
| 0 | 0 | 0 | 1 | 1 | 0 |
Baryon acoustic oscillations from the complete SDSS-III Ly$α$-quasar cross-correlation function at $z=2.4$ | We present a measurement of baryon acoustic oscillations (BAO) in the
cross-correlation of quasars with the Ly$\alpha$-forest flux-transmission at a
mean redshift $z=2.40$. The measurement uses the complete SDSS-III data sample:
168,889 forests and 234,367 quasars from the SDSS Data Release DR12. In
addition to the statistical improvement on our previous study using DR11, we
have implemented numerous improvements at the analysis level allowing a more
accurate measurement of this cross-correlation. We also developed the first
simulations of the cross-correlation allowing us to test different aspects of
our data analysis and to search for potential systematic errors in the
determination of the BAO peak position. We measure the two ratios
$D_{H}(z=2.40)/r_{d} = 9.01 \pm 0.36$ and $D_{M}(z=2.40)/r_{d} = 35.7 \pm 1.7$,
where the errors include marginalization over the non-linear velocity of
quasars and the metal - quasar cross-correlation contribution, among other
effects. These results are within $1.8\sigma$ of the prediction of the
flat-$\Lambda$CDM model describing the observed CMB anisotropies. We combine
this study with the Ly$\alpha$-forest auto-correlation function
[2017A&A...603A..12B], yielding $D_{H}(z=2.40)/r_{d} = 8.94 \pm 0.22$ and
$D_{M}(z=2.40)/r_{d} = 36.6 \pm 1.2$, within $2.3\sigma$ of the same
flat-$\Lambda$CDM model.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gamma-ray and Optical Oscillations of 0716+714, MRK 421, and BL Lac | We examine the 2008-2016 $\gamma$-ray and optical light curves of three
bright BL Lac objects, 0716+714, MRK 421, BL Lac, which exhibit large
structured variability. We searched for periodicities by using a fully Bayesian
approach. For two out of three sources investigated no significant periodic
variability was found. In the case of BL Lac we detected a periodicity of ~ 680
days. Although the signal related to this is modest, the coincidence of the
periods in both gamma and optical bands is indicative of a physical relevance.
Considering previous literature results, possibly related $\gamma$-ray and
optical periodicities of about one year time scale are proposed in 4 bright
$\gamma$-ray blazars out of the 10 examined in detail. Comparing with results
from periodicity searches of optical quasar archives, the presence of
quasi-periodicities in blazars may be more frequent by a large factor. This
suggests the intriguing possibility that the basic conditions for their
observability are related to the relativistic jet in the observer direction,
but the overall picture remains uncertain.
| 0 | 1 | 0 | 0 | 0 | 0 |
Antibonding Ground state of Adatom Molecules in Bulk Dirac Semimetals | The ground state of diatomic molecules in nature is inevitably bonding,
and its first excited state is antibonding. We demonstrate theoretically that,
for a pair of distant adatoms buried in three-dimensional Dirac semimetals,
this natural order of the states can be reversed and an antibonding
ground state occurs at the lowest energy of the so-called bound states in the
continuum. We propose an experimental protocol with the use of a scanning
tunneling microscope tip to visualize the topographic map of the local density
of states on the surface of the system to reveal the emerging physics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Baby MIND: A magnetized segmented neutrino detector for the WAGASCI experiment | T2K (Tokai-to-Kamioka) is a long-baseline neutrino experiment in Japan
designed to study various parameters of neutrino oscillations. A near detector
complex (ND280) is located 280~m downstream of the production target and
measures neutrino beam parameters before any oscillations occur. ND280's
measurements are used to predict the number and spectra of neutrinos in the
Super-Kamiokande detector at the distance of 295~km. The difference in the
target material between the far (water) and near (scintillator, hydrocarbon)
detectors leads to the main non-cancelling systematic uncertainty for the
oscillation analysis. In order to reduce this uncertainty a new
WAter-Grid-And-SCintillator detector (WAGASCI) has been developed. A magnetized
iron neutrino detector (Baby MIND) will be used to measure momentum and charge
identification of the outgoing muons from charged current interactions. The
Baby MIND modules are composed of magnetized iron plates and long plastic
scintillator bars read out at both ends with wavelength-shifting fibers and
silicon photomultipliers. The front-end electronics board has been developed to
perform the readout and digitization of the signals from the scintillator bars.
Detector elements were tested with cosmic rays and in the PS beam at CERN. The
obtained results are presented in this paper.
| 0 | 1 | 0 | 0 | 0 | 0 |
Applications of noncommutative deformations | For a general class of contractions of a variety X to a base Y, I discuss
recent joint work with M. Wemyss defining a noncommutative enhancement of the
locus in Y over which the contraction is not an isomorphism, along with
applications to the derived symmetries of X. This note is based on a talk given
at the Kinosaki Symposium in 2016.
| 0 | 0 | 1 | 0 | 0 | 0 |
Photometric and radial-velocity time-series of RR Lyrae stars in M3: analysis of single-mode variables | We present the first simultaneous photometric and spectroscopic investigation
of a large set of RR Lyrae variables in a globular cluster. The radial-velocity
data presented comprise the largest sample of RVs of RR Lyrae stars ever
obtained. The target is M3; $BVI_{\mathrm{C}}$ time-series of 111 and $b$ flux
data of further 64 RRab stars, and RV data of 79 RR Lyrae stars are published.
Blazhko modulation of the light curves of 47 percent of the RRab stars is
detected. The mean value of the center-of-mass velocities of RR Lyrae stars is
$-146.8$ km s$^{-1}$ with 4.52 km s$^{-1}$ standard deviation, which is in good
agreement with the results obtained for the red giants of the cluster. The
${\Phi_{21}}^{\mathrm RV}$ phase difference of the RV curves of RRab stars is
found to be uniformly constant both for the M3 and for Galactic field RRab
stars; no period or metallicity dependence of the ${\Phi_{21}}^{\mathrm RV}$ is
detected. The Baade-Wesselink distances of 26 non-Blazhko variables with the
best phase-coverage radial-velocity curves are determined; the corresponding
distance of the cluster, $10480\pm210$ pc, agrees with the previous literature
information. A quadratic formula for the $A_{\mathrm{puls}}-A_V$ relation of
RRab stars is given, which is valid for both OoI and OoII variables. We also
show that the $(V-I)_0$ of RRab stars measured at light minimum is period
dependent: there is at least a 0.1 mag difference between the colours at
minimum light of the shortest- and longest-period variables.
| 0 | 1 | 0 | 0 | 0 | 0 |
Particlelike scattering states in a microwave cavity | We realize scattering states in a lossy and chaotic two-dimensional microwave
cavity which follow bundles of classical particle trajectories. To generate
such particlelike scattering states we measure the system's transmission matrix
and apply an adapted Wigner-Smith time-delay formalism to it. The necessary
shaping of the incident wave is achieved in situ using phase and amplitude
regulated microwave antennas. Our experimental findings pave the way for
establishing spatially confined communication channels that avoid possible
intruders or obstacles in wave-based communication systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Market Self-Learning of Signals, Impact and Optimal Trading: Invisible Hand Inference with Free Energy | We present a simple model of a non-equilibrium self-organizing market where
asset prices are partially driven by investment decisions of a bounded-rational
agent. The agent acts in a stochastic market environment driven by various
exogenous "alpha" signals, agent's own actions (via market impact), and noise.
Unlike traditional agent-based models, our agent aggregates all traders in the
market, rather than being a representative agent. Therefore, it can be
identified with a bounded-rational component of the market itself, providing a
particular implementation of an Invisible Hand market mechanism. In such a
setting, market dynamics are modeled as a fictitious self-play of such
bounded-rational market-agent in its adversarial stochastic environment. As
rewards obtained by such self-playing market agent are not observed from market
data, we formulate and solve a simple model of such market dynamics based on a
neuroscience-inspired Bounded Rational Information Theoretic Inverse
Reinforcement Learning (BRIT-IRL). This results in effective asset price
dynamics with a non-linear mean reversion - which in our model is generated
dynamically, rather than being postulated. We argue that our model can be used
in a similar way to the Black-Litterman model. In particular, it represents, in
a simple modeling framework, market views of common predictive signals, market
impacts and implied optimal dynamic portfolio allocations, and can be used to
assess values of private signals. Moreover, it allows one to quantify a
"market-implied" optimal investment strategy, along with a measure of market
rationality. Our approach is numerically light, and can be implemented using
standard off-the-shelf software such as TensorFlow.
| 0 | 0 | 0 | 0 | 0 | 1 |
Current-driven skyrmion dynamics in disordered films | A theoretical study of the current-driven dynamics of magnetic skyrmions in
disordered perpendicularly-magnetized ultrathin films is presented. The
disorder is simulated as a granular structure in which the local anisotropy
varies randomly from grain to grain. The skyrmion velocity is computed for
different disorder parameters and ensembles. Similar behavior is seen for
spin-torques due to in-plane currents and the spin Hall effect, where a pinning
regime can be identified at low currents with a transition towards the
disorder-free case at higher currents, similar to domain wall motion in
disordered films. Moreover, a current-dependent skyrmion Hall effect and
fluctuations in the core radius are found, which result from the interaction
with the pinning potential.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Rigidity of Riemannian-Penrose Inequality for Asymptotically Flat 3-manifolds with Corners | In this paper we prove a rigidity result for the equality case of the Penrose
inequality on $3$-dimensional asymptotically flat manifolds with nonnegative
scalar curvature and corners. Our result also has deep connections with the
equality cases of Theorem 1 in \cite{Miao2} and Theorem 1.1 in \cite{LM}.
| 0 | 0 | 1 | 0 | 0 | 0 |
An Algebra Model for the Higher Order Sum Rules | We introduce an algebra model to study higher order sum rules for orthogonal
polynomials on the unit circle. We build the relation between the algebra model
and sum rules, and prove an equivalent expression on the algebra side for the
sum rules, involving a Hall-Littlewood type polynomial. By this expression, we
recover an earlier result by Golinskii and Zlatoš, and prove a new case -
half of the Lukic conjecture in the case of a single critical point with
arbitrary order.
| 0 | 0 | 1 | 0 | 0 | 0 |
On maxispaces of nonparametric tests | For the problems of nonparametric hypothesis testing we introduce the notions
of maxisets and maxispace. We point out the maxisets of $\chi^2$-tests,
Cramer-von Mises tests, tests generated by $\mathbb{L}_2$-norms of kernel
estimators, and tests generated by quadratic forms of estimators of Fourier
coefficients. For these tests we show that, if a sequence of alternatives having
a given rate of convergence to the hypothesis is consistent, then each
alternative can be broken down into the sum of two parts: a function belonging
to the maxiset and an orthogonal function. The sequence of functions belonging
to the maxiset is a consistent sequence of alternatives.
We point out asymptotically minimax tests if the sets of alternatives are
maxisets with deleted "small" $\mathbb{L}_2$-balls.
| 0 | 0 | 1 | 1 | 0 | 0 |
Learning Unsupervised Learning Rules | A major goal of unsupervised learning is to discover data representations
that are useful for subsequent tasks, without access to supervised labels
during training. Typically, this goal is approached by minimizing a surrogate
objective, such as the negative log likelihood of a generative model, with the
hope that representations useful for subsequent tasks will arise incidentally.
In this work, we propose instead to directly target a later desired task by
meta-learning an unsupervised learning rule, which leads to representations
useful for that task. Here, our desired task (meta-objective) is the
performance of the representation on semi-supervised classification, and we
meta-learn an algorithm -- an unsupervised weight update rule -- that produces
representations that perform well under this meta-objective. Additionally, we
constrain our unsupervised update rule to be a biologically-motivated,
neuron-local function, which enables it to generalize to novel neural network
architectures. We show that the meta-learned update rule produces useful
features and sometimes outperforms existing unsupervised learning techniques.
We further show that the meta-learned unsupervised update rule generalizes to
train networks with different widths, depths, and nonlinearities. It also
generalizes to train on data with randomly permuted input dimensions and even
generalizes from image datasets to a text task.
| 0 | 0 | 0 | 1 | 0 | 0 |
Conformal Nanocarbon Coating of Alumina Nanocrystals for Biosensing and Bioimaging | A conformal coating technique with nanocarbon was developed to enhance the
surface properties of alumina nanoparticles for bio-applications. The
ultra-thin carbon layer induces new surface properties such as water
dispersion, cytocompatibility and tuneable surface chemistry, while maintaining
the optical properties of the core particle. The possibility of using these
particles as agents for DNA sensing was demonstrated in a competitive assay.
Additionally, the inherent fluorescence of the core alumina particles provided
a unique platform for localization and monitoring of living organisms, allowing
simultaneous cell monitoring and intra-cellular sensing. Nanoparticles were
able to carry genes to the cells and release them in an environment where
specific biomarkers were present.
| 0 | 1 | 0 | 0 | 0 | 0 |