title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Asymmetric Spin-wave Dispersion on Fe(110): Direct Evidence of Dzyaloshinskii--Moriya Interaction | The influence of the Dzyaloshinskii-Moriya interaction on the spin-wave
dispersion in an Fe double layer grown on W(110) is measured for the first
time. It is demonstrated that the Dzyaloshinskii-Moriya interaction breaks the
degeneracy of spin waves and leads to an asymmetric spin-wave dispersion
relation. An extended Heisenberg spin Hamiltonian is employed to obtain the
longitudinal component of the Dzyaloshinskii-Moriya vectors from the
experimentally measured energy asymmetry.
| 0 | 1 | 0 | 0 | 0 | 0 |
Consumption smoothing in the working-class households of interwar Japan | I analyze Osaka factory worker households in the early 1920s to examine whether idiosyncratic income shocks were shared efficiently and which consumption categories were robust to shocks. While the null hypothesis of full
risk-sharing of total expenditures was rejected, factory workers maintained
their households, in that they paid for essential expenditures (rent,
utilities, and commutation) during economic hardship. Additionally, children's
education expenditures were possibly robust to idiosyncratic income shocks. The
results suggest that temporary income increases in a statistically significant way when disposable income drops due to idiosyncratic shocks. Historical documents
suggest microfinancial lending and saving institutions helped mitigate
risk-based vulnerabilities.
| 0 | 0 | 0 | 0 | 0 | 1 |
Observations and Modelling of the Pre-Flare Period of the 29 March 2014 X1 Flare | On 29 March 2014, NOAA active region (AR) 12017 produced an X1 flare which
was simultaneously observed by an unprecedented number of observatories. We
have investigated the pre-flare period of this flare from 14:00 UT until 19:00
UT using joint observations made by the Interface Region Imaging Spectrometer
(IRIS) and the Hinode Extreme Ultraviolet Imaging Spectrometer (EIS). Spectral
lines providing coverage of the solar atmosphere from the chromosphere to the
corona were analysed to investigate pre-flare activity within the AR. The
results of the investigation have revealed evidence of strongly blue-shifted
plasma flows, with velocities up to 200 km/s, being observed 40 minutes prior
to flaring. These flows are located along the filament present in the active
region and are both spatially discrete and transient. In order to constrain the
possible explanations for this activity, we undertake non-potential magnetic
field modelling of the active region. This modelling indicates the existence of
a weakly twisted flux rope along the polarity inversion line in the region
where a filament and the strong pre-flare flows are observed. We then discuss
how these observations relate to the current models of flare triggering. We
conclude that the most likely drivers of the observed activity are internal
reconnection in the flux rope, early onset of the flare reconnection, or tether
cutting reconnection along the filament.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wild ramification and K(pi, 1) spaces | We prove that every connected affine scheme of positive characteristic is a
K(pi, 1) space for the etale topology. The main ingredient is the special case
of the affine space of dimension n over a field k. This is dealt with by induction on n, using a key "Bertini-type" statement regarding the wild ramification of l-adic local
systems on affine spaces, which might be of independent interest. Its proof
uses in an essential way recent advances in higher ramification theory due to
T. Saito. We also give rigid analytic and mixed characteristic versions of the
main result.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Deep Learning Interpretable Classifier for Diabetic Retinopathy Disease Grading | Deep neural network models have proven very successful in image classification tasks, including medical diagnosis, but their main drawback is a lack of interpretability. They tend to work as intuition machines with high statistical confidence but are unable to give interpretable explanations for the reported results. The vast number of parameters in these models makes it difficult to infer a rational interpretation from them. In this paper we present an interpretable diabetic retinopathy classifier able to classify retina images into the different levels of disease severity and to explain its results by assigning a score to every point in the hidden and input spaces, evaluating its contribution to the final classification in a linear way. The generated visual maps can be interpreted by an expert in order to compare their own knowledge with the interpretation given by the model.
| 1 | 0 | 0 | 1 | 0 | 0 |
Arabic Multi-Dialect Segmentation: bi-LSTM-CRF vs. SVM | Arabic word segmentation is essential for a variety of NLP applications such
as machine translation and information retrieval. Segmentation entails breaking
words into their constituent stems, affixes and clitics. In this paper, we
compare two approaches for segmenting four major Arabic dialects using only
several thousand training examples for each dialect. The two approaches involve
posing the problem as a ranking problem, where an SVM ranker picks the best
segmentation, and as a sequence labeling problem, where a bi-LSTM RNN coupled
with CRF determines where best to segment words. We are able to achieve solid
segmentation results for all dialects using rather limited training data. We
also show that employing Modern Standard Arabic data for domain adaptation and
assuming context independence improve overall results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Iterative bidding in electricity markets: rationality and robustness | This paper studies an electricity market consisting of an independent system
operator (ISO) and a group of generators. The goal is to solve the DC optimal
power flow (DC-OPF) problem: have the generators collectively meet the power
demand while minimizing the aggregate generation cost and respecting line flow
limits in the network. The ISO by itself cannot solve the DC-OPF problem as
generators are strategic and do not share their cost functions. Instead, each
generator submits to the ISO a bid, consisting of the price per unit of
electricity at which it is willing to provide power. Based on the bids, the ISO
decides how much production to allocate to each generator to minimize the total
payment while meeting the load and satisfying the line limits. We provide a
provably correct, decentralized iterative scheme, termed BID ADJUSTMENT
ALGORITHM, for the resulting Bertrand competition game. Regarding convergence,
we show that the algorithm takes the generators' bids to any desired
neighborhood of the efficient Nash equilibrium at a linear convergence rate. As
a consequence, the optimal production of the generators converges to the
optimizer of the DC-OPF problem. Regarding robustness, we show that the
algorithm is robust to affine perturbations in the bid adjustment scheme and
that there is no incentive for any individual generator to deviate from the
algorithm by using an alternative bid update scheme. We also establish the
algorithm's robustness to collusion, i.e., we show that, as long as each bus with
generation has a generator following the strategy, there is no incentive for
any group of generators to share information with the intent of tricking the
system to obtain a higher payoff. Simulations illustrate our results.
| 1 | 0 | 1 | 0 | 0 | 0 |
Short-baseline electron antineutrino disappearance study by using neutrino sources from $^{13}$C + $^{9}$Be reaction | To investigate the existence of a sterile neutrino, we propose a new neutrino
production method using $^{13}$C beams and a $^{9}$Be target for short-baseline
electron antineutrino (${\bar{\nu}}_{e}$) disappearance study. The production
of secondary unstable isotopes which can emit neutrinos from the $^{13}$C +
$^{9}$Be reaction is calculated with three different nucleus-nucleus (AA)
reaction models. Different isotope yields are obtained using these models, but the resulting neutrino fluxes are found to be very similar. This feature gives an opportunity to study neutrino oscillation through shape
analysis. In this work, expected neutrino flux and event rates are discussed in
detail through intensive simulation of the light ion collision reaction and the
neutrino flux from the beta decay of unstable isotopes followed by this
collision. Together with the reactor and accelerator anomalies, the present
proposed ${\bar{\nu}}_{e}$ source is shown to be a practical alternative test
of the existence of the $\Delta m^{2}$ $\sim$ 1 eV$^{2}$ scale sterile
neutrino.
| 0 | 1 | 0 | 0 | 0 | 0 |
Temperature-dependent non-covalent protein-protein interactions explain normal and inverted solubility in a mutant of human gamma D-crystallin | Protein crystal production is a major bottleneck for the structural
characterisation of proteins. To advance beyond large-scale screening, rational
strategies for protein crystallization are crucial. Understanding how chemical anisotropy (or patchiness) of the protein surface, arising from the variety of amino acid side chains in contact with solvent, contributes to protein-protein contact formation in the crystal lattice is a major obstacle to predicting and optimising crystallization. The relative scarcity of sophisticated theoretical models that include sufficient detail to link collective behaviour, captured in protein phase diagrams, and molecular-level details, determined from high-resolution structural information, is a further barrier. Here we present two crystal structures for the P23TR36S mutant of gamma D-crystallin, each with opposite solubility behaviour: one melts when heated, the other when cooled. When combined with the protein phase diagram and a tailored patchy particle model, we show that a single temperature-dependent interaction is sufficient to stabilise the inverted solubility crystal. This contact, at the P23T substitution site, relates to a genetic cataract and reveals, at a molecular level, the origin of the lowered and retrograde solubility of the
protein. Our results show that the approach employed here may present an
alternative strategy for the rationalization of protein crystallization.
| 0 | 0 | 0 | 0 | 1 | 0 |
Planetary Candidates Observed by Kepler. VIII. A Fully Automated Catalog With Measured Completeness and Reliability Based on Data Release 25 | We present the Kepler Object of Interest (KOI) catalog of transiting
exoplanets based on searching four years of Kepler time series photometry (Data
Release 25, Q1-Q17). The catalog contains 8054 KOIs of which 4034 are planet
candidates with periods between 0.25 and 632 days. Of these candidates, 219 are
new and include two in multi-planet systems (KOI-82.06 and KOI-2926.05), and
ten high-reliability, terrestrial-size, habitable zone candidates. This catalog
was created using a tool called the Robovetter which automatically vets the
DR25 Threshold Crossing Events (TCEs, Twicken et al. 2016). The Robovetter also
vetted simulated data sets and measured how well it was able to separate TCEs
caused by noise from those caused by low signal-to-noise transits. We discuss
the Robovetter and the metrics it uses to sort TCEs. For orbital periods less
than 100 days the Robovetter completeness (the fraction of simulated transits
that are determined to be planet candidates) across all observed stars is
greater than 85%. For the same period range, the catalog reliability (the
fraction of candidates that are not due to instrumental or stellar noise) is
greater than 98%. However, for low signal-to-noise candidates between 200 and
500 days around FGK dwarf stars, the Robovetter is 76.7% complete and the
catalog is 50.5% reliable. The KOI catalog, the transit fits and all of the
simulated data used to characterize this catalog are available at the NASA
Exoplanet Archive.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dark Photons from Captured Inelastic Dark Matter Annihilation: Charged Particle Signatures | The dark sector may contain a dark photon that kinetically mixes with the
Standard Model photon, allowing dark matter to interact weakly with normal
matter. In previous work we analyzed the implications of this scenario for dark
matter capture by the Sun. Dark matter will gather in the core of the Sun and
annihilate to dark photons. These dark photons travel outwards from the center
of the Sun and may decay to produce positrons that can be detected by the Alpha
Magnetic Spectrometer (AMS-02) on the ISS. We found that the dark photon
parameter space accessible to this analysis is largely constrained by strong
limits on the spin-independent WIMP-nucleon cross section from direct detection
experiments. In this paper we build upon previous work by considering the case
where the dark sector contains two species of Dirac fermion that are nearly
degenerate in mass and couple inelastically to the dark photon. We find that
for small values of the mass splitting $\Delta \sim 100 ~\text{keV}$, the
predicted positron signal at AMS-02 remains largely unchanged from the
previously considered elastic case while constraints from direct detection are
relaxed, leaving a region of parameter space with dark matter mass $100
~\text{GeV} \lesssim m_X \lesssim 10 ~\text{TeV}$, dark photon mass $1
~\text{MeV} \lesssim m_{A'} \lesssim 100 ~\text{MeV}$, and kinetic mixing
parameter $10^{-9} \lesssim \varepsilon \lesssim 10^{-8}$ that is untouched by
supernova observations and fixed target experiments but where an inelastic dark
sector may still be discovered using existing AMS-02 data.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Polya Contagion Model for Networks | A network epidemics model based on the classical Polya urn scheme is
investigated. Temporal contagion processes are generated on the network nodes
using a modified Polya sampling scheme that accounts for spatial infection
among neighbouring nodes. The stochastic properties and the asymptotic
behaviour of the resulting network contagion process are analyzed. Unlike the
classical Polya process, the network process is noted to be non-stationary in
general, although it is shown to be time-invariant in its first and some of its
second-order statistics and to satisfy martingale convergence properties under
certain conditions. Three classical Polya processes, one computational and two
analytical, are proposed to statistically approximate the contagion process of
each node, showing a good fit for a range of system parameters. Finally,
empirical results compare and contrast our model with the well-known discrete
time SIS model.
| 1 | 1 | 1 | 0 | 0 | 0 |
Compressive Statistical Learning with Random Feature Moments | We describe a general framework --compressive statistical learning-- for
resource-efficient large-scale learning: the training collection is compressed
in one pass into a low-dimensional sketch (a vector of random empirical
generalized moments) that captures the information relevant to the considered
learning task. A near-minimizer of the risk is computed from the sketch through
the solution of a nonlinear least squares problem. We investigate sufficient
sketch sizes to control the generalization error of this procedure. The
framework is illustrated on compressive clustering, compressive Gaussian
mixture modeling with fixed known variance, and compressive PCA.
| 1 | 0 | 1 | 1 | 0 | 0 |
Bandit Structured Prediction for Neural Sequence-to-Sequence Learning | Bandit structured prediction describes a stochastic optimization framework
where learning is performed from partial feedback. This feedback is received in
the form of a task loss evaluation of a predicted output structure, without
having access to gold standard structures. We advance this framework by lifting
linear bandit learning to neural sequence-to-sequence learning problems using
attention-based recurrent neural networks. Furthermore, we show how to
incorporate control variates into our learning algorithms for variance
reduction and improved generalization. We present an evaluation on a neural
machine translation task that shows improvements of up to 5.89 BLEU points for
domain adaptation from simulated bandit feedback.
| 1 | 0 | 0 | 1 | 0 | 0 |
Dynamic Input Structure and Network Assembly for Few-Shot Learning | The ability to learn from a small number of examples has been a difficult
problem in machine learning since its inception. While methods have succeeded
with large amounts of training data, research has been underway into how to
accomplish similar performance with fewer examples, known as one-shot or more
generally few-shot learning. This technique has been shown to have promising
performance, but in practice requires fixed-size inputs, making it impractical
for production systems where class sizes can vary. This impedes training and
the final utility of few-shot learning systems. This paper describes an
approach to constructing and training a network that can handle arbitrary
example sizes dynamically as the system is used.
| 1 | 0 | 0 | 1 | 0 | 0 |
Face R-CNN | Faster R-CNN is one of the most representative and successful methods for
object detection, and has become increasingly popular in various object detection applications. In this report, we propose a robust deep face
detection approach based on Faster R-CNN. In our approach, we exploit several
new techniques including new multi-task loss function design, online hard
example mining, and multi-scale training strategy to improve Faster R-CNN in
multiple aspects. The proposed approach is well suited for face detection, so
we call it Face R-CNN. Extensive experiments are conducted on two of the most popular and challenging face detection benchmarks, FDDB and WIDER FACE, to demonstrate the superiority of the proposed approach over state-of-the-art methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-Sensor Data Pattern Recognition for Multi-Target Localization: A Machine Learning Approach | Data-target pairing is an important step towards multi-target localization
for the intelligent operation of unmanned systems. Target localization plays a
crucial role in numerous applications, such as search and rescue missions,
traffic management and surveillance. The objective of this paper is to present
an innovative target location learning approach, where numerous machine
learning approaches, including K-means clustering and support vector machines
(SVM), are used to learn the data pattern across a list of spatially
distributed sensors. To enable the accurate data association from different
sensors for accurate target localization, appropriate data pre-processing is
essential, which is then followed by the application of different machine
learning algorithms to appropriately group data from different sensors for the
accurate localization of multiple targets. Through simulation examples, the
performance of these machine learning algorithms is quantified and compared.
| 1 | 0 | 0 | 1 | 0 | 0 |
Calipso: Physics-based Image and Video Editing through CAD Model Proxies | We present Calipso, an interactive method for editing images and videos in a
physically-coherent manner. Our main idea is to realize physics-based
manipulations by running a full physics simulation on proxy geometries given by
non-rigidly aligned CAD models. Running these simulations allows us to apply
new, unseen forces to move or deform selected objects, change physical
parameters such as mass or elasticity, or even add entire new objects that
interact with the rest of the underlying scene. In Calipso, the user makes
edits directly in 3D; these edits are processed by the simulation and then
transferred to the target 2D content using shape-to-image correspondences in a
photo-realistic rendering process. To align the CAD models, we introduce an
efficient CAD-to-image alignment procedure that jointly minimizes for rigid and
non-rigid alignment while preserving the high-level structure of the input
shape. Moreover, the user can choose to exploit image flow to estimate scene
motion, producing coherent physical behavior with ambient dynamics. We
demonstrate Calipso's physics-based editing on a wide range of examples
producing myriad physical behavior while preserving geometric and visual
consistency.
| 1 | 0 | 0 | 0 | 0 | 0 |
Maximal (120,8)-arcs in projective planes of order 16 and related designs | The resolutions and maximal sets of compatible resolutions of all 2-(120,8,1)
designs arising from maximal (120,8)-arcs in the known projective planes of
order 16 are computed. It is shown that each of these designs is embeddable in
a unique way in a projective plane of order 16.
| 0 | 0 | 1 | 0 | 0 | 0 |
First order sentences about random graphs: small number of alternations | The spectrum of a first order sentence is the set of all $\alpha$ such that $G(n, n^{-\alpha})$ does not obey the zero-one law w.r.t. this sentence. We have proved that the minimal number of quantifier alternations of a first order sentence with an infinite spectrum equals 3. We have also proved that the spectrum of a first order sentence with quantifier depth 4 has no limit points except
possibly the points 1/2 and 3/5.
| 0 | 0 | 1 | 0 | 0 | 0 |
New Methods for Metadata Extraction from Scientific Literature | Within the past few decades we have witnessed a digital revolution, which moved
scholarly communication to electronic media and also resulted in a substantial
increase in its volume. Nowadays, keeping track of the latest scientific achievements poses a major challenge for researchers. Scientific
information overload is a severe problem that slows down scholarly
communication and knowledge propagation across the academia. Modern research
infrastructures facilitate studying scientific literature by providing
intelligent search tools, proposing similar and related documents, visualizing
citation and author networks, assessing the quality and impact of the articles,
and so on. In order to provide such high-quality services, the system requires access not only to the text content of stored documents, but also to their
machine-readable metadata. Since in practice good quality metadata is not
always available, there is a strong demand for a reliable automatic method of
extracting machine-readable metadata directly from source documents. This
research addresses these problems by proposing an automatic, accurate and
flexible algorithm for extracting a wide range of metadata directly from
scientific articles in born-digital form. Extracted information includes basic
document metadata, structured full text and the bibliography section. Designed as a universal solution, the proposed algorithm is able to handle a vast variety of publication layouts with high precision and thus is well-suited for analyzing
heterogeneous document collections. This was achieved by employing supervised
and unsupervised machine-learning algorithms trained on large, diverse
datasets. The evaluation we conducted showed good performance of the proposed metadata extraction algorithm. The comparison with other similar solutions also showed that our algorithm performs better than the competition for most metadata types.
| 1 | 0 | 0 | 0 | 0 | 0 |
Unexpected biases in prime factorizations and Liouville functions for arithmetic progressions | We introduce a refinement of the classical Liouville function to primes in
arithmetic progressions. Using this, we discover new biases in the appearances
of primes in a given arithmetic progression in the prime factorizations of
integers. For example, we observe that the primes of the form $4k+1$ tend to
appear an even number of times in the prime factorization of a given integer,
more so than for primes of the form $4k+3$. We are led to consider variants of
Pólya's conjecture, supported by extensive numerical evidence, and its
relation to other conjectures.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nonconvection and uniqueness in Navier-Stokes equation | In the presence of a certain class of functions, we show that there exists a smooth solution to the Navier-Stokes equation. This solution has the property of being nonconvective. We introduce a definition for any possible solution to the problem with minimal assumptions on the existence and regularity of such a solution. Then we prove that the proposed class of functions represents the unique solution to the problem, and consequently we conclude that there exist no convective solutions to the problem in the sense of the given definition.
| 0 | 0 | 1 | 0 | 0 | 0 |
An alternative approach for compatibility of two discrete conditional distributions | Conditional specification of distributions is a developing area with
increasing applications. In the finite discrete case, a variety of compatibility conditions can be derived. In this paper, we propose an alternative approach to study the compatibility of two conditional probability distributions under the finite discrete setup. A technique based on a rank-based criterion is shown to be particularly convenient for identifying compatible distributions corresponding to complete conditional specification, including the case with zeros. The proposed methods are illustrated with several examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
System of unbiased representatives for a collection of bicolorings | Let $\mathcal{B}$ denote a set of bicolorings of $[n]$, where each bicoloring
is a mapping of the points in $[n]$ to $\{-1,+1\}$.
For each $B \in \mathcal{B}$, let $Y_B=(B(1),\ldots,B(n))$.
For each $A \subseteq [n]$, let $X_A \in \{0,1\}^n$ denote the incidence
vector of $A$.
A non-empty set $A$ is said to be an `unbiased representative' for a
bicoloring $B \in \mathcal{B}$ if $\left\langle X_A,Y_B\right\rangle =0$.
Given a set $\mathcal{B}$ of bicolorings, we study the minimum cardinality of
a family $\mathcal{A}$ consisting of subsets of $[n]$ such that every
bicoloring in $\mathcal{B}$ has an unbiased representative in $\mathcal{A}$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stability and instability in saddle point dynamics Part II: The subgradient method | In part I we considered the problem of convergence to a saddle point of a
concave-convex function via gradient dynamics, and an exact characterization of their asymptotic behaviour was given. In part II we consider a general class of
subgradient dynamics that are restricted to an arbitrary convex domain.
We show that despite the nonlinear and non-smooth character of these dynamics
their $\omega$-limit set comprises solutions to only linear ODEs. In
particular, we show that the latter are solutions to subgradient dynamics on
affine subspaces which is a smooth class of dynamics the asymptotic properties
of which have been exactly characterized in part I. Various convergence
criteria are formulated using these results and several examples and
applications are also discussed throughout the manuscript.
| 1 | 0 | 1 | 0 | 0 | 0 |
Refactoring Legacy JavaScript Code to Use Classes: The Good, The Bad and The Ugly | JavaScript systems are becoming increasingly complex and large. To tackle the
challenges involved in implementing these systems, the language is evolving to
include several constructions for programming-in-the-large. For example,
although the language is prototype-based, the latest JavaScript standard, named
ECMAScript 6 (ES6), provides native support for implementing classes. Even
though most modern web browsers support ES6, only a very few applications use
the class syntax. In this paper, we analyze the process of migrating structures
that emulate classes in legacy JavaScript code to adopt the new syntax for
classes introduced by ES6. We apply a set of migration rules on eight legacy
JavaScript systems. In our study, we document: (a) cases that are
straightforward to migrate (the good parts); (b) cases that require manual and
ad-hoc migration (the bad parts); and (c) cases that cannot be migrated due to
limitations and restrictions of ES6 (the ugly parts). Six out of eight systems
(75%) contain instances of bad and/or ugly cases. We also collect the
perceptions of JavaScript developers about migrating their code to use the new
syntax for classes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Parylene-C microfibrous thin films as phononic crystals | Phononic bandgaps of Parylene-C microfibrous thin films (muFTFs) were
computationally determined by treating them as phononic crystals comprising
identical microfibers arranged either on a square or a hexagonal lattice. The
microfibers could be columnar, chevronic, or helical in shape, and the host
medium could be either water or air. All bandgaps were observed to lie in the
0.01-to-162.9-MHz regime, for microfibers of realistically chosen dimensions.
The upper limit of the frequency of bandgaps was the highest for the columnar
muFTF and the lowest for the chiral muFTF. More bandgaps exist when the host
medium is water than air. Complete bandgaps were observed for the columnar
muFTF with microfibers arranged on a hexagonal lattice in air, the chevronic
muFTF with microfibers arranged on a square lattice in water, and the chiral
muFTF with microfibers arranged on a hexagonal lattice in either air or water.
The softness of the Parylene-C muFTFs makes them mechanically tunable, and
their bandgaps can be exploited in multiband ultrasonic filters.
| 0 | 1 | 0 | 0 | 0 | 0 |
A General Theory for Training Learning Machine | Though deep learning is pushing machine learning to a new stage, basic theories of machine learning are still limited. The principle of learning, the role of prior knowledge, the role of the neuron bias, and the basis for choosing the neural transfer function and cost function, etc., are still far from clear. In this paper, we present a general theoretical framework for machine learning. We classify prior knowledge into common and problem-dependent parts, and consider that the aim of learning is to maximally incorporate them. The principle we suggest for maximizing the former is the design risk minimization principle, while the neural transfer function, the cost function, as well as the pretreatment of samples, are given the role of maximizing the latter. The role of the neuron bias is explained from a different angle. We develop a Monte Carlo algorithm to establish the input-output responses, and we control the input-output sensitivity of a learning machine by controlling that of individual neurons. Applications to function approximation and smoothing, pattern recognition, and classification are provided to illustrate how to train general learning machines based on our theory and algorithm. Our method may in addition enable new applications, such as transductive inference.
| 1 | 0 | 0 | 1 | 0 | 0 |
Semiparametric spectral modeling of the Drosophila connectome | We present semiparametric spectral modeling of the complete larval Drosophila
mushroom body connectome. Motivated by a thorough exploratory data analysis of
the network via Gaussian mixture modeling (GMM) in the adjacency spectral
embedding (ASE) representation space, we introduce the latent structure model
(LSM) for network modeling and inference. LSM is a generalization of the
stochastic block model (SBM) and a special case of the random dot product graph
(RDPG) latent position model, and is amenable to semiparametric GMM in the ASE
representation space. The resulting connectome code derived via semiparametric
GMM composed with ASE captures latent connectome structure and elucidates
biologically relevant neuronal properties.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Large Term Rewrite System Modelling a Pioneering Cryptographic Algorithm | We present a term rewrite system that formally models the Message
Authenticator Algorithm (MAA), which was one of the first cryptographic
functions for computing a Message Authentication Code and was adopted, between
1987 and 2001, in international standards (ISO 8730 and ISO 8731-2) to ensure
the authenticity and integrity of banking transactions. Our term rewrite system
is large (13 sorts, 18 constructors, 644 non-constructors, and 684 rewrite
rules), confluent, and terminating. Implementations in thirteen different
languages have been automatically derived from this model and used to validate
200 official test vectors for the MAA.
| 1 | 0 | 0 | 0 | 0 | 0 |
Malleability of complex networks | Most complex networks are not static, but evolve over time. Given a specific
configuration of one such changing network, it becomes a particularly
interesting issue to quantify the diversity of possible unfoldings of its
topology. In this work, we suggest the concept of malleability of a network,
which is defined as the exponential of the entropy of the probabilities of each
possible unfolding with respect to a given configuration. We calculate the
malleability with respect to specific measurements of the involved topologies.
More specifically, we identify the possible topologies derivable from a given
configuration and calculate some topological measurement of them (e.g.
clustering coefficient, shortest path length, assortativity, etc.), leading to
respective probabilities being associated to each possible measurement value.
Though this approach implies some level of degeneracy in the mapping from
topology to measurement space, it still paves the way to inferring the
malleability of specific network types with respect to given topological
measurements. We report that the malleability, in general, depends on each
specific measurement, with the average shortest path length and degree
assortativity typically leading to large malleability values. The maximum
malleability was observed for the Wikipedia network and the minimum for the
Watts-Strogatz model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Identifying Critical Risks of Cascading Failures in Power Systems | Potential critical risks of cascading failures in power systems can be
identified by exposing those critical electrical elements on which certain
initial disturbances may cause maximum disruption to power transmission
networks. In this work, we investigate cascading failures in power systems
described by the direct current (DC) power flow equations, while initial
disturbances take the form of altering the admittance of elements. The disruption is quantified by the remaining transmission power at the end of the cascading
process. In particular, identifying the critical elements and the corresponding
initial disturbances causing the worst-case cascading blackout is formulated as
a dynamic optimization problem (DOP) in the framework of optimal control
theory, where the entire propagation process of cascading failures is put under
consideration. An Identifying Critical Risk Algorithm (ICRA) based on the
maximum principle is proposed to solve the DOP. Simulation results on the IEEE
9-Bus and the IEEE 14-Bus test systems are presented to demonstrate the
effectiveness of the algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Radiation Hardness Test of Eljen EJ-500 Optical Cement | We present a comprehensive account of the proton radiation hardness of Eljen
Technology's EJ-500 optical cement used in the construction of experiment
detectors. The cement was embedded into five plastic scintillator tiles which
were each exposed to one of five different levels of radiation by a 50 MeV
proton beam produced at the 88-Inch Cyclotron at Lawrence Berkeley National
Laboratory. A cosmic ray telescope setup was used to measure signal amplitudes
before and after irradiation. Another post-radiation measurement was taken four
months after the experiment to investigate whether the radiation damage to the
cement recovers after a short amount of time. We verified that the radiation
damage to the tiles increased with increasing dose but showed significant
improvement after the four-month time interval.
| 0 | 1 | 0 | 0 | 0 | 0 |
Triviality of the ground-state metastate in long-range Ising spin glasses in one dimension | We consider the one-dimensional model of a spin glass with independent
Gaussian-distributed random interactions, that have mean zero and variance
$1/|i-j|^{2\sigma}$, between the spins at sites $i$ and $j$ for all $i\neq j$.
It is known that, for $\sigma>1$, there is no phase transition at any non-zero
temperature in this model. We prove rigorously that, for $\sigma>3/2$, any
Newman-Stein metastate for the ground states (i.e.\ the frequencies with which
distinct ground states are observed in finite size samples in the limit of
infinite size, for given disorder) is trivial and unique. In other words, for
given disorder and asymptotically at large sizes, the same ground state, or its
global spin flip, is obtained (almost) always. The proof consists of two parts:
one is a theorem (based on one by Newman and Stein for short-range
two-dimensional models), valid for all $\sigma>1$, that establishes triviality
under a convergence hypothesis on something similar to the energies of domain
walls, and the other (based on older results for the one-dimensional model)
establishes that the hypothesis is true for $\sigma>3/2$. In addition, we
derive heuristic scaling arguments and rigorous exponent inequalities which
tend to support the validity of the hypothesis under broader conditions. The
constructions of various metastates are extended to all values $\sigma>1/2$.
Triviality of the metastate in bond-diluted power-law models for $\sigma>1$ is
proved directly.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thermodynamics of BTZ Black Holes in Gravity's Rainbow | In this paper, we deform the thermodynamics of a BTZ black hole using rainbow functions in gravity's rainbow. The rainbow functions will be motivated by results in loop quantum gravity and noncommutative geometry. It will be observed that the thermodynamics gets deformed due to these rainbow functions, indicating the existence of a remnant. However, the Gibbs free energy does not get deformed due to these rainbow functions, and so the critical behaviour obtained from the Gibbs free energy does not change under this deformation. This is because the deformation in the entropy cancels out the temperature deformation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nondegeneracy and the Jacobi fields of rotationally symmetric solutions to the Cahn-Hilliard equation | In this paper we study rotationally symmetric solutions of the Cahn-Hilliard
equation in $\mathbb R^3$ constructed by the authors. These solutions form a
one-parameter family analogous to the family of Delaunay surfaces, and in fact the
zero level sets of their blowdowns approach these surfaces. Presently we go a
step further and show that their stability properties are inherited from the
stability properties of the Delaunay surfaces. Our main result states that the
rotationally symmetric solutions are nondegenerate and that they have exactly
$6$ Jacobi fields of temperate growth coming from the natural invariances of
the problem (3 translations and 2 rotations) and the variation of the Delaunay
parameter.
| 0 | 0 | 1 | 0 | 0 | 0 |
Quasiflats in hierarchically hyperbolic spaces | The rank of a hierarchically hyperbolic space is the maximal number of
unbounded factors of standard product regions; this coincides with the maximal
dimension of a quasiflat for hierarchically hyperbolic groups. Noteworthy
examples where the rank coincides with familiar quantities include: the
dimension of maximal Dehn twist flats for mapping class groups, the maximal
rank of a free abelian subgroup for right-angled Coxeter groups and
right-angled Artin groups (in the latter this coincides with the clique number
of the defining graph), and, for the Weil-Petersson metric, the rank is half the
complex dimension of Teichmuller space.
We prove that in a HHS, any quasiflat of dimension equal to the rank lies
within finite distance of a union of standard orthants (under a very mild
condition satisfied by all natural examples). This resolves outstanding
conjectures when applied to a number of different groups and spaces. The
mapping class group case resolves a conjecture of Farb, in Teichmuller space
this resolves a question of Brock, and in the context of CAT(0) cubical groups
it strengthens previous results (so as to handle, for example, the right-angled
Coxeter case).
An important ingredient is our proof that the hull of any finite set in an
HHS is quasi-isometric to a cube complex of dimension equal to the rank.
We deduce a number of applications; for instance we show that any
quasi-isometry between HHS induces a quasi-isometry between certain simpler
HHS. This allows one, for example, to distinguish quasi-isometry classes of
right-angled Artin/Coxeter groups.
Another application is that our tools, in many cases, allow one to reduce the
problem of quasi-isometric rigidity for a given HHG to a combinatorial problem.
As a template, we give a new proof of quasi-isometric rigidity of mapping class
groups, using simpler combinatorial arguments than in previous proofs.
| 0 | 0 | 1 | 0 | 0 | 0 |
Learning to Skim Text | Recurrent Neural Networks are showing much promise in many sub-areas of
natural language processing, ranging from document classification to machine
translation to automatic question answering. Despite their promise, many
recurrent models have to read the whole text word by word, making it slow to
handle long documents. For example, it is difficult to use a recurrent network
to read a book and answer questions about it. In this paper, we present an
approach of reading text while skipping irrelevant information if needed. The
underlying model is a recurrent network that learns how far to jump after
reading a few words of the input text. We employ a standard policy gradient
method to train the model to make discrete jumping decisions. In our benchmarks
on four different tasks, including number prediction, sentiment analysis, news
article classification and automatic Q\&A, our proposed model, a modified LSTM
with jumping, is up to 6 times faster than the standard sequential LSTM, while
maintaining the same or even better accuracy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Using Deep Neural Networks to Automate Large Scale Statistical Analysis for Big Data Applications | Statistical analysis (SA) is a complex process to deduce population
properties from analysis of data. It usually takes a well-trained analyst to
successfully perform SA, and it becomes extremely challenging to apply SA to
big data applications. We propose to use deep neural networks to automate the
SA process. In particular, we propose to construct convolutional neural
networks (CNNs) to perform automatic model selection and parameter estimation,
two of the most important SA tasks. We refer to the resulting CNNs as the neural model
selector and the neural model estimator, respectively, which can be properly
trained using labeled data systematically generated from candidate models.
A simulation study shows that both the selector and the estimator demonstrate excellent performance. The idea and proposed framework can be further extended
to automate the entire SA process and have the potential to revolutionize how
SA is performed in big data analytics.
| 1 | 0 | 0 | 1 | 0 | 0 |
Duty to Delete on Non-Volatile Memory | We first suggest a new cache policy that applies a duty to delete invalid cache data on Non-Volatile Memory (NVM). This cache policy includes generating random data and overwriting invalid cache data with the random data. The proposed cache policy is more economical and effective with regard to the perfect deletion of data. It ensures that the invalid cache data in NVM are secure against malicious hackers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Geometry of Log-Concave Density Estimation | Shape-constrained density estimation is an important topic in mathematical
statistics. We focus on densities on $\mathbb{R}^d$ that are log-concave, and
we study geometric properties of the maximum likelihood estimator (MLE) for
weighted samples. Cule, Samworth, and Stewart showed that the logarithm of the
optimal log-concave density is piecewise linear and supported on a regular
subdivision of the samples. This defines a map from the space of weights to the
set of regular subdivisions of the samples, i.e. the face poset of their
secondary polytope. We prove that this map is surjective. In fact, every
regular subdivision arises in the MLE for some set of weights with positive
probability, but coarser subdivisions appear to be more likely to arise than
finer ones. To quantify these results, we introduce a continuous version of the
secondary polytope, whose dual we name the Samworth body. This article
establishes a new link between geometric combinatorics and nonparametric
statistics, and it suggests numerous open problems.
| 0 | 0 | 0 | 1 | 0 | 0 |
Antenna Arrays for Line-of-Sight Massive MIMO: Half Wavelength is not Enough | The aim of this paper is to analyze the array synthesis for 5G massive MIMO
systems in the line-of-sight working condition. The main result of the
numerical investigation performed is that non-uniform arrays are the natural
choice in this kind of application. In particular, by using non-equispaced
arrays, we show that it is possible to achieve a better average condition
number of the channel matrix and a significantly higher spectral efficiency.
Furthermore, we verify that increasing the array size is beneficial also for
circular arrays, and we provide some useful rules-of-thumb for antenna array
design for massive MIMO applications. These results are in contrast to the
widely accepted idea in the 5G massive MIMO literature, in which the
half-wavelength linear uniform array is universally adopted.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gravitational octree code performance evaluation on Volta GPU | In this study, the gravitational octree code originally optimized for the
Fermi, Kepler, and Maxwell GPU architectures is adapted to the Volta
architecture. The Volta architecture introduces independent thread scheduling, requiring either the insertion of explicit synchronizations at appropriate locations or the enforcement of the same implicit synchronizations as in the
Pascal or earlier architectures by specifying \texttt{-gencode
arch=compute\_60,code=sm\_70}. The performance measurements on Tesla V100, the
current flagship GPU by NVIDIA, revealed that the $N$-body simulations of the
Andromeda galaxy model with $2^{23} = 8388608$ particles took $3.8 \times
10^{-2}$~s or $3.3 \times 10^{-2}$~s per step for each case. Tesla V100
achieves a 1.4 to 2.2-fold acceleration in comparison with Tesla P100, the
flagship GPU in the previous generation. The observed speed-up of 2.2 is
greater than 1.5, which is the ratio of the theoretical peak performance of the
two GPUs. The independence of the units for integer operations from those for
floating-point number operations enables the overlapped execution of integer
and floating-point number operations. This hides the execution time of the integer operations, leading to a speed-up rate above the theoretical peak
performance ratio. Tesla V100 can execute an $N$-body simulation with up to $25
\times 2^{20} = 26214400$ particles, and it took $2.0 \times 10^{-1}$~s per
step. It corresponds to $3.5$~TFlop/s, which is 22\% of the single-precision
theoretical peak performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Instanton bundles on the flag variety F(0,1,2) | Instanton bundles on $\mathbb{P}^3$ have been at the core of research in
Algebraic Geometry during the last thirty years. Motivated by the recent
extension of their definition to other Fano threefolds of Picard number one, we
develop the theory of instanton bundles on the complete flag variety
$F:=F(0,1,2)$ of point-lines on $\mathbb{P}^2$. After giving for them two
different monadic presentations, we use them to show that the moduli space
$MI_F(k)$ of instanton bundles of charge $k$ is a geometric GIT quotient with a
generically smooth component of dimension $8k-3$. Finally, we study their locus of
jumping conics.
| 0 | 0 | 1 | 0 | 0 | 0 |
Non-linear motor control by local learning in spiking neural networks | Learning weights in a spiking neural network with hidden neurons, using
local, stable and online rules, to control non-linear body dynamics is an open
problem. Here, we employ a supervised scheme, Feedback-based Online Local
Learning Of Weights (FOLLOW), to train a network of heterogeneous spiking
neurons with hidden layers, to control a two-link arm so as to reproduce a
desired state trajectory. The network first learns an inverse model of the
non-linear dynamics, i.e. from state trajectory as input to the network, it
learns to infer the continuous-time command that produced the trajectory.
Connection weights are adjusted via a local plasticity rule that involves
pre-synaptic firing and post-synaptic feedback of the error in the inferred
command. We choose a network architecture, termed differential feedforward,
that gives the lowest test error from different feedforward and recurrent
architectures. The learned inverse model is then used to generate a
continuous-time motor command to control the arm, given a desired trajectory.
| 1 | 0 | 0 | 1 | 0 | 0 |
Training Deep Neural Networks via Optimization Over Graphs | In this work, we propose to train a deep neural network by distributed
optimization over a graph. Two nonlinear functions are considered: the
rectified linear unit (ReLU) and a linear unit with both lower and upper
cutoffs (DCutLU). The problem reformulation over a graph is realized by
explicitly representing ReLU or DCutLU using a set of slack variables. We then
apply the alternating direction method of multipliers (ADMM) to update the
weights of the network layerwise by solving subproblems of the reformulated
problem. Empirical results suggest that the ADMM-based method is less sensitive
to overfitting than the stochastic gradient descent (SGD) and Adam methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Calculation of Effective Interaction Potential During Positron Channeling in Ionic Crystals | An analytical expression is obtained for the effective interaction potential of a fast charged particle with the ionic crystal CsCl near the direction of the <100> axis as a function of the temperature of the medium. By numerical analysis it is shown that the effective potential of axial channeling of positrons along the <100> axis of negatively charged ions practically does not depend on the temperature of the medium.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Follower Count Fallacy: Detecting Twitter Users with Manipulated Follower Count | Online Social Networks (OSN) are increasingly being used as platform for an
effective communication, to engage with other users, and to create a social
worth via the number of likes, followers, and shares. Such metrics and crowd-sourced
ratings give the OSN user a sense of social reputation which she tries to
maintain and boost to be more influential. Users artificially bolster their
social reputation via black-market web services. In this work, we identify
users who manipulate their projected follower count using an unsupervised
local neighborhood detection method. We identify a neighborhood of the user
based on a robust set of features which reflect user similarity in terms of the
expected follower count. We show that follower count estimation using our
method has 84.2% accuracy with a low error rate. In addition, we estimate the
follower count of the user under suspicion by finding its neighborhood drawn
from a large random sample of Twitter. We show that our method is highly
tolerant to synthetic manipulation of followers. Using the deviation of
predicted follower count from the displayed count, we are also able to detect
customers with a high precision of 98.62%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bergman kernel estimates and Toeplitz operators on holomorphic line bundles | We characterize operator-theoretic properties (boundedness, compactness, and
Schatten class membership) of Toeplitz operators with positive measure symbols
on Bergman spaces of holomorphic hermitian line bundles over Kähler
Cartan-Hadamard manifolds in terms of geometric or operator-theoretic
properties of measures.
| 0 | 0 | 1 | 0 | 0 | 0 |
HST PanCET program: A Cloudy Atmosphere for the promising JWST target WASP-101b | We present results from the first observations of the Hubble Space Telescope
(HST) Panchromatic Comparative Exoplanet Treasury (PanCET) program for
WASP-101b, a highly inflated hot Jupiter and one of the community targets
proposed for the James Webb Space Telescope (JWST) Early Release Science (ERS)
program. From a single HST Wide Field Camera 3 (WFC3) observation, we find that
the near-infrared transmission spectrum of WASP-101b contains no significant
H$_2$O absorption features and we rule out a clear atmosphere at 13{\sigma}.
Therefore, WASP-101b is not an optimum target for a JWST ERS program aimed at
observing strong molecular transmission features. We compare WASP-101b to the
well-studied and nearly identical hot Jupiter WASP-31b. These twin planets show
similar temperature-pressure profiles and atmospheric features in the
near-infrared. We suggest that exoplanets in the same parameter space as WASP-101b
and WASP-31b will also exhibit cloudy transmission spectral features. For
future HST exoplanet studies, our analysis also suggests that a lower count
limit needs to be exceeded per pixel on the detector in order to avoid unwanted
instrumental systematics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantum gauge symmetry of reducible gauge theory | We derive the gaugeon formalism of the Kalb-Ramond field theory, a reducible gauge theory, which describes the quantum gauge freedom. In the gaugeon formalism, the theory admits a quantum gauge symmetry which leaves the action form-invariant. The BRST-symmetric gaugeon formalism is also studied, which introduces the gaugeon ghost fields and gaugeon ghosts-of-ghosts fields. To replace the Yokoyama subsidiary conditions by a single Kugo-Ojima type condition, the virtue of BRST symmetry is utilized. Under generalized BRST transformations, we show that the gaugeon fields appear naturally in the reducible gauge theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Tian Pseudo-Atom Method | In this work, the authors give a new method for phase determination, the Tian pseudo-atom method (TPAM), or pseudo-atom method (PAM) for short. In this new method, the figure-of-merit function, Rtian, replaces Rcf in the charge flipping algorithm. The key difference between Rcf and Rtian is that the observed structure factor is replaced by the pseudo structure factor. The test results show that Rtian is more powerful and robust than Rcf at estimating the correct structure, especially with low-resolution data. Therefore, the pseudo-atom method can overcome the charge flipping method's shortcomings to some extent. In theory, the pseudo-atom method could deal with quite low-resolution data, but this needs further testing.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Phenotypes of Fluctuating Flow: Development of Distribution Networks in Biology and the Trade-off between Efficiency, Cost, and Resilience | Complex distribution networks are pervasive in biology. Examples include
nutrient transport in the slime mold $Physarum$ $polycephalum$ as well as
mammalian and plant venation. Adaptive rules are believed to guide development
of these networks and lead to a reticulate, hierarchically nested topology that
is both efficient and resilient against perturbations. However, as of yet no
mechanism is known that can generate such networks on all scales. We show how
hierarchically organized reticulation can be generated and maintained through
spatially collective load fluctuations on a particular length scale. We
demonstrate that the resulting network topologies represent a trade-off between
optimizing power dissipation, construction cost, and damage robustness and
identify the Pareto-efficient front that evolution is expected to favor and
select for. We show that the typical fluctuation length scale controls the
position of the networks on the Pareto front and thus on the spectrum of
venation phenotypes. We compare the Pareto archetypes predicted by our model
with examples of real leaf networks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Recent Trends in Deep Learning Based Natural Language Processing | Deep learning methods employ multiple processing layers to learn hierarchical
representations of data and have produced state-of-the-art results in many
domains. Recently, a variety of model designs and methods have blossomed in the
context of natural language processing (NLP). In this paper, we review
significant deep learning related models and methods that have been employed
for numerous NLP tasks and provide a walk-through of their evolution. We also
summarize, compare and contrast the various models and put forward a detailed
understanding of the past, present and future of deep learning in NLP.
| 1 | 0 | 0 | 0 | 0 | 0 |
Volatile memory forensics for the Robot Operating System | The increasing impact of robotics on industry and on society will unavoidably
lead to the involvement of robots in incidents and mishaps. In such cases,
forensic analyses are key techniques to provide useful evidence on what
happened, and try to prevent future incidents. This article discusses volatile
memory forensics for the Robot Operating System (ROS). The authors start by
providing a general overview of forensic techniques in robotics and then
present a robotics-specific Volatility plugin named linux_rosnode, packaged
within the ros_volatility project and aimed at extracting evidence from a
robot's volatile memory. They demonstrate how this plugin can be used to detect
a specific attack pattern on ROS, in which a publisher node is unregistered
externally, leading to denial of service and disruption of robotic behaviors.
Common practices for performing forensic analysis are introduced step by step,
and several techniques for capturing memory are described. The authors conclude
with remarks on future work and provide references for reproducing their
results.
| 1 | 0 | 0 | 0 | 0 | 0 |
HYDRA: HYbrid Design for Remote Attestation (Using a Formally Verified Microkernel) | Remote Attestation (RA) allows a trusted entity (verifier) to securely
measure internal state of a remote untrusted hardware platform (prover). RA can
be used to establish a static or dynamic root of trust in embedded and
cyber-physical systems. It can also be used as a building block for other
security services and primitives, such as software updates and patches,
verifiable deletion and memory resetting. There are three major classes of RA
designs: hardware-based, software-based, and hybrid, each with its own set of
benefits and drawbacks. This paper presents the first hybrid RA design, called
HYDRA, that builds upon formally verified software components that ensure
memory isolation and protection, as well as enforce access control to memory
and other resources. HYDRA obtains these properties by using the formally
verified seL4 microkernel. (Until now, this was only attainable with purely
hardware-based designs.) Using seL4 requires fewer hardware modifications to
the underlying microprocessor. Building upon a formally verified software
component increases confidence in security of the overall design of HYDRA and
its implementation. We instantiate HYDRA on two commodity hardware platforms
and assess the performance and overhead of performing RA on such platforms via
experimentation; we show that HYDRA can attest 10MB of memory in less than
500msec when using a Speck-based message authentication code (MAC) to compute a
cryptographic checksum over the memory to be attested.
| 1 | 0 | 0 | 0 | 0 | 0 |
Social Robots for People with Developmental Disabilities: A User Study on Design Features of a Graphical User Interface | Social robots, also known as service or assistant robots, have been developed
to improve the quality of human life in recent years. The design of socially
capable and intelligent robots can vary, depending on the target user groups.
In this work, we assess the effect of social robots' roles, functions, and
communication approaches in the context of a social agent providing service or
entertainment to users with developmental disabilities. In this paper, we
describe an exploratory study of interface design for a social robot that
assists people with developmental disabilities. We developed a series of
prototypes and tested one in a user study that included three residents with
various levels of functioning. The entire study was recorded for subsequent
qualitative data analysis. Results show that each design factor played a
different role in delivering information and in increasing engagement. We also
note that some of the fundamental design principles that would work for
ordinary users did not apply to our target user group. We conclude that social
robots could benefit our target users, and acknowledge that these robots were
not suitable for certain scenarios based on the feedback from our users.
| 1 | 0 | 0 | 0 | 0 | 0 |
Inverse problem for multi-species mean field models in the low temperature phase | In this paper we solve the inverse problem for a class of mean field models
(Curie-Weiss model and its multi-species version) when multiple thermodynamic
states are present, as in the low temperature phase where the phase space is
clustered. The inverse problem consists in reconstructing the model parameters
starting from configuration data generated according to the distribution of the
model. We show that the application of the inversion procedure without taking
into account the presence of many states produces very poor inference results.
This problem is overcome by using a clustering algorithm. When the system has
two symmetric states of positive and negative magnetization, the parameter
reconstruction can also be obtained with less computational effort simply by
flipping the sign of the magnetizations from positive to negative (or
vice versa). The parameter reconstruction fails when the system is critical: in
this case we give the correct inversion formulas for the Curie-Weiss model and
show that they can be used to measure how close the system is to criticality.
| 0 | 1 | 0 | 0 | 0 | 0 |
Inertial Odometry on Handheld Smartphones | Building a complete inertial navigation system using the limited quality data
provided by current smartphones has been regarded as challenging, if not
impossible. This paper shows that by careful crafting and accounting for the
weak information in the sensor samples, smartphones are capable of pure
inertial navigation. We present a probabilistic approach for orientation- and
use-case-free inertial odometry, which is based on double-integrating rotated
accelerations. The strength of the model is in learning additive and
multiplicative IMU biases online. We are able to track the phone position,
velocity, and pose in real-time and in a computationally lightweight fashion by
solving the inference with an extended Kalman filter. The information fusion is
completed with zero-velocity updates (if the phone remains stationary),
altitude correction from barometric pressure readings (if available), and
pseudo-updates constraining the momentary speed. We demonstrate our approach
using an iPad and iPhone in several indoor dead-reckoning applications and in a
measurement tool setup.
| 1 | 0 | 0 | 1 | 0 | 0 |
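The core propagation step described in the abstract above, double-integrating bias-corrected accelerations rotated into the world frame, can be sketched as a toy strapdown dead-reckoning update. This is not the paper's extended Kalman filter; the bias terms, gravity constant, and sample rate are illustrative assumptions.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed world-frame gravity (m/s^2)

def propagate(p, v, R, f_imu, dt, bias_add=np.zeros(3), bias_mul=np.ones(3)):
    """One strapdown dead-reckoning step.

    p, v  : position and velocity in the world frame
    R     : 3x3 rotation matrix (body -> world), assumed known here
    f_imu : raw accelerometer specific-force sample in the body frame
    The additive/multiplicative biases stand in for the quantities the
    paper estimates online; here they are fixed, illustrative values.
    """
    f_corr = bias_mul * f_imu - bias_add          # bias-corrected specific force
    a_world = R @ f_corr + GRAVITY                # world-frame acceleration
    p_next = p + v * dt + 0.5 * a_world * dt**2   # double integration (position)
    v_next = v + a_world * dt                     # single integration (velocity)
    return p_next, v_next

# Toy usage: phone held level and stationary -> accelerometer reads +9.81 on z.
p, v, R = np.zeros(3), np.zeros(3), np.eye(3)
for _ in range(100):                              # 1 s of samples at 100 Hz
    p, v = propagate(p, v, R, np.array([0.0, 0.0, 9.81]), dt=0.01)
print(p, v)                                       # remains ~zero, as expected
```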
Asymptotic network models of subwavelength metamaterials formed by closely packed photonic and phononic crystals | We demonstrate that photonic and phononic crystals consisting of closely
spaced inclusions constitute a versatile class of subwavelength metamaterials.
Intuitively, the voids and narrow gaps that characterise the crystal form an
interconnected network of Helmholtz-like resonators. We use this intuition to
argue that these continuous photonic (phononic) crystals are in fact
asymptotically equivalent, at low frequencies, to discrete capacitor-inductor
(mass-spring) networks whose lumped parameters we derive explicitly. The
crystals are tantamount to metamaterials as their entire acoustic branch, or
branches when the discrete analogue is polyatomic, is squeezed into a
subwavelength regime where the ratio of wavelength to period scales like the
ratio of period to gap width raised to the power 1/4; at yet larger wavelengths
we accordingly find a comparably large effective refractive index. The fully
analytical dispersion relations predicted by the discrete models yield
dispersion curves that agree with those from finite-element simulations of the
continuous crystals. The insight gained from the network approach is used to
show that, surprisingly, the continuum created by a closely packed hexagonal
lattice of cylinders is represented by a discrete honeycomb lattice. The
analogy is utilised to show that the hexagonal continuum lattice has a
Dirac-point degeneracy that is lifted in a controlled manner by specifying the
area of a symmetry-breaking defect.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Hydrogen Epoch of Reionization Array Dish III: Measuring Chromaticity of Prototype Element with Reflectometry | The experimental efforts to detect the redshifted 21 cm signal from the Epoch
of Reionization (EoR) are limited predominantly by the chromatic instrumental
systematic effect. The delay spectrum methodology for 21 cm power spectrum
measurements brought new attention to the critical impact of an antenna's
chromaticity on the viability of making this measurement. This methodology
established a straightforward relationship between time-domain response of an
instrument and the power spectrum modes accessible to a 21 cm EoR experiment.
We examine the performance of a prototype element of the Hydrogen Epoch of
Reionization Array (HERA), which is currently observing in the Karoo desert,
South Africa. We present a mathematical framework to derive the beam-integrated
frequency response of a HERA prototype element in reception from return loss
measurements between 100 and 200 MHz, and determine the extent of additional
foreground contamination in delay space. The measurement reveals excess
spectral structures in comparison to the simulation studies of the HERA
element. Combined with the HERA data analysis pipeline that incorporates
inverse covariance weighting in optimal quadratic estimation of power spectrum,
we find that, in spite of its departure from the simulated response, the HERA
prototype element satisfies the necessary criteria posed by the foreground
attenuation limits and can potentially measure the power spectrum at spatial
modes as low as $k_{\parallel} > 0.1h$~Mpc$^{-1}$. The work highlights a
straightforward method for directly measuring an instrument response and
assessing its impact on 21 cm EoR power spectrum measurements for future
experiments that will use reflector-type antennas.
| 0 | 1 | 0 | 0 | 0 | 0 |
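The delay-spectrum idea referenced in the abstract above relates a frequency-domain antenna response to a time-domain (delay) response via a Fourier transform. A minimal, generic sketch of that step on hypothetical return-loss (S11) data follows; the window choice, sampling, and toy reflection are illustrative assumptions, not the HERA pipeline.

```python
import numpy as np

# Hypothetical complex return-loss (S11) samples between 100 and 200 MHz.
freqs = np.linspace(100e6, 200e6, 201)                 # Hz
s11 = 0.1 * np.exp(-2j * np.pi * freqs * 60e-9)        # toy reflection delayed by ~60 ns

# Window to control spectral leakage, then FFT from frequency to delay domain.
window = np.blackman(len(freqs))
delay_response = np.fft.fftshift(np.fft.fft(s11 * window))
delays = np.fft.fftshift(np.fft.fftfreq(len(freqs), d=freqs[1] - freqs[0]))  # seconds

peak = delays[np.argmax(np.abs(delay_response))]
print(f"dominant |delay| ~ {abs(peak) * 1e9:.1f} ns")  # recovers roughly 60 ns
```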
Unifying the micro and macro properties of AGN feeding and feedback | We unify the feeding and feedback of supermassive black holes with the global
properties of galaxies, groups, and clusters, by linking for the first time the
physical mechanical efficiency at the horizon and Mpc scale. The macro hot halo
is tightly constrained by the absence of overheating and overcooling as probed
by X-ray data and hydrodynamic simulations ($\varepsilon_{\rm BH} \simeq$
10$^{-3}\, T_{\rm x,7.4}$). The micro flow is shaped by general relativistic
effects tracked by state-of-the-art GR-RMHD simulations ($\varepsilon_\bullet
\simeq$ 0.03). The SMBH properties are tied to the X-ray halo temperature
$T_{\rm x}$, or related cosmic scaling relation (as $L_{\rm x}$). The model is
minimally based on first principles, as conservation of energy and mass
recycling. The inflow occurs via chaotic cold accretion (CCA), the rain of cold
clouds condensing out of the quenched cooling flow and recurrently funneled via
inelastic collisions. Within 100s gravitational radii, the accretion energy is
transformed into ultrafast 10$^4$ km s$^{-1}$ outflows (UFOs) ejecting most of
the inflowing mass. At larger radii the energy-driven outflow entrains
progressively more mass: at roughly kpc scale, the velocities of the
hot/warm/cold outflows are a few 10$^3$, 1000, 500 km s$^{-1}$, with median
mass rates ~10, 100, several 100 M$_\odot$ yr$^{-1}$, respectively. The unified
CCA model is consistent with the observations of nuclear UFOs, and ionized,
neutral, and molecular macro outflows. We provide step-by-step implementation
for subgrid simulations, (semi)analytic works, or observational interpretations
which require self-regulated AGN feedback at coarse scales, avoiding the
a-posteriori fine-tuning of efficiencies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Model-based Iterative Restoration for Binary Document Image Compression with Dictionary Learning | The inherent noise in an observed (e.g., scanned) binary document image
degrades the image quality and harms the compression ratio by breaking pattern
repetition and adding entropy to the document images. In this paper, we
design a cost function in a Bayesian framework with dictionary learning.
Minimizing our cost function produces a restored image which has better quality
than that of the observed noisy image, and a dictionary for representing and
encoding the image. After the restoration, we use this dictionary (from the
same cost function) to encode the restored image following the
symbol-dictionary framework by JBIG2 standard with the lossless mode.
Experimental results with a variety of document images demonstrate that our
method improves the image quality compared with the observed image, and
simultaneously improves the compression ratio. For the test images with
synthetic noise, our method reduces the number of flipped pixels by 48.2% and
improves the compression ratio by 36.36% as compared with the best encoding
methods. For the test images with real noise, our method visually improves the
image quality, and outperforms the cutting-edge method by 28.27% in terms of
the compression ratio.
| 1 | 0 | 0 | 0 | 0 | 0 |
A First Look at Ad Blocking Apps on Google Play | Online advertisers and analytics services (or trackers) constantly track
users' activities as they access web services through browsers or mobile apps.
Numerous tools, such as browser plugins and specialized mobile apps, have been
proposed to limit intrusive advertisements and prevent tracking on desktop
computers and mobile phones. For desktop computing, browser plugins have been
studied extensively for their usability and efficiency issues; however, tools
that block ads and prevent tracking on mobile platforms have received little or
no attention.
In this paper, we present a first look at 97 Android adblocking apps (or
adblockers), extracted from more than 1.5 million apps from Google Play, that
promise to block advertisements and analytics services. Using our data
collection and analysis pipeline for Android adblockers, we reveal the presence
of third-party tracking libraries and of sensitive permissions for critical
resources on users' mobile devices, as well as malware in the source code. We
analyze users' reviews concerning the ineffectiveness of adblockers, i.e.,
their failure to block ads and trackers. We find that a significant fraction of
adblockers do not fulfill their advertised functionality.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nondegeneracy of the traveling lump solution to the $2+1$ Toda lattice | We consider the $2+1$ Toda system \[ \frac{1}{4}\Delta
q_{n}=e^{q_{n-1}-q_{n}}-e^{q_{n}-q_{n+1}}\text{ in }\mathbb{R}^{2},\
n\in\mathbb{Z}. \] It has a traveling wave type solution $\left\{ Q_{n}\right\}
$ satisfying $Q_{n+1}(x,y)=Q_{n}(x+\frac{1}{2\sqrt{2}},y)$, which is explicitly
given by \[ Q_{n}\left( x,y\right) =\ln\frac{\frac{1}{4}+\left(
n-1+2\sqrt{2}x\right) ^{2}+4y^{2}}{\frac{1}{4}+\left( n+2\sqrt{2}x\right)
^{2}+4y^{2}}. \] In this paper we prove that $\{Q_{n}\}$ is nondegenerate.
| 0 | 0 | 1 | 0 | 0 | 0 |
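Because the abstract above quotes the traveling lump solution in closed form, the stated identity $\frac{1}{4}\Delta Q_{n}=e^{Q_{n-1}-Q_{n}}-e^{Q_{n}-Q_{n+1}}$ can be checked numerically at a sample point with finite differences. The grid spacing and test point below are arbitrary choices made for illustration.

```python
import numpy as np

def Q(n, x, y):
    """Explicit lump solution quoted in the abstract."""
    num = 0.25 + (n - 1 + 2 * np.sqrt(2) * x) ** 2 + 4 * y ** 2
    den = 0.25 + (n + 2 * np.sqrt(2) * x) ** 2 + 4 * y ** 2
    return np.log(num / den)

def laplacian(f, x, y, h=1e-3):
    """Second-order central finite-difference Laplacian in (x, y);
    the step h balances truncation against round-off error."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

n, x, y = 0, 0.3, -0.2                       # arbitrary lattice site and test point
lhs = 0.25 * laplacian(lambda a, b: Q(n, a, b), x, y)
rhs = np.exp(Q(n - 1, x, y) - Q(n, x, y)) - np.exp(Q(n, x, y) - Q(n + 1, x, y))
print(lhs, rhs)                              # the two values agree to several decimals
```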
One side continuity of meromorphic mappings between real analytic hypersurfaces | We prove that a meromorphic mapping which sends a piece of a real analytic
strictly pseudoconvex hypersurface in $\cc^2$ to a compact subset of $\cc^N$
that does not contain germs of non-constant complex curves is continuous from
the concave side of the hypersurface. This implies the analytic continuability
along CR-paths of germs of holomorphic mappings from real analytic
hypersurfaces with non-vanishing Levi form to the locally spherical ones in all
dimensions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Generic Camera Attribute Control using Bayesian Optimization | Cameras are the most widely exploited sensor in both robotics and computer
vision communities. Despite their popularity, two dominant attributes (i.e.,
gain and exposure time) have typically been determined empirically, and images
are captured in a very passive manner. In this paper, we present an active and
generic camera attribute control scheme using Bayesian optimization. We extend
our previous work [1] in two respects. First, we propose a method that
jointly controls camera gain and exposure time. Secondly, to speed up the
Bayesian optimization process, we introduce image synthesis using the camera
response function (CRF). These synthesized images allow us to reduce the
image acquisition time during the Bayesian optimization phase, substantially
improving overall control performance. The proposed method is validated in both
an indoor and an outdoor environment where lighting conditions change rapidly.
Supplementary material is available at this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
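The control loop described in the abstract above treats image quality as a black-box function of gain and exposure time and optimizes it with Bayesian optimization. The sketch below uses scikit-optimize's gp_minimize for that loop; the image-quality metric, parameter ranges, and camera interface are hypothetical stand-ins, and the paper's CRF-based image synthesis is not reproduced.

```python
import numpy as np
from skopt import gp_minimize

def capture_image(gain, exposure_ms):
    """Hypothetical camera interface; a toy stand-in for real acquisition."""
    rng = np.random.default_rng(int(gain * 1000 + exposure_ms))
    brightness = gain * exposure_ms / 40.0
    return np.clip(rng.normal(brightness, 0.05, size=(64, 64)), 0.0, 1.0)

def image_quality_cost(params):
    """Negative image quality (here: a toy gradient-magnitude proxy)."""
    gain, exposure_ms = params
    img = capture_image(gain, exposure_ms)
    gy, gx = np.gradient(img)
    return -float(np.mean(np.hypot(gx, gy)))

# Jointly search gain and exposure time within assumed bounds.
result = gp_minimize(
    image_quality_cost,
    dimensions=[(1.0, 16.0), (1.0, 30.0)],   # (gain, exposure in ms), illustrative ranges
    n_calls=25,
    random_state=0,
)
print("best (gain, exposure_ms):", result.x)
```

In a real setup, capture_image would trigger an acquisition with the requested settings (or synthesize one via the CRF), and the cost would be whatever image-quality metric the application requires.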
Far-from-equilibrium energy flow and entanglement entropy | The time evolution of the energy transport triggered in a strongly coupled
system by a temperature gradient is holographically related to the evolution of
an asymptotically AdS black brane. We study the far-from-equilibrium properties
of such a system by using the AdS/CFT correspondence. In particular, we
describe the appearance of a steady state, and study the information flow by
computing the time evolution of the holographic entanglement entropy. Some
universal properties of the quenching process are presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
Algebraic Aspects of Conditional Independence and Graphical Models | This chapter of the forthcoming Handbook of Graphical Models contains an
overview of basic theorems and techniques from algebraic geometry and how they
can be applied to the study of conditional independence and graphical models.
It also introduces binomial ideals and some ideas from real algebraic geometry.
When random variables are discrete or Gaussian, tools from computational
algebraic geometry can be used to understand implications between conditional
independence statements. This is accomplished by computing primary
decompositions of conditional independence ideals. As examples the chapter
presents in detail the graphical model of a four cycle and the intersection
axiom, a certain implication of conditional independence statements. Another
important problem in the area is to determine all constraints on a graphical
model, for example, equations determined by trek separation. The full set of
equality constraints can be determined by computing the model's vanishing
ideal. The chapter illustrates these techniques and ideas with examples from
the literature and provides references for further reading.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Survey of Riccati Equation Results in Negative Imaginary Systems Theory and Quantum Control Theory | This paper presents a survey of some new applications of algebraic Riccati
equations. In particular, the paper surveys some recent results on the use of
algebraic Riccati equations in testing whether a system is negative imaginary
and in synthesizing state feedback controllers which make the closed loop
system negative imaginary. The paper also surveys the use of Riccati equation
methods in the control of quantum linear systems including coherent $H^\infty$
control.
| 1 | 0 | 1 | 0 | 0 | 0 |
The circumstellar disk HD$\,$169142: gas, dust and planets acting in concert? | HD$\,$169142 is an excellent target to investigate signs of planet-disk
interaction due to the previous evidence of gap structures. We performed J-band
(~1.2{\mu}m) polarized intensity imaging of HD169142 with VLT/SPHERE. We
observe polarized scattered light down to 0.16" (~19 au) and find an inner gap
with a significantly reduced scattered light flux. We confirm the previously
detected double ring structure peaking at 0.18" (~21 au) and 0.56" (~66 au),
and marginally detect a faint third gap at 0.70"-0.73" (~82-85 au). We explore
dust evolution models in a disk perturbed by two giant planets, as well as
models with a parameterized dust size distribution. The dust evolution model is
able to reproduce the ring locations and gap widths in polarized intensity, but
fails to reproduce their depths. It, however, gives a good match with the ALMA
dust continuum image at 1.3 mm. Models with a parameterized dust size
distribution better reproduce the gap depth in scattered light, suggesting that
dust filtration at the outer edges of the gaps is less effective. The pile-up
of millimeter grains in a dust trap and the continuous distribution of small
grains throughout the gap likely require a more efficient dust fragmentation
and dust diffusion in the dust trap. Alternatively, turbulence or charging
effects might lead to a reservoir of small grains at the surface layer that is
not affected by the dust growth and fragmentation cycle dominating the dense
disk midplane. The exploration of models shows that extracting planet
properties such as mass from observed gap profiles is highly degenerate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantitative analysis of the influence of keV He ion bombardment on exchange bias layer systems | The mechanism of ion bombardment induced magnetic patterning of exchange bias
layer systems for creating engineered magnetic stray field landscapes is still
unclear. We compare results from vectorial magneto-optic Kerr effect
measurements to a recently proposed model with time dependent rotatable
magnetic anisotropy. Results show a massive reduction of the rotational
magnetic anisotropy compared to all other magnetic anisotropies. We disprove the
assumption of comparable weakening of all magnetic anisotropies and show that
ion bombardment mainly influences smaller grains in the antiferromagnet.
| 0 | 1 | 0 | 0 | 0 | 0 |
Automatic classification of automorphisms of lower-dimensional Lie algebras | We implement two algorithms in MATHEMATICA for classifying automorphisms of
lower-dimensional non-commutative Lie algebras. The first algorithm is a
brute-force approach whereas the second is an evolutionary strategy. These
algorithms are delivered as the MATHEMATICA package cwsAutoClass. In order to
facilitate the application of this package to symmetry Lie algebras of
differential equations, we also provide a package, cwsLieSymTools, for
manipulating finite-dimensional Lie algebras of vector fields. In particular,
this package allows the computations of Lie brackets, structure constants, and
the visualization of commutator tables. Several examples are provided to
illustrate the pertinence of our approach.
| 0 | 0 | 1 | 0 | 0 | 0 |
Information-Theoretic Understanding of Population Risk Improvement with Model Compression | We show that model compression can improve the population risk of a
pre-trained model, by studying the tradeoff between the decrease in the
generalization error and the increase in the empirical risk with model
compression. We first prove that model compression reduces an
information-theoretic bound on the generalization error; this allows for an
interpretation of model compression as a regularization technique to avoid
overfitting. We then characterize the increase in empirical risk with model
compression using rate distortion theory. These results imply that the
population risk could be improved by model compression if the decrease in
generalization error exceeds the increase in empirical risk. We show through a
linear regression example that such a decrease in population risk due to model
compression is indeed possible. Our theoretical results further suggest that
the Hessian-weighted $K$-means clustering compression approach can be improved
by regularizing the distance between the clustering centers. We provide
experiments with neural networks to support our theoretical assertions.
| 1 | 0 | 0 | 1 | 0 | 0 |
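The central trade-off in the abstract above can be written as a simple decomposition; the notation below is ours, introduced only for illustration. Writing the population risk of the compressed model $\hat w_c$ as empirical risk plus generalization error,
\[
L_{\mathrm{pop}}(\hat w_c)=\underbrace{L_{\mathrm{emp}}(\hat w_c)}_{\text{tends to increase under compression}}+\underbrace{\bigl(L_{\mathrm{pop}}(\hat w_c)-L_{\mathrm{emp}}(\hat w_c)\bigr)}_{\text{generalization error, bounded more tightly}},
\]
compression improves on the pre-trained model $\hat w$ whenever
\[
\bigl[\operatorname{gen}(\hat w)-\operatorname{gen}(\hat w_c)\bigr] > \bigl[L_{\mathrm{emp}}(\hat w_c)-L_{\mathrm{emp}}(\hat w)\bigr].
\]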
Document Retrieval for Large Scale Content Analysis using Contextualized Dictionaries | This paper presents a procedure to retrieve subsets of relevant documents
from large text collections for Content Analysis, e.g. in social sciences.
Document retrieval for this purpose needs to take account of the fact that
analysts often cannot describe their research objective with a small set of key
terms, especially when dealing with theoretical or rather abstract research
interests. Instead, it is much easier to define a set of paradigmatic documents
which reflect topics of interest as well as targeted manner of speech. Thus, in
contrast to classic information retrieval tasks we employ manually compiled
collections of reference documents to compose large queries of several hundred
key terms, called dictionaries. We extract dictionaries via Topic Models and
also use co-occurrence data from reference collections. Evaluations show that
the procedure improves retrieval results for this purpose compared to
alternative methods of key term extraction as well as neglecting co-occurrence
data.
| 1 | 0 | 0 | 0 | 0 | 0 |
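The retrieval step described in the abstract above amounts to scoring every document in a large collection against a very large weighted query (the dictionary). A minimal illustration of that scoring step follows; the dictionary weights and documents are invented, and the paper's topic-model and co-occurrence based dictionary extraction is not reproduced.

```python
from collections import Counter

# Hypothetical dictionary: key terms with weights, e.g. extracted from
# reference documents via topic models or co-occurrence statistics.
dictionary = {"minimum": 2.0, "wage": 2.0, "employment": 1.5, "labour": 1.0,
              "union": 1.0, "strike": 0.8}

documents = {
    "doc1": "the minimum wage debate and its effect on employment",
    "doc2": "weather report for the coming week",
    "doc3": "labour union calls for a strike over the minimum wage",
}

def score(text, dictionary):
    """Sum of dictionary weights over term counts, length-normalized."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    return sum(w * counts[t] for t, w in dictionary.items()) / max(len(tokens), 1)

ranked = sorted(documents.items(), key=lambda kv: score(kv[1], dictionary), reverse=True)
for doc_id, text in ranked:
    print(doc_id, round(score(text, dictionary), 3))
```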
Cepheids with the eyes of photometric space telescopes | Space photometric missions have been steadily accumulating observations of
Cepheids in recent years, leading to a flow of new discoveries. In this short
review we summarize the findings provided by the early missions such as WIRE,
MOST, and CoRoT, and the recent results of the Kepler and K2 missions. The
surprising and fascinating results from the high-precision, quasi-continuous
data include the detection of the amplitude increase of Polaris, and exquisite
details about V1154 Cyg within the original Kepler field of view. We also
briefly discuss the current opportunities with the K2 mission, and the
prospects of the TESS space telescope regarding Cepheids.
| 0 | 1 | 0 | 0 | 0 | 0 |
Explicitly correlated formalism for second-order single-particle Green's function | We present an explicitly correlated formalism for the second-order
single-particle Green's function method (GF2-F12) that does not assume the
popular diagonal approximation, and describes the energy dependence of the
explicitly correlated terms. For small and medium organic molecules the basis
set errors of ionization potentials of GF2-F12 are radically improved relative
to GF2: the performance of GF2-F12/aug- cc-pVDZ is better than that of
GF2/aug-cc-pVQZ, at a significantly lower cost.
| 0 | 1 | 0 | 0 | 0 | 0 |
Relative stability associated to quantised extremal Kähler metrics | We study algebro-geometric consequences of the quantised extremal Kähler
metrics, introduced in the previous work of the author. We prove that the
existence of quantised extremal metrics implies weak relative Chow
polystability. As a consequence, we obtain asymptotic weak relative Chow
polystability and $K$-semistability of extremal manifolds by using quantised
extremal metrics; this gives an alternative proof of the results of Mabuchi and
Stoppa--Székelyhidi. In proving them, we further provide an explicit local
density formula for the equivariant Riemann--Roch theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Beyond the EULA: Improving consent for data mining | Companies and academic researchers may collect, process, and distribute large
quantities of personal data without the explicit knowledge or consent of the
individuals to whom the data pertains. Existing forms of consent often fail to
be appropriately readable and ethical oversight of data mining may not be
sufficient. This raises the question of whether existing consent instruments
are sufficient, logistically feasible, or even necessary, for data mining. In
this chapter, we review the data collection and mining landscape, including
commercial and academic activities, and the relevant data protection concerns,
to determine the types of consent instruments used. Using three case studies,
we use the new paradigm of human-data interaction to examine whether these
existing approaches are appropriate. We then introduce an approach to consent
that has been empirically demonstrated to improve on the state of the art and
deliver meaningful consent. Finally, we propose some best practices for data
collectors to ensure their data mining activities do not violate the
expectations of the people to whom the data relate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Estimating a Separably-Markov Random Field (SMuRF) from Binary Observations | A fundamental problem in neuroscience is to characterize the dynamics of
spiking from the neurons in a circuit that is involved in learning about a
stimulus or a contingency. A key limitation of current methods to analyze
neural spiking data is the need to collapse neural activity over time or
trials, which may cause the loss of information pertinent to understanding the
function of a neuron or circuit. We introduce a new method that can determine
not only the trial-to-trial dynamics that accompany the learning of a
contingency by a neuron, but also the latency of this learning with respect to
the onset of a conditioned stimulus. The backbone of the method is a separable
two-dimensional (2D) random field (RF) model of neural spike rasters, in which
the joint conditional intensity function of a neuron over time and trials
depends on two latent Markovian state sequences that evolve separately but in
parallel. Classical tools to estimate state-space models cannot be applied
readily to our 2D separable RF model. We develop efficient statistical and
computational tools to estimate the parameters of the separable 2D RF model. We
apply these to data collected from neurons in the pre-frontal cortex (PFC) in
an experiment designed to characterize the neural underpinnings of the
associative learning of fear in mice. Overall, the separable 2D RF model
provides a detailed, interpretable, characterization of the dynamics of neural
spiking that accompany the learning of a contingency.
| 1 | 0 | 0 | 1 | 0 | 0 |
Using Deep Reinforcement Learning for the Continuous Control of Robotic Arms | Deep reinforcement learning enables algorithms to learn complex behavior,
deal with continuous action spaces and find good strategies in environments
with high dimensional state spaces. With deep reinforcement learning being an
active area of research and many concurrent inventions, we decided to focus on
a relatively simple robotic task to evaluate a set of ideas that might help to
solve recent reinforcement learning problems. We test whether a newly created
combination of two commonly used reinforcement learning methods is able to
learn more effectively than a baseline. We also compare different ideas
to preprocess information before it is fed to the reinforcement learning
algorithm. The goal of this strategy is to reduce training time and eventually
help the algorithm to converge. The concluding evaluation proves the general
applicability of the described concepts by testing them using a simulated
environment. These concepts might be reused for future experiments.
| 1 | 0 | 0 | 0 | 0 | 0 |
The First Detection of Gravitational Waves | This article deals with the first detection of gravitational waves by the
advanced Laser Interferometer Gravitational Wave Observatory (LIGO) detectors
on 14 September 2015, where the signal was generated by two stellar mass black
holes with masses 36 $ M_{\odot}$ and 29 $ M_{\odot}$ that merged to form a 62
$ M_{\odot}$ black hole, releasing 3 $M_{\odot}$ of energy in gravitational waves,
almost 1.3 billion years ago. We begin by providing a brief overview of
gravitational waves, their sources and the gravitational wave detectors. We
then describe in detail the first detection of gravitational waves from a
binary black hole merger. We then comment on the electromagnetic follow-up of
the detection event with various telescopes. Finally, we conclude with a
discussion of the tests of gravity and fundamental physics enabled by the first
gravitational wave detection event.
| 0 | 1 | 0 | 0 | 0 | 0 |
Remarkably strong chemisorption of nitric oxide on insulating oxide films promoted by hybrid structure | The remarkably strong chemical adsorption of nitric oxide on a magnesia (001)
film deposited on a metal substrate has been investigated by employing periodic
density functional calculations with van der Waals corrections. The
molybdenum-supported magnesia (001) film shows significantly enhanced
adsorption properties: nitric oxide is chemisorbed strongly and preferentially
trapped in a flat adsorption configuration on the metal-supported oxide film,
owing to the substantially large adsorption energies and transformation
barriers. Analyses of Bader charges, projected densities of states,
differential charge densities, the electron localization function, the highest
occupied orbital, and the particular orbital with the largest Mg-NO-Mg bonding
coefficients are applied to reveal the electronic adsorption properties and the
characteristics of bonding between nitric oxide and the surface, as well as the
bonding within the hybrid structure. The strong chemical binding of nitric
oxide on magnesia deposited on a molybdenum slab offers new opportunities for
toxic gas detection and treatment. We anticipate that the
hybrid-structure-promoted chemical adsorption of nitric oxide on magnesia
reported in this study will provide a versatile strategy for enhancing the
chemical reactivity and properties of insulating oxides.
| 0 | 1 | 0 | 0 | 0 | 0 |
Knowledge Transfer from Weakly Labeled Audio using Convolutional Neural Network for Sound Events and Scenes | In this work we propose approaches to effectively transfer knowledge from
weakly labeled web audio data. We first describe a convolutional neural network
(CNN) based framework for sound event detection and classification using weakly
labeled audio data. Our model trains efficiently from audio recordings of
variable length; hence, it is well suited for transfer learning. We then
propose methods to learn representations using this model which can be
effectively used for solving the target task. We study both transductive and
inductive transfer learning tasks, showing the effectiveness of our methods for
both domain and task adaptation. We show that the representations learned using
the proposed CNN model generalize well enough to reach human-level accuracy on
the ESC-50 sound events dataset and set state-of-the-art results on this
dataset. We further use them for the acoustic scene classification task and
once again show that our proposed approaches are well suited to this task. We
also show that our methods are helpful in capturing semantic meanings and
relations. Moreover, in this process we also set state-of-the-art results on
the AudioSet dataset, relying on the balanced training set.
| 1 | 0 | 0 | 0 | 0 | 0 |
A priori estimates for the free-boundary Euler equations with surface tension in three dimensions | We derive a priori estimates for the incompressible free-boundary Euler
equations with surface tension in three spatial dimensions. Working in
Lagrangian coordinates, we provide a priori estimates for the local existence
when the initial velocity, which is rotational, belongs to $H^3$ and the trace
of initial velocity on the free boundary to $H^{3.5}$, thus lowering the
requirement on the regularity of initial data in the Lagrangian setting. Our
methods are direct and involve three key elements: estimates for the pressure,
the boundary regularity provided by the mean curvature, and the Cauchy
invariance.
| 0 | 0 | 1 | 0 | 0 | 0 |
Koszul duality via suspending Lefschetz fibrations | Let $M$ be a Liouville 6-manifold which is the smooth fiber of a Lefschetz
fibration on $\mathbb{C}^4$ constructed by suspending a Lefschetz fibration on
$\mathbb{C}^3$. We prove that for many examples including stabilizations of
Milnor fibers of hypersurface cusp singularities, the compact Fukaya category
$\mathcal{F}(M)$ and the wrapped Fukaya category $\mathcal{W}(M)$ are related
through $A_\infty$-Koszul duality, by identifying them with cyclic and
Calabi-Yau completions of the same quiver algebra. This implies the
split-generation of the compact Fukaya category $\mathcal{F}(M)$ by vanishing
cycles. Moreover, new examples of Liouville manifolds which admit
quasi-dilations in the sense of Seidel-Solomon are obtained.
| 0 | 0 | 1 | 0 | 0 | 0 |
Attractor of Cantor Type with Positive Measure | We construct an iterated function system consisting of strictly increasing
contractions $f,g\colon [0,1]\to [0,1]$ with $f([0,1])\cap g([0,1])=\emptyset$
and such that its attractor has positive Lebesgue measure.
| 0 | 0 | 1 | 0 | 0 | 0 |
Generating online social networks based on socio-demographic attributes | Recent years have seen tremendous growth of many online social networks such
as Facebook, LinkedIn and MySpace. People connect to each other through these
networks, forming large social communities and providing researchers with rich datasets
to understand, model and predict social interactions and behaviors. New
contacts in these networks can be formed due to an individual's demographic
attributes such as age group, gender, geographic location, or due to a
network's structural dynamics such as triadic closure and preferential
attachment, or a combination of both demographic and structural
characteristics.
A number of network generation models have been proposed in the last decade
to explain the structure, evolution and processes taking place in different
types of networks, and notably social networks. Network generation models
studied in the literature primarily consider structural properties, and in some
cases an individual's demographic profile in the formation of new social
contacts. These models do not present a mechanism to combine both structural
and demographic characteristics for the formation of new links. In this paper,
we propose a new network generation algorithm which incorporates both these
characteristics to model network formation. We use different publicly available
Facebook datasets as benchmarks to demonstrate the correctness of the proposed
network generation model. The proposed model is flexible and thus can generate
networks with varying demographic and structural properties.
| 1 | 1 | 0 | 0 | 0 | 0 |
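As an illustration of the kind of mechanism the abstract above describes, the toy generator below combines a structural rule (preferential attachment) with a demographic rule (attribute homophily) in a single link probability. The mixing weight, attributes, and parameters are arbitrary choices, not the paper's calibrated model.

```python
import random

def generate_network(n, m=2, alpha=0.5, seed=0):
    """Grow a graph in which each new node attaches to m existing nodes with
    probability proportional to alpha * degree share + (1 - alpha) * attribute similarity."""
    rng = random.Random(seed)
    attrs = {i: {"age_group": rng.choice(["young", "adult", "senior"]),
                 "gender": rng.choice(["f", "m"])} for i in range(n)}
    edges, degree = [], {i: 0 for i in range(n)}

    # Start from a small seed clique so degrees are non-zero.
    for i in range(m + 1):
        for j in range(i):
            edges.append((i, j)); degree[i] += 1; degree[j] += 1

    for new in range(m + 1, n):
        existing = list(range(new))
        total_deg = sum(degree[i] for i in existing)
        weights = []
        for i in existing:
            structural = degree[i] / total_deg
            similarity = sum(attrs[new][k] == attrs[i][k] for k in attrs[new]) / len(attrs[new])
            weights.append(alpha * structural + (1 - alpha) * similarity)
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(existing, weights=weights, k=1)[0])
        for t in targets:
            edges.append((new, t)); degree[new] += 1; degree[t] += 1
    return edges, attrs

edges, attrs = generate_network(200)
print(len(edges), "edges generated")
```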
Optimal control of diffuser shapes for confined turbulent shear flows | A model for the development of turbulent shear flows, created by non-uniform
parallel flows in a confining channel, is used to identify the diffuser shape
that maximises pressure recovery when the inflow is non-uniform. Wide diffuser
angles tend to accentuate the non-uniform flow, causing poor pressure
recovery. On the other hand, shallow diffuser angles create longer regions with
large wall drag, which is also detrimental to pressure recovery. Thus, optimal
diffuser shapes strike a balance between the two effects. We use a simple model
which describes the evolution of an approximate flow profile and pressure in
the diffuser. The model equations form the dynamics of an optimal control
problem where the control is the diffuser channel shape. A numerical
optimisation approach is used to solve the optimal control problem and we use
analytical results to interpret the numerics in some limiting cases. The
results of the optimisation are compared to calculations from computational
fluid dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Criteria for Solar Car Optimized Route Estimation | This paper gives a thorough overview of Solar Car Optimized Route Estimation
(SCORE), a novel route optimization scheme for solar vehicles based on solar
irradiance and target distance. In order to conduct the optimization, both data
collection and the optimization algorithm itself have to be performed using
appropriate hardware. Here we give insight into both stages and the hardware
and software used, and we present some results of the SCORE system together
with certain improvements to its fusion and optimization criteria. The results
and the limited applicability of SCORE are discussed, together with an overview
of future research plans and a comparison with state-of-the-art solar vehicle
optimization solutions.
| 1 | 0 | 1 | 0 | 0 | 0 |
Dykes for filtering ocean waves using c-shaped vertical cylinders | The present study investigates a way to design dykes which can filter the
wavelengths of ocean surface waves. This offers the possibility to achieve a
structure that can attenuate waves associated with storm swell, without
affecting coastline in other conditions. Our approach is based on low frequency
resonances in metamaterials combined with Bragg frequencies for which waves
cannot propagate in periodic lattices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Origin of Non-axisymmetric Features of Virgo Cluster Early-type Dwarf Galaxies. I. Bar Formation and Recurrent Buckling | A fraction of early-type dwarf galaxies in the Virgo cluster have a disk
component and even possess disk features such as bar, lens, and spiral arms. In
this study, we construct 15 galaxy models that resemble VCC856, considered to
be an infalling progenitor of disk dwarf galaxies, within observational error
ranges, and use $N$-body simulations to study their long-term dynamical
evolution in isolation as well as the formation of bar in them. We find that
dwarf disk galaxies readily form bars unless they have an excessively
concentrated halo or a hot disk. This suggests that infalling dwarf disk
galaxies are intrinsically unstable to bar formation, even without any external
perturbation, accounting for a population of barred dwarf galaxies in the
outskirts of the Virgo cluster. The bars form earlier and stronger in galaxies
with a lower fraction of counter-streaming motions, lower halo concentration,
lower velocity anisotropy, and thinner disk. Similarly to normal disk galaxies,
dwarf disk galaxies also undergo recurrent buckling instabilities. The first
buckling instability tends to shorten the bar and to thicken the disk, and
drives a dynamical transition in the bar pattern speed as well as mass inflow
rate. In nine models, the bars regrow after the mild first buckling instability
due to the efficient transfer of disk angular momentum to the halo, and are
subject to recurrent buckling instabilities to turn into X-shaped bulges.
| 0 | 1 | 0 | 0 | 0 | 0 |
First results from the DEAP-3600 dark matter search with argon at SNOLAB | This paper reports the first results of a direct dark matter search with the
DEAP-3600 single-phase liquid argon (LAr) detector. The experiment was
performed 2 km underground at SNOLAB (Sudbury, Canada) utilizing a large target
mass, with the LAr target contained in a spherical acrylic vessel of 3600 kg
capacity. The LAr is viewed by an array of PMTs, which would register
scintillation light produced by rare nuclear recoil signals induced by dark
matter particle scattering. An analysis of 4.44 live days (fiducial exposure of
9.87 tonne-days) of data taken with the nearly full detector during the initial
filling phase demonstrates the detector performance and the best electronic
recoil rejection using pulse-shape discrimination in argon, with leakage
$<1.2\times 10^{-7}$ (90% C.L.) between 16 and 33 keV$_{ee}$. No candidate
signal events are observed, which results in the leading limit on WIMP-nucleon
spin-independent cross section on argon, $<1.2\times 10^{-44}$ cm$^2$ for a 100
GeV/c$^2$ WIMP mass (90% C.L.).
| 0 | 1 | 0 | 0 | 0 | 0 |
Camera Calibration by Global Constraints on the Motion of Silhouettes | We address the problem of epipolar geometry using the motion of silhouettes.
Such methods match epipolar lines or frontier points across views, which are
then used as the set of putative correspondences. We introduce an approach that
improves by two orders of magnitude the performance over state-of-the-art
methods, by significantly reducing the number of outliers in the putative
matching. We model the frontier points' correspondence problem as constrained
flow optimization, requiring small differences between their coordinates over
consecutive frames. Our approach is formulated as a Linear Integer Program and
we show that due to the nature of our problem, it can be solved efficiently in
an iterative manner. Our method was validated on four standard datasets
providing accurate calibrations across very different viewpoints.
| 1 | 0 | 0 | 0 | 0 | 0 |
Ann: A domain-specific language for the effective design and validation of Java annotations | This paper describes a new modelling language for the effective design and
validation of Java annotations. Since their inclusion in the 5th edition of
Java, annotations have grown from a useful tool for the addition of meta-data
to play a central role in many popular software projects. Usually they are not
conceived in isolation, but in groups, with dependency and integrity
constraints between them. However, the native support provided by Java for
expressing this design is very limited.
To overcome its deficiencies and make explicit the rich conceptual model
which lies behind a set of annotations, we propose a domain-specific modelling
language. The proposal has been implemented as an Eclipse plug-in, including an
editor and an integrated code generator that synthesises annotation processors.
The environment also integrates a model finder, able to detect unsatisfiable
constraints between different annotations, and to provide examples of correct
annotation usages for validation. The language has been tested using a real set
of annotations from the Java Persistence API (JPA). Within this subset we have
found rich semantics that are expressible with Ann but currently omitted by the
Java language, which shows the benefits of Ann in a relevant field of
application.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fréchet ChemNet Distance: A metric for generative models for molecules in drug discovery | The new wave of successful generative models in machine learning has
increased the interest in deep learning driven de novo drug design. However,
assessing the performance of such generative models is notoriously difficult.
Metrics that are typically used to assess the performance of such generative
models are the percentage of chemically valid molecules or the similarity to
real molecules in terms of particular descriptors, such as the partition
coefficient (logP) or druglikeness. However, method comparison is difficult
because of the inconsistent use of evaluation metrics, the necessity for
multiple metrics, and the fact that some of these measures can easily be
tricked by simple rule-based systems. We propose a novel distance measure
between two sets of molecules, called Fréchet ChemNet distance (FCD), that
can be used as an evaluation metric for generative models. The FCD is similar
to a recently established performance metric for comparing image generation
methods, the Fréchet Inception Distance (FID). Whereas the FID uses one of
the hidden layers of InceptionNet, the FCD utilizes the penultimate layer of a
deep neural network called ChemNet, which was trained to predict drug
activities. Thus, the FCD metric takes into account chemically and biologically
relevant information about molecules, and also measures the diversity of the
set via the distribution of generated molecules. The FCD's advantage over
previous metrics is that it can detect whether generated molecules are a)
diverse and have b) chemical and c) biological properties similar to those of
real molecules. We
further provide an easy-to-use implementation that only requires the SMILES
representation of the generated molecules as input to calculate the FCD.
Implementations are available at: this https URL
| 0 | 0 | 0 | 1 | 1 | 0 |
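The Fréchet distance underlying both FID and FCD has a closed form between two Gaussians fitted to activation statistics. A generic sketch of that final computation follows; the "activations" here are random placeholders, and obtaining real ChemNet activations from SMILES is not shown.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2})."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(cov1 @ cov2, disp=False)
    if np.iscomplexobj(covmean):          # small imaginary parts can appear numerically
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# Placeholder "penultimate-layer activations" for real and generated molecules.
rng = np.random.default_rng(0)
act_real = rng.normal(0.0, 1.0, size=(500, 16))
act_gen  = rng.normal(0.2, 1.1, size=(500, 16))

mu_r, cov_r = act_real.mean(0), np.cov(act_real, rowvar=False)
mu_g, cov_g = act_gen.mean(0),  np.cov(act_gen,  rowvar=False)
print("FCD-style distance:", frechet_distance(mu_r, cov_r, mu_g, cov_g))
```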
RANSAC Algorithms for Subspace Recovery and Subspace Clustering | We consider the RANSAC algorithm in the context of subspace recovery and
subspace clustering. We derive some theory and perform some numerical
experiments. We also draw some correspondences with the methods of Hardt and
Moitra (2013) and Chen and Lerman (2009b).
| 0 | 0 | 1 | 1 | 0 | 0 |
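A minimal sketch of RANSAC specialized to linear subspace recovery, in the spirit of the abstract above; the threshold, dimensions, and synthetic data are illustrative choices, not the authors' exact variant.

```python
import numpy as np

def ransac_subspace(X, d, n_iters=200, threshold=0.1, seed=0):
    """Recover a d-dimensional linear subspace from points X (rows) that are
    a mix of inliers near the subspace and arbitrary outliers."""
    rng = np.random.default_rng(seed)
    best_basis, best_inliers = None, -1
    for _ in range(n_iters):
        sample = X[rng.choice(len(X), size=d, replace=False)]
        basis, _ = np.linalg.qr(sample.T)          # orthonormal basis of the sampled span
        residuals = np.linalg.norm(X - (X @ basis) @ basis.T, axis=1)
        inliers = int(np.sum(residuals < threshold))
        if inliers > best_inliers:
            best_basis, best_inliers = basis, inliers
    return best_basis, best_inliers

# Synthetic test: a 2-dimensional subspace in R^5 plus outliers.
rng = np.random.default_rng(1)
B = np.linalg.qr(rng.normal(size=(5, 2)))[0]          # ground-truth basis
inliers = rng.normal(size=(150, 2)) @ B.T + 0.01 * rng.normal(size=(150, 5))
outliers = rng.uniform(-3, 3, size=(50, 5))
X = np.vstack([inliers, outliers])

basis, count = ransac_subspace(X, d=2)
print("inliers found:", count, "of", len(X))
```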
Convex Formulations for Fair Principal Component Analysis | Though there is a growing body of literature on fairness for supervised
learning, the problem of incorporating fairness into unsupervised learning has
been less well-studied. This paper studies fairness in the context of principal
component analysis (PCA). We first present a definition of fairness for
dimensionality reduction, and our definition can be interpreted as saying that
a reduction is fair if information about a protected class (e.g., race or
gender) cannot be inferred from the dimensionality-reduced data points. Next,
we develop convex optimization formulations that can improve the fairness (with
respect to our definition) of PCA and kernel PCA. These formulations are
semidefinite programs (SDPs), and we demonstrate the effectiveness of our
formulations using several datasets. We conclude by showing how our approach
can be used to perform a fair (with respect to age) clustering of health data
that may be used to set health insurance rates.
| 0 | 0 | 0 | 1 | 0 | 0 |
Report: Dynamic Eye Movement Matching and Visualization Tool in Neuro Gesture | In research on the impact of gestures used by a lecturer, one
challenging task is to infer the attention of a group of audience members. Two
important measurements that can help infer the level of attention are eye
movement data and Electroencephalography (EEG) data. Under the fundamental
assumption that a group of people would look at the same place if they all pay
attention at the same time, we apply a method, "Time Warp Edit Distance", to
calculate the similarity of their eye movement trajectories. Moreover, we also
cluster the eye movement patterns of audience members based on these pairwise
similarity metrics. In addition, since we do not have a direct metric for the
"attention" ground truth, a visual assessment is beneficial for evaluating the
gesture-attention relationship; we therefore also implement a visualization tool.
| 1 | 0 | 0 | 0 | 0 | 0 |
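The pipeline in the abstract above, pairwise trajectory distances followed by clustering of viewers, can be sketched generically. Below, a simple mean pointwise Euclidean distance stands in for Time Warp Edit Distance (which is not implemented here), and the gaze trajectories are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def trajectory_distance(a, b):
    """Placeholder distance between two equal-length gaze trajectories
    (stand-in for Time Warp Edit Distance)."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

# Synthetic gaze trajectories for 6 viewers: two groups looking at
# different screen regions, 50 samples of (x, y) each.
rng = np.random.default_rng(0)
group_a = [rng.normal([0.3, 0.5], 0.02, size=(50, 2)) for _ in range(3)]
group_b = [rng.normal([0.7, 0.4], 0.02, size=(50, 2)) for _ in range(3)]
trajs = group_a + group_b

n = len(trajs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = trajectory_distance(trajs[i], trajs[j])

# Hierarchical clustering on the pairwise distance matrix.
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)   # viewers 0-2 and 3-5 should fall into two clusters
```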