ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance |
---|---|---|---|---|---|---|---|---|
19,901 | The intrinsic stable normal cone | We construct an analog of the intrinsic normal cone of Behrend-Fantechi in
the equivariant motivic stable homotopy category over a base-scheme B and
construct a fundamental class in E-cohomology for any cohomology theory E in
SH(B). For affine B, a perfect obstruction theory gives rise to a virtual
fundamental class in a twisted Borel-Moore E-homology for arbitrary E. This
includes motivic cohomology, (homotopy-invariant) K-theory, algebraic cobordism,
and the oriented Chow groups of Barge-Morel and Fasel. In the case of motivic
cohomology, we recover the constructions of Behrend-Fantechi, with values in
the Chow group.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,902 | Globular cluster formation with multiple stellar populations from hierarchical star cluster complexes | Most old globular clusters (GCs) in the Galaxy are observed to have internal
chemical abundance spreads in light elements. We discuss a new GC formation
scenario based on hierarchical star formation within fractal molecular clouds.
In the new scenario, a cluster of bound and unbound star clusters (`star
cluster complex', SCC) that have a power-law cluster mass function with a slope
(beta) of 2 is first formed from a massive gas clump developed in a dwarf
galaxy. Such cluster complexes and beta=2 are observed and expected from
hierarchical star formation. The most massive star cluster (`main cluster'),
which is the progenitor of a GC, can accrete gas ejected from asymptotic giant
branch (AGB) stars initially in the cluster and other low-mass clusters before
the clusters are tidally stripped or destroyed to become field stars in the
dwarf. The SCC is initially embedded in a giant gas hole created by numerous
supernovae of the SCC so that cold gas outside the hole can be accreted onto
the main cluster later. New stars formed from the accreted gas have chemical
abundances that are different from those of the original SCC. Using
hydrodynamical simulations of GC formation based on this scenario, we show that
the main cluster with an initial mass as large as [2-5]x10^5 Msun can accrete
more than 10^5 Msun of gas from AGB stars of the SCC. We suggest that merging of
hierarchical star cluster complexes can play key roles in stellar halo
formation around GCs and self-enrichment processes of GCs.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,903 | Multiple Source Domain Adaptation with Adversarial Training of Neural Networks | While domain adaptation has been actively researched in recent years, most
theoretical results and algorithms focus on the single-source-single-target
adaptation setting. Naive application of such algorithms to the multiple-source
domain adaptation problem may lead to suboptimal solutions. As a step toward
bridging the gap, we propose a new generalization bound for domain adaptation
when there are multiple source domains with labeled instances and one target
domain with unlabeled instances. Compared with existing bounds, the new bound
does not require expert knowledge about the target distribution, nor the
optimal combination rule for multisource domains. Interestingly, our theory
also leads to an efficient learning strategy using adversarial neural networks:
we show how to interpret it as learning feature representations that are
invariant to the multiple domain shifts while still being discriminative for
the learning task. To this end, we propose two models, both of which we call
multisource domain adversarial networks (MDANs): the first model optimizes
directly our bound, while the second model is a smoothed approximation of the
first one, leading to a more data-efficient and task-adaptive model. The
optimization tasks of both models are minimax saddle point problems that can be
optimized by adversarial training. To demonstrate the effectiveness of MDANs,
we conduct extensive experiments showing superior adaptation performance on
three real-world datasets: sentiment analysis, digit classification, and
vehicle counting.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,904 | OSSOS: V. Diffusion in the orbit of a high-perihelion distant Solar System object | We report the discovery of the minor planet 2013 SY$_{99}$, on an
exceptionally distant, highly eccentric orbit. With a perihelion of 50.0 au,
2013 SY$_{99}$'s orbit has a semi-major axis of $730 \pm 40$ au, the largest
known for a high-perihelion trans-Neptunian object (TNO), well beyond those of
(90377) Sedna and 2012 VP$_{113}$. Yet, with an aphelion of $1420 \pm 90$ au,
2013 SY$_{99}$'s orbit is interior to the region influenced by Galactic tides.
Such TNOs are not thought to be produced in the current known planetary
architecture of the Solar System, and they have informed the recent debate on
the existence of a distant giant planet. Photometry from the
Canada-France-Hawaii Telescope, Gemini North and Subaru indicates 2013 SY$_{99}$
is $\sim 250$ km in diameter and moderately red in colour, similar to other
dynamically excited TNOs. Our dynamical simulations show that Neptune's weak
influence during 2013 SY$_{99}$'s perihelion passages drives diffusion in its
semi-major axis of hundreds of astronomical units over 4 Gyr. The overall
symmetry of random walks in semi-major axis allows diffusion to populate 2013
SY$_{99}$'s orbital parameter space from the 1000-2000 au inner fringe of the
Oort cloud. Diffusion affects other known TNOs on orbits with perihelia of 45
to 49 au and semi-major axes beyond 250 au, providing a formation mechanism
that implies an extended population, gently cycling into and returning from the
inner fringe of the Oort cloud.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,905 | Solution of the Lindblad equation for spin helix states | Using Lindblad dynamics we study quantum spin systems with dissipative
boundary dynamics that generate a stationary nonequilibrium state with a
non-vanishing spin current that is locally conserved except at the boundaries.
We demonstrate that with suitably chosen boundary target states one can solve
the many-body Lindblad equation exactly in any dimension. As solution we obtain
pure states at any finite value of the dissipation strength and any system
size. They are characterized by a helical stationary magnetization profile and
a superdiffusive ballistic current of order one, independent of system size
even when the quantum spin system is not integrable. These results are derived
in explicit form for the one-dimensional spin-1/2 Heisenberg chain and its
higher-spin generalizations (which include for spin-1 the integrable
Zamolodchikov-Fateev model and the bi-quadratic Heisenberg chain). The
extension of the results to higher dimensions is straightforward.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,906 | Global bifurcation map of the homogeneous states in the Gray-Scott model | We study the spatially homogeneous time-dependent solutions and their
bifurcations in the Gray-Scott model. We find the global map of bifurcations by
a combination of rigorous verification of the existence of Takens-Bogdanov and
Bautin bifurcations in the space of the two parameters k and F. With the aid of
numerical continuation of local bifurcation curves, we give a global description
of all the possible bifurcations.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,907 | Stochastic Composite Least-Squares Regression with convergence rate O(1/n) | We consider the minimization of composite objective functions composed of the
expectation of quadratic functions and an arbitrary convex function. We study
the stochastic dual averaging algorithm with a constant step-size, showing that
it leads to a convergence rate of O(1/n) without strong convexity assumptions.
This thus extends earlier results on least-squares regression with the
Euclidean geometry to (a) all convex regularizers and constraints, and (b) all
geometries represented by a Bregman divergence. This is achieved by a new
proof technique that relates stochastic and deterministic recursions.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,908 | Fisher GAN | Generative Adversarial Networks (GANs) are powerful models for learning
complex distributions. Stable training of GANs has been addressed in many
recent works which explore different metrics between distributions. In this
paper we introduce Fisher GAN which fits within the Integral Probability
Metrics (IPM) framework for training GANs. Fisher GAN defines a critic with a
data-dependent constraint on its second-order moments. We show in this paper
that Fisher GAN allows for stable and time efficient training that does not
compromise the capacity of the critic, and does not need data independent
constraints such as weight clipping. We analyze our Fisher IPM theoretically
and provide an algorithm based on Augmented Lagrangian for Fisher GAN. We
validate our claims on both image sample generation and semi-supervised
classification using Fisher GAN.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,909 | Language as a matrix product state | We propose a statistical model for natural language that begins by
considering language as a monoid and then representing it in complex matrices with
a compatible translation invariant probability measure. We interpret the
probability measure as arising via the Born rule from a translation invariant
matrix product state.
| 1 | 1 | 0 | 1 | 0 | 0 |
19,910 | On the application of Mattis-Bardeen theory in strongly disordered superconductors | The low energy optical conductivity of conventional superconductors is
usually well described by Mattis-Bardeen (MB) theory which predicts the onset
of absorption above an energy corresponding to twice the superconducting (SC)
gap parameter Delta. Recent experiments on strongly disordered superconductors
have challenged the application of the MB formulas due to the occurrence of
additional spectral weight at low energies below 2Delta. Here we identify three
crucial items which have to be included in the analysis of optical-conductivity
data for these systems: (a) the correct identification of the optical threshold
in the Mattis-Bardeen theory, and its relation with the gap value extracted
from the measured density of states, (b) the gauge-invariant evaluation of the
current-current response function, needed to account for the optical absorption
by SC collective modes, and (c) the inclusion into the MB formula of the energy
dependence of the density of states present already above Tc. By computing the
optical conductivity in the disordered attractive Hubbard model we analyze the
relevance of all these items, and we provide a compelling scheme for the
analysis and interpretation of the optical data in real materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,911 | Nanoscale Magnetic Imaging using Circularly Polarized High-Harmonic Radiation | This work demonstrates nanoscale magnetic imaging using bright circularly
polarized high-harmonic radiation. We utilize the magneto-optical contrast of
worm-like magnetic domains in a Co/Pd multilayer structure, obtaining
quantitative amplitude and phase maps by lensless imaging. A
diffraction-limited spatial resolution of 49 nm is achieved with iterative
phase reconstruction enhanced by a holographic mask. Harnessing the unique
coherence of high harmonics, this approach will facilitate quantitative,
element-specific and spatially-resolved studies of ultrafast magnetization
dynamics, advancing both fundamental and applied aspects of nanoscale
magnetism.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,912 | Towards integrated superconducting detectors on lithium niobate waveguides | Superconducting detectors are now well-established tools for low-light
optics, and in particular quantum optics, boasting high efficiency, fast
response and low noise. Similarly, lithium niobate is an important platform for
integrated optics given its high second-order nonlinearity, used for high-speed
electro-optic modulation and polarization conversion, as well as frequency
conversion and sources of quantum light. Combining these technologies addresses
the requirements for a single platform capable of generating, manipulating and
measuring quantum light in many degrees of freedom, in a compact and
potentially scalable manner. We will report on progress integrating tungsten
transition-edge sensors (TESs) and amorphous tungsten silicide superconducting
nanowire single-photon detectors (SNSPDs) on titanium in-diffused lithium
niobate waveguides. The travelling-wave design couples the evanescent field
from the waveguides into the superconducting absorber. We will report on
simulations and measurements of the absorption, which we can characterize at
room temperature prior to cooling down the devices. Independently, we show how
the detectors respond to flood illumination, normally incident on the devices,
demonstrating their functionality.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,913 | The Hilbert scheme of 11 points in A^3 is irreducible | We prove that the Hilbert scheme of 11 points on a smooth threefold is
irreducible. In the course of the proof, we present several known and new
techniques for producing curves on the Hilbert scheme.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,914 | Dynamically reconfigurable metal-semiconductor Yagi-Uda nanoantenna | We propose a novel type of tunable Yagi-Uda nanoantenna composed of
metal-dielectric (Ag-Ge) core-shell nanoparticles. We show that, due to the
combination of two types of resonances in each nanoparticle, such hybrid
Yagi-Uda nanoantenna can operate in two different regimes. Besides the
conventional nonresonant operation regime at low frequencies, characterized by
highly directive emission in the forward direction, there is another one at
higher frequencies caused by hybrid magneto-electric response of the core-shell
nanoparticles. This regime is based on the excitation of the van Hove
singularity, and emission in this regime is accompanied by high values of
directivity and Purcell factor within the same narrow frequency range. Our
analysis reveals the possibility of flexible dynamical tuning of the hybrid
nanoantenna emission pattern via electron-hole plasma excitation by a
100-femtosecond pump pulse with relatively low peak intensities of $\sim$200
MW/cm$^2$.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,915 | Synthesis and In Situ Modification of Hierarchical SAPO-34 by PEG with Different Molecular Weights; Application in MTO Process | Modified structures of SAPO-34 were prepared using polyethylene glycol as the
mesopore-generating agent. The synthesized catalysts were applied in the
methanol-to-olefins (MTO) process. All modified synthesized catalysts were
characterized via XRD, XRF, FESEM, FTIR, N2 adsorption-desorption techniques,
and temperature-programmed NH3 desorption and they were compared with
conventional microporous SAPO-34. Introduction of non-ionic PEG capping agent
affected the degree of homogeneity and integrity of the synthesis media and
thus reduced the number of nuclei and order of coordination structures
resulting in larger and less crystalline particles compared with the
conventional sample. During the calcination process, decomposition of absorbed
PEG moieties among the piled up SAPO patches formed a great portion of tuned
mesopores into the microporous matrix. These tailored mesopores served as
auxiliary diffusion pathways in the MTO reaction. The effects of the molecular weight
of PEG and PEG/Al molar ratio on the properties of the synthesized materials
were investigated in order to optimize their MTO reaction performance. It was
revealed that both of these two parameters can significantly change the
structural composition and physicochemical properties of resultant products.
Using PEG with MW of 6000 has led to the formation of RHO and CHA structural
frameworks, i.e. DNL-6 and SAPO-34, simultaneously, while addition of PEG with an
MW of 4000 resulted in the formation of a pure SAPO-34 phase. Altering the PEG/Al
molar ratio in the precursor significantly influenced the porosity and acidity
of the synthesized silicoaluminophosphate products. SAPO-34 impregnated with
PEG molecular weight of 4000 and PEG/Al molar ratio of 0.0125 showed superior
catalytic stability in MTO reaction because of the tuned bi-modal porosity and
tailored acidity pattern.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,916 | Support Feature Machines | Support Vector Machines (SVMs) with various kernels have played dominant role
in machine learning for many years, finding numerous applications. Although
they have many attractive features, interpretation of their solutions is quite
difficult: the use of a single kernel type may not be appropriate in all areas
of the input space, convergence problems for some kernels are not uncommon, and the
standard quadratic programming solution has $O(m^3)$ time and $O(m^2)$ space
complexity for $m$ training patterns. Kernel methods work because they
implicitly provide new, useful features. Such features, derived from various
kernels and other vector transformations, may be used directly in any machine
learning algorithm, facilitating multiresolution, heterogeneous models of data.
Therefore, Support Feature Machines (SFMs), based on linear models in the extended
feature spaces and enabling control over the selection of support features, give at
least as good results as any kernel-based SVM, removing all problems related
to interpretation, scaling and convergence. This is demonstrated for a number
of benchmark datasets analyzed with linear discrimination, SVM, decision trees
and nearest neighbor methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,917 | Voltage Control Using Eigen Value Decomposition of Fast Decoupled Load Flow Jacobian | Voltage deviations occur frequently in power systems. If the voltage at
some buses falls outside the prescribed range, it is necessary to correct
the problem by controlling reactive power resources. In this paper, an optimal
algorithm is proposed to solve this problem by identifying the voltage buses
that have the maximum effect on the affected buses and setting their new
set-points. This algorithm is based on the eigenvalue decomposition of the
fast decoupled load flow Jacobian matrix. Case studies including the IEEE
9, 14, 30 and 57 bus systems have been used to verify the method.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,918 | Convolutional neural networks for structured omics: OmicsCNN and the OmicsConv layer | Convolutional Neural Networks (CNNs) are a popular deep learning architecture
widely applied in different domains, in particular in image classification,
for which the concept of convolution with a filter comes naturally.
Unfortunately, the requirement of a distance (or, at least, of a neighbourhood
function) in the input feature space has so far prevented its direct use on
data types such as omics data. However, a number of omics data are metrizable,
i.e., they can be endowed with a metric structure, enabling the adoption of a
convolution-based deep learning framework, e.g., for prediction. We propose a
generalized solution for CNNs on omics data, implemented through a dedicated
Keras layer. In particular, for metagenomics data, a metric can be derived from
the patristic distance on the phylogenetic tree. For transcriptomics data, we
combine Gene Ontology semantic similarity and gene co-expression to define a
distance; the function is defined through a multilayer network where 3 layers
are defined by the GO mutual semantic similarity while the fourth one by gene
co-expression. As a general tool, feature distance on omics data is enabled by
OmicsConv, a novel Keras layer, obtaining OmicsCNN, a dedicated deep learning
framework. Here we demonstrate OmicsCNN on gut microbiota sequencing data, for
Inflammatory Bowel Disease (IBD) 16S data, first on synthetic data and then on a
metagenomics collection of gut microbiota of 222 IBD patients.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,919 | Neural Network Multitask Learning for Traffic Flow Forecasting | Traditional neural network approaches for traffic flow forecasting are
usually single task learning (STL) models, which do not take advantage of the
information provided by related tasks. In contrast to STL, multitask learning
(MTL) has the potential to improve generalization by transferring information
in training signals of extra tasks. In this paper, MTL based neural networks
are used for traffic flow forecasting. For neural network MTL, a
backpropagation (BP) network is constructed by incorporating traffic flows at
several contiguous time instants into an output layer. Nodes in the output
layer can be seen as outputs of different but closely related STL tasks.
Comprehensive experiments on urban vehicular traffic flow data and comparisons
with STL show that MTL in BP neural networks is a promising and effective
approach for traffic flow forecasting.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,920 | A note on a new paradox in superluminal signalling | The Tolman paradox is well known as a base for demonstrating the causality
violation by faster-than-light signals within special relativity. It is
constructed using a two-way exchange of faster-than-light signals between two
inertial observers who are in relative motion, receding from one another.
Recently a one-way superluminal signalling arrangement was suggested as a
possible construction of a causal paradox. In this note we show that this
suggestion is not correct, and no causality principle violation can occur in
any one-way signalling by the use of faster-than-light particles and signals.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,921 | The Compressed Overlap Index | For analysing text algorithms, for computing superstrings, or for testing
random number generators, one needs to compute all overlaps between any pairs
of words in a given set. The positions of overlaps of a word onto itself, or of
two words, are needed to compute the absence probability of a word in a random
text, or the numbers of common words shared by two random texts. In all these
contexts, one needs to compute or to query overlaps between pairs of words in a
given set. To this end, we designed COvI, a compressed overlap index that
supports multiple queries on overlaps, such as computing the correlation of two
words or listing pairs of words whose longest overlap is maximal among all
possible pairs. COvI stores overlaps in a hierarchical and non-redundant
manner. We propose an implementation that can handle datasets of millions of
words and still answer queries efficiently. Comparison with a baseline solution
- called FullAC - relying on the Aho-Corasick automaton shows that COvI
provides significant advantages. For similar construction times, COvI requires
half the memory of FullAC and still solves complex queries much faster.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,922 | An intuitive approach to the unified theory of spin-relaxation | Spin-relaxation is conventionally discussed using two different approaches
for materials with and without inversion symmetry. The former is known as the
Elliott-Yafet (EY) theory, while for the latter the D'yakonov-Perel' (DP) theory
applies. We discuss herein a simple and intuitive approach to
demonstrate that the two seemingly disparate mechanisms are closely related. A
compelling analogy between the respective Hamiltonians is presented, showing that the
usual derivation of spin-relaxation times can be performed in the respective
frameworks of the two theories. The result also allows one to obtain the less
canonical spin-relaxation regimes: the generalization of the EY theory when the
material has a large quasiparticle broadening, and the DP mechanism in ultrapure
semiconductors. The method also allows a practical and intuitive numerical
implementation of the spin-relaxation calculation, which is demonstrated for
MgB$_2$ that has anomalous spin-relaxation properties.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,923 | Weighted Random Walk Sampling for Multi-Relational Recommendation | In the information overloaded web, personalized recommender systems are
essential tools to help users find the most relevant information. The most
heavily-used recommendation frameworks assume user interactions that are
characterized by a single relation. However, for many tasks, such as
recommendation in social networks, user-item interactions must be modeled as a
complex network of multiple relations, not only a single relation. Recently,
research on multi-relational factorization and hybrid recommender models has
shown that using extended meta-paths to capture additional information about
both users and items in the network can enhance the accuracy of recommendations
in such networks. Most of this work is focused on unweighted heterogeneous
networks, and to apply these techniques, weighted relations must be simplified
into binary ones. However, information associated with weighted edges, such as
user ratings, which may be crucial for recommendation, are lost in such
binarization. In this paper, we explore a random walk sampling method in which
the frequency of edge sampling is a function of edge weight, and apply this to
generate extended meta-paths in weighted heterogeneous networks. With this
sampling technique, we demonstrate improved performance on multiple data sets
both in terms of recommendation accuracy and model generation efficiency.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,924 | Understanding the Twitter Usage of Humanities and Social Sciences Academic Journals | Scholarly communication has the scope to transcend the limitations of the
physical world through social media extended coverage and shortened information
paths. Accordingly, publishers have created profiles for their journals in
Twitter to promote their publications and to initiate discussions with the public.
This paper investigates the Twitter presence of humanities and social sciences
(HSS) journal titles obtained from mainstream citation indices, by analysing
the interaction and communication patterns. This study utilizes webometric data
collection, descriptive analysis, and social network analysis. Findings
indicate that the presence of HSS journals in Twitter across disciplines is not
yet substantial. Sharing of general websites appears to be the key activity
performed by HSS journals in Twitter. Among them, web content from news portals
and magazines is highly disseminated. Sharing of research articles and
retweeting were rarely observed. Inter-journal communication is apparent
within the same citation index, but it is very minimal with journals from the
other index. However, there seems to be an effort to broaden communication
beyond the research community, reaching out to connect with the public.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,925 | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | Machine learning algorithms, when applied to sensitive data, pose a distinct
threat to privacy. A growing body of prior work demonstrates that models
produced by these algorithms may leak specific private information in the
training data to an attacker, either through the models' structure or their
observable behavior. However, the underlying cause of this privacy risk is not
well understood beyond a handful of anecdotal accounts that suggest overfitting
and influence might play a role.
This paper examines the effect that overfitting and influence have on the
ability of an attacker to learn information about the training data from
machine learning models, either through training set membership inference or
attribute inference attacks. Using both formal and empirical analyses, we
illustrate a clear relationship between these factors and the privacy risk that
arises in several popular machine learning algorithms. We find that overfitting
is sufficient to allow an attacker to perform membership inference and, when
the target attribute meets certain conditions about its influence, attribute
inference attacks. Interestingly, our formal analysis also shows that
overfitting is not necessary for these attacks and begins to shed light on what
other factors may be in play. Finally, we explore the connection between
membership inference and attribute inference, showing that there are deep
connections between the two that lead to effective new attacks.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,926 | Nanoplatelets as material system between strong confinement and weak confinement | Recently, the fabrication of CdSe nanoplatelets became an important research
topic. Nanoplatelets are often described as having a similar electronic
structure to two-dimensional quantum wells and are promoted as colloidal quantum
wells with monolayer-precision width. In this paper, we show that
nanoplatelets are not ideal quantum wells but, depending on their size, cover the
strong confinement regime, an intermediate regime and a Coulomb-dominated
regime. Thus, nanoplatelets are an ideal platform to study the physics of these
regimes. To this end, the exciton states of the nanoplatelets are numerically
calculated by solving the full four dimensional Schrödinger equation. We
compare the results with approximate solutions from semiconductor quantum well
and quantum dot theory. The paper can also serve as a review of these concepts for
the colloidal nanoparticle community.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,927 | Combined Thermal Control and GNC: An Enabling Technology for CubeSat Surface Probes and Small Robots | Advances in GNC, particularly from miniaturized control electronics,
reaction-wheels and attitude determination sensors make it possible to design
surface probes and small robots to perform surface exploration and science on
low-gravity environments. These robots would use their reaction wheels to roll,
hop and tumble over rugged surfaces. These robots could provide 'Google
Streetview' quality images of off-world surfaces and perform some unique
science using penetrometers. These systems can be powered by high-efficiency
fuel cells that operate at 60-65 % efficiency and utilize hydrogen and oxygen electrolyzed
from water. However, one of the major challenges that prevent these probes and
robots from performing long duration surface exploration and science is thermal
design and control. In the inner solar system, during the day time, there is
often enough solar insolation to keep these robots warm and power these
devices, but during eclipse the temperature falls well below the storage
temperature. We have developed a thermal control system that utilizes chemicals
to store and dispense heat when needed. The system takes waste products, such
as water from these robots and transfers them to a thermochemical storage
system. These thermochemical storage systems release heat when mixed with
water (a waste product from a PEM fuel cell). Under eclipse, the heat from the
thermochemical storage system is released to keep the probe warm enough to
survive. In sunlight, solar photovoltaics are used to electrolyze the water and
reheat the thermochemical storage system to release the water. Our research has
shown that thermochemical storage systems are a feasible solution for use on
surface probes and robots for applications on the Moon, Mars and asteroids.
| 1 | 1 | 0 | 0 | 0 | 0 |
19,928 | Denoising Adversarial Autoencoders | Unsupervised learning is of growing interest because it unlocks the potential
held in vast amounts of unlabelled data to learn useful representations for
inference. Autoencoders, a form of generative model, may be trained by learning
to reconstruct unlabelled input data from a latent representation space. More
robust representations may be produced by an autoencoder if it learns to
recover clean input samples from corrupted ones. Representations may be further
improved by introducing regularisation during training to shape the
distribution of the encoded data in latent space. We suggest denoising
adversarial autoencoders, which combine denoising and regularisation, shaping
the distribution of latent space using adversarial training. We introduce a
novel analysis that shows how denoising may be incorporated into the training
and sampling of adversarial autoencoders. Experiments are performed to assess
the contributions that denoising makes to the learning of representations for
classification and sample synthesis. Our results suggest that autoencoders
trained using a denoising criterion achieve higher classification performance,
and can synthesise samples that are more consistent with the input data than
those trained without a corruption process.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,929 | Monodromy map for tropical Dolbeault cohomology | We define monodromy maps for tropical Dolbeault cohomology of algebraic
varieties over non-Archimedean fields. We propose a conjecture of Hodge
isomorphisms via monodromy maps, and provide some evidence.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,930 | Randomness Evaluation with the Discrete Fourier Transform Test Based on Exact Analysis of the Reference Distribution | In this paper, we study the problems in the discrete Fourier transform (DFT)
test included in NIST SP 800-22 released by the National Institute of Standards
and Technology (NIST), which is a collection of tests for evaluating both
physical and pseudo-random number generators for cryptographic applications.
The most crucial problem in the DFT test is that the reference distribution of
its test statistic is not derived mathematically but rather numerically
estimated, so the DFT test for randomness is itself based on a pseudo-random
number generator (PRNG). Therefore, the present DFT test should not be used unless the
reference distribution is mathematically derived. Here, we prove that a power
spectrum, which is a component of the test statistic, follows a chi-squared
distribution with 2 degrees of freedom. Based on this fact, we propose a test
whose reference distribution of the test statistic is mathematically derived.
Furthermore, the results of testing non-random sequences and several PRNGs
showed that the proposed test is more reliable and definitely more sensitive
than the present DFT test.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,931 | Analogies Explained: Towards Understanding Word Embeddings | Word embeddings generated by neural network methods such as word2vec (W2V)
are well known to exhibit seemingly linear behaviour, e.g. the embeddings of
analogy "woman is to queen as man is to king" approximately describe a
parallelogram. This property is particularly intriguing since the embeddings
are not trained to achieve it. Several explanations have been proposed, but
each introduces assumptions that do not hold in practice. We derive a
probabilistically grounded definition of paraphrasing and show it can be
re-interpreted as word transformation, a mathematical description of "$w_x$ is
to $w_y$". From these concepts we prove existence of the linear relationship
between W2V-type embeddings that underlies the analogical phenomenon, and
identify explicit error terms in the relationship.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,932 | Zero sum partition into sets of the same order and its applications | We will say that an Abelian group $\Gamma$ of order $n$ has the
$m$-\emph{zero-sum-partition property} ($m$-\textit{ZSP-property}) if $m$
divides $n$, $m\geq 2$ and there is a partition of $\Gamma$ into pairwise
disjoint subsets $A_1, A_2,\ldots , A_t$, such that $|A_i| = m$ and $\sum_{a\in
A_i}a = g_0$ for $1 \leq i \leq t$, where $g_0$ is the identity element of
$\Gamma$.
In this paper we study the $m$-ZSP property of $\Gamma$. We show that
$\Gamma$ has $m$-ZSP if and only if $|\Gamma|$ is odd or $m\geq 3$ and $\Gamma$
has more than one involution. We will apply the results to the study of group
distance magic graphs as well as to generalized Kotzig arrays.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,933 | Strongly Coupled Dark Energy with Warm dark matter vs. LCDM | Cosmologies including strongly Coupled (SC) Dark Energy (DE) and Warm dark
matter (SCDEW) are based on a conformally invariant (CI) attractor solution
modifying the early radiative expansion. Then, aside from radiation, a kinetic
field $\Phi$ and a DM component account for a stationary fraction, $\sim 1\,
\%$, of the total energy. Most SCDEW predictions are hardly distinguishable
from LCDM, while SCDEW alleviates quite a few LCDM conceptual problems, as well
as its difficulties in fitting data below the average galaxy scale. The CI
expansion begins at the inflation end, when $\Phi$ (future DE) possibly plays a
role in reheating, and ends at the Higgs scale. Afterwards, a number of viable
options are open, allowing for the transition from the CI expansion to the
present Universe. In this paper: (i) We show how the attractor is recovered
when the number of spin degrees of freedom decreases. (ii) We perform a detailed
comparison of CMB anisotropy and polarization spectra for SCDEW and LCDM,
including tensor components, finding negligible discrepancies. (iii) Linear
spectra exhibit a greater parameter dependence at large $k$'s, but are still
consistent with data for suitable parameter choices. (iv) We also compare
previous simulation results with fresh data on galaxy concentration. Finally,
(v) we outline numerical difficulties at high $k$. This motivates a second
related paper, where such problems are treated in a quantitative way.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,934 | Clausal Analysis of First-order Proof Schemata | Proof schemata are a variant of LK-proofs able to simulate various induction
schemes in first-order logic by adding so called proof links to the standard
first-order LK-calculus. Proof links allow proofs to reference other proofs,
thus giving proof schemata a recursive structure. Unfortunately, applying
reductive cut-elimination is non-trivial in the presence of proof links.
Borrowing the concept of lazy instantiation from functional programming, we
evaluate proof links locally, allowing reductive cut-elimination to proceed
past them. Though this method cannot be used to obtain cut-free proof schemata,
we nonetheless obtain important results concerning the schematic CERES method,
that is, a method of cut-elimination for proof schemata based on resolution. In "Towards a
clausal analysis of cut-elimination", it was shown that reductive
cut-elimination transforms a given LK-proof in such a way that a subsumption
relation holds between the pre- and post-transformation characteristic clause
sets, i.e. the clause set representing the cut-structure of an LK-proof. Let
CL(A') be the characteristic clause set of a normal form A' of an LK-proof A
that is reached by performing reductive cut-elimination on A without atomic cut
elimination. Then CL(A') is subsumed by all characteristic clause sets
extractable from any application of reductive cut-elimination to A. Such a
normal form is referred to as an ACNF top and plays an essential role in
methods of cut-elimination by resolution. These results can be extended to
proof schemata through our "lazy instantiation" of proof links, and provide an
essential step toward a complete cut-elimination method for proof schemata.
| 1 | 0 | 1 | 0 | 0 | 0 |
19,935 | Emotion Intensities in Tweets | This paper examines the task of detecting intensity of emotion from text. We
create the first datasets of tweets annotated for anger, fear, joy, and sadness
intensities. We use a technique called best-worst scaling (BWS) that improves
annotation consistency and obtains reliable fine-grained scores. We show that
emotion-word hashtags often impact emotion intensity, usually conveying a more
intense emotion. Finally, we create a benchmark regression system and conduct
experiments to determine: which features are useful for detecting emotion
intensity, and, the extent to which two emotions are similar in terms of how
they manifest in language.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,936 | Modelling the descent of nitric oxide during the elevated stratopause event of January 2013 | Using simulations with a whole-atmosphere chemistry-climate model nudged by
meteorological analyses, global satellite observations of nitrogen oxide (NO)
and water vapour by the Sub-Millimetre Radiometer instrument (SMR), of
temperature by the Microwave Limb Sounder (MLS), as well as local radar
observations, this study examines the recent major stratospheric sudden warming
accompanied by an elevated stratopause event (ESE) that occurred in January
2013. We examine dynamical processes during the ESE, including the role of
planetary wave, gravity wave and tidal forcing on the initiation of the descent
in the mesosphere-lower thermosphere (MLT) and its continuation throughout the
mesosphere and stratosphere, as well as the impact of model eddy diffusion. We
analyse the transport of NO and find the model underestimates the large descent
of NO compared to SMR observations. We demonstrate that the discrepancy arises
abruptly in the MLT region at a time when the resolved wave forcing and the
planetary wave activity increase, just before the elevated stratopause reforms.
The discrepancy persists despite doubling the model eddy diffusion. While the
simulations reproduce an enhancement of the semi-diurnal tide following the
onset of the 2013 SSW, corroborating new meteor radar observations at high
northern latitudes over Trondheim (63.4$^{\circ}$N), the modelled tidal
contribution to the forcing of the mean meridional circulation and to the
descent is a small portion of the resolved wave forcing, and lags it by about
ten days.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,937 | Optimal Evidence Accumulation on Social Networks | A fundamental question in biology is how organisms integrate sensory and
social evidence to make decisions. However, few models describe how both these
streams of information can be combined to optimize choices. Here we develop a
normative model for collective decision making in a network of agents
performing a two-alternative forced choice task. We assume that rational
(Bayesian) agents in this network make private measurements, and observe the
decisions of their neighbors until they accumulate sufficient evidence to make
an irreversible choice. As each agent communicates its decision to those
observing it, the flow of social information is described by a directed graph.
The decision-making process in this setting is intuitive, but can be complex.
We describe when and how the absence of a decision of a neighboring agent
communicates social information, and how an agent must marginalize over all
unobserved decisions. We also show how decision thresholds and network
connectivity affect group evidence accumulation, and describe the dynamics of
decision making in social cliques. Our model provides a bridge between the
abstractions used in the economics literature and the evidence accumulator
models used widely in neuroscience and psychology.
| 1 | 0 | 0 | 0 | 1 | 0 |
19,938 | Efficient Data-Driven Geologic Feature Detection from Pre-stack Seismic Measurements using Randomized Machine-Learning Algorithm | Conventional seismic techniques for detecting the subsurface geologic
features are challenged by limited data coverage, computational inefficiency,
and subjective human factors. We developed a novel data-driven geological
feature detection approach based on pre-stack seismic measurements. Our method
employs an efficient and accurate machine-learning approach to extract useful
subsurface geologic features automatically.
Specifically, our method is based on kernel ridge regression model. The
conventional kernel ridge regression can be computationally prohibitive because
of the large volume of seismic measurements. We employ a data reduction
technique in combination with the conventional kernel ridge regression method
to improve the computational efficiency and reduce memory usage. In particular,
we utilize a randomized numerical linear algebra technique, the Nyström
method, to effectively reduce the dimensionality of the feature space without
compromising the information content required for accurate detection. We
provide a thorough computational cost analysis to show the efficiency of our new
geological feature detection method. We further validate the performance of
our new subsurface geologic feature detection method using synthetic surface
seismic data for 2D acoustic and elastic velocity models. Our numerical
examples demonstrate that our new detection method significantly improves the
computational efficiency while maintaining comparable accuracy. Interestingly,
we show that our method yields a speed-up ratio on the order of $\sim10^2$ to
$\sim 10^3$ in a multi-core computational environment.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,939 | Metastable Modular Metastructures for On-Demand Reconfiguration of Band Structures and Non-Reciprocal Wave Propagation | We present a novel approach to achieve adaptable band structures and
non-reciprocal wave propagation by exploring and exploiting the concept of
metastable modular metastructures. Through studying the dynamics of wave
propagation in a chain composed of finite metastable modules, we provide
experimental and analysis results on non-reciprocal wave propagation and unveil
the underlying mechanisms in accomplishing such unidirectional energy
transmission. Utilizing the property adaptation feature afforded via
transitioning amongst metastable states, we uncovered an unprecedented bandgap
reconfiguration characteristic, which enables the adaptivity of wave
propagation within the metastructure. Overall, this investigation elucidates
the rich dynamics attainable by periodicity, nonlinearity, asymmetry, and
metastability, and creates a new class of adaptive structural and material
systems capable of realizing tunable bandgaps and non-reciprocal wave
transmissions.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,940 | Learning to Plan Chemical Syntheses | From medicines to materials, small organic molecules are indispensable for
human well-being. To plan their syntheses, chemists employ a problem solving
technique called retrosynthesis. In retrosynthesis, target molecules are
recursively transformed into increasingly simpler precursor compounds until a
set of readily available starting materials is obtained. Computer-aided
retrosynthesis would be a highly valuable tool; however, past approaches were
slow and provided results of unsatisfactory quality. Here, we employ Monte
Carlo Tree Search (MCTS) to efficiently discover retrosynthetic routes. MCTS
was combined with an expansion policy network that guides the search, and an
"in-scope" filter network to pre-select the most promising retrosynthetic
steps. These deep neural networks were trained on 12 million reactions, which
represents essentially all reactions ever published in organic chemistry. Our
system solves almost twice as many molecules and is 30 times faster in
comparison to the traditional search method based on extracted rules and
hand-coded heuristics. Finally, after a 60-year history of computer-aided
synthesis planning, chemists can no longer distinguish between routes generated
by a computer system and real routes taken from the scientific literature. We
anticipate that our method will accelerate drug and materials discovery by
assisting chemists to plan better syntheses faster, and by enabling fully
automated robot synthesis.
| 1 | 1 | 0 | 0 | 0 | 0 |
19,941 | Projective embedding of pairs and logarithmic K-stability | Let $\hat{L}$ be the projective completion of an ample line bundle $L$ over
$D$, a smooth projective manifold. Hwang-Singer \cite{HwangS} have constructed
a complete CSCK metric on $\hat{L}\backslash D$. When the corresponding Kähler
form is in the cohomology class of a rational divisor $A$ and when $L$ has
a negative CSCK metric on $D$, we show that the Kodaira embedding induced by
an orthonormal basis of the Bergman space of $kA$ is almost balanced. As a
corollary, $(\hat{L},D,cA,0)$ is K-semistable.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,942 | Motion Switching with Sensory and Instruction Signals by designing Dynamical Systems using Deep Neural Network | To ensure that a robot is able to accomplish an extensive range of tasks, it
is necessary to achieve a flexible combination of multiple behaviors. This is
because the design of task motions suited to each situation would become
increasingly difficult as the number of situations and the types of tasks
performed by them increase. To handle the switching and combination of multiple
behaviors, we propose a method to design dynamical systems based on point
attractors that accept (i) "instruction signals" for instruction-driven
switching. We incorporate the (ii) "instruction phase" to form a point
attractor and divide the target task into multiple subtasks. By forming an
instruction phase that consists of point attractors, the model embeds a subtask
in the form of trajectory dynamics that can be manipulated using sensory and
instruction signals. Our model comprises two deep neural networks: a
convolutional autoencoder and a multiple time-scale recurrent neural network.
In this study, we apply the proposed method to manipulate soft materials. To
evaluate our model, we design a cloth-folding task that consists of four
subtasks and three patterns of instruction signals, which indicate the
direction of motion. The results show that the robot can perform the required
task by combining subtasks based on sensory and instruction signals. Moreover,
our model determined the relations among these signals using its internal dynamics.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,943 | The evolution of magnetic fields in hot stars | Over the last decade, tremendous strides have been achieved in our
understanding of magnetism in main sequence hot stars. In particular, the
statistical occurrence of their surface magnetism has been established (~10%)
and the field origin is now understood to be fossil. However, fundamental
questions remain: how do these fossil fields evolve during the post-main
sequence phases, and how do they influence the evolution of hot stars from the
main sequence to their ultimate demise? Filling the void of known magnetic
evolved hot (OBA) stars, studying the evolution of their fossil magnetic fields
along stellar evolution, and understanding the impact of these fields on the
angular momentum, rotation, mass loss, and evolution of the star itself, is
crucial to answering these questions, with far reaching consequences, in
particular for the properties of the precursors of supernovae explosions and
stellar remnants. In the framework of the BRITE spectropolarimetric survey and
LIFE project, we have discovered the first few magnetic hot supergiants. Their
longitudinal surface magnetic field is very weak but their configuration
resembles those of main sequence hot stars. We present these first
observational results and propose to interpret them at first order in the
context of magnetic flux conservation as the radius of the star expands with
evolution. We then also consider the possible impact of stellar structure
changes along evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,944 | Polynomial-Time Methods to Solve Unimodular Quadratic Programs With Performance Guarantees | We develop polynomial-time heuristic methods to approximately solve unimodular
quadratic programs (UQPs), which are known to be NP-hard. In the UQP
framework, we maximize a quadratic function of a vector of complex variables
with unit modulus. Several problems in active sensing and wireless
communication applications boil down to UQP. With this motivation, we present
three new heuristic methods with polynomial-time complexity to solve the UQP
approximately. The first method is called dominant-eigenvector-matching; here
the solution is picked that matches the complex arguments of the dominant
eigenvector of the Hermitian matrix in the UQP formulation. We also provide a
performance guarantee for this method. The second method, a greedy strategy, is
shown to provide a performance guarantee of (1-1/e) with respect to the optimal
objective value given that the objective function possesses a property called
string submodularity. The third heuristic method is called row-swap greedy
strategy, which is an extension to the greedy strategy and utilizes certain
properties of the UQP to provide a better performance than the greedy strategy
at the expense of an increase in computational complexity. We present numerical
results to demonstrate the performance of these heuristic methods, and also
compare the performance of these methods against a standard heuristic method
called semidefinite relaxation.
| 1 | 0 | 1 | 0 | 0 | 0 |
19,945 | Can the Wild Bootstrap be Tamed into a General Analysis of Covariance Model? | It is well known that the F test is severely affected by heteroskedasticity in
unbalanced analysis of covariance (ANCOVA) models. Currently available remedies
for such a scenario are either based on heteroskedasticity-consistent
covariance matrix estimation (HCCME) or bootstrap techniques. However, the
HCCME approach tends to be liberal in small samples. Therefore, we propose a
combination of HCCME and a wild bootstrap technique. We prove the theoretical
validity of our approach and investigate its performance in an extensive
simulation study in comparison to existing procedures. The results indicate
that our proposed test remedies all problems of the ANCOVA F test and its
heteroskedasticity-consistent alternatives. Our test only requires very general
conditions, thus being applicable in a broad range of real-life settings.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,946 | Efficient K-Shot Learning with Regularized Deep Networks | Feature representations from pre-trained deep neural networks have been known
to exhibit excellent generalization and utility across a variety of related
tasks. Fine-tuning is by far the simplest and most widely used approach that
seeks to exploit and adapt these feature representations to novel tasks with
limited data. Despite the effectiveness of fine-tuning, it is often sub-optimal
and requires very careful optimization to prevent severe over-fitting to small
datasets. The problem of sub-optimality and over-fitting is due in part to the
large number of parameters used in a typical deep convolutional neural network.
To address these problems, we propose a simple yet effective regularization
method for fine-tuning pre-trained deep networks for the task of k-shot
learning. To prevent overfitting, our key strategy is to cluster the model
parameters while ensuring intra-cluster similarity and inter-cluster diversity
of the parameters, effectively regularizing the dimensionality of the parameter
search space. In particular, we identify groups of neurons within each layer of
a deep network that share similar activation patterns. When the network is to
be fine-tuned for a classification task using only k examples, we propagate a
single gradient to all of the neuron parameters that belong to the same group.
The grouping of neurons is non-trivial as neuron activations depend on the
distribution of the input data. To efficiently search for optimal groupings
conditioned on the input data, we propose a reinforcement learning search
strategy using recurrent networks to learn the optimal group assignments for
each network layer. Experimental results show that our method can be easily
applied to several popular convolutional neural networks and improve upon other
state-of-the-art fine-tuning-based k-shot learning strategies by more than 10%.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,947 | The cohomology ring of some Hopf algebras | Let p be a prime, and k be a field of characteristic p. We investigate the
algebra structure and the structure of the cohomology ring for the connected
Hopf algebras of dimension p^3, which appear in the classification obtained in
[V.C. Nguyen, L.-H. Wang and X.-T. Wang, Classification of connected Hopf
algebras of dimension p^3, J. Algebra 424 (2015), 473-505]. The list consists
of 23 algebras together with two infinite families. We identify the Morita type
of the algebra, and in almost all cases this is sufficient to clarify the
structure of the cohomology ring.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,948 | Eckart ro-vibrational Hamiltonians via the gateway Hamilton operator: theory and practice | Recently, a general expression for Eckart-frame Hamilton operators has been
obtained by the gateway Hamiltonian method ({\it J. Chem. Phys.} {\bf 142},
174107 (2015); {\it ibid.} {\bf 143}, 064104 (2015)). The kinetic energy
operator in this general Hamiltonian is nearly identical with that of the
Eckart-Watson operator even when curvilinear vibrational coordinates are
employed. Its different realizations correspond to different methods of
calculating Eckart displacements. There are at least two different methods for
calculating such displacements: rotation and projection. In this communication
the application of Eckart Hamiltonian operators constructed by rotation and
projection, respectively, is numerically demonstrated in calculating
vibrational energy levels. The numerical examples confirm that there is no need
for rotation to construct an Eckart ro-vibrational Hamiltonian. The application
of the gateway method is advantageous even when rotation is used, since it
obviates the need for differentiation of the matrix rotating into the Eckart
frame. Simple geometrical arguments explain that there are infinitely many
different methods for calculating Eckart displacements. The geometrical picture
also suggests that a unique Eckart displacement vector may be defined as the
shortest (mass-weighted) Eckart displacement vector among Eckart displacement
vectors corresponding to configurations related by rotation. Its length, as
shown analytically and demonstrated by way of numerical examples, is equal to
or less than that of the Eckart displacement vector one can obtain by rotation
to the Eckart frame.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,949 | Optimal Timing to Trade Along a Randomized Brownian Bridge | This paper studies an optimal trading problem that incorporates the trader's
market view on the terminal asset price distribution and uninformative noise
embedded in the asset price dynamics. We model the underlying asset price
evolution by an exponential randomized Brownian bridge (rBb) and consider
various prior distributions for the random endpoint. We solve for the optimal
strategies to sell a stock, call, or put, and analyze the associated delayed
liquidation premia. We solve for the optimal trading strategies numerically and
compare them across different prior beliefs. Among our results, we find that
disconnected continuation/exercise regions arise when the trader prescribes
a two-point discrete distribution or a double exponential distribution.
| 0 | 0 | 0 | 0 | 0 | 1 |
19,950 | Inflationary Features and Shifts in Cosmological Parameters from Planck 2015 Data | We explore the relationship between features in the Planck 2015 temperature
and polarization data, shifts in the cosmological parameters, and features from
inflation. Residuals in the temperature data at low multipole $\ell$, which are
responsible for the high $H_0\approx 70$ km s$^{-1}$Mpc$^{-1}$ and low
$\sigma_8\Omega_m^{1/2}$ values from $\ell<1000$ in power-law $\Lambda$CDM
models, are better fit to inflationary features with a $1.9\sigma$ preference
for running of the running of the tilt or a stronger $99\%$ CL local
significance preference for a sharp drop in power around $k=0.004$ Mpc$^{-1}$
in generalized slow roll and a lower $H_0\approx 67$ km s$^{-1}$Mpc$^{-1}$. The
same in-phase acoustic residuals at $\ell>1000$ that drive the global $H_0$
constraints and appear as a lensing anomaly also favor running parameters which
allow even lower $H_0$, but not once lensing reconstruction is considered.
Polarization spectra are intrinsically highly sensitive to these parameter
shifts, and even more so in the Planck 2015 TE data due to an outlier at $\ell
\approx 165$, which disfavors the best-fit $H_0$ $\Lambda$CDM solution by more
than $2\sigma$, and a high $H_0$ value at almost $3\sigma$. Current polarization
data also slightly enhance the significance of a sharp suppression of
large-scale power but leave room for large improvements in the future with
cosmic variance limited $E$-mode measurements.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,951 | Tree based weighted learning for estimating individualized treatment rules with censored data | Estimating individualized treatment rules is a central task for personalized
medicine. [zhao2012estimating] and [zhang2012robust] proposed outcome weighted
learning to estimate individualized treatment rules directly by maximizing
the expected outcome, without explicitly modeling the response. In this paper, we
extend the outcome weighted learning to right censored survival data without
requiring either an inverse probability of censoring weighting or a
semiparametric modeling of the censoring and failure times as done in
[zhao2015doubly]. To accomplish this, we take advantage of the tree based
approach proposed in [zhu2012recursively] to nonparametrically impute the
survival time in two different ways. The first approach replaces the reward of
each individual by the expected survival time, while in the second approach
only the censored observations are imputed by their conditional expected
failure times. We establish consistency and convergence rates for both
estimators. In simulation studies, our estimators demonstrate improved
performance compared to existing methods. We also illustrate the proposed
method on a phase III clinical trial of non-small cell lung cancer.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,952 | Fine-Tuning in the Context of Bayesian Theory Testing | Fine-tuning in physics and cosmology is often used as evidence that a theory
is incomplete. For example, the parameters of the standard model of particle
physics are "unnaturally" small (in various technical senses), which has driven
much of the search for physics beyond the standard model. Of particular
interest is the fine-tuning of the universe for life, which suggests that our
universe's ability to create physical life forms is improbable and in need of
explanation, perhaps by a multiverse. This claim has been challenged on the
grounds that the relevant probability measure cannot be justified because it
cannot be normalized, and so small probabilities cannot be inferred. We show
how fine-tuning can be formulated within the context of Bayesian theory testing
(or \emph{model selection}) in the physical sciences. The normalizability
problem is seen to be a general problem for testing any theory with free
parameters, and not a unique problem for fine-tuning. Physical theories in fact
avoid such problems in one of two ways. Dimensional parameters are bounded by
the Planck scale, avoiding troublesome infinities, and we are not compelled to
assume that dimensionless parameters are distributed uniformly, which avoids
non-normalizability.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,953 | Nuclear physics insights for new-physics searches using nuclei: Neutrinoless $ββ$ decay and dark matter direct detection | Experiments using nuclei to probe new physics beyond the Standard Model, such
as neutrinoless $\beta\beta$ decay searches testing whether neutrinos are their
own antiparticle, and direct detection experiments aiming to identify the
nature of dark matter, require accurate nuclear physics input for optimizing
their discovery potential and for a correct interpretation of their results.
This demands a detailed knowledge of the nuclear structure relevant for these
processes. For instance, neutrinoless $\beta\beta$ decay nuclear matrix
elements are very sensitive to the nuclear correlations in the initial and
final nuclei, and the spin-dependent nuclear structure factors of dark matter
scattering depend on the subtle distribution of the nuclear spin among all
nucleons. In addition, nucleons are composite and strongly interacting, which
implies that many-nucleon processes are necessary for a correct description of
nuclei and their interactions. It is thus crucial that theoretical studies and
experimental analyses consider $\beta$ decays and dark matter interactions with
a coupling to two nucleons, called two-nucleon currents.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,954 | On the (Statistical) Detection of Adversarial Examples | Machine Learning (ML) models are applied in a variety of tasks such as
network intrusion detection or malware classification. Yet, these models are
vulnerable to a class of malicious inputs known as adversarial examples. These
are slightly perturbed inputs that are classified incorrectly by the ML model.
The mitigation of these adversarial inputs remains an open problem. As a step
towards understanding adversarial examples, we show that they are not drawn
from the same distribution than the original data, and can thus be detected
using statistical tests. Using thus knowledge, we introduce a complimentary
approach to identify specific inputs that are adversarial. Specifically, we
augment our ML model with an additional output, in which the model is trained
to classify all adversarial inputs. We evaluate our approach on multiple
adversarial example crafting methods (including the fast gradient sign and
saliency map methods) with several datasets. The statistical test flags sample
sets containing adversarial inputs confidently at sample sizes between 10 and
100 data points. Furthermore, our augmented model either detects adversarial
examples as outliers with high accuracy (> 80%) or increases the adversary's
cost - the perturbation added - by more than 150%. In this way, we show that
statistical properties of adversarial examples are essential to their
detection.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,955 | Search Rank Fraud De-Anonymization in Online Systems | We introduce the fraud de-anonymization problem, which goes beyond fraud
detection, to unmask the human masterminds responsible for posting search rank
fraud in online systems. We collect and study search rank fraud data from
Upwork, and survey the capabilities and behaviors of 58 search rank fraudsters
recruited from 6 crowdsourcing sites. We propose Dolos, a fraud
de-anonymization system that leverages traits and behaviors extracted from
these studies, to attribute detected fraud to crowdsourcing site fraudsters,
thus to real identities and bank accounts. We introduce MCDense, a min-cut
dense component detection algorithm to uncover groups of user accounts
controlled by different fraudsters, and leverage stylometry and deep learning
to attribute them to crowdsourcing site profiles. Dolos correctly identified
the owners of 95% of fraudster-controlled communities, and uncovered fraudsters
who promoted as many as 97.5% of fraud apps we collected from Google Play. When
evaluated on 13,087 apps (820,760 reviews), which we monitored over more than 6
months, Dolos identified 1,056 apps with suspicious reviewer groups. We report
orthogonal evidence of their fraud, including fraud duplicates and fraud
re-posts.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,956 | Instabilities in Interacting Binary Stars | The types of instability in interacting binary stars are reviewed. The
project "Inter-Longitude Astronomy" is a series of smaller projects on concrete
stars or groups of stars. It has no special funds, and is supported from
resources and grants of participating organizations, when informal working
groups are created. In total, we studied 1900+ variable stars of different types.
The characteristic timescale is from seconds to decades and (extrapolating)
even more. The monitoring of the first star of our sample AM Her was initiated
by Prof. V.P. Tsesevich (1907-1983). Since then, more than 358 ADS papers have been
published. Some highlights of our photometric and photo-polarimetric monitoring
and mathematical modelling of interacting binary stars of different types are
presented: classical, asynchronous, intermediate polars and magnetic dwarf
novae (DO Dra) with 25 timescales corresponding to different physical
mechanisms and their combinations (part "Polar"); negative and positive
superhumpers in nova-like and many dwarf novae stars ("Superhumper"); eclipsing
"non-magnetic" cataclysmic variables; symbiotic systems ("Symbiosis");
super-soft sources (SSS, QR And); spotted (and not spotted) eclipsing variables
with (and without) evidence for a current mass transfer ("Eclipser") with a
special emphasis on systems with a direct impact of the stream into the gainer
star's atmosphere, or V361 Lyr-type stars. Other parts of the ILA project are
"Stellar Bell" (interesting pulsating variables of different types and periods
- M, SR, RV Tau, RR Lyr, Delta Sct) and "Novice" (= "New Variable") discoveries
and classification with a subsequent monitoring for searching and studying
possible multiple components of variability. Special mathematical methods have
been developed to create a set of complementary software for statistically
optimal modelling of variable stars of different types.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,957 | Interaction blockade for bosons in an asymmetric double well | The interaction blockade phenomenon isolates the motion of a single quantum
particle within a multi-particle system, in particular for coherent
oscillations in and out of a region affected by the blockade mechanism. For
identical quantum particles with Bose statistics, the presence of the other
particles is still felt by a bosonic stimulation factor $\sqrt{N}$ that speeds
up the coherent oscillations, where $N$ is the number of bosons. Here we
propose an experiment to observe this enhancement factor with a small number of
bosonic atoms. The proposed protocol realises an asymmetric double well
potential with multiple optical tweezer laser beams. The ability to adjust bias
independently of the coherent coupling between the wells allows the potential
to be loaded with different particle numbers while maintaining the resonance
condition needed for coherent oscillations. Numerical simulations with up to
three bosons in a realistic potential generated by three optical tweezers
predict that the relevant avoided level crossing can be probed and the expected
bosonic enhancement factor observed.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,958 | SPIDER: CMB polarimetry from the edge of space | SPIDER is a balloon-borne instrument designed to map the polarization of the
millimeter-wave sky at large angular scales. SPIDER targets the B-mode
signature of primordial gravitational waves in the cosmic microwave background
(CMB), with a focus on mapping a large sky area with high fidelity at multiple
frequencies. SPIDER's first long-duration balloon (LDB) flight in January 2015
deployed a total of 2400 antenna-coupled Transition Edge Sensors (TESs) at 90
GHz and 150 GHz. In this work we review the design and in-flight performance of
the SPIDER instrument, with a particular focus on the measured performance of
the detectors and instrument in a space-like loading and radiation environment.
SPIDER's second flight in December 2018 will incorporate payload upgrades and
new receivers to map the sky at 285 GHz, providing valuable information for
cleaning polarized dust emission from CMB maps.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,959 | Wind Shear and Turbulence on Titan : Huygens Analysis | Wind shear measured by Doppler tracking of the Huygens probe is evaluated,
and found to be within the range anticipated by pre-flight assessments (namely
less than two times the Brunt-Vaisala frequency). The strongest large-scale
shear encountered was ~5 m/s/km, a level associated with 'Light' turbulence in
terrestrial aviation. Near-surface winds (below 4km) have small-scale
fluctuations of ~0.2 m/s, indicated both by probe tilt and Doppler tracking,
and the characteristics of the fluctuation, of interest for future missions to
Titan, can be reproduced with a simple autoregressive (AR(1)) model. The
turbulent dissipation rate at an altitude of ~500m is found to be 16 cm2/sec3,
which may be a useful benchmark for atmospheric circulation models.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,960 | Composable security in relativistic quantum cryptography | Relativistic protocols have been proposed to overcome some impossibility
results in classical and quantum cryptography. In such a setting, one takes the
location of honest players into account, and uses the fact that information
cannot travel faster than the speed of light to limit the abilities of
dishonest agents. For example, various relativistic bit commitment protocols
have been proposed. Although it has been shown that bit commitment is
sufficient to construct oblivious transfer and thus multiparty computation,
composing specific relativistic protocols in this way is known to be insecure.
A composable framework is required to perform such a modular security analysis
of construction schemes, but no known frameworks can handle models of
computation in Minkowski space.
By instantiating the systems model from the Abstract Cryptography framework
with Causal Boxes, we obtain such a composable framework, in which messages are
assigned a location in Minkowski space (or superpositions thereof). This allows
us to analyse relativistic protocols and to derive novel possibility and
impossibility results. We show that (1) coin flipping can be constructed from
the primitive channel with delay, (2) biased coin flipping, bit commitment and
channel with delay are all impossible without further assumptions, and (3) it
is impossible to improve a channel with delay. Note that the impossibility
results also hold in the computational and bounded storage settings. This
implies in particular non-composability of all proposed relativistic bit
commitment protocols, of bit commitment in the bounded storage model, and of
biased coin flipping.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,961 | Tensor networks demonstrate the robustness of localization and symmetry protected topological phases | We prove that all eigenstates of many-body localized symmetry protected
topological systems with time reversal symmetry have four-fold degenerate
entanglement spectra in the thermodynamic limit. To that end, we employ unitary
quantum circuits where the number of sites the gates act on grows linearly with
the system size. We find that the corresponding matrix product operator
representation has similar local symmetries as matrix product ground states of
symmetry protected topological phases. Those local symmetries give rise to a
$\mathbb{Z}_2$ topological index, which is robust against arbitrary
perturbations so long as they do not break time reversal symmetry or drive the
system out of the fully many-body localized phase.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,962 | Formation of Intermediate-Mass Black Holes through Runaway Collisions in the First Star Clusters | We study the formation of massive black holes in the first star clusters. We
first locate star-forming gas clouds in proto-galactic haloes of $\gtrsim
\!10^7\,{\rm M}_{\odot}$ in cosmological hydrodynamics simulations and use them
to generate the initial conditions for star clusters with masses of $\sim
\!10^5\,{\rm M}_{\odot}$. We then perform a series of direct-tree hybrid
$N$-body simulations to follow runaway stellar collisions in the dense star
clusters. In all the cluster models except one, runaway collisions occur within
a few million years, and the mass of the central, most massive star reaches
$\sim \!400-1900\,{\rm M}_{\odot}$. Such very massive stars collapse to leave
intermediate-mass black holes (IMBHs). The diversity of the final masses may be
attributed to the differences in a few basic properties of the host haloes such
as mass, central gas velocity dispersion, and mean gas density of the central
core. Finally, we derive the IMBH mass to cluster mass ratios, and compare them
with the observed black hole to bulge mass ratios in the present-day Universe.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,963 | Mean Reverting Portfolios via Penalized OU-Likelihood Estimation | We study an optimization-based approach to construct a mean-reverting
portfolio of assets. Our objectives are threefold: (1) design a portfolio that
is well-represented by an Ornstein-Uhlenbeck process with parameters estimated
by maximum likelihood, (2) select portfolios with desirable characteristics of
high mean reversion and low variance, and (3) select a parsimonious portfolio,
i.e. find a small subset of a larger universe of assets that can be used for
long and short positions. We present the full problem formulation, a
specialized algorithm that exploits partial minimization, and numerical
examples using both simulated and empirical price data.
| 0 | 0 | 0 | 1 | 0 | 1 |
19,964 | When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies | We show how third-party web trackers can deanonymize users of
cryptocurrencies. We present two distinct but complementary attacks. On most
shopping websites, third party trackers receive information about user
purchases for purposes of advertising and analytics. We show that, if the user
pays using a cryptocurrency, trackers typically possess enough information
about the purchase to uniquely identify the transaction on the blockchain, link
it to the user's cookie, and further to the user's real identity. Our second
attack shows that if the tracker is able to link two purchases of the same user
to the blockchain in this manner, it can identify the user's entire cluster of
addresses and transactions on the blockchain, even if the user employs
blockchain anonymity techniques such as CoinJoin. The attacks are passive and
hence can be retroactively applied to past purchases. We discuss several
mitigations, but none are perfect.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,965 | Performance Evaluation of Container-based Virtualization for High Performance Computing Environments | Virtualization technologies have evolved along with the development of
computational environments since virtualization offered needed features at that
time such as isolation, accountability, resource allocation, resource fair
sharing and so on. Novel processor technologies bring to commodity computers
the possibility to emulate diverse environments where a wide range of
computational scenarios can be run. Along with processors evolution, system
developers have created different virtualization mechanisms where each new
development enhanced the performance of previous virtualized environments.
Recently, operating system-based virtualization technologies have captured the
attention of communities worldwide (from industry to academia and research)
because of their important improvements in performance.
In this paper, the features of three container-based operating systems
virtualization tools (LXC, Docker and Singularity) are presented. LXC, Docker,
Singularity and bare metal are put under test through a customized single node
HPL-Benchmark and an MPI-based application for the multi-node testbed. The
disk I/O performance, memory (RAM) performance, network bandwidth, and GPU
performance are also tested for the COS technologies versus bare metal. Preliminary
results and conclusions around them are presented and discussed.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,966 | An integral formula for the $Q$-prime curvature in 3-dimensional CR geometry | We give an integral formula for the total $Q^\prime$-curvature of a
three-dimensional CR manifold with positive CR Yamabe constant and nonnegative
Paneitz operator. Our derivation includes a relationship between the Green's
functions of the CR Laplacian and the $P^\prime$-operator.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,967 | Intelligent Device Discovery in the Internet of Things - Enabling the Robot Society | The Internet of Things (IoT) is continuously growing to connect billions of
smart devices anywhere and anytime in an Internet-like structure, which enables
a variety of applications, services and interactions between human and objects.
In the future, the smart devices are supposed to be able to autonomously
discover a target device with desired features and generate a set of entirely
new services and applications that are not supervised or even imagined by human
beings. The pervasiveness of smart devices, as well as the heterogeneity of
their design and functionalities, raise a major concern: How can a smart device
efficiently discover a desired target device? In this paper, we propose a
Social-Aware and Distributed (SAND) scheme that achieves a fast, scalable and
efficient device discovery in the IoT. The proposed SAND scheme adopts a novel
device ranking criteria that measures the device's degree, social relationship
diversity, clustering coefficient and betweenness. Based on the device ranking
criteria, the discovery request can be guided to travel through critical
devices that stand at the major intersections of the network, and thus quickly
reach the desired target device by contacting only a limited number of
intermediate devices. With the help of such an intelligent device discovery as
SAND, the IoT devices, as well as other computing facilities, software and data
on the Internet, can autonomously establish new social connections with each
other as human beings do. They can formulate self-organized computing groups to
perform required computing tasks, facilitate a fusion of a variety of computing
service, network service and data to generate novel applications and services,
evolve from individual artificial intelligence to collaborative
intelligence, and eventually enable the birth of a robot society.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,968 | Super-blockers and the effect of network structure on information cascades | Modelling information cascades over online social networks is important in
fields from marketing to civil unrest prediction; however, the underlying
network structure strongly affects the probability and nature of such cascades.
Even with simple cascade dynamics, the probability of large cascades is
entirely dictated by network properties, with well-known networks such as
Erdos-Renyi and Barabasi-Albert producing wildly different cascades from the
same model. Indeed, the notion of 'superspreaders' has arisen to describe
highly influential nodes promoting global cascades in a social network. Here we
use a simple model of global cascades to show that the presence of locality in
the network increases the probability of a global cascade due to the increased
vulnerability of connecting nodes. Rather than 'super-spreaders', we find that
the presence of these highly connected 'super-blockers' in heavy-tailed
networks in fact reduces the probability of global cascades, while promoting
information spread when targeted as the initial spreader.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,969 | Super-Gaussian, super-diffusive transport of multi-mode active matter | Living cells exhibit multi-mode transport that switches between an active,
self-propelled motion and a seemingly passive, random motion. Cellular
decision-making over transport mode switching is a stochastic process that
depends on the dynamics of the intracellular chemical network regulating the
cell migration process. Here, we propose a theory and an exactly solvable model
of multi-mode active matter. Our exact model study shows that the reversible
transition between a passive mode and an active mode is the origin of the
anomalous, super-Gaussian transport dynamics, which has been observed in
various experiments for multi-mode active matter. We also present the
generalization of our model to encompass complex multi-mode matter with
arbitrary internal state chemical dynamics and internal state dependent
transport dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,970 | Height functions for motives | We define various height functions for motives over number fields. We compare
these height functions with classical height functions on algebraic varieties,
and also with analogous height functions for variations of Hodge structures on
curves over C. These comparisons provide new questions on motives over number
fields.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,971 | On the dimension effect of regularized linear discriminant analysis | This paper studies the dimension effect of the linear discriminant analysis
(LDA) and the regularized linear discriminant analysis (RLDA) classifiers for
large dimensional data where the observation dimension $p$ is of the same order
as the sample size $n$. More specifically, built on properties of the Wishart
distribution and recent results in random matrix theory, we derive explicit
expressions for the asymptotic misclassification errors of LDA and RLDA
respectively, from which we gain insights into how dimension affects the
performance of classification and in what sense. Motivated by these results, we
propose adjusted classifiers by correcting the bias brought by the unequal
sample sizes. The bias-corrected LDA and RLDA classifiers are shown to have
smaller misclassification rates than LDA and RLDA respectively. Several
interesting examples are discussed in detail and the theoretical results on
dimension effect are illustrated via extensive simulation studies.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,972 | Plasma Wake Accelerators: Introduction and Historical Overview | Fundamental questions on the nature of matter and energy have found answers
thanks to the use of particle accelerators. Societal applications, such as
cancer treatment or cancer imaging, illustrate the impact of accelerators in
our current life. Today, accelerators use metallic cavities that sustain
electric fields with values limited to about 100 MV/m. Because of their ability
to support extreme accelerating gradients, the plasma medium has recently been
proposed for future cavity-like accelerating structures. This contribution
highlights the tremendous evolution of plasma accelerators driven by either
laser or particle beams that allow the production of high quality particle
beams with a degree of tunability and a set of parameters that make them very
pertinent for many applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,973 | $R$-triviality of some exceptional groups | The main aim of this paper is to prove $R$-triviality for simple, simply
connected algebraic groups with Tits index $E_{8,2}^{78}$ or $E_{7,1}^{78}$,
defined over a field $k$ of arbitrary characteristic. Let $G$ be such a group.
We prove that there exists a quadratic extension $K$ of $k$ such that $G$ is
$R$-trivial over $K$, i.e., for any extension $F$ of $K$, $G(F)/R=\{1\}$, where
$G(F)/R$ denotes the group of $R$-equivalence classes in $G(F)$, in the sense
of Manin (see \cite{M}). As a consequence, it follows that the variety $G$ is
retract $K$-rational and that the Kneser-Tits conjecture holds for these groups
over $K$. Moreover, $G(L)$ is projectively simple as an abstract group for any
field extension $L$ of $K$. In their monograph (\cite{TW}) J. Tits and Richard
Weiss conjectured that for an Albert division algebra $A$ over a field $k$, its
structure group $Str(A)$ is generated by scalar homotheties and its
$U$-operators. This is known to be equivalent to the Kneser-Tits conjecture for
groups with Tits index $E_{8,2}^{78}$. We settle this conjecture for Albert
division algebras which are first constructions, in the affirmative. These results
are obtained as corollaries to the main result, which shows that if $A$ is an
Albert division algebra which is a first construction and $\Gamma$ its
structure group, i.e., the algebraic group of the norm similarities of $A$,
then $\Gamma(F)/R=\{1\}$ for any field extension $F$ of $k$, i.e., $\Gamma$ is
$R$-trivial.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,974 | Aperture synthesis imaging of the carbon AGB star R Sculptoris: Detection of a complex structure and a dominating spot on the stellar disk | We present near-infrared interferometry of the carbon-rich asymptotic giant
branch (AGB) star R Sculptoris.
The visibility data indicate a broadly circular resolved stellar disk with a
complex substructure. The observed AMBER squared visibility values show drops
at the positions of CO and CN bands, indicating that these lines form in
extended layers above the photosphere. The AMBER visibility values are best fit
by a model without a wind. The PIONIER data are consistent with the same model.
We obtain a Rosseland angular diameter of 8.9+-0.3 mas, corresponding to a
Rosseland radius of 355+-55 Rsun, an effective temperature of 2640+-80 K, and a
luminosity of log L/Lsun=3.74+-0.18. These parameters match evolutionary tracks
of initial mass 1.5+-0.5 Msun and current mass 1.3+-0.7 Msun. The reconstructed
PIONIER images exhibit a complex structure within the stellar disk including a
dominant bright spot located at the western part of the stellar disk. The spot
has an H-band peak intensity of 40% to 60% above the average intensity of the
limb-darkening-corrected stellar disk. The contrast between the minimum and
maximum intensity on the stellar disk is about 1:2.5.
Our observations are broadly consistent with predictions by dynamic
atmosphere and wind models, although models with wind appear to have a
circumstellar envelope that is too extended compared to our observations. The
detected complex structure within the stellar disk is most likely caused by
giant convection cells, resulting in large-scale shock fronts, and their
effects on clumpy molecule and dust formation seen against the photosphere at
distances of 2-3 stellar radii.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,975 | Intrinsically Sparse Long Short-Term Memory Networks | Long Short-Term Memory (LSTM) has achieved state-of-the-art performances on a
wide range of tasks. Its outstanding performance stems from its long-term
memory ability, which is well suited to sequential data, and from the gating
structure controlling the information flow. However, LSTMs are prone to be
memory-bandwidth limited in realistic applications and require impractically
long training and inference times as model sizes keep increasing. To
tackle this problem, various efficient model compression methods have been
proposed. Most of them need a big and expensive pre-trained model which is a
nightmare for resource-limited devices where the memory budget is strictly
limited. To remedy this situation, in this paper, we incorporate the Sparse
Evolutionary Training (SET) procedure into LSTM, proposing a novel model dubbed
SET-LSTM. Rather than starting with a fully-connected architecture, SET-LSTM
has a sparse topology and dramatically fewer parameters in both phases,
training and inference. Considering the specific architecture of LSTMs, we
replace the LSTM cells and embedding layers with sparse structures and further
on, use an evolutionary strategy to adapt the sparse connectivity to the data.
Additionally, we find that SET-LSTM can provide many different good
combinations of sparse connectivity to substitute the overparameterized
optimization problem of dense neural networks. Evaluated on four sentiment
analysis classification datasets, the results demonstrate that our proposed
model is able to achieve usually better performance than its fully connected
counterpart while having less than 4\% of its parameters.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,976 | Inferencing into the void: problems with implicit populations Comments on `Empirical software engineering experts on the use of students and professionals in experiments' | I welcome the contribution from Falessi et al. [1] hereafter referred to as
F++ , and the ensuing debate. Experimentation is an important tool within
empirical software engineering, so how we select participants is clearly a
relevant question. Moreover as F++ point out, the question is considerably more
nuanced than the simple dichotomy it might appear to be at first sight.
This commentary is structured as follows. In Section 2 I briefly summarise
the arguments of F++ and comment on their approach. Next, in Section 3, I take
a step back to consider the nature of representativeness in inferential
arguments and the need for careful definition. Then I give three examples of
using different types of participant to consider impact. I conclude by arguing,
largely in agreement with F++, that the question of whether student
participants are representative or not depends on the target population.
However, we need to give careful consideration to defining that population and,
in particular, not to overlook the representativeness of tasks and environment.
This is facilitated by explicit description of the target populations.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,977 | NoScope: Optimizing Neural Network Queries over Video at Scale | Recent advances in computer vision, in the form of deep neural networks, have
made it possible to query increasing volumes of video data with high accuracy.
However, neural network inference is computationally expensive at scale:
applying a state-of-the-art object detector in real time (i.e., 30+ frames per
second) to a single video requires a $4000 GPU. In response, we present
NoScope, a system for querying videos that can reduce the cost of neural
network video analysis by up to three orders of magnitude via
inference-optimized model search. Given a target video, object to detect, and
reference neural network, NoScope automatically searches for and trains a
sequence, or cascade, of models that preserves the accuracy of the reference
network but is specialized to the target video and are therefore far less
computationally expensive. NoScope cascades two types of models: specialized
models that forego the full generality of the reference model but faithfully
mimic its behavior for the target video and object; and difference detectors
that highlight temporal differences across frames. We show that the optimal
cascade architecture differs across videos and objects, so NoScope uses an
efficient cost-based optimizer to search across models and cascades. With this
approach, NoScope achieves two to three orders of magnitude speed-ups
(265-15,500x real-time) on binary classification tasks over fixed-angle webcam
and surveillance video while maintaining accuracy within 1-5% of
state-of-the-art neural networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,978 | TensorQuant - A Simulation Toolbox for Deep Neural Network Quantization | Recent research implies that training and inference of deep neural networks
(DNN) can be computed with low precision numerical representations of the
training/test data, weights and gradients without a general loss in accuracy.
The benefit of such compact representations is twofold: they allow a
significant reduction of the communication bottleneck in distributed DNN
training and faster neural network implementations on hardware accelerators
like FPGAs. Several quantization methods have been proposed to map the original
32-bit floating point problem to low-bit representations. While most related
publications validate the proposed approach on a single DNN topology, it
appears to be evident, that the optimal choice of the quantization method and
number of coding bits is topology dependent. To this end, there is no general
theory available, which would allow users to derive the optimal quantization
during the design of a DNN topology. In this paper, we present a quantization
toolbox for the TensorFlow framework. TensorQuant allows a transparent
quantization simulation of existing DNN topologies during training and
inference. TensorQuant supports generic quantization methods and allows
experimental evaluation of the impact of the quantization on single layers as
well as on the full topology. In a first series of experiments with
TensorQuant, we show an analysis of fixed-point quantizations of popular CNN
topologies.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,979 | Information-geometrical characterization of statistical models which are statistically equivalent to probability simplexes | The probability simplex is the set of all probability distributions on a
finite set and is the most fundamental object in the finite probability theory.
In this paper we give a characterization of statistical models on finite sets
which are statistically equivalent to probability simplexes in terms of
$\alpha$-families including exponential families and mixture families. The
subject has a close relation to some fundamental aspects of information
geometry such as $\alpha$-connections and autoparallelity.
| 1 | 0 | 1 | 1 | 0 | 0 |
19,980 | The Network Nullspace Property for Compressed Sensing of Big Data over Networks | We present a novel condition, which we term the net- work nullspace property,
which ensures accurate recovery of graph signals representing massive
network-structured datasets from few signal values. The network nullspace
property couples the cluster structure of the underlying network-structure with
the geometry of the sampling set. Our results can be used to design efficient
sampling strategies based on the network topology.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,981 | On physically redundant and irrelevant features when applying Lie-group symmetry analysis to hydrodynamic stability analysis | Every linear system of partial differential equations (PDEs) admits a scaling
symmetry in its dependent variables. In conjunction with other admitted
symmetries of linear type, the associated invariant solution condition poses a
linear eigenvalue problem. If this problem is structured such that the spectral
theorem applies, then the general solution of the considered linear PDE system
is obtained by summing or integrating the invariant eigenfunctions (modes) over
all eigenvalues, depending on whether the spectrum of the operator is discrete
or continuous. By first studying the 1-D diffusion equation as a demonstrating
example, this method is then applied to a relevant 2-D problem from
hydrodynamic stability analysis. The aim of this study is to draw attention to
the following two independent facts that need to be addressed in future studies
when constructing solutions for linear PDEs with the method of Lie-symmetries:
(i) Although each new symmetry leads to a mathematically different spectral
decomposition, they may all be physically redundant to standard ones and do not
reveal a new physical mechanism behind the overall considered dynamical
process, as incorrectly asserted, for example, in the recent studies by the
group of Oberlack et al. Hence, with regard to linear stability analysis, no
physically "new" or more "general" modes are generated by this method than the
ones already established. (ii) Next to the eigenvalue parameters, each single
mode can also acquire non-system parameters, depending on the choice of its
underlying symmetry. These symmetry-induced parameters, however, are all
physically irrelevant, since their effect on a single mode will cancel when
considering all modes collectively. In particular, the collective action of all
single modes is identical for all symmetry-based decompositions and thus
indistinguishable when considering the full physical fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,982 | Transforming Musical Signals through a Genre Classifying Convolutional Neural Network | Convolutional neural networks (CNNs) have been successfully applied on both
discriminative and generative modeling for music-related tasks. For a
particular task, the trained CNN contains information representing the decision
making or the abstracting process. One can hope to manipulate existing music
based on this 'informed' network and create music with new features
corresponding to the knowledge obtained by the network. In this paper, we
propose a method to utilize the stored information from a CNN trained on a
musical genre classification task. The network was composed of three
convolutional layers, and was trained to classify five-second song clips into
five different genres. After training, randomly selected clips were modified by
maximizing the sum of outputs from the network layers. In addition to the
potential of such CNNs to produce interesting audio transformation, more
information about the network and the original music could be obtained from the
analysis of the generated features since these features indicate how the
network 'understands' the music.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,983 | AMTnet: Action-Micro-Tube Regression by End-to-end Trainable Deep Architecture | Dominant approaches to action detection can only provide sub-optimal
solutions to the problem, as they rely on seeking frame-level detections, to
later compose them into "action tubes" in a post-processing step. With this
paper we radically depart from current practice, and take a first step towards
the design and implementation of a deep network architecture able to classify
and regress whole video subsets, so providing a truly optimal solution of the
action detection problem. In this work, in particular, we propose a novel deep
net framework able to regress and classify 3D region proposals spanning two
successive video frames, whose core is an evolution of classical region
proposal networks (RPNs). As such, our 3D-RPN net is able to effectively encode
the temporal aspect of actions by purely exploiting appearance, as opposed to
methods which heavily rely on expensive flow maps. The proposed model is
end-to-end trainable and can be jointly optimised for action localisation and
classification in a single step. At test time the network predicts
"micro-tubes" encompassing two successive frames, which are linked up into
complete action tubes via a new algorithm which exploits the temporal encoding
learned by the network and cuts computation time by 50%. Promising results on
the J-HMDB-21 and UCF-101 action detection datasets show that our model does
outperform the state-of-the-art when relying purely on appearance.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,984 | Adaptive Estimation in Structured Factor Models with Applications to Overlapping Clustering | This work introduces a novel estimation method, called LOVE, of the entries
and structure of a loading matrix A in a sparse latent factor model X = AZ + E,
for an observable random vector X in R^p, with correlated unobservable factors Z
in R^K, with K unknown, and independent noise E. Each row of A is scaled and
sparse. In order to identify the loading matrix A, we require the existence of
pure variables, which are components of X that are associated, via A, with one
and only one latent factor. Despite the fact that the number of factors K, the
number of the pure variables, and their location are all unknown, we only
require a mild condition on the covariance matrix of Z, and a minimum of only
two pure variables per latent factor to show that A is uniquely defined, up to
signed permutations. Our proofs for model identifiability are constructive, and
lead to our novel estimation method of the number of factors and of the set of
pure variables, from a sample of size n of observations on X. This is the first
step of our LOVE algorithm, which is optimization-free, and has low
computational complexity of order p^2. The second step of LOVE is an easily
implementable linear program that estimates A. We prove that the resulting
estimator is minimax rate optimal up to logarithmic factors in p. The model
structure is motivated by the problem of overlapping variable clustering,
ubiquitous in data science. We define the population level clusters as groups
of those components of X that are associated, via the sparse matrix A, with the
same unobservable latent factor, and multi-factor association is allowed.
Clusters are respectively anchored by the pure variables, and form overlapping
sub-groups of the p-dimensional random vector X. The Latent model approach to
OVErlapping clustering is reflected in the name of our algorithm, LOVE.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,985 | Asymptotic profile of solutions for some wave equations with very strong structural damping | We consider the Cauchy problem in R^n for some types of damped wave
equations. We derive asymptotic profiles of solutions with weighted
L^{1,1}(R^n) initial data by employing a simple method introduced by the first
author. The obtained results will include regularity loss type estimates, which
are essentially new in this kind of equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,986 | Spontaneous generation of fractional vortex-antivortex pairs at single edges of high-Tc superconductors | Unconventional d-wave superconductors with pair-breaking edges are predicted
to have ground states with spontaneously broken time-reversal and translational
symmetries. We use the quasiclassical theory of superconductivity to
demonstrate that such phases can exist at any single pair-breaking facet. This
implies that a greater variety of systems, not necessarily mesoscopic in size,
should be unstable to such symmetry breaking. The density of states averaged
over the facet displays a broad peak centered at zero energy, which is
consistent with experimental findings of a broad zero-bias conductance peak
with a temperature-independent width at low temperatures.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,987 | A Survey of Neuromorphic Computing and Neural Networks in Hardware | Neuromorphic computing has come to refer to a variety of brain-inspired
computers, devices, and models that contrast the pervasive von Neumann computer
architecture. This biologically inspired approach has created highly connected
synthetic neurons and synapses that can be used to model neuroscience theories
as well as solve challenging machine learning problems. The promise of the
technology is to create a brain-like ability to learn and adapt, but the
technical challenges are significant, starting with an accurate neuroscience
model of how the brain works, to finding materials and engineering
breakthroughs to build devices to support these models, to creating a
programming framework so the systems can learn, to creating applications with
brain-like capabilities. In this work, we provide a comprehensive survey of the
research and motivations for neuromorphic computing over its history. We begin
with a 35-year review of the motivations and drivers of neuromorphic computing,
then look at the major research areas of the field, which we define as
neuro-inspired models, algorithms and learning approaches, hardware and
devices, supporting systems, and finally applications. We conclude with a broad
discussion on the major research topics that need to be addressed in the coming
years to see the promise of neuromorphic computing fulfilled. The goals of this
work are to provide an exhaustive review of the research conducted in
neuromorphic computing since the inception of the term, and to motivate further
work by illuminating gaps in the field where new research is needed.
| 1 | 0 | 0 | 0 | 0 | 0 |
19,988 | Junctions of refined Wilson lines and one-parameter deformation of quantum groups | We study junctions of Wilson lines in refined SU(N) Chern-Simons theory and
their local relations. We focus on junctions of Wilson lines in antisymmetric
and symmetric powers of the fundamental representation and propose a set of
local relations which realize one-parameter deformations of quantum groups
$\dot{U}_{q}(\mathfrak{sl}_{m})$ and $\dot{U}_{q}(\mathfrak{sl}_{n|m})$.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,989 | Online estimation of the asymptotic variance for averaged stochastic gradient algorithms | Stochastic gradient algorithms are increasingly studied since they can deal
efficiently and online with large samples in high-dimensional spaces. In this
paper, we first establish a Central Limit Theorem for these estimates as well
as for their averaged version in general Hilbert spaces. Moreover, since having
the asymptotic normality of estimates is often unusable without an estimation
of the asymptotic variance, we introduce a new recursive algorithm for
estimating it, and we establish its almost sure rate of convergence
as well as its rate of convergence in quadratic mean. Finally, two examples
consisting of estimating the parameters of a logistic regression and
estimating geometric quantiles are given.
| 0 | 0 | 1 | 1 | 0 | 0 |
19,990 | The response of the terrestrial bow shock and magnetopause to the long term decline in solar polar fields | The location of the terrestrial magnetopause (MP) and its subsolar stand-off
distance depends not only on the solar wind dynamic pressure and the
interplanetary magnetic field (IMF), both of which play a crucial role in
determining its shape, but also on the nature of the processes involved in the
interaction between the solar wind and the magnetosphere. The stand-off
distance of the earth's MP and bow shock (BS) also define the extent of
terrestrial magnetic fields into near-earth space on the sunward side and have
important consequences for space weather. However, asymmetries due to the
direction of the IMF are hard to account for, making it nearly impossible to
favour any specific model over the other in estimating the extent of the MP or
BS. Thus, both numerical and empirical models have been used and compared to
estimate the BS and MP stand-off distances as well as the MP shape, in the
period Jan. 1975-Dec. 2016, covering solar cycles 21-24. The computed MP and BS
stand-off distances have been found to be increasing steadily over the past two
decades, since ~1995, spanning solar cycles 23 and 24. The increasing trend is
consistent with earlier reported studies of a long term and steady decline in
solar polar magnetic fields and solar wind micro-turbulence levels. The present
study, thus, highlights the response of the terrestrial magnetosphere to the
long term global changes in both solar and solar wind activity, through a
detailed study of the extent and shape of the terrestrial MP and BS over the
past four solar cycles, a period spanning the last four decades.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,991 | Superconductivity of barium-VI synthesized via compression at low temperatures | Using a membrane-driven diamond anvil cell and both ac magnetic
susceptibility and electrical resistivity measurements, we have characterized
the superconducting phase diagram of elemental barium to pressures as high as
65 GPa. We have determined the superconducting properties of the recently
discovered Ba-VI crystal structure, which can only be accessed via the
application of pressure at low temperature. We find that Ba-VI exhibits a
maximum Tc near 8 K, which is substantially higher than the maximum Tc found
when pressure is applied at room temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,992 | Self-Supervised Damage-Avoiding Manipulation Strategy Optimization via Mental Simulation | Everyday robotics is challenged to deal with autonomous product handling in
applications like logistics or retail, possibly causing damage to the items
during manipulation. Traditionally, most approaches try to minimize physical
interaction with goods. However, we propose to take into account any unintended
motion of objects in the scene and to learn manipulation strategies in a
self-supervised way which minimize the potential damage. The presented approach
consists of a planning method that determines the optimal sequence to
manipulate a number of objects in a scene with respect to possible damage by
simulating interaction and hence anticipating scene dynamics. The planned
manipulation sequences are taken as input to a machine learning process which
generalizes to new, unseen scenes in the same application scenario. This
learned manipulation strategy is continuously refined in a self-supervised
optimization cycle during load-free times of the system. Such a
simulation-in-the-loop setup is commonly known as mental simulation and allows
for efficient, fully automatic generation of training data as opposed to
classical supervised learning approaches. In parallel, the generated
manipulation strategies can be deployed in near-real time in an anytime
fashion. We evaluate our approach on one industrial scenario (autonomous
container unloading) and one retail scenario (autonomous shelf replenishment).
| 1 | 0 | 0 | 0 | 0 | 0 |
19,993 | Adversarial Deep Learning for Robust Detection of Binary Encoded Malware | Malware is constantly adapting in order to avoid detection. Model based
malware detectors, such as SVM and neural networks, are vulnerable to so-called
adversarial examples, which are modest changes to detectable malware that allow
the resulting malware to evade detection. Continuous-valued methods that are
robust to adversarial examples of images have been developed using saddle-point
optimization formulations. We are inspired by them to develop similar methods
for the discrete, e.g. binary, domain which characterizes the features of
malware. A specific extra challenge of malware is that the adversarial examples
must be generated in a way that preserves their malicious functionality. We
introduce methods capable of generating functionally preserved adversarial
malware examples in the binary domain. Using the saddle-point formulation, we
incorporate the adversarial examples into the training of models that are
robust to them. We evaluate the effectiveness of the methods and others in the
literature on a set of Portable Executable (PE) files. Comparison prompts our
introduction of an online measure computed during training to assess general
expectation of robustness.
| 0 | 0 | 0 | 1 | 0 | 0 |
19,994 | Simple closed curves, finite covers of surfaces, and power subgroups of Out(F_n) | We construct examples of finite covers of punctured surfaces where the first
rational homology is not spanned by lifts of simple closed curves. More
generally, for any set $\mathcal{O} \subset F_n$ which is contained in the
union of finitely many $Aut(F_n)$-orbits, we construct finite-index normal
subgroups of $F_n$ whose first rational homology is not spanned by powers of
elements of $\mathcal{O}$. These examples answer questions of Farb-Hensel,
Looijenga, and Marche. We also show that the quotient of $Out(F_n)$ by the
subgroup generated by kth powers of transvections often contains infinite order
elements, strengthening a result of Bridson-Vogtmann saying that it is often
infinite. Finally, for any set $\mathcal{O} \subset F_n$ which is contained in
the union of finitely many $Aut(F_n)$-orbits, we construct integral linear
representations of free groups that have infinite image and map all elements of
$\mathcal{O}$ to torsion elements.
| 0 | 0 | 1 | 0 | 0 | 0 |
19,995 | Hard Mixtures of Experts for Large Scale Weakly Supervised Vision | Training convolutional networks (CNNs) that fit on a single GPU with
minibatch stochastic gradient descent has become effective in practice.
However, there is still no effective method for training large CNNs that do
not fit in the memory of a few GPU cards, or for parallelizing CNN training. In
this work we show that a simple hard mixture of experts model can be
efficiently trained to good effect on large scale hashtag (multilabel)
prediction tasks. Mixture of experts models are not new (Jacobs et al. 1991,
Collobert et al. 2003), but in the past, researchers have had to devise
sophisticated methods to deal with data fragmentation. We show empirically that
modern weakly supervised data sets are large enough to support naive
partitioning schemes where each data point is assigned to a single expert.
Because the experts are independent, training them in parallel is easy, and
evaluation is cheap for the size of the model. Furthermore, we show that we can
use a single decoding layer for all the experts, allowing a unified feature
embedding space. We demonstrate that it is feasible (and in fact relatively
painless) to train far larger models than could be practically trained with
standard CNN architectures, and that the extra capacity can be well used on
current datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
19,996 | Single-hole GPR reflection imaging of solute transport in a granitic aquifer | Identifying transport pathways in fractured rock is extremely challenging as
flow is often organized in a few fractures that occupy a very small portion of
the rock volume. We demonstrate that saline tracer experiments combined with
single-hole ground penetrating radar (GPR) reflection imaging can be used to
monitor saline tracer movement within mm-aperture fractures. A dipole tracer
test was performed in a granitic aquifer by injecting a saline solution in a
known fracture, while repeatedly acquiring single-hole GPR sections in the
pumping borehole located 6 m away. The final depth-migrated difference sections
make it possible to identify consistent temporal changes over a 30 m depth
interval at locations corresponding to fractures previously imaged in GPR
sections acquired under natural flow and tracer-free conditions. The experiment
allows determining the dominant flow paths of the injected tracer and the
velocity (0.4-0.7 m/min) of the tracer front.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,997 | Episodic Torque-Luminosity Correlations and Anticorrelations of GX 1+4 | We analyse archival CGRO-BATSE X-ray flux and spin frequency measurements of
GX 1+4 over a time span of 3000 days. We systematically search for time
dependent variations of torque luminosity correlation. Our preliminary results
indicate that the correlation shifts from being positive to negative on time
scales of few 100 days.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,998 | Applications of Fractional Calculus to Newtonian Mechanics | We investigate some basic applications of Fractional Calculus (FC) to
Newtonian mechanics. After a brief review of FC, we consider a possible
generalization of Newton's second law of motion and apply it to the case of a
body subject to a constant force. In our second application of FC to Newtonian
gravity, we consider a generalized fractional gravitational potential and
derive the related circular orbital velocities. This analysis might be used as
a tool to model galactic rotation curves, in view of the dark matter problem.
Both applications have a pedagogical value in connecting fractional calculus to
standard mechanics and can be used as a starting point for a more advanced
treatment of fractional mechanics.
| 0 | 1 | 0 | 0 | 0 | 0 |
19,999 | Radial Surface Density Profiles of Gas and Dust in the Debris Disk around 49 Ceti | We present ~0.4 arcsecond resolution images of CO(3-2) and associated continuum
emission from the gas-bearing debris disk around the nearby A star 49 Ceti,
observed with the Atacama Large Millimeter/Submillimeter Array (ALMA). We
analyze the ALMA visibilities in tandem with the broad-band spectral energy
distribution to measure the radial surface density profiles of dust and gas
emission from the system. The dust surface density decreases with radius
between ~100 and 310 au, with a marginally significant enhancement of surface
density at a radius of ~110 au. The SED requires an inner disk of small grains
in addition to the outer disk of larger grains resolved by ALMA. The gas disk
exhibits a surface density profile that increases with radius, contrary to most
previous spatially resolved observations of circumstellar gas disks. While ~80%
of the CO flux is well described by an axisymmetric power-law disk in Keplerian
rotation about the central star, residuals at ~20% of the peak flux exhibit a
departure from axisymmetry suggestive of spiral arms or a warp in the gas disk.
The radial extent of the gas disk (~220 au) is smaller than that of the dust
disk (~300 au), consistent with recent observations of other gas-bearing debris
disks. While there are so far only three broad debris disks with well
characterized radial dust profiles at millimeter wavelengths, 49 Ceti's disk
shows a markedly different structure from two radially resolved gas-poor debris
disks, implying that the physical processes generating and sculpting the gas
and dust are fundamentally different.
| 0 | 1 | 0 | 0 | 0 | 0 |
20,000 | Spontaneously broken translational symmetry at edges of high-temperature superconductors: thermodynamics in magnetic field | We investigate equilibrium properties, including structure of the order
parameter, superflow patterns, and thermodynamics of low-temperature surface
phases of layered d_{x^2-y^2}-wave superconductors in magnetic field. At zero
external magnetic field, time-reversal symmetry and continuous translational
symmetry along the edge are broken spontaneously in a second order phase
transition at a temperature $T^*\approx 0.18 T_c$, where $T_c$ is the
superconducting transition temperature. At the phase transition there is a jump
in the specific heat that scales with the ratio between the edge length $D$ and
layer area ${\cal A}$ as $(D\xi_0/{\cal A})\Delta C_d$, where $\Delta C_d$ is
the jump in the specific heat at the d-wave superconducting transition and
$\xi_0$ is the superconducting coherence length. The phase with broken symmetry
is characterized by a gauge invariant superfluid momentum ${\bf p}_s$ that
forms a non-trivial planar vector field with a chain of sources and sinks along
the edges with a period of approximately $12\xi_0$, and saddle point
disclinations in the interior. To find out the relative importance of
time-reversal and translational symmetry breaking we apply an external field
that breaks time-reversal symmetry explicitly. We find that the phase
transition into the state with the non-trivial ${\bf p}_s$ vector field keeps
its main signatures, and is still of second order. In the external field, the
saddle point disclinations are pushed towards the edges, and thereby a chain of
edge motifs are formed, where each motif contains a source, a sink, and a
saddle point. Due to a competing paramagnetic response at the edges, the phase
transition temperature $T^*$ is slowly suppressed with increasing magnetic
field strength, but the phase with broken symmetry survives into the mixed
state.
| 0 | 1 | 0 | 0 | 0 | 0 |