We calculate exclusive production of a longitudinally polarized heavy vector
meson at next-to-leading order in the dipole picture. The large quark mass
allows us to separately include both the first QCD correction proportional to
the coupling constant $\alpha_s$, and the first relativistic correction
suppressed by the quark velocity $v^2$. Both of these corrections are found to
be numerically important in $\mathrm{J}/\psi$ production. The results obtained
are directly suitable for phenomenological calculations. We also demonstrate
how vector meson production provides complementary information to structure
function analyses when one extracts the initial condition for the energy
evolution of the proton's small-$x$ structure.
|
Nanopapers based on graphene and related materials were recently proposed for
use in heat spreaders. To overcome the typical brittleness of such materials,
this work addressed the combination of graphite nanoplatelets (GNP) with a
soft, tough, and crystalline polymer acting as an efficient binder between
nanoplatelets. With this aim, polycaprolactone (PCL) was selected and exploited
in this paper. The crystalline organization of PCL within the nanopaper was
studied to investigate the effect of polymer confinement between GNP.
Thermomechanical properties were studied by dynamic mechanical analyses at
variable temperature and by creep measurements at high temperature,
demonstrating superior resistance at temperatures well above the melting point
of PCL. Finally, the heat conduction properties of the nanopapers were
evaluated, yielding outstanding values above 150 W m^-1 K^-1.
|
The standard paradigm of cosmology assumes General Relativity (GR) is a valid
theory for gravity at scales at which it has not been properly tested.
Developing novel tests of GR and its alternatives is crucial if we want to give
strength to the model or find departures from GR in the data. Since
alternatives to GR are usually defined through nonlinear equations, designing
new tests for these theories implies a jump in complexity and thus a need for
refining the simulation techniques. We summarize existing techniques for
dealing with modified gravity (MG) in the context of cosmological simulations.
$N$-body codes for MG are usually based on standard gravity codes. We describe
the required extensions, classifying the models not according to their original
motivation, but by the numerical challenges that must be faced by numericists.
MG models usually give rise to elliptic equations, for which multigrid
techniques are well suited. Thus, we devote a large fraction of this review to
describing this particular technique. Contrary to other reviews on multigrid
methods, we focus on the specific techniques that are required to solve MG
equations and describe useful tricks. Finally, we describe extensions for going
beyond the static approximation and dealing with baryons.
|
There is much confusion in the literature over the Hurst exponent (H). The
purpose of this paper is to illustrate the difference between fractional
Brownian motion (fBm) on the one hand and Gaussian Markov processes where H is
different from 1/2 on the other. The difference lies in the increments, which are
stationary and correlated in one case and nonstationary and uncorrelated in the
other. The two- and one-point densities of fBm are constructed explicitly. The
two-point density does not scale. The one-point density for a semi-infinite
time interval is identical to that for a scaling Gaussian Markov process with H
different from 1/2 over a finite time interval. We conclude that both Hurst
exponents and one-point densities are inadequate for deducing the underlying
dynamics from empirical data. We apply these conclusions in the end to make a
focused statement about nonlinear diffusion.
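For orientation, the standard two-point function of fBm referred to above can be written as (textbook material, included only as a reference point):
\[ \langle B_H(t)\, B_H(s) \rangle = \tfrac{1}{2}\left( t^{2H} + s^{2H} - |t-s|^{2H} \right), \qquad t, s \ge 0, \]
so the increments of fBm are stationary but, for $H \neq 1/2$, correlated, whereas a Gaussian Markov process with the same one-point scaling has uncorrelated, nonstationary increments, as stated above.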
|
Both observations and recent numerical simulations of the circumgalactic
medium (CGM) support the hypothesis that a self-regulating feedback loop
suspends the gas density of the ambient CGM close to the galaxy in a state with
a ratio of cooling time to freefall time >10. This limiting ratio is thought to
arise because circumgalactic gas becomes increasingly susceptible to multiphase
condensation as the ratio declines. If the timescale ratio gets too small, then
cold clouds precipitate out of the CGM, rain into the galaxy, and fuel
energetic feedback that raises the ambient cooling time. The astrophysical
origin of this so-called precipitation limit is not simple but is critical to
understanding the CGM and its role in galaxy evolution. This paper therefore
attempts to interpret its origin as simply as possible, relying mainly on
conceptual reasoning and schematic diagrams. It illustrates how the
precipitation limit can depend on both the global configuration of a galactic
atmosphere and the degree to which dynamical disturbances drive CGM
perturbations. It also frames some tests of the precipitation hypothesis that
can be applied to both CGM observations and numerical simulations of galaxy
evolution.
|
Optimizing parameterized quantum circuits (PQCs) is the leading approach to
make use of near-term quantum computers. However, very little is known about
the cost function landscape for PQCs, which hinders progress towards
quantum-aware optimizers. In this work, we investigate the connection between
three different landscape features that have been observed for PQCs: (1)
exponentially vanishing gradients (called barren plateaus), (2) exponential
cost concentration about the mean, and (3) the exponential narrowness of minima
(called narrow gorges). We analytically prove that these three phenomena occur
together, i.e., when one occurs then so do the other two. A key implication of
this result is that one can numerically diagnose barren plateaus via cost
differences rather than via the computationally more expensive gradients. More
broadly, our work shows that quantum mechanics rules out certain cost
landscapes (which otherwise would be mathematically possible), and hence our
results are interesting from a quantum foundations perspective.
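As a rough illustration of the diagnostic suggested above (a minimal numerical sketch with a toy stand-in cost function, not the paper's analytical construction), one can sample cost differences at random parameter points and track how their variance scales with the number of parameters:

```python
# Minimal sketch: diagnosing cost concentration via cost *differences* rather
# than gradients.  The cost below is a toy stand-in; in practice C(theta)
# would be the expectation value returned by a parameterized quantum circuit.
import numpy as np

rng = np.random.default_rng(0)

def cost(theta):
    return np.prod(np.cos(theta))  # hypothetical placeholder cost

def cost_difference_variance(n_params, n_samples=500, eps=0.1):
    diffs = []
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 2.0 * np.pi, n_params)
        u = rng.normal(size=n_params)
        u /= np.linalg.norm(u)
        diffs.append(cost(theta) - cost(theta + eps * u))
    return np.var(diffs)

# An exponential decay of this variance with n_params signals cost
# concentration, and hence (by the result above) barren plateaus.
for n in (2, 4, 8, 16):
    print(n, cost_difference_variance(n))
```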
|
The Rosetta mission provided us with detailed data of the surface of the
nucleus of comet 67P/Churyumov-Gerasimenko. In order to better understand the
physical processes associated with the comet activity and the surface evolution
of its nucleus, we performed a detailed comparative morphometrical analysis of
two depressions located in the Ash region. To detect morphological temporal
changes, we compared pre- and post-perihelion high-resolution (pixel scale of
0.07-1.75 m) OSIRIS images of the two depressions. We quantified the changes
using the dynamic heights and the gravitational slopes calculated from the
Digital Terrain Model (DTM) of the studied area using the ArcGIS software
before and after perihelion. Our comparative morphometrical analysis allowed us
to detect and quantify the temporal changes that occurred in two depressions of
the Ash region during the last perihelion passage. We find that the two
depressions grew by several meters. The area of the smallest depression
(structure I) increased by 90+/-20%, with two preferential growths: one close
to the cliff, associated with the appearance of new boulders at its foot, and a
second one on the opposite side of the cliff. The largest depression (structure
II) grew in all directions, increasing in area by 20+/-5%, and no new deposits
have been detected. We interpreted these two depression changes as being driven
by the sublimation of ices, which explains their global growth and which can
also trigger landslides. The deposits associated with depression II reveal a
stair-like topography, indicating that they have accumulated during several
successive landslides from different perihelion passages. Overall, these
observations bring additional evidence of complex active processes and
reshaping events occurring on short timescales, such as depression growth and
landslides, and on longer timescales, such as cliff retreat.
|
Frame reconstruction (current or future frame) based on Auto-Encoder (AE) is
a popular method for video anomaly detection. With models trained on the normal
data, the reconstruction errors of anomalous scenes are usually much larger
than those of normal ones. Previous methods introduced the memory bank into AE,
for encoding diverse normal patterns across the training videos. However, they
are memory-consuming and cannot cope with unseen new scenarios in the testing
data. In this work, we propose a dynamic prototype unit (DPU) to encode the
normal dynamics as prototypes in real time, free from extra memory cost. In
addition, we introduce meta-learning to our DPU to form a novel few-shot
normalcy learner, namely the Meta-Prototype Unit (MPU). It enables fast
adaptation to new scenes with only a few update iterations.
Extensive experiments are conducted on various benchmarks. The superior
performance over the state-of-the-art demonstrates the effectiveness of our
method.
|
We prove that (strong) fully-concurrent bisimilarity and causal-net
bisimilarity are decidable for finite bounded Petri nets. The proofs are based
on a generalization of the ordered marking proof technique that Vogler used to
demonstrate that (strong) fully-concurrent bisimilarity (or, equivalently,
history-preserving bisimilarity) is decidable on finite safe nets.
|
Based on the equivalent-dynamic-linearization model (EDLM), we propose a model
predictive control (MPC) scheme for single-input single-output (SISO) nonlinear
or linear systems. After augmenting the EDLM with a disturbance term for
multiple-input multiple-output nonlinear or linear systems, a
disturbance-compensated MPC is proposed to address the disturbance rejection
problem. The system performance analysis results are much clearer than the
stability analyses of MPC in current works, which may help engineers understand
how to design, analyze, and apply the controller in practice.
|
Point Cloud Sampling and Recovery (PCSR) is critical for massive real-time
point cloud collection and processing since raw data usually requires large
storage and computation. In this paper, we address a fundamental problem in
PCSR: How to downsample the dense point cloud with arbitrary scales while
preserving the local topology of the discarded points in a case-agnostic manner
(i.e., without additional storage for point relationships)? We propose a novel
Locally Invertible Embedding for point cloud adaptive sampling and recovery
(PointLIE). Instead of learning to predict the underlying geometry details in a
seemingly plausible manner, PointLIE unifies point cloud sampling and
upsampling to one single framework through bi-directional learning.
Specifically, PointLIE recursively samples and adjusts neighboring points on
each scale. Then it encodes the neighboring offsets of sampled points to a
latent space and thus decouples the sampled points and the corresponding local
geometric relationship. Once the latent space is determined and the deep
model is optimized, the recovery process could be conducted by passing the
recover-pleasing sampled points and a randomly-drawn embedding to the same
network through an invertible operation. Such a scheme could guarantee the
fidelity of dense point recovery from sampled points. Extensive experiments
demonstrate that the proposed PointLIE outperforms state-of-the-art methods both
quantitatively and qualitatively. Our code is released through
https://github.com/zwb0/PointLIE.
|
The mixed linear regression (MLR) model is among the most exemplary statistical
tools for modeling non-linear distributions using a mixture of linear models.
When the additive noise in the MLR model is Gaussian, the
Expectation-Maximization (EM) algorithm is widely used for maximum likelihood
estimation of the MLR parameters. However, when the noise is non-Gaussian, the
steps of the EM algorithm may not have closed-form update rules, which makes EM
impractical. In this work, we study the maximum likelihood estimation of the
parameters of the MLR model when the additive noise has a non-Gaussian
distribution. In particular, we consider the case in which the noise has a
Laplacian distribution, and we first show that, unlike in the Gaussian case,
the resulting sub-problems of the EM algorithm do not have closed-form update
rules, preventing us from using EM. To overcome this issue, we propose a new
algorithm based on combining the alternating direction method of multipliers
(ADMM) with the EM algorithm idea. Our numerical experiments show that our
method outperforms the EM algorithm in statistical accuracy and computational
time in the non-Gaussian noise case.
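To make the obstacle concrete, the sketch below (hypothetical data, two components, equal mixing weights, known noise scale; not the paper's algorithm) shows the E-step under Laplacian noise; the corresponding M-step is a weighted least-absolute-deviations problem with no closed form, which is the gap the proposed ADMM-based update is meant to fill:

```python
# Minimal sketch of the E-step for a two-component mixed linear regression
# with Laplace(0, b) additive noise (equal mixing weights assumed).
import numpy as np

def e_step(X, y, betas, b=1.0):
    """Posterior responsibilities of each component for each sample."""
    # Laplace log-density of the residuals, up to a common constant.
    logp = np.stack([-np.abs(y - X @ beta) / b for beta in betas], axis=1)
    logp -= logp.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logp)
    return w / w.sum(axis=1, keepdims=True)   # shape (n_samples, n_components)

# Toy usage with hypothetical data; the M-step would minimize
# sum_k resp[:, k] * |y - X @ beta_k| over beta_k, which has no closed form.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.laplace(scale=0.3, size=200)
resp = e_step(X, y, betas=[np.zeros(3), np.ones(3)], b=0.3)
```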
|
We study automorphism and birational automorphism groups of varieties over
fields of positive characteristic from the point of view of Jordan and
$p$-Jordan properties. In particular, we show that the Cremona group of rank $2$
over a field of characteristic $p>0$ is $p$-Jordan, and the birational
automorphism group of an arbitrary geometrically irreducible algebraic surface
is nilpotently $p$-Jordan of class at most $2$. Also, we show that the
automorphism group of a smooth geometrically irreducible projective variety of
non-negative Kodaira dimension is Jordan in the usual sense.
|
One of the most important early results from the Parker Solar Probe (PSP) is
the ubiquitous presence of magnetic switchbacks, whose origin is under debate.
Using a three-dimensional direct numerical simulation of the equations of
compressible magnetohydrodynamics from the corona to 40 solar radii, we
investigate whether magnetic switchbacks emerge from granulation-driven
Alfv\'en waves and turbulence in the solar wind. The simulated solar wind is an
Alfv\'enic slow-solar-wind stream with a radial profile consistent with various
observations, including observations from PSP. As a natural consequence of
Alfv\'en-wave turbulence, the simulation reproduced magnetic switchbacks with
many of the same properties as observed switchbacks, including Alfv\'enic v-b
correlation, spherical polarization (low magnetic compressibility), and a
volume filling fraction that increases with radial distance. The analysis of
propagation speed and scale length shows that the magnetic switchbacks are
large-amplitude (nonlinear) Alfv\'en waves with discontinuities in the magnetic
field direction. We directly compare our simulation with observations using a
virtual flyby of PSP in our simulation domain. We conclude that at least some
of the switchbacks observed by PSP are a natural consequence of the growth in
amplitude of spherically polarized Alfv\'en waves as they propagate away from
the Sun.
|
All the news that's (un)fit to publish.
|
The bifurcation diagram is a powerful tool that visually conveys information
about the behavior of the equilibrium points of a dynamical system with respect
to a varying parameter. This paper proposes an educational algorithm by which
local bifurcation diagrams can be plotted manually, quickly, and in a
straightforward way. For students, this algorithmic method appears simpler and
more straightforward than analytical plotting methods for the educational and
ordinary problems encountered while studying courses related to dynamical
systems and bifurcation diagrams. For validation, the algorithm has been
applied to several educational examples in a course on dynamical systems.
|
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed
to facilitate the research and development of neural speech processing
technologies by being simple, flexible, user-friendly, and well-documented.
This paper describes the core architecture designed to support several tasks of
common interest, allowing users to naturally conceive, compare and share novel
speech processing pipelines. SpeechBrain achieves competitive or
state-of-the-art performance in a wide range of speech benchmarks. It also
provides training recipes, pretrained models, and inference scripts for popular
speech datasets, as well as tutorials which allow anyone with basic Python
proficiency to familiarize themselves with speech technologies.
|
We propose Citrinet - a new end-to-end convolutional Connectionist Temporal
Classification (CTC) based automatic speech recognition (ASR) model. Citrinet
is a deep residual neural model that uses 1D time-channel separable convolutions
combined with sub-word encoding and squeeze-and-excitation. The resulting
architecture significantly reduces the gap between non-autoregressive models and
sequence-to-sequence and transducer models. We evaluate Citrinet on
LibriSpeech, TED-LIUM2, AISHELL-1 and Multilingual LibriSpeech (MLS) English
speech datasets. Citrinet accuracy on these datasets is close to the best
autoregressive Transducer models.
|
This paper addresses the trajectory tracking problem between an autonomous
underwater vehicle (AUV) and a mobile surface ship, both equipped with optical
communication transceivers. The challenging issue is to maintain stable
connectivity between the two autonomous vehicles within an optical
communication range. We define a directed optical line-of-sight (LoS) link
between the two vehicles. The transmitter is mounted on the AUV, while
the surface ship is equipped with an optical receiver. However, this optical
communication channel requires a stable relative transmitter-receiver position
to maintain quality of service, typically characterized by the bit rate and bit
error rate. A cone-shaped beam region of the optical receiver is approximated based
on the channel model; then, a minimum bit rate is ensured if the AUV
transmitter remains inside of this region. Additionally, we design two control
algorithms for the transmitter to drive the AUV and maintain it in the
cone-shaped beam region under an uncertain oceanic environment. A Lyapunov
function-based analysis that ensures asymptotic stability of the resulting
closed-loop tracking error is used to design the proposed nonlinear PD (NLPD) controller.
Numerical simulations are performed using MATLAB/Simulink to show the
controllers' ability to achieve favorable tracking in the presence of the solar
background noise within competitive times. Finally, results demonstrate the
proposed NLPD controller improves the tracking error performance by more than
$70\%$ under nominal conditions and by $35\%$ with model uncertainties and
disturbances, compared to the original PD strategy.
|
The detection of contextual anomalies is a challenging task for surveillance
since an observation can be considered anomalous or normal in a specific
environmental context. An unmanned aerial vehicle (UAV) can utilize its aerial
monitoring capability and employ multiple sensors to gather contextual
information about the environment and perform contextual anomaly detection. In
this work, we introduce a deep neural network-based method (CADNet) to find
point anomalies (i.e., single instance anomalous data) and contextual anomalies
(i.e., context-specific abnormality) in an environment using a UAV. The method
is based on a variational autoencoder (VAE) with a context sub-network. The
context sub-network extracts contextual information regarding the environment
using GPS and time data, then feeds it to the VAE to predict anomalies
conditioned on the context. To the best of our knowledge, our method is the
first contextual anomaly detection method for UAV-assisted aerial surveillance.
We evaluate our method on the AU-AIR dataset in a traffic surveillance
scenario. Quantitative comparisons against several baselines demonstrate the
superiority of our approach in the anomaly detection tasks. The codes and data
will be available at https://bozcani.github.io/cadnet.
|
We report results of our study of a newly synthesized honeycomb iridate
NaxIrO3 (0.60 < x < 0.80). Single-crystal NaxIrO3 adopts a honeycomb lattice
notably free of the distortions and stacking disorder inherent in its
sister compound Na2IrO3. The oxidation state of the Ir ion is a mixed valence
state resulting from a majority Ir5+(5d4) ion and a minority Ir6+(5d3) ion.
NaxIrO3 is a Mott insulator likely with a predominant pseudospin = 1 state. It
exhibits an effective moment of 1.1 Bohr magnetons/Ir and a Curie-Weiss
temperature of -19 K, but with no discernible long-range order above 1 K. The
physical behavior below 1 K features two prominent anomalies at Th = 0.9 K and
Tl = 0.12 K in both the heat capacity and AC magnetic susceptibility.
Between Th and Tl, the heat capacity exhibits a pronounced linear temperature
dependence with a large slope of 77 mJ/(mol K^2), a feature expected for
highly correlated metals but not at all for insulators. These results, along
with a comparison with the honeycomb lattices Na2IrO3 and (Na0.2Li0.8)2IrO3,
point to an exotic ground state in proximity to a possible Kitaev spin liquid.
|
Symbolic control techniques aim to satisfy complex logic specifications. A
critical step in these techniques is the construction of a symbolic (discrete)
abstraction, a finite-state system whose behaviour mimics that of a given
continuous-state system. The methods used to compute symbolic abstractions,
however, require knowledge of an accurate closed-form model. To generalize them
to systems with unknown dynamics, we present a new data-driven approach that
does not require closed-form dynamics, instead relying only on the ability to
evaluate successors of each state under given inputs. To provide guarantees for
the learned abstraction, we use the Probably Approximately Correct (PAC)
statistical framework. We first introduce a PAC-style behavioural relationship
and an appropriate refinement procedure. We then show how the symbolic
abstraction can be constructed to satisfy this new behavioural relationship.
Moreover, we provide PAC bounds that dictate the amount of data required to
guarantee a prescribed level of accuracy and confidence. Finally, we present an
illustrative example.
|
Remote sample recovery is a rapidly evolving application of Small Unmanned
Aircraft Systems (sUAS) for planetary sciences and space exploration.
Development of cyber-physical systems (CPS) for autonomous deployment and
recovery of sensor probes for sample caching is already in progress with NASA's
MARS 2020 mission. To challenge student teams to develop autonomy for sample
recovery settings, the 2020 NSF CPS Challenge was positioned around the launch
of the MARS 2020 rover and sUAS duo. This paper discusses perception and
trajectory planning for sample recovery by sUAS in a simulation environment.
Out of a total of five teams that participated, the results of the top two
teams have been discussed. The OpenUAV cloud simulation framework deployed on
the Cyber-Physical Systems Virtual Organization (CPS-VO) allowed the teams to
work remotely over a month during the COVID-19 pandemic to develop and simulate
autonomous exploration algorithms. Remote simulation enabled teams across the
globe to collaborate in experiments. The two teams approached the task of probe
search, probe recovery, and landing on a moving target differently. This paper
summarizes the teams' insights and lessons learned as they chose from a wide
range of perception sensors and algorithms.
|
With the development of blockchain technologies, the number of smart
contracts deployed on blockchain platforms is growing exponentially, which
makes it difficult for users to find desired services by manual screening. The
automatic classification of smart contracts can provide blockchain users with
keyword-based contract searching and helps to manage smart contracts
effectively. Current research on smart contract classification focuses on
Natural Language Processing (NLP) solutions which are based on contract source
code. However, more than 94% of smart contracts are not open-source, so the
application scenarios of NLP methods are very limited. Meanwhile, NLP models
are vulnerable to adversarial attacks. This paper proposes a classification
model based on features from contract bytecode instead of source code to solve
these problems. We also use feature selection and ensemble learning to optimize
the model. Our experimental studies on over 3,300 real-world Ethereum smart
contracts show that our model can classify smart contracts without source code
and has better performance than baseline models. Our model also has good
resistance to adversarial attacks compared with NLP-based models. In addition,
our analysis reveals that account features used in many smart contract
classification models have little effect on classification and can be excluded.
|
Infrastructure sharing is a widely discussed and implemented approach and is
successfully adopted in telecommunications networks today. In practice, it is
implemented through prior negotiated Service Level Agreements (SLAs) between
the parties involved. However, it is recognised that these agreements are
difficult to negotiate, monitor and enforce. For future 6G networks, resource
and infrastructure sharing is expected to play an even greater role. It will be
a crucial technique for reducing overall infrastructure costs and increasing
operational efficiencies for operators. More efficient SLA mechanisms are thus
crucial to the success of future networks. In this work, we present "BEAT", an
automated, transparent and accountable end-to-end architecture for network
sharing based on blockchain and smart contracts. This work focuses on a
particular type of blockchain, Permissioned Distributed Ledger (PDL), due to
its permissioned nature allowing for industry-compliant SLAs with stringent
governance. Our architecture can be implemented with minimal hardware changes
and with minimal overheads.
|
Before Brexit, one of the greatest causes of arguments amongst British
families was the question of the nature of Jaffa Cakes. Some argue that their
size and host environment (the biscuit aisle) should make them a biscuit in
their own right. Others consider that their physical properties (e.g. they
harden rather than soften on becoming stale) suggest that they are in fact
cake. In order to finally put this debate to rest, we re-purpose technologies
used to classify transient events. We train two classifiers (a Random Forest
and a Support Vector Machine) on 100 recipes of traditional cakes and biscuits.
Our classifiers have 95 percent and 91 percent accuracy, respectively. Finally,
we feed two Jaffa Cake recipes to the algorithms and find that Jaffa Cakes are,
without a doubt, cakes. We conclude by suggesting a new theory as to why some
believe Jaffa Cakes are biscuits.
|
The pixels in an image, and the objects, scenes, and actions that they
compose, determine whether an image will be memorable or forgettable. While
memorability varies by image, it is largely independent of an individual
observer. Observer independence is what makes memorability an image-computable
measure of information, and eligible for automatic prediction. In this chapter,
we zoom into memorability with a computational lens, detailing the
state-of-the-art algorithms that accurately predict image memorability relative
to human behavioral data, using image features at different scales from raw
pixels to semantic labels. We discuss the design of algorithms and
visualizations for face, object, and scene memorability, as well as algorithms
that generalize beyond static scenes to actions and videos. We cover the
state-of-the-art deep learning approaches that are the current front runners in
the memorability prediction space. Beyond prediction, we show how recent A.I.
approaches can be used to create and modify visual memorability. Finally, we
preview the computational applications that memorability can power, from
filtering visual streams to enhancing augmented reality interfaces.
|
We consider linear preferential attachment random trees with additive
fitness, where fitness is defined as the random initial vertex attractiveness.
We show that when the fitness distribution has positive bounded support, the
weak local limit of this family can be constructed using a sequence of mixed
Poisson point processes. We also provide a rate of convergence of the total
variation distance between the r-neighbourhood of the uniformly chosen vertex
in the preferential attachment tree and that of the root vertex of its weak
local limit. We apply the theorem to obtain the limiting degree distributions
of the uniformly chosen vertex and its ancestors, that is, the vertices that
are on the path between the uniformly chosen vertex and the initial vertex.
Rates of convergence in the total variation distance are established for these
results.
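For orientation, the attachment rule for such trees is commonly written as follows (notation is ours; the exact offset convention may differ from the one used in the paper):
\[ \mathbb{P}\big(v_{n+1} \text{ attaches to } u \mid T_n\big) = \frac{\deg_{T_n}(u) + F_u}{\sum_{w \in T_n} \big(\deg_{T_n}(w) + F_w\big)}, \]
where $F_u$ is the random fitness (initial attractiveness) of vertex $u$ and $T_n$ is the tree after $n$ attachment steps.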
|
In this manuscript, we investigate the oscillatory behaviour of the
anisotropy in the diagonal Bianchi-I spacetimes. Our starting point is a
simplification of Einstein's equations using only observable or physical
variables. As a consequence, we are able to: (a) Prove general results
concerning the existence of oscillations of the anisotropy in the primordial
and the late-time universe. For instance, in the expanding scenario, we show
that a past weakly mixmaster behaviour (oscillations as we approach the Kasner
solutions) might appear even with no violation of the usual energy conditions,
while in the future, the pulsation (oscillations around isotropic solutions)
seems to be the most favored; (b) Establish a general scheme for deriving classes of
physically motivated exact solutions, of which we give some examples (including the
general barotropic perfect fluid and the magnetic case); (c) Understand the physical
conditions for the occurrence of the isotropization or anisotropization during
the cosmological evolution; (d) Understand how anisotropy and energy density
are converted one into another. In particular, we call attention to the
presence of a residue in the energy density in a late-time isotropic universe
coming from its past anisotropic behaviour.
|
A fundamental challenge faced by existing Fine-Grained Sketch-Based Image
Retrieval (FG-SBIR) models is the data scarcity -- model performances are
largely bottlenecked by the lack of sketch-photo pairs. Whilst the number of
photos can be easily scaled, each corresponding sketch still needs to be
individually produced. In this paper, we aim to mitigate such an upper-bound on
sketch data, and study whether unlabelled photos alone (of which there are many)
can be cultivated for performance gains. In particular, we introduce a novel
semi-supervised framework for cross-modal retrieval that can additionally
leverage large-scale unlabelled photos to account for data scarcity. At the
centre of our semi-supervision design is a sequential photo-to-sketch
generation model that aims to generate paired sketches for unlabelled photos.
Importantly, we further introduce a discriminator-guided mechanism to guard
against unfaithful generation, together with a distillation-loss-based
regularizer to provide tolerance against noisy training samples. Last but not
least, we treat generation and retrieval as two conjugate problems, where a
joint learning procedure is devised for each module to mutually benefit from
each other. Extensive experiments show that our semi-supervised model yields a
significant performance boost over the state-of-the-art supervised
alternatives, as well as existing methods that can exploit unlabelled photos
for FG-SBIR.
|
Bayesian Knowledge Tracing, a model used for cognitive mastery estimation,
has been a hallmark of adaptive learning research and an integral component of
deployed intelligent tutoring systems (ITS). In this paper, we provide a brief
history of knowledge tracing model research and introduce pyBKT, an accessible
and computationally efficient library of model extensions from the literature.
The library provides data generation, fitting, prediction, and cross-validation
routines, as well as a simple-to-use data helper interface to ingest typical
tutor log dataset formats. We evaluate the runtime with various dataset sizes
and compare to past implementations. Additionally, we conduct sanity checks of
the model using experiments with simulated data to evaluate the accuracy of its
EM parameter learning and use real-world data to validate its predictions,
comparing pyBKT's supported model variants with results from the papers in
which they were originally introduced. The library is open source and open
license for the purpose of making knowledge tracing more accessible to
communities of research and practice and to facilitate progress in the field
through easier replication of past approaches.
|
A spatially inhomogeneous, trapped two-component Bose-Einstein condensate of
cold atoms in the phase separation mode has been numerically simulated. It has
been demonstrated for the first time that the surface tension between the
components makes possible the existence of drops of a denser phase floating on
the surface of a less dense phase. Depending on the harmonic trap anisotropy
and other system parameters, a stable equilibrium of the drop is achieved
either at the poles or at the equator. The drop flotation sometimes persists
even in the presence of an attached quantized vortex.
|
We can compress a rectifier network while exactly preserving its underlying
functionality with respect to a given input domain if some of its neurons are
stable. However, current approaches to determine the stability of neurons with
Rectified Linear Unit (ReLU) activations require solving or finding a good
approximation to multiple discrete optimization problems. In this work, we
introduce an algorithm based on solving a single optimization problem to
identify all stable neurons. Our approach is a median of 183 times faster than
the state-of-the-art method on CIFAR-10, which allows us to explore exact
compression on deeper (5 x 100) and wider (2 x 800) networks within minutes.
For classifiers trained under an amount of L1 regularization that does not
worsen accuracy, we can remove up to 56% of the connections on the CIFAR-10
dataset. The code is available at the following link,
https://github.com/yuxwind/ExactCompression.
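For intuition only, the sketch below flags stable neurons in a single layer with naive interval arithmetic over a box input domain; this is a much weaker sufficient check than the single-optimization-problem method described above, and it is not that method:

```python
# Minimal sketch: flag ReLU neurons that are stably active or stably inactive
# over a box input domain [lo, hi] using naive interval bound propagation.
import numpy as np

def neuron_stability(W, b, lo, hi):
    """Pre-activation bounds of W @ x + b for x in the box [lo, hi]."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    upper = W_pos @ hi + W_neg @ lo + b
    lower = W_pos @ lo + W_neg @ hi + b
    stably_active = lower > 0     # ReLU acts as the identity on the whole domain
    stably_inactive = upper < 0   # ReLU outputs zero on the whole domain
    return stably_active, stably_inactive

# Toy usage: a hypothetical 4-neuron layer on inputs in [0, 1]^3.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
active, inactive = neuron_stability(W, b, lo=np.zeros(3), hi=np.ones(3))
```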
|
A central goal in experimental high energy physics is to detect new physics
signals that are not explained by known physics. In this paper, we aim to
search for new signals that appear as deviations from known Standard Model
physics in high-dimensional particle physics data. To do this, we determine
whether there is any statistically significant difference between the
distribution of Standard Model background samples and the distribution of the
experimental observations, which are a mixture of the background and a
potential new signal. Traditionally, one also assumes access to a sample from a
model for the hypothesized signal distribution. Here we instead investigate a
model-independent method that does not make any assumptions about the signal
and uses a semi-supervised classifier to detect the presence of the signal in
the experimental data. We construct three test statistics using the classifier:
an estimated likelihood ratio test (LRT) statistic, a test based on the area
under the ROC curve (AUC), and a test based on the misclassification error
(MCE). Additionally, we propose a method for estimating the signal strength
parameter and explore active subspace methods to interpret the proposed
semi-supervised classifier in order to understand the properties of the
detected signal. We investigate the performance of the methods on a data set
related to the search for the Higgs boson at the Large Hadron Collider at CERN.
We demonstrate that the semi-supervised tests have power competitive with the
classical supervised methods for a well-specified signal, but much higher power
for an unexpected signal which might be entirely missed by the supervised
tests.
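The AUC-based variant of these tests can be sketched as follows (synthetic toy data and a generic classifier; calibration of the resulting p-value, e.g. by permutation, is omitted):

```python
# Minimal sketch of an AUC test statistic: train a classifier to separate
# background simulation from the experimental sample; an AUC well above 0.5
# on held-out data indicates the two distributions differ (a possible signal).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(5000, 10))
# Toy "experimental" sample: mostly background plus a small shifted admixture.
experiment = np.vstack([rng.normal(0.0, 1.0, size=(4800, 10)),
                        rng.normal(0.5, 1.0, size=(200, 10))])

X = np.vstack([background, experiment])
z = np.concatenate([np.zeros(len(background)), np.ones(len(experiment))])
X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.5, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, z_tr)
auc = roc_auc_score(z_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC test statistic: {auc:.3f}")  # ~0.5 under the no-signal null
```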
|
Real-world data is mostly unlabeled, or only a few instances are labeled.
Manually labeling data is a very expensive and daunting task. This calls for
unsupervised learning techniques that are powerful enough to achieve results
comparable to semi-supervised/supervised techniques. Contrastive self-supervised
learning has emerged as a powerful direction, in some cases outperforming
supervised techniques. In this study, we propose, SelfGNN, a novel contrastive
self-supervised graph neural network (GNN) without relying on explicit
contrastive terms. We leverage Batch Normalization, which introduces implicit
contrastive terms, without sacrificing performance. Furthermore, as data
augmentation is key in contrastive learning, we introduce four feature
augmentation (FA) techniques for graphs. Though graph topological augmentation
(TA) is commonly used, our empirical findings show that FA performs as well as
TA. Moreover, FA incurs no computational overhead, unlike TA, which often has
O(N^3) time complexity, where N is the number of nodes. Our empirical evaluation
on seven publicly available real-world datasets shows that SelfGNN is powerful and leads to
a performance comparable with SOTA supervised GNNs and always better than SOTA
semi-supervised and unsupervised GNNs. The source code is available at
https://github.com/zekarias-tilahun/SelfGNN.
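As one concrete illustration of a feature augmentation (random feature masking; whether this coincides with any of the four FA techniques proposed here is not claimed), note that it operates only on the node feature matrix and therefore adds no topology-dependent cost:

```python
# Minimal sketch: random feature masking as a graph feature augmentation (FA).
# It touches only the (num_nodes, num_features) feature matrix, so its cost is
# independent of the graph topology.
import numpy as np

def mask_features(X, drop_prob=0.2, rng=None):
    rng = rng or np.random.default_rng()
    keep = rng.random(X.shape[1]) >= drop_prob   # drop whole feature columns
    return X * keep

X_aug = mask_features(np.random.rand(100, 16), drop_prob=0.2)
```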
|
Stochastic gradient descent (SGD) is the main approach for training deep
networks: it moves towards the optimum of the cost function by iteratively
updating the parameters of a model in the direction of the gradient of the loss
evaluated on a minibatch. Several variants of SGD have been proposed to make
adaptive step sizes for each parameter (adaptive gradient) and take into
account the previous updates (momentum). Among the several alternatives to SGD,
the most popular are AdaGrad, AdaDelta, RMSProp, and Adam, which scale
coordinates of the gradient by square roots of some form of averaging of the
squared coordinates in the past gradients and automatically adjust the learning
rate on a per-parameter basis. In this work, we compare Adam-based variants in
which the step size is adjusted for each parameter based on the difference
between the present and the past gradients. We run several tests benchmarking
the proposed methods on medical image data. The experiments are performed using
the ResNet50 architecture. Moreover, we have tested ensembles of networks and
their fusion with ResNet50 trained with stochastic gradient descent. To combine
the set of ResNet50 networks, the simple sum rule has been applied. The proposed
ensemble obtains very high performance, with accuracy comparable to or better
than the current state of the art. To improve reproducibility and research
efficiency, the MATLAB source code used for this research is available on
GitHub: https://github.com/LorisNanni.
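For reference, the vanilla Adam update that these variants build on is sketched below (the variants additionally modulate the step using the difference between the present and past gradients; that modification is not reproduced here):

```python
# Minimal sketch of one Adam update step for a parameter vector theta.
import numpy as np

def adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad          # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (per-coordinate scale)
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Usage: state starts as (np.zeros_like(theta), np.zeros_like(theta), 0).
```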
|
Using a mechanism which allows naturally small Dirac neutrino masses and its
linkage to a dark gauge $U(1)_D$ symmetry, a realistic Dirac neutrino mass
matrix is derived from $S_3$. The dark sector naturally contains a fermion
singlet having a small seesaw mass. It is thus a good candidate for freeze-in
dark matter from the decay of the $U(1)_D$ Higgs boson.
|
For nitride-based InGaN and AlGaN quantum well (QW) LEDs, the potential
fluctuations caused by natural alloy disorders limit the lateral intra-QW
carrier diffusion length and current spreading. The diffusion length mainly
impacts the overall LED efficiency through sidewall nonradiative recombination,
especially for $\mu$LEDs. In this paper, we study the carrier lateral diffusion
length for nitride-based green, blue, and ultraviolet C (UVC) QWs in three
dimensions. We solve the Poisson and drift-diffusion equations in the framework
of localization landscape theory. The full three-dimensional model includes the
effects of random alloy composition fluctuations and electric fields in the
QWs. The dependence of the minority carrier diffusion length on the majority
carrier density is studied with a full three-dimensional model. The results
show that the diffusion length is limited by the potential fluctuations and the
recombination rate, the latter being controlled by the polarization-induced
electric field in the QWs and by the screening of the internal electric fields
by carriers.
|
Model predictive control is an advanced control approach for multivariable
systems with constraints, which is reliant on an accurate dynamic model. Most
real dynamic models are however affected by uncertainties, which can lead to
closed-loop performance deterioration and constraint violations. In this paper
we introduce a new algorithm to explicitly consider time-invariant stochastic
uncertainties in optimal control problems. The difficulty of propagating
stochastic variables through nonlinear functions is dealt with by combining
Gaussian processes with polynomial chaos expansions. The main novelty in this
paper is to use this combination in an efficient fashion to obtain mean and
variance estimates of nonlinear transformations. Using this algorithm, it is
shown how to formulate both chance-constraints and a probabilistic objective
for the optimal control problem. On a batch reactor case study, we first
verify the ability of the new approach to accurately approximate the
probability distributions required. Secondly, a tractable stochastic nonlinear
model predictive control approach is formulated with an economic objective to
demonstrate the closed-loop performance of the method via Monte Carlo
simulations.
|
Controller design for nonlinear systems with Control Lyapunov Function (CLF)
based quadratic programs has recently been successfully applied to a diverse
set of difficult control tasks. These existing formulations do not address the
gap between design with continuous time models and the discrete time sampled
implementation of the resulting controllers, often leading to poor performance
on hardware platforms. We propose an approach to close this gap by synthesizing
sampled-data counterparts to these CLF-based controllers, specified as
quadratically constrained quadratic programs (QCQPs). Assuming feedback
linearizability and stable zero-dynamics of a system's continuous time model,
we derive practical stability guarantees for the resulting sampled-data system.
We demonstrate improved performance of the proposed approach over continuous
time counterparts in simulation.
|
Both FCM and PCM clustering methods have been widely applied to pattern
recognition and data clustering. Nevertheless, FCM is sensitive to noise and
PCM occasionally generates coincident clusters. PFCM is an extension of the PCM
model by combining FCM and PCM, but this method still suffers from the
weaknesses of PCM and FCM. In the current paper, the weaknesses of the PFCM
algorithm are corrected and the enhanced possibilistic fuzzy c-means (EPFCM)
clustering algorithm is presented. EPFCM can still be sensitive to noise.
Therefore, we propose an interval type-2 enhanced possibilistic fuzzy c-means
(IT2EPFCM) clustering method by utilizing two fuzzifiers $(m_1, m_2)$ for fuzzy
memberships and two fuzzifiers $({\theta}_1, {\theta}_2)$ for possibilistic
typicalities. Our computational results show the superiority of the proposed
approaches compared with several state-of-the-art techniques in the literature.
Finally, the proposed methods are implemented for analyzing microarray gene
expression data.
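For orientation, the role of a fuzzifier can be seen in the standard FCM update, shown here only as background (EPFCM and IT2EPFCM extend this with possibilistic typicalities and with the two pairs of fuzzifiers listed above):
\[ u_{ik} = \left[ \sum_{j=1}^{c} \left( \frac{\lVert x_k - v_i \rVert}{\lVert x_k - v_j \rVert} \right)^{2/(m-1)} \right]^{-1}, \qquad v_i = \frac{\sum_{k} u_{ik}^{m}\, x_k}{\sum_{k} u_{ik}^{m}}, \]
where $m>1$ is the fuzzifier, $u_{ik}$ the membership of point $x_k$ in cluster $i$, and $v_i$ the cluster center.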
|
Societal biases resonate in the retrieved contents of information retrieval
(IR) systems, resulting in reinforcing existing stereotypes. Approaching this
issue requires established measures of fairness with respect to the
representation of various social groups in retrieval results, as well as
methods to mitigate such biases, particularly in the light of the advances in
deep ranking models. In this work, we first provide a novel framework to
measure the fairness in the retrieved text contents of ranking models.
Introducing a ranker-agnostic measurement, the framework also enables the
disentanglement of the effect of the collection on fairness from that of the rankers.
To mitigate these biases, we propose AdvBert, a ranking model achieved by
adapting adversarial bias mitigation for IR, which jointly learns to predict
relevance and remove protected attributes. We conduct experiments on two
passage retrieval collections (MSMARCO Passage Re-ranking and TREC Deep
Learning 2019 Passage Re-ranking), which we extend by fairness annotations of a
selected subset of queries regarding gender attributes. Our results on the
MSMARCO benchmark show that, (1) all ranking models are less fair in comparison
with ranker-agnostic baselines, and (2) the fairness of Bert rankers
significantly improves when using the proposed AdvBert models. Lastly, we
investigate the trade-off between fairness and utility, showing that we can
maintain the significant improvements in fairness without any significant loss
in utility.
|
Boundary-based blackbox attacks have been recognized as practical and
effective, given that an attacker only needs access to the final model
prediction. However, their query cost is in general high, especially
for high-dimensional image data. In this paper, we show that the query efficiency
highly depends on the scale at which the attack is applied, and that attacking at
the optimal scale significantly improves the efficiency. In particular, we
propose a theoretical framework to analyze and show three key characteristics
to improve the query efficiency. We prove that there exists an optimal scale
for projective gradient estimation. Our framework also explains the
satisfactory performance achieved by existing boundary black-box attacks. Based
on our theoretical framework, we propose Progressive-Scale enabled projective
Boundary Attack (PSBA) to improve the query efficiency via progressive scaling
techniques. In particular, we employ Progressive-GAN to optimize the scale of
projections, which we call PSBA-PGAN. We evaluate our approach on both spatial
and frequency scales. Extensive experiments on MNIST, CIFAR-10, CelebA, and
ImageNet against different models including a real-world face recognition API
show that PSBA-PGAN significantly outperforms existing baseline attacks in
terms of query efficiency and attack success rate. We also observe relatively
stable optimal scales for different models and datasets. The code is publicly
available at https://github.com/AI-secure/PSBA.
|
Shared-account Cross-domain Sequential recommendation (SCSR) is the task of
recommending the next item based on a sequence of recorded user behaviors,
where multiple users share a single account, and their behaviours are available
in multiple domains. Existing work on solving SCSR mainly relies on mining
sequential patterns via RNN-based models, which are not expressive enough to
capture the relationships among multiple entities. Moreover, all existing
algorithms try to bridge two domains via knowledge transfer in the latent
space, and the explicit cross-domain graph structure is unexploited. In this
work, we propose a novel graph-based solution, namely DA-GCN, to address the
above challenges. Specifically, we first link users and items in each domain as
a graph. Then, we devise a domain-aware graph convolution network to learn
user-specific node representations. To fully account for users' domain-specific
preferences on items, two novel attention mechanisms are further developed to
selectively guide the message passing process. Extensive experiments on two
real-world datasets are conducted to demonstrate the superiority of our DA-GCN
method.
|
Since neural networks are data-hungry, incorporating data augmentation in
training is a widely adopted technique that enlarges datasets and improves
generalization. On the other hand, aggregating predictions of multiple
augmented samples (i.e., test-time augmentation) could boost performance even
further. In the context of person re-identification models, it is common
practice to extract embeddings for both the original images and their
horizontally flipped variants. The final representation is the mean of the
aforementioned feature vectors. However, such a scheme results in a gap between
training and inference, i.e., the mean feature vectors calculated in inference
are not part of the training pipeline. In this study, we devise the FlipReID
structure with the flipping loss to address this issue. More specifically,
models using the FlipReID structure are trained on the original images and the
flipped images simultaneously, and incorporating the flipping loss minimizes
the mean squared error between feature vectors of corresponding image pairs.
Extensive experiments show that our method brings consistent improvements. In
particular, we set a new record for MSMT17 which is the largest person
re-identification dataset. The source code is available at
https://github.com/nixingyang/FlipReID.
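A minimal sketch of such a flipping loss (PyTorch-style, with a hypothetical embedding model; the full training objective and architecture follow the released code rather than this sketch) is:

```python
# Minimal sketch: flipping loss = MSE between the embeddings of a batch and
# its horizontally flipped copy, added to the usual re-identification losses.
import torch
import torch.nn.functional as F

def flipping_loss(model, images):
    """images: (N, C, H, W) batch; model returns (N, D) embeddings."""
    feats = model(images)
    feats_flipped = model(torch.flip(images, dims=[3]))  # flip along width
    return F.mse_loss(feats, feats_flipped)

# total_loss = reid_loss + lambda_flip * flipping_loss(model, images)
```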
|
We show that color-breaking vacua may develop at high temperature in the
Mini-Split Supersymmetry (SUSY) scenario. This can lead to a nontrivial
cosmological history of the Universe, including strong first order phase
transitions and domain wall production. Given the typical PeV energy scale
associated with Mini-Split SUSY models, a stochastic gravitational wave
background at frequencies around 1 kHz is expected. We study the potential for
detection of such a signal in future gravitational wave experiments.
|
We report on the observation and coherent excitation of atoms on the narrow
inner-shell orbital transition, connecting the erbium ground state
$[\mathrm{Xe}] 4f^{12} (^3\text{H}_6)6s^{2}$ to the excited state
$[\mathrm{Xe}] 4f^{11}(^4\text{I}_{15/2})^05d (^5\text{D}_{3/2}) 6s^{2}
(15/2,3/2)^0_7$. This transition corresponds to a wavelength of 1299 nm and is
optically closed. We perform high-resolution spectroscopy to extract the
$g_J$-factor of the $1299$-nm state and to determine the frequency shift for
four bosonic isotopes. We further demonstrate coherent control of the atomic
state and extract a lifetime of 178(19) ms which corresponds to a linewidth of
0.9(1) Hz. The experimental findings are in good agreement with our
semi-empirical model. In addition, we present theoretical calculations of the
atomic polarizability, revealing several different magic-wavelength conditions.
Finally, we make use of the vectorial polarizability and confirm a possible
magic wavelength at 532 nm.
|
Viscosity overshoot of entangled polymer melts has been observed under shear
flow and uniaxial elongational flow, but has never been observed under biaxial
elongational flow. We confirmed the presence of viscosity overshoot under
biaxial elongational flows observed in a mixed system of ring and linear
polymers expressed by coarse-grained molecular dynamics simulations. The
overshoot was found to be more pronounced in weakly entangled melts.
Furthermore, the threshold strain rate $\dot{\varepsilon}_{\rm th}$
distinguishing linear and nonlinear behaviors was found to be dependent on the
linear chain length as $\dot{\varepsilon}_{\rm th}(N)\sim N^{-1/2}$, which
differs from the conventional relationship, $\dot{\varepsilon}_{\rm th}(N) \sim
N^{-2}$, expected from the inverse of the Rouse relaxation time. We have
concluded that the cooperative interactions between rings and linear chains
were enhanced under biaxial elongational flow.
|
Gassert's paper "A NOTE ON THE MONOGENEITY OF POWER MAPS" is cited by at least
$17$ papers in the context of monogeneity of pure number fields, despite some
errors that it contains and remarks that have been made on it. In this note, we
point out some of these errors and make some improvements on it.
|
Regular arrays of two-level emitters at distances smaller than the transition
wavelength collectively scatter, absorb, and emit photons. The strong
inter-particle dipole coupling creates large energy shifts of the collective
delocalized excitations, which generates a highly nonlinear response at the
single- and few-photon level. This should allow the implementation of nanoscale
non-classical light sources via weak coherent illumination. For the generic
tailored examples of regular chains or polygons, we show that the fields emitted
perpendicular to the illumination direction exhibit a strong directional
confinement with genuine quantum properties such as antibunching. For short
interparticle distances, superradiant directional emission can enhance the
intensity radiated into a strongly confined solid angle by an order of magnitude
compared to a single atom, while still keeping the anti-bunching
parameter at the level of $g^{(2)}(0) \approx 10^{-2}$.
|
We study perturbations of the self-adjoint periodic Sturm--Liouville operator
\[ A_0 = \frac{1}{r_0}\left(-\frac{\mathrm d}{\mathrm dx}\, p_0 \frac{\mathrm d}{\mathrm dx} + q_0\right) \]
and conclude under $L^1$-assumptions on the
differences of the coefficients that the essential spectrum and absolutely
continuous spectrum remain the same. If a finite first moment condition holds
for the differences of the coefficients, then at most finitely many eigenvalues
appear in the spectral gaps. This observation extends a seminal result by
Rofe-Beketov from the 1960s. Finally, imposing a second moment condition we
show that the band edges are no eigenvalues of the perturbed operator.
|
A magnetic skyrmion crystal (SkX) with a swirling spin configuration, which
is one of topological spin crystals as a consequence of an interference between
multiple spin density waves, shows a variety of noncoplanar spin patterns
depending on a way of superposing the waves. By focusing on a phase degree of
freedom among the constituent waves in the SkX, we theoretically investigate the
position of the skyrmion core on a discrete lattice, which is relevant to the
symmetry of the SkX. The results are obtained for the double exchange
(classical Kondo lattice) model on a discrete triangular lattice by the
variational calculations. We find that the skyrmion cores in both two SkXs with
the skyrmion number of one and two are locked at the interstitial site on the
triangular lattice, while it is located at an on-site position when a
relatively large easy-axis single-ion anisotropy is introduced. The variational parameters
and the resultant Fermi surfaces in each SkX spin texture are also discussed.
The different symmetry of the Fermi surfaces depending on the core position is
obtained when the skyrmion crystal is commensurate with the lattice. The
different Fermi-surface topology is directly distinguished by an electric probe
of angle-resolved photoemission spectroscopy. Furthermore, we show that the
SkXs obtained by the variational calculations are also confirmed by numerical
simulations on the basis of the kernel polynomial method and the Langevin
dynamics for the double exchange model and the simulated annealing for an
effective spin model.
|
It is proved that for any $0<\beta<\alpha$, any bounded Ahlfors
$\alpha$-regular space contains a $\beta$-regular compact subset that embeds
bi-Lipschitzly into an ultrametric space with distortion at most
$O(\alpha/(\alpha-\beta))$. The bound on the distortion is asymptotically tight
when $\beta\to \alpha$. The main tool used in the proof is a regular form of
the ultrametric skeleton theorem.
|
In recent years online shopping has gained momentum and became an important
venue for customers wishing to save time and simplify their shopping process. A
key advantage of shopping online is the ability to read what other customers
are saying about products of interest. In this work, we aim to maintain this
advantage in situations where extreme brevity is needed, for example, when
shopping by voice. We suggest a novel task of extracting a single
representative helpful sentence from a set of reviews for a given product. The
selected sentence should meet two conditions: first, it should be helpful for a
purchase decision and second, the opinion it expresses should be supported by
multiple reviewers. This task is closely related to the task of Multi-Document
Summarization in the product reviews domain but differs in its objective and
its level of conciseness. We collect a dataset in English of sentence
helpfulness scores via crowd-sourcing and demonstrate its reliability despite
the inherent subjectivity involved. Next, we describe a complete model that
extracts representative helpful sentences with positive and negative sentiment
towards the product and demonstrate that it outperforms several baselines.
|
Quantum computing is a promising paradigm to solve computationally
intractable problems. Various companies, such as IBM, Rigetti, and D-Wave, offer
quantum computers through cloud-based platforms that possess several interesting
features. These factors motivate a new threat model. To mitigate this threat,
we propose two flavors of QuPUF: one based on superposition, and another based
on decoherence. Experiments on real IBM quantum hardware show that the proposed
QuPUF can achieve an inter-die Hamming Distance (HD) of 55% and an intra-die HD as low as
4%, as compared to ideal cases of 50% and 0% respectively. The proposed QuPUFs
can also be used as a standalone solution for any other application.
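For context, the reported metrics are the standard PUF uniqueness and reproducibility measures; a generic sketch of how they are computed (not specific to the QuPUF construction itself) is:

```python
# Minimal sketch: fractional Hamming distances used to evaluate a PUF.
# Inter-die HD (between devices) is ideally 50%; intra-die HD (between
# repeated measurements of the same device) is ideally 0%.
import numpy as np

def fractional_hd(a, b):
    return float(np.mean(np.asarray(a) != np.asarray(b)))

def inter_intra_hd(responses):
    """responses[d] is a list of repeated bit-array responses from device d."""
    devices = list(responses)
    intra = np.mean([fractional_hd(responses[d][0], responses[d][i])
                     for d in devices for i in range(1, len(responses[d]))])
    inter = np.mean([fractional_hd(responses[d1][0], responses[d2][0])
                     for i, d1 in enumerate(devices) for d2 in devices[i + 1:]])
    return inter, intra
```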
|
FASER$\nu$ at the CERN Large Hadron Collider (LHC) is designed to directly
detect collider neutrinos for the first time and study their cross sections at
TeV energies, where no such measurements currently exist. In 2018, a pilot
detector employing emulsion films was installed in the far-forward region of
ATLAS, 480 m from the interaction point, and collected 12.2 fb$^{-1}$ of
proton-proton collision data at a center-of-mass energy of 13 TeV. We describe
the analysis of this pilot run data and the observation of the first neutrino
interaction candidates at the LHC. This milestone paves the way for high-energy
neutrino measurements at current and future colliders.
|
This work extends the framework of the partially-averaged Navier-Stokes
(PANS) equations to variable-density flow, \textit{i.e.}, multi-material and/or
compressible mixing problems with density variations and production of
turbulence kinetic energy by both shear and buoyancy mechanisms. The proposed
methodology is utilized to derive the PANS BHR-LEVM closure. This includes
\textit{a priori} testing to analyze and develop guidelines toward the
efficient selection of the parameters controlling the physical resolution and,
consequently, the range of resolved scales of PANS. Two archetypal test-cases
involving transient turbulence, hydrodynamic instabilities, and coherent
structures are used to illustrate the accuracy and potential of the method: the
Taylor-Green vortex (TGV) at Reynolds number $\mathrm{Re}=3000$, and the
Rayleigh-Taylor (RT) flow at Atwood number $0.5$ and
$(\mathrm{Re})_{\max}\approx 500$. These representative problems, for which
turbulence is generated by shear and buoyancy processes, constitute the initial
validation space of the new model, and their results are comprehensively
discussed in two subsequent studies. The computations indicate that PANS can
accurately predict the selected flow problems, resolving only a fraction of the
scales of large eddy simulation and direct numerical simulation strategies. The
results also reiterate that the physical resolution of the PANS model must
guarantee that the key instabilities and coherent structures of the flow are
resolved. The remaining scales can be modeled through an adequate turbulence
scale-dependent closure.
|
Working in two space dimensions, we show that the orientational order
emerging from self-propelled polar particles aligning nematically is
quasi-long-ranged beyond $\ell_{\rm r}$, the scale associated with induced
velocity reversals, which is typically extremely large and often cannot even be
measured. Below $\ell_{\rm r}$, nematic order is long-range. We construct and
study a hydrodynamic theory for this de facto phase and show that its structure
and symmetries differ from conventional descriptions of active nematics. We
check numerically our theoretical predictions, in particular the presence of
$\pi$-symmetric propagative sound modes, and provide estimates of all scaling
exponents governing long-range space-time correlations.
|
In chirped pulse experiments, magnitude Fourier transform is used to generate
frequency domain spectra. The application of window functions as a tool for lineshape correction and signal-to-noise ratio (SnR) enhancement is rarely discussed in chirped spectroscopy, with the only exceptions being the Kaiser-Bessel window and the trivial rectangular window. We present a specific
window function, called "Voigt-1D" window, designed for chirped pulse
spectroscopy. The window function corrects the magnitude Fourier-transform
spectra to Voigt lineshape, and offers wide tunability to control the SnR and
lineshape of the final spectral lines. We derived the mathematical properties of the window function and evaluated its performance in comparison to the Kaiser-Bessel window on experimental and simulated data sets. Our results show that, compared with un-windowed spectra, the Voigt-1D window produces a 100% SnR enhancement on average.
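For concreteness, the basic workflow of applying a time-domain window before a magnitude Fourier transform can be sketched as follows; the Gaussian-times-exponential taper used here is only a stand-in, since the exact Voigt-1D definition and its tuning parameters are given in the paper:

    import numpy as np

    def windowed_magnitude_spectrum(fid, dt, sigma, tau):
        # 'fid' is the time-domain record sampled every dt seconds. The Gaussian
        # (sigma) and exponential (tau) factors stand in for the tunable window
        # and control the lineshape vs. SnR trade-off.
        t = np.arange(len(fid)) * dt
        window = np.exp(-0.5 * (t / sigma) ** 2) * np.exp(-t / tau)
        spectrum = np.abs(np.fft.rfft(fid * window))
        freqs = np.fft.rfftfreq(len(fid), dt)
        return freqs, spectrum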
|
The aim of this paper is to provide the geometrical structure of a
gravitational field that includes the addition of dark matter in the framework
of a Riemannian and a Riemann--Sasaki spacetime. By means of the classical
Riemannian geometric methods we arrive at modified geodesic equations, tidal
forces, and Einstein and Raychaudhuri equations to account for extra dark
gravity. We further examine an application of this approach in cosmology.
Moreover, a possible extension of this model on the tangent bundle is studied
in order to examine the behavior of dark matter in a unified geometric model of
gravity with more degrees of freedom. Particular emphasis is placed on the
problem of the geodesic motion under the influence of dark matter.
|
Being able to predict stock prices might be the unspoken wish of stock
investors. Although stock prices are complicated to predict, there are many
theories about what affects their movements, including interest rates, news and
social media. With the help of Machine Learning, complex patterns in data can
be identified beyond the human intellect. In this thesis, a Machine Learning
model for time series forecasting is created and tested to predict stock
prices. The model is based on a neural network with several layers of LSTM and
fully connected layers. It is trained with historical stock values, technical
indicators and Twitter attribute information retrieved, extracted and
calculated from posts on the social media platform Twitter. These attributes
are sentiment score, favourites, followers, retweets and if an account is
verified. To collect data from Twitter, Twitter's API is used. Sentiment
analysis is conducted with VADER. The results show that by adding more Twitter
attributes, the MSE between the predicted prices and the actual prices improved
by 3%. With technical analysis taken into account, MSE decreases from 0.1617 to
0.1437, which is an improvement of around 11%. The restrictions of this study
include that the selected stock has to be publicly listed on the stock market
and popular on Twitter and among individual investors. In addition, the stock markets' opening hours differ from those of Twitter, which is constantly available; this may therefore introduce noise into the model.
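A minimal sketch of this kind of pipeline is shown below; the feature layout, layer sizes, and training data are hypothetical and do not reproduce the thesis's exact architecture. The VADER sentiment score is computed with the vaderSentiment package:

    import numpy as np
    import tensorflow as tf
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    # VADER compound sentiment score for one tweet (range -1 to 1).
    score = SentimentIntensityAnalyzer().polarity_scores("Great earnings report!")["compound"]

    # Hypothetical per-day feature vector: [close, technical indicators,
    # sentiment, favourites, followers, retweets, verified], 30-day lookback.
    LOOKBACK, N_FEATURES = 30, 8
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, return_sequences=True,
                             input_shape=(LOOKBACK, N_FEATURES)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),  # next-day closing price
    ])
    model.compile(optimizer="adam", loss="mse")

    # Dummy arrays with the assumed shapes, only to show the training call.
    X = np.random.rand(256, LOOKBACK, N_FEATURES).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)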
|
In low-dimensional systems, indistinguishable particles can display
statistics that interpolate between bosons and fermions. Signatures of these
"anyons" have been detected in two-dimensional quasiparticle excitations of the
fractional quantum Hall effect; however, experimental access to these
quasiparticles remains limited. As an alternative to these "topological
anyons," we propose "statistical anyons" realized through a statistical mixture
of particles with bosonic and fermionic symmetry. We show that the framework of
statistical anyons is equivalent to the generalized exclusion statistics (GES)
pioneered by Haldane, significantly broadening the range of systems to which
GES apply. We develop the full thermodynamic characterizations of these
statistical anyons, including both equilibrium and nonequilibrium behavior. To
develop a complete picture, we compare the performance of quantum heat engines
with working mediums of statistical anyons and traditional topological anyons,
demonstrating the effects of the anyonic phase in both local equilibrium and
fully nonequilibrium regimes. In addition, methods of optimizing engine
performance through shortcuts to adiabaticity are investigated, using both
linear response and fast forward techniques.
|
The eukaryotic cell's cytoskeleton is a prototypical example of an active
material: objects embedded within it are driven by molecular motors acting on
the cytoskeleton, leading to anomalous diffusive behavior. Experiments tracking
the behavior of cell-attached objects have observed anomalous diffusion with a
distribution of displacements that is non-Gaussian, with heavy tails. This has
been attributed to "cytoquakes" or other spatially extended collective effects.
We show, using simulations and analytical theory, that a simple continuum
active gel model driven by fluctuating force dipoles naturally creates heavy
power-law tails in cytoskeletal displacements. We predict that this power law
exponent should depend on the geometry and dimensionality of the region over which force dipoles are distributed through the cell; we find qualitatively different
results for force dipoles in a 3D cytoskeleton and a quasi-two-dimensional
cortex. We then discuss potential applications of this model both in cells and
in synthetic active gels.
|
The conversion and interaction between quantum signals at a single-photon
level are essential for scalable quantum photonic information technology. Using
a fully-optimized, periodically-poled lithium niobate microring, we demonstrate
ultra-efficient sum-frequency generation on chip. The external quantum
efficiency reaches $(65\pm3)\%$ with only $(104\pm4)$ $\mu$W pump power,
improving the state-of-the-art by over one order of magnitude. At the peak
conversion, only $3\times10^{-5}$ noise photons are created during the cavity
lifetime, which meets the requirement of quantum applications using
single-photon pulses. Using pump and signal in single-photon coherent states,
we directly measure the conversion probability produced by a single pump photon
to be $10^{-5}$ -- breaking the record by 100 times -- and the photon-photon
coupling strength to be 9.1 MHz. Our results mark a new milestone toward
quantum nonlinear optics at the ultimate single-photon limit, opening new ground in highly integrated photonics and quantum optical computing.
|
Developing sustainable scientific software for the needs of the scientific
community requires expertise in both software engineering and domain science.
This can be challenging due to the unique needs of scientific software, the
insufficient resources for modern software engineering practices in the
scientific community, and the complexity of evolving scientific contexts for
developers. These difficulties can be reduced if scientists and developers
collaborate. We present a case study wherein scientists from the SuperNova
Early Warning System collaborated with software developers from the Scalable
Cyberinfrastructure for Multi-Messenger Astrophysics project. The collaboration
addressed the difficulties of scientific software development, but presented
additional risks to each team. For the scientists, there was a concern of
relying on external systems and lacking control in the development process. For
the developers, there was a risk in supporting the needs of a user group while
maintaining core development. We mitigated these issues by utilizing an Agile
Scrum framework to orchestrate the collaboration. This promoted communication
and cooperation, ensuring that the scientists had an active role in development
while allowing the developers to quickly evaluate and implement the scientists'
software requirements. While each system was still in an early stage, the
collaboration provided benefits for each group: the scientists kick-started
their development by using an existing platform, and the developers utilized
the scientists' use-case to improve their systems. This case study suggests
that scientists and software developers can avoid some difficulties of
scientific computing by collaborating and can address emergent concerns using
Agile Scrum methods.
|
Horava gravity is a proposal for completing general relativity in the
ultraviolet by interactions that violate Lorentz invariance at very high
energies. We focus on (2+1)-dimensional projectable Horava gravity, a theory
which is renormalizable and perturbatively ultraviolet-complete, enjoying an
asymptotically free ultraviolet fixed point. Adding a small cosmological
constant to regulate the long distance behavior of the metric, we search for
all circularly symmetric stationary vacuum solutions with vanishing angular
momentum and approaching the de Sitter metric with a possible angle deficit at
infinity. We find a two-parameter family of such geometries. Apart from the
cosmological de Sitter horizon, these solutions generally contain another
Killing horizon and should therefore be interpreted as black holes from the
viewpoint of the low-energy theory. Contrary to naive expectations, their
central singularity is not resolved by the higher derivative terms present in
the action. It is unknown at present if these solutions form as a result of
gravitational collapse. The only solution regular everywhere is just the de
Sitter metric devoid of any black hole horizon.
|
We analyze the popular kernel polynomial method (KPM) for approximating the
spectral density (eigenvalue distribution) of an $n\times n$ Hermitian matrix
$A$. We prove that a simple and practical variant of the KPM algorithm can
approximate the spectral density to $\epsilon$ accuracy in the Wasserstein-1
distance with roughly $O({1}/{\epsilon})$ matrix-vector multiplications with
$A$. This yields a provable linear time result for the problem with better
$\epsilon$ dependence than prior work.
The KPM variant we study is based on damped Chebyshev polynomial expansions.
We show that it is stable, meaning that it can be combined with any approximate
matrix-vector multiplication algorithm for $A$. As an application, we develop
an $O(n\cdot \text{poly}(1/\epsilon))$ time algorithm for computing the
spectral density of any $n\times n$ normalized graph adjacency or Laplacian
matrix. This runtime is sublinear in the size of the matrix, and assumes sample
access to the graph.
Our approach leverages several tools from approximation theory, including
Jackson's seminal work on approximation with positive kernels [Jackson, 1912],
and stability properties of three-term recurrence relations for orthogonal
polynomials.
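A compact Python sketch of a Jackson-damped KPM spectral-density estimator of this kind is given below; it assumes the Hermitian matrix has already been rescaled so its spectrum lies in [-1, 1], and the moment, probe, and grid counts are illustrative rather than the paper's exact variant:

    import numpy as np

    def kpm_spectral_density(A, num_moments=100, num_vectors=10, grid=1000):
        # Estimate the spectral density of a Hermitian matrix A (spectrum in [-1, 1])
        # via stochastic trace estimation of Chebyshev moments with Jackson damping.
        n = A.shape[0]
        moments = np.zeros(num_moments)
        for _ in range(num_vectors):
            v = np.random.choice([-1.0, 1.0], size=n)      # Rademacher probe vector
            t_prev, t_curr = v, A @ v                       # T_0(A) v and T_1(A) v
            moments[0] += v @ t_prev
            moments[1] += v @ t_curr
            for k in range(2, num_moments):
                t_next = 2 * (A @ t_curr) - t_prev          # Chebyshev recurrence
                moments[k] += v @ t_next
                t_prev, t_curr = t_curr, t_next
        moments /= (num_vectors * n)
        # Jackson damping coefficients suppress Gibbs oscillations (positive kernel).
        k = np.arange(num_moments)
        N = num_moments
        g = ((N - k + 1) * np.cos(np.pi * k / (N + 1))
             + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
        x = np.linspace(-0.999, 0.999, grid)
        T = np.cos(np.outer(np.arange(num_moments), np.arccos(x)))  # T_k(x) on the grid
        weights = np.full(num_moments, 2.0)
        weights[0] = 1.0
        density = (weights * g * moments) @ T / (np.pi * np.sqrt(1 - x ** 2))
        return x, density

Note that the only way A enters is through the products A @ v, so swapping in any approximate matrix-vector multiplication leaves the recurrence unchanged, which is the sense in which the damped variant is stable.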
|
We propose a new method of generating gamma rays with orbital angular
momentum (OAM). Accelerated partially-stripped ions are used as an energy
up-converter. By irradiating ultrarelativistic ions with an optical laser beam carrying OAM, the ions are excited to a state of large angular momentum. Gamma rays with
OAM are emitted in their deexcitation process. We examine the excitation cross
section and deexcitation rate.
|
While the event horizon of a black hole could cast a shadow that was observed
recently, a central singularity without horizon could also give rise to such a
feature. This leaves us with a question on the nature of the supermassive black holes at galactic centers, and whether they necessarily admit an event horizon.
We point out that observations of motion of stars around the galactic center
should give a clear idea of the nature of this central supermassive object. We
examine and discuss here recent developments that indicate intriguing behavior
of the star motions that could possibly distinguish the existence or otherwise
of an event horizon at the galactic center. We compare the motion of the S2
star with these theoretical results, fitting the observational data with
theory, and it is seen that the star motions and precession of their orbits
around the galactic center provide important clues on the nature of this
central compact object.
|
Influence maximization (IM) is the problem of finding a seed vertex set that
maximizes the expected number of vertices influenced under a given diffusion
model. Due to the NP-Hardness of finding an optimal seed set, approximation
algorithms are frequently used for IM. In this work, we describe a fast,
error-adaptive approach that leverages Count-Distinct sketches and hash-based
fused sampling. To estimate the number of influenced vertices throughout a
diffusion, we use per-vertex Flajolet-Martin sketches where each sketch
corresponds to a sampled subgraph. To efficiently simulate the diffusions, the
reach-set cardinalities of a single vertex are stored in memory in a
consecutive fashion. This allows the proposed algorithm to estimate the number of influenced vertices for all sampled simulations at once, in a single step. For a faster
IM kernel, we rebuild the sketches in parallel only after observing estimation
errors above a given threshold. Our experimental results show that the proposed
algorithm yields high-quality seed sets while being up to 119x faster than a
state-of-the-art approximation algorithm. In addition, it is up to 62x faster
than a sketch-based approach while producing seed sets with 3%-12% better
influence scores.
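To make the counting primitive concrete, here is a toy Flajolet-Martin sketch in Python; the paper's per-vertex sketches, fused consecutive memory layout, and error-adaptive parallel rebuilds are not reproduced, only the underlying add/merge/estimate operations:

    import hashlib

    def fm_hash(item, seed):
        digest = hashlib.blake2b(f"{seed}:{item}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big")

    def lsb_position(h):
        # 1-indexed position of the least-significant set bit (64 if h == 0).
        return (h & -h).bit_length() if h else 64

    class FMSketch:
        """Tiny Flajolet-Martin cardinality sketch, for illustration only."""

        def __init__(self, num_hashes=32, seed=0):
            self.num_hashes, self.seed = num_hashes, seed
            self.bitmaps = [0] * num_hashes

        def add(self, item):
            for i in range(self.num_hashes):
                self.bitmaps[i] |= 1 << (lsb_position(fm_hash(item, self.seed + i)) - 1)

        def merge(self, other):
            # Union of the underlying sets: bitwise OR of the bitmaps.
            for i in range(self.num_hashes):
                self.bitmaps[i] |= other.bitmaps[i]

        def estimate(self):
            # Average index of the lowest unset bit, corrected by phi ~ 0.77351.
            r = sum((~b & (b + 1)).bit_length() - 1 for b in self.bitmaps) / self.num_hashes
            return (2 ** r) / 0.77351

    s1, s2 = FMSketch(), FMSketch()
    for v in range(1000):
        s1.add(v)
    for v in range(500, 1500):
        s2.add(v)
    s1.merge(s2)
    print(round(s1.estimate()))  # rough estimate of the 1500 distinct items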
|
Purpose: Segmentation of surgical instruments in endoscopic videos is
essential for automated surgical scene understanding and process modeling.
However, relying on fully supervised deep learning for this task is challenging
because manual annotation consumes valuable time of clinical experts.
Methods: We introduce a teacher-student learning approach that learns jointly
from annotated simulation data and unlabeled real data to tackle the erroneous
learning problem of the current consistency-based unsupervised domain
adaptation framework.
Results: Empirical results on three datasets highlight the effectiveness of
the proposed framework over current approaches for the endoscopic instrument
segmentation task. Additionally, we provide an analysis of the major factors affecting
the performance on all datasets to highlight the strengths and failure modes of
our approach.
Conclusion: We show that our proposed approach can successfully exploit the
unlabeled real endoscopic video frames and improve generalization performance
over pure simulation-based training and the previous state-of-the-art. This
takes us one step closer to effective segmentation of surgical tools in the
annotation scarce setting.
|
Kagome metals AV3Sb5 (A = K, Rb, and Cs) exhibit superconductivity at 0.9-2.5
K and charge-density wave (CDW) at 78-103 K. Key electronic states associated
with the CDW and superconductivity remain elusive. Here, we investigate
low-energy excitations of CsV3Sb5 by angle-resolved photoemission spectroscopy.
We found an energy gap of 70-100 meV at the Dirac-crossing points of linearly
dispersive bands, pointing to the importance of spin-orbit coupling. We also found a signature of a strongly Fermi-surface- and momentum-dependent CDW gap, characterized by a larger energy gap of at most 70 meV for a band forming a
saddle point around the M point, the smaller (0-18 meV) gap for a band forming
massive Dirac cones, and a zero gap at the Gamma-centered electron pocket. The
observed highly anisotropic CDW gap, which is enhanced around the M point, signifies the importance of the scattering channel connecting the saddle points, laying a foundation for understanding the nature of the CDW and superconductivity in
AV3Sb5.
|
Let G be a permutation group, acting on a set \Omega of size n. A subset B of
\Omega is a base for G if the pointwise stabilizer G_(B) is trivial. Let b(G)
be the minimal size of a base for G. A subgroup G of Sym(n) is large base if
there exist integers m, k and r \geq 1 such that Alt(m)^r \unlhd G \leq Sym(m) \wr
Sym(r), where the action of Sym(m) is on k-element subsets of {1,...,m} and the
wreath product acts with product action. In this paper we prove that if G is
primitive and not large base, then either G is the Mathieu group M24 in its
natural action on 24 points, or b(G) \leq \lceil \log n\rceil+1. Furthermore,
we show that there are infinitely many primitive groups G that are not large
base for which b(G) > log n + 1, so our bound is optimal.
|
Nebular HeII emission implies the presence of energetic photons (E$\ge$54
eV). Despite the great deal of effort dedicated to understanding HeII
ionization, its origin has remained mysterious, particularly in metal-deficient
star-forming (SF) galaxies. Unfolding HeII-emitting, metal-poor starbursts at z
~ 0 can yield insight into the powerful ionization processes occurring in the
primordial universe. Here we present a new study on the effects that X-ray
sources have on the HeII ionization in the extremely metal-poor galaxy IZw18 (Z
~ 3 % Zsolar), whose X-ray emission is dominated by a single high-mass X-ray
binary (HMXB). This study uses optical integral field spectroscopy, archival
Hubble Space Telescope observations, and all of the X-ray data sets publicly
available for IZw18. We investigate the time-variability of the IZw18 HMXB for
the first time; its emission shows small variations on timescales from days to
decades. The best-fit models for the HMXB X-ray spectra cannot reproduce the
observed HeII ionization budget of IZw18, nor can recent photoionization models
that combine the spectra of both very low metallicity massive stars and the
emission from HMXB. We also find that the IZw18 HMXB and the HeII-emission peak
are spatially displaced at a projected distance of $\simeq$ 200 pc. These
results reduce the relevance of X-ray photons as the dominant HeII ionizing
mode in IZw18, which leaves uncertain what process is responsible for the bulk
of its HeII ionization. This is in line with recent work discarding X-ray
binaries as the main source responsible for HeII ionization in SF galaxies.
|
We present the discovery and characterization of five hot and warm Jupiters
-- TOI-628 b (TIC 281408474; HD 288842), TOI-640 b (TIC 147977348), TOI-1333 b
(TIC 395171208, BD+47 3521A), TOI-1478 b (TIC 409794137), and TOI-1601 b (TIC
139375960) -- based on data from NASA's Transiting Exoplanet Survey Satellite
(TESS). The five planets were identified from the full frame images and were
confirmed through a series of photometric and spectroscopic follow-up
observations by the $TESS$ Follow-up Observing Program (TFOP) Working Group.
The planets are all Jovian size (R$_{\rm P}$ = 1.01-1.77 R$_{\rm J}$) and have
masses that range from 0.85 to 6.33 M$_{\rm J}$. The host stars of these
systems have F and G spectral types (5595 $\le$ T$_{\rm eff}$ $\le$ 6460 K) and
are all relatively bright (9 $<V<$ 10.8, 8.2 $<K<$ 9.3) making them well-suited
for future detailed characterization efforts. Three of the systems in our
sample (TOI-640 b, TOI-1333 b, and TOI-1601 b) orbit subgiant host stars (log
g$_*$ $<$4.1). TOI-640 b is one of only three known hot Jupiters to have a
highly inflated radius (R$_{\rm P}$ > 1.7R$_{\rm J}$, possibly a result of its
host star's evolution) and resides on an orbit with a period longer than 5
days. TOI-628 b is the most massive hot Jupiter discovered to date by $TESS$
with a measured mass of $6.31^{+0.28}_{-0.30}$ M$_{\rm J}$ and a statistically
significant, non-zero orbital eccentricity of e = $0.074^{+0.021}_{-0.022}$.
From our analysis, this planet would not have had enough time to circularize through tidal forces, suggesting that the eccentricity might be a remnant of its migration. The longest-period planet in this sample, TOI-1478 b (P = 10.18
days), is a warm Jupiter in a circular orbit around a near-Solar analogue.
NASA's $TESS$ mission is continuing to increase the sample of
well-characterized hot and warm Jupiters, complementing its primary mission
goals.
|
We try to understand which morphisms of complex analytic spaces come from
algebraic geometry. We start with a series of conjectures, and then give some
partial solutions.
|
Quantum computers can provide solutions to classically intractable problems
under specific and adequate conditions. However, current devices have only
limited computational resources, and an effort is made to develop useful
quantum algorithms under these circumstances. This work experimentally
demonstrates that a single-qubit device can host a universal classifier. The
quantum processor used in this work is based on ion traps, providing highly
accurate control on small systems. The algorithm chosen is the re-uploading
scheme, which can address general learning tasks. Ion traps suit the needs of
accurate control required by re-uploading. In the experiment presented here, a set of non-trivial classification tasks is successfully carried out. The training
procedure is performed in two steps combining simulation and experiment. Final
results are benchmarked against exact simulations of the same method and also
classical algorithms, showing a competitive performance of the ion-trap quantum
classifier. This work constitutes the first experimental implementation of a
classification algorithm based on the re-uploading scheme.
|
This review paper discusses the science of astrometric catalogs, their
current applications and future prospects for making progress in fundamental
astronomy, astrophysics and gravitational physics. We discuss the concept of
fundamental catalogs, their practical realizations, and future prospects.
Particular attention is paid to the astrophysical implementations of the
catalogs such as the measurement of the Oort constants, the secular aberration
and parallax, and asteroseismology. We also consider the use of the fundamental
catalogs in gravitational physics for testing the general theory of relativity and for the detection of ultra-long gravitational waves of cosmological origin.
|
We propose a scheme to implement general quantum measurements, also known as
Positive Operator Valued Measures (POVMs) in dimension $d$ using only classical
resources and a single ancillary qubit. Our method is based on the
probabilistic implementation of $d$-outcome measurements which is followed by
postselection of some of the received outcomes. We conjecture that the success
probability of our scheme is larger than a constant independent of $d$ for all
POVMs in dimension $d$. Crucially, this conjecture implies the possibility of
realizing an arbitrary nonadaptive quantum measurement protocol on a
$d$-dimensional system using a single auxiliary qubit with only a
\emph{constant} overhead in sampling complexity. We show that the conjecture
holds for typical rank-one Haar-random POVMs in arbitrary dimensions.
Furthermore, we carry out extensive numerical computations showing success
probability above a constant for a variety of extremal POVMs, including
SIC-POVMs in dimension up to 1299. Finally, we argue that our scheme can be
favourable for the experimental realization of POVMs, as noise compounding in
circuits required by our scheme is typically substantially lower than in the
standard scheme that directly uses Naimark's dilation theorem.
|
Inspired by our previous work on the boundedness of Toeplitz operators, we
introduce weak BMO and VMO type conditions, denoted by BWMO and VWMO,
respectively, for functions on the open unit disc of the complex plane. We show
that the average function of a function $f$ in BWMO is boundedly oscillating,
and the analogous result holds for $f$ in VWMO. The result is applied for
generalizations of known results on the essential spectra and norms of Toeplitz
operators. Finally, we provide examples of functions satisfying the VWMO
condition which are not in the classical VMO or even in BMO.
|
Understanding the low-temperature pure state structure of spin glasses
remains an open problem in the field of statistical mechanics of disordered
systems. Here we study Monte Carlo dynamics, performing simulations of the
growth of correlations following a quench from infinite temperature to a
temperature well below the spin-glass transition temperature $T_c$ for a
one-dimensional Ising spin glass model with diluted long-range interactions. In
this model, the probability $P_{ij}$ that an edge $\{i,j\}$ has nonvanishing
interaction falls off as a power law of the chord distance,
$P_{ij}\propto1/R_{ij}^{2\sigma}$, and we study a range of values of $\sigma$
with $1/2<\sigma<1$. We consider a correlation function $C_{4}(r,t)$. A dynamic
correlation length that shows power-law growth with time $\xi(t)\propto
t^{1/z}$ can be identified in the data and, for large time $t$, $C_{4}(r,t)$
decays as a power law $r^{-\alpha_d}$ with distance $r$ when $r\ll \xi(t)$. The
calculation can be interpreted in terms of the maturation metastate averaged
Gibbs state, or MMAS, and the decay exponent $\alpha_d$ differentiates between
a trivial MMAS ($\alpha_d=0$), as expected in the droplet picture of spin
glasses, and a nontrivial MMAS ($\alpha_d\ne 0$), as in the
replica-symmetry-breaking (RSB) or chaotic pairs pictures. We find nonzero
$\alpha_d$ even in the regime $\sigma >2/3$ which corresponds to short-range
systems below six dimensions. For $\sigma < 2/3$, the decay exponent $\alpha_d$
follows the RSB prediction for the decay exponent $\alpha_s = 3 - 4 \sigma$ of
the static metastate, consistent with a conjectured statics-dynamics relation,
while it approaches $\alpha_d=1-\sigma$ in the regime $2/3<\sigma<1$; however,
it deviates from both lines in the vicinity of $\sigma=2/3$.
|
Different theories of gravity can admit the same black hole solution, but the
parameters usually have different physical interpretations. In this work we
study in depth the linear term $\beta r$ in the redshift function of black
holes, which arises in conformal gravity, de Rham-Gabadadze-Tolley (dRGT)
massive gravity, $f(R)$ gravity (as approximate solution) and general
relativity. Geometrically we quantify the parameter $\beta$ in terms of the
curvature invariants. Astrophysically we found that $\beta$ can be expressed in
terms of the cosmological constant, the photon orbit radius and the innermost
stable circular orbit (ISCO) radius. The metric degeneracy can be broken once
black hole thermodynamics is taken into account. Notably, we show that under
Hawking evaporation, different physical theories with the same black hole
solution (at the level of the metric) can lead to black hole remnants with
different values of their physical masses with direct consequences on their
viability as dark matter candidates. In particular, the mass of the graviton in
massive gravity can be expressed in terms of the cosmological constant and of
the formation epoch of the remnant. Furthermore, the upper bound on the remnant mass
can be estimated to be around $0.5 \times 10^{27}$ kg.
|
This study reports the magnetization switching induced by spin-orbit torque
(SOT) from the spin current generated in Co2MnGa magnetic Weyl semimetal (WSM)
thin films. We deposited epitaxial Co2MnGa thin films with highly B2-ordered
structure on MgO(001) substrates. The SOT was characterized by harmonic Hall
measurements in a Co2MnGa/Ti/CoFeB heterostructure and a relatively large spin
Hall efficiency of -7.8% was obtained. The SOT-induced magnetization switching
of the perpendicularly magnetized CoFeB layer was further demonstrated using
the structure. The symmetry of second harmonic signals, thickness dependence of
spin Hall efficiency, and shift of anomalous Hall loops under applied currents
were also investigated. This study not only contributes to the understanding of
the mechanisms of spin-current generation from magnetic-WSM-based
heterostructures, but also paves the way for applications of magnetic WSMs in
spintronic devices.
|
In this article, we derive analytically the complex optical spectrum of a
pulsed laser source obtained when a frequency comb generated by phase
modulation is input into a synchronized intensity modulator. We then show how
this knowledge of the spectrum may help to achieve unprecedented accuracy
during the experimental spectrum correction step usually carried out with an
optical spectrum processor. In numerical examples, for a given average power we
present up to a 75 % increase in peak power and an enhancement of the
extinction ratio by at least three orders of magnitude. This method also
enables large-factor rate-multiplications of these versatile coherent sources
using the Talbot effect with negligible degradation of the signal.
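A standard starting point for such an analysis (not the paper's full derivation, which also accounts for the synchronized intensity modulator and the subsequent spectral correction) is the Jacobi-Anger expansion of a purely phase-modulated carrier:

$$E(t) = E_0\, e^{i\omega_0 t}\, e^{i\beta \sin(\Omega t)} = E_0 \sum_{n=-\infty}^{+\infty} J_n(\beta)\, e^{i(\omega_0 + n\Omega)t},$$

so the comb line at $\omega_0 + n\Omega$ carries amplitude $J_n(\beta)$, with $\beta$ the modulation depth and $\Omega$ the modulation frequency. The synchronized intensity modulator then multiplies this field by its own periodic transmission, reshaping the complex line amplitudes that the spectrum-correction step must account for.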
|
The analysis of 20 years of spectrophotometric data of the double shell
planetary nebula PM\,1-188 is presented, aiming to determine the time evolution
of the emission lines and the physical conditions of the nebula, as a
consequence of the systematic fading of its [WC\,10] central star whose
brightness has declined by about 10 mag in the past 40 years. Our main results
include that the [\ion{O}{iii}], [\ion{O}{ii}], [\ion{N}{ii}] line intensities
are increasing with time in the inner nebula as a consequence of an increase in
electron temperature from 11,000 K in 2005 to more than 14,000 K in 2018, due
to shocks. The intensities of the same lines are decreasing in the outer nebula,
due to a decrease in temperature, from 13,000 K to 7,000 K, in the same period.
The chemical compositions of the inner and outer shells were derived and found to be similar. Both nebulae present subsolar O, S and Ar abundances, while they are
He, N and Ne rich. For the outer nebula the values are 12+log He/H=
11.13$\pm$0.05, 12+log O/H = 8.04$\pm$0.04, 12+log N/H= 7.87$\pm$0.06, 12+log
S/H = 7.18$\pm$0.10 and 12+log Ar = 5.33$\pm$0.16. The O, S and Ar abundances
are several times lower than the average values found in disc non-Type I PNe,
and are reminiscent of some halo PNe. From high resolution spectra, an outflow
in the N-S direction was found in the inner zone. Position-velocity diagrams
show that the outflow expands at velocities in the $-$150 to 100 km s$^{-1}$
range, and both shells have expansion velocities of about 40 km s$^{-1}$.
|
We investigate long-lived particles (LLPs) produced in pairs from neutral
currents and decaying into a displaced electron plus two jets at the LHC,
utilizing the proposed minimum ionizing particle timing detector at CMS. We
study two benchmark models: the R-parity-violating supersymmetry with the
lightest neutralinos being the lightest supersymmetric particle and two
different $U(1)$ extensions of the standard model with heavy neutral leptons
(HNLs). The light neutralinos are produced from the standard model $Z$-boson
decays via small Higgsino components, and the HNLs arise from decays of a heavy
gauge boson, $Z'$. By simulating the signal processes at the HL-LHC with the
center-of-mass energy $\sqrt{s}=$ 14 TeV and integrated luminosity of 3
ab$^{-1}$, our analyses indicate that the search strategy based on a timing
trigger and the final state kinematics has the potential to probe the parameter
space that is complementary to other traditional LLP search strategies such as
those based on the displaced vertex.
|
Algorithms produce a growing portion of decisions and recommendations both in
policy and business. Such algorithmic decisions are natural experiments
(conditionally quasi-randomly assigned instruments) since the algorithms make
decisions based only on observable input variables. We use this observation to
develop a treatment-effect estimator for a class of stochastic and
deterministic decision-making algorithms. Our estimator is shown to be
consistent and asymptotically normal for well-defined causal effects. A key
special case of our estimator is a multidimensional regression discontinuity
design. We apply our estimator to evaluate the effect of the Coronavirus Aid,
Relief, and Economic Security (CARES) Act, where hundreds of billions of
dollars' worth of relief funding is allocated to hospitals via an algorithmic
rule. Our estimates suggest that the relief funding has little effect on
COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit
substantial selection bias.
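As a point of reference, a textbook sharp regression-discontinuity estimate, which is a one-dimensional special case of the estimator class described above, can be sketched as follows (variable names and the fixed bandwidth are illustrative):

    import numpy as np
    import statsmodels.api as sm

    def rdd_local_linear(score, outcome, cutoff, bandwidth):
        # Local linear fit on each side of the cutoff within a bandwidth; the
        # treatment effect is the jump in the fitted outcome at the cutoff.
        score, outcome = np.asarray(score, float), np.asarray(outcome, float)
        d = score - cutoff
        keep = np.abs(d) <= bandwidth
        treated = (d >= 0).astype(float)
        X = sm.add_constant(np.column_stack([treated[keep], d[keep],
                                             (treated * d)[keep]]))
        fit = sm.OLS(outcome[keep], X).fit()
        return fit.params[1]  # coefficient on the treatment indicator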
|
Concurrent accesses to databases are typically encapsulated in transactions
in order to enable isolation from other concurrent computations and resilience
to failures. Modern databases provide transactions with various semantics
corresponding to different trade-offs between consistency and availability.
Since a weaker consistency model provides better performance, an important
issue is investigating the weakest level of consistency needed by a given
program (to satisfy its specification). As a way of dealing with this issue, we
investigate the problem of checking whether a given program has the same set of
behaviors when replacing a consistency model with a weaker one. This property, known as robustness, generally implies that any specification of the program is
preserved when weakening the consistency. We focus on the robustness problem
for consistency models which are weaker than standard serializability, namely,
causal consistency, prefix consistency, and snapshot isolation. We show that
checking robustness between these models is polynomial time reducible to a
state reachability problem under serializability. We use this reduction to also
derive a pragmatic proof technique based on Lipton's reduction theory that
allows one to prove programs robust. We have applied our techniques to several
challenging applications drawn from the literature of distributed systems and
databases.
|
Given the prevalence of pre-trained contextualized representations in today's
NLP, there have been several efforts to understand what information such
representations contain. A common strategy to use such representations is to
fine-tune them for an end task. However, how fine-tuning for a task changes the
underlying space is less studied. In this work, we study the English BERT
family and use two probing techniques to analyze how fine-tuning changes the
space. Our experiments reveal that fine-tuning improves performance because it
pushes points associated with a label away from other labels. By comparing the
representations before and after fine-tuning, we also discover that fine-tuning
does not change the representations arbitrarily; instead, it adjusts the
representations to downstream tasks while preserving the original structure.
Finally, using carefully constructed experiments, we show that fine-tuning can
encode training sets in a representation, suggesting an overfitting problem of
a new kind.
|
In this article, we discuss how to solve information-gathering problems
expressed as rho-POMDPs, an extension of Partially Observable Markov Decision
Processes (POMDPs) whose reward rho depends on the belief state. Point-based
approaches used for solving POMDPs have been extended to solving rho-POMDPs as
belief MDPs when its reward rho is convex in B or when it is
Lipschitz-continuous. In the present paper, we build on the POMCP algorithm to
propose a Monte Carlo Tree Search for rho-POMDPs, aiming for an efficient
on-line planner which can be used for any rho function. Adaptations are
required due to the belief-dependent rewards to (i) propagate more than one
state at a time, and (ii) prevent biases in value estimates. An asymptotic
convergence proof to epsilon-optimal values is given when rho is continuous.
Experiments are conducted to analyze the algorithms at hand and show that they
outperform myopic approaches.
|
Modern vehicles equipped with on-board units (OBU) are playing an essential
role in the smart city revolution. The vehicular processing resources, however,
are not used to their fullest potential. The concept of vehicular clouds is
proposed to exploit the underutilized vehicular resources to supplement cloud
computing services to relieve the burden on cloud data centers and improve
quality of service. In this paper we introduce a vehicular cloud architecture
supported by fixed edge computing nodes and the central cloud. A mixed integer
linear programming (MILP) model is developed to optimize the allocation of the
computing demands in the distributed architecture while minimizing power
consumption. The results show power savings as high as 84% over processing in
the conventional cloud. A heuristic with performance approaching that of the
MILP model is developed to allocate computing demands in real time.
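A toy version of such an allocation MILP, written with the PuLP modeling package, is shown below; the demand sizes, capacities, and power coefficients are invented for illustration, and the model in the paper contains many more variables and constraints:

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

    # Assign each computing demand to one processing location (vehicle, edge
    # node, or central cloud) so that total power is minimized and no location
    # exceeds its capacity. All numbers are hypothetical.
    demands = {"d1": 2.0, "d2": 1.5, "d3": 3.0}                 # required capacity units
    locations = {"vehicle": (4.0, 5.0), "edge": (6.0, 9.0), "cloud": (100.0, 20.0)}
    # (capacity, power per capacity unit)

    prob = LpProblem("demand_allocation", LpMinimize)
    x = {(d, l): LpVariable(f"x_{d}_{l}", cat=LpBinary)
         for d in demands for l in locations}
    prob += lpSum(demands[d] * locations[l][1] * x[d, l]
                  for d in demands for l in locations)          # total power
    for d in demands:                                           # each demand served once
        prob += lpSum(x[d, l] for l in locations) == 1
    for l in locations:                                         # capacity constraints
        prob += lpSum(demands[d] * x[d, l] for d in demands) <= locations[l][0]
    prob.solve()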
|
Nambu dynamics is a generalized Hamiltonian dynamics of more than two
variables, whose time evolutions are given by the Nambu bracket, a
generalization of the canonical Poisson bracket. Nambu dynamics can always be
represented in the form of noncanonical Hamiltonian dynamics by defining the
noncanonical Poisson bracket by means of the Nambu bracket. For the time
evolution to be consistent, the Nambu bracket must satisfy the fundamental
identity, while the noncanonical Poisson bracket must satisfy the Jacobi
identity. However, in systems with many degrees of freedom, it is well known that
the fundamental identity does not hold. In the present paper we show that, even
if the fundamental identity is violated, the Jacobi identity for the
corresponding noncanonical Hamiltonian dynamics could hold. As an example, we
evaluate these identities for a semiclassical system of two coupled
oscillators.
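For readers unfamiliar with the construction, a minimal three-variable example in generic textbook notation (not necessarily that used in the paper) is

$$\{A, B, C\} = \frac{\partial(A, B, C)}{\partial(x, y, z)}, \qquad \frac{dF}{dt} = \{F, H_1, H_2\},$$

and fixing the second Hamiltonian $H_2$ defines the noncanonical Poisson bracket $\{F, G\}_{\rm P} := \{F, G, H_2\}$, so that $dF/dt = \{F, H_1\}_{\rm P}$. The question addressed above is whether this induced bracket can satisfy the Jacobi identity even when the Nambu bracket violates the fundamental identity.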
|
How to measure the incremental Return On Ad Spend (iROAS) is a fundamental
problem for the online advertising industry. A standard modern tool is to run
randomized geo experiments, where experimental units are non-overlapping
ad-targetable geographical areas (Vaver & Koehler 2011). However, how to design
a reliable and cost-effective geo experiment can be complicated, for example:
1) the number of geos is often small, 2) the response metric (e.g. revenue)
across geos can be very heavy-tailed due to geo heterogeneity, and furthermore
3) the response metric can vary dramatically over time. To address these
issues, we propose a robust nonparametric method for the design, called Trimmed
Match Design (TMD), which extends the idea of Trimmed Match (Chen & Au 2019)
and furthermore integrates the techniques of optimal subset pairing and sample
splitting in a novel and systematic manner. Some simulation and real case
studies are presented. We also point out a few open problems for future
research.
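As a rough illustration of the paired-geo estimation idea only, the sketch below trims the pairs with the most extreme residuals at the untrimmed ratio estimate and then recomputes the ratio; the actual Trimmed Match estimator of Chen & Au (2019) chooses the trim set and the estimate jointly, and the design step described above additionally optimizes the pairing and splitting:

    import numpy as np

    def iroas_naive_trimmed(delta_response, delta_spend, trim_frac=0.1):
        # delta_response / delta_spend: per-pair treatment-minus-control deltas.
        dy = np.asarray(delta_response, float)
        ds = np.asarray(delta_spend, float)
        theta = dy.sum() / ds.sum()                  # untrimmed ratio estimate
        resid = dy - theta * ds
        k = int(trim_frac * len(dy))
        keep = np.argsort(np.abs(resid - np.median(resid)))[: len(dy) - k]
        return dy[keep].sum() / ds[keep].sum()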
|
Visually realistic GAN-generated images have recently emerged as an important
misinformation threat. Research has shown that these synthetic images contain
forensic traces that are readily identifiable by forensic detectors.
Unfortunately, these detectors are built upon neural networks, which are
vulnerable to recently developed adversarial attacks. In this paper, we propose
a new anti-forensic attack capable of fooling GAN-generated image detectors.
Our attack uses an adversarially trained generator to synthesize traces that
these detectors associate with real images. Furthermore, we propose a technique
to train our attack so that it can achieve transferability, i.e. it can fool
unknown CNNs that it was not explicitly trained against. We demonstrate the
performance of our attack through an extensive set of experiments, where we
show that our attack can fool eight state-of-the-art detection CNNs with
synthetic images created using seven different GANs.
|
We study a generalized Blume-Capel model on the simple cubic lattice. In
addition to the nearest neighbor coupling there is a next to next to nearest
neighbor coupling. In order to quantify spatial anisotropy, we determine the
correlation length in the high temperature phase of the model for three
different directions. It turns out that the spatial anisotropy depends very
little on the dilution parameter $D$ of the model and is essentially determined
by the ratio of the nearest neighbor and the next to next to nearest neighbor
coupling. This ratio is tuned such that the leading contribution to the spatial
anisotropy is eliminated. Next we perform a finite size scaling (FSS) study to
tune $D$ such that also the leading correction to scaling is eliminated. Based
on this FSS study, we determine the critical exponents $\nu=0.62998(5)$ and
$\eta=0.036284(40)$, which are in nice agreement with the more accurate results
obtained by using the conformal bootstrap method. Furthermore we provide
accurate results for fixed point values of dimensionless quantities such as the
Binder cumulant and for the critical couplings. These results provide the
groundwork for broader studies of universal properties of the three-dimensional
Ising universality class.
|
Sign language translation (SLT) is often decomposed into video-to-gloss
recognition and gloss-to-text translation, where a gloss is a sequence of
transcribed spoken-language words in the order in which they are signed. We
focus here on gloss-to-text translation, which we treat as a low-resource
neural machine translation (NMT) problem. However, gloss-to-text translation differs from traditional low-resource NMT because gloss-text pairs
often have a higher lexical overlap and lower syntactic overlap than pairs of
spoken languages. We exploit this lexical overlap and handle syntactic
divergence by proposing two rule-based heuristics that generate pseudo-parallel
gloss-text pairs from monolingual spoken language text. By pre-training on the
thus obtained synthetic data, we improve translation from American Sign
Language (ASL) to English and German Sign Language (DGS) to German by up to
3.14 and 2.20 BLEU, respectively.
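A toy example of a rule-based heuristic in this spirit is shown below; the rule set (drop function words, uppercase content words) is invented for illustration, and the paper's actual heuristics, such as lemmatization or reordering rules, differ:

    import re

    STOP_WORDS = {"a", "an", "the", "is", "are", "was", "were", "be", "been",
                  "to", "of", "and", "or", "that", "this", "it", "do", "does", "did"}

    def text_to_pseudo_gloss(sentence):
        # Lowercase, strip punctuation, drop function words, uppercase the rest,
        # producing a synthetic gloss paired with the original sentence.
        tokens = re.findall(r"[a-zA-Z']+", sentence.lower())
        content = [t for t in tokens if t not in STOP_WORDS]
        return " ".join(t.upper() for t in content)

    print(text_to_pseudo_gloss("The weather is nice today"))  # -> "WEATHER NICE TODAY"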
|
A type of polar self-propelled particle generates a torque that makes it
naturally drawn to higher-density areas. The collective behaviour this induces
in assemblies of particles constitutes a new form of phase separation in active
fluids.
|
When fonts are used on documents, they are intentionally selected by
designers. For example, when designing a book cover, the typography of the text
is an important factor in the overall feel of the book. In addition, it needs
to be an appropriate font for the rest of the book cover. Thus, we propose a
method of generating a book title image based on its context within a book
cover. We propose an end-to-end neural network that inputs the book cover, a
target location mask, and a desired book title and outputs stylized text
suitable for the cover. The proposed network uses a combination of a
multi-input encoder-decoder, a text skeleton prediction network, a perception
network, and an adversarial discriminator. We demonstrate that the proposed
method can effectively produce desirable and appropriate book cover text
through quantitative and qualitative results.
|
The music of Northern Myanmar Kachin ethnic group is compared to the music of
western China, Xinjiang-based Uyghur music, using timbre and pitch feature
extraction and machine learning. Although separated by Tibet, the muqam
tradition of Xinjiang might be found in Kachin music due to myths of Kachin
origin, as well as linguistic similarities, e.g., the Kachin term 'makan' for a
musical piece. Extractions were performed using the apollon and COMSAR
(Computational Music and Sound Archiving) frameworks, on which the Ethnographic
Sound Recordings Archive (ESRA) is based, using ethnographic recordings from
ESRA next to additional pieces. In terms of pitch, tonal systems were compared
using Kohonen self-organizing map (SOM), which clearly clusters Kachin and
Uyghur musical pieces. This is mainly caused by the Xinjiang muqam music
showing just fifth and fourth, while Kachin pieces tend to have a higher fifth
and fourth, next to other dissimilarities. Also, the timbre features of
spectral centroid and spectral sharpness standard deviation clearly tell
Uyghur from Kachin pieces, where Uyghur music shows much larger deviations.
Although more features will be compared in the future, like rhythm or melody,
these already strong findings might introduce an alternative comparison
methodology of ethnic groups beyond traditional linguistic definitions.
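The SOM clustering step can be sketched with the third-party MiniSom package as below; the random feature matrix stands in for the pitch and timbre features extracted with the apollon and COMSAR frameworks:

    import numpy as np
    from minisom import MiniSom  # third-party Kohonen SOM implementation

    # Placeholder feature matrix: one row per piece (e.g., pitch-class or timbre features).
    features = np.random.rand(40, 12)

    som = MiniSom(6, 6, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(features, 1000)

    # Map each piece to its best-matching unit; pieces from the two corpora are
    # expected to occupy different regions of the map if their tonal systems differ.
    positions = [som.winner(x) for x in features]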
|
Enabling out-of-distribution (OOD) detection for DNNs is critical for their
safe and reliable operation in the open world. Despite recent progress, current
works often consider a coarse level of granularity in the OOD problem, which
fails to approximate many real-world fine-grained tasks where high granularity
may be expected between the in-distribution (ID) data and the OOD data (e.g.,
identifying novel bird species for a bird classification system in the wild).
In this work, we start by carefully constructing four large-scale fine-grained
test environments in which existing methods are shown to have difficulties. We
find that current methods, including ones that include a large/diverse set of
outliers during DNN training, have poor coverage over the broad region where
fine-grained OOD samples are located. We then propose Mixture Outlier Exposure
(MixOE), which effectively expands the covered OOD region by mixing ID data and
training outliers, and regularizes the model behaviour by linearly decaying the
prediction confidence as the input transitions from ID to OOD. Extensive
experiments and analyses demonstrate the effectiveness of MixOE for improving
OOD detection in fine-grained settings.
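A sketch of such a mixing-based training objective in PyTorch is given below; the interpolation and the linear decay of the label toward uniform follow the description above, but details such as the mixing operator, the Beta parameter, and the loss weighting may differ from the paper:

    import torch
    import torch.nn.functional as F

    def mixoe_style_loss(model, x_id, y_id, x_out, num_classes, alpha=1.0):
        # Mix an in-distribution batch with an outlier batch of the same shape.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        x_mix = lam * x_id + (1.0 - lam) * x_out
        logits = model(x_mix)
        # Soft target: decay from the one-hot ID label toward the uniform
        # distribution as the input transitions toward the outlier.
        one_hot = F.one_hot(y_id, num_classes).float()
        uniform = torch.full_like(one_hot, 1.0 / num_classes)
        soft_target = lam * one_hot + (1.0 - lam) * uniform
        log_probs = F.log_softmax(logits, dim=1)
        return torch.mean(torch.sum(-soft_target * log_probs, dim=1))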
|