Our goal in this work is to present some mean value type theorems that are
not studied in classic calculus and analysis courses. They are simple theorems,
yet widely applicable in mathematical analysis (for example, in the study of
functional equations and integral operators), computational mathematics, and
economics, among other areas.
|
Depression is a public health issue which severely affects one's well-being
and causes negative social and economic effects for society. To raise awareness
of these problems, this publication aims to determine whether long-lasting
effects of depression can be detected from electroencephalographic (EEG)
signals. The article compares the accuracy of SVM, LDA, NB, kNN and D3 binary
classifiers trained using linear (relative band powers, APV, SASI) and
non-linear (HFD, LZC, DFA) EEG features. The age- and gender-matched dataset
consisted of 10 healthy subjects and 10 subjects with a depression diagnosis at
some point in their lifetime. Several of the proposed feature selection and
classifier combinations reached an accuracy of 90%, where all models were
evaluated using 10-fold cross-validation and averaged over 100 repetitions with
random sample permutations.
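For readers unfamiliar with this evaluation protocol, a minimal sketch of
10-fold cross-validation averaged over 100 random-permutation repetitions in
scikit-learn follows; the feature matrix here is a hypothetical stand-in, not
the study's EEG features:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    # Hypothetical stand-in data: 20 subjects (10 healthy, 10 depressed),
    # 6 features each; the real study uses band powers, HFD, LZC, etc.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(20, 6)), np.repeat([0, 1], 10)

    scores = []
    for rep in range(100):  # 100 repetitions with random sample permutations
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=rep)
        scores.append(cross_val_score(SVC(), X, y, cv=cv).mean())
    print(f"mean accuracy over repetitions: {np.mean(scores):.2f}")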
|
The amount and variety of data have been increasing drastically for several years.
These data are often represented as networks, which are then explored with
approaches arising from network theory. Recent years have witnessed the
extension of network exploration methods to leverage more complex and richer
network frameworks. Random walks, for instance, have been extended to explore
multilayer networks. However, current random walk approaches are limited in the
combination and heterogeneity of network layers they can handle. New analytical
and numerical random walk methods are needed to cope with the increasing
diversity and complexity of multilayer networks. We propose here MultiXrank, a
Python package that enables Random Walk with Restart (RWR) on any kind of
multilayer network with an optimized implementation. This package is supported
by a universal mathematical formulation of the RWR. We evaluated MultiXrank
with leave-one-out cross-validation and link prediction, and introduced
protocols to measure the impact of the addition or removal of multilayer
network data on prediction performances. We further measured the sensitivity of
MultiXrank to input parameters by in-depth exploration of the parameter space.
Finally, we illustrate the versatility of MultiXrank with different use-cases
of unsupervised node prioritization and supervised classification in the
context of human genetic diseases.
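As background, the core Random Walk with Restart iteration that MultiXrank
generalizes to multilayer networks can be sketched for a single-layer network
as follows (an illustrative sketch only; MultiXrank's actual API and multilayer
transition matrices differ):

    import numpy as np

    def rwr(adj, seeds, r=0.7, tol=1e-10, max_iter=1000):
        # Column-normalize the adjacency matrix into a transition matrix.
        col = adj.sum(axis=0)
        W = adj / np.where(col == 0, 1.0, col)
        p0 = np.zeros(adj.shape[0])
        p0[seeds] = 1.0 / len(seeds)   # restart mass on the seed node(s)
        p = p0.copy()
        for _ in range(max_iter):
            p_new = (1 - r) * W @ p + r * p0
            if np.abs(p_new - p).sum() < tol:
                break
            p = p_new
        return p  # stationary probabilities, used to prioritize nodes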
|
We consider the point evaluation of the solution to interface problems with
geometric uncertainties, where the uncertainty in the obstacle is described by
a high-dimensional parameter $\boldsymbol{y}\in[-1,1]^d$, $d\in\mathbb{N}$. We
focus in particular on an elliptic interface problem and a Helmholtz
transmission problem. Point values of the solution in the physical domain
depend in general non-smoothly on the high-dimensional parameter, posing a
challenge when one is interested in building surrogates. Indeed, high-order
methods show poor convergence rates, while methods which are able to track
discontinuities usually suffer from the so-called curse of dimensionality. For
this reason, in this work we propose to build surrogates for point evaluation
using deep neural networks. We provide a theoretical justification for why we
expect neural networks to provide good surrogates. Furthermore, we present
extensive numerical experiments showing their good performance in practice. We
observe in particular that neural networks do not suffer from the curse of
dimensionality, and we study the dependence of the error on the number of point
evaluations (that is, the number of discontinuities in the parameter space), as
well as on several modeling parameters, such as the contrast between the two
materials and, for the Helmholtz transmission problem, the wavenumber.
|
Starting in 2003, a large number of so-called exotic hadrons, such as
$X(3872)$ and $D_{s0}^*(2317)$, have been discovered experimentally. Since then,
understanding the nature of these states has been a central issue both
theoretically and experimentally. As many of these states are located close to
two hadron thresholds, they are believed to be molecular states or at least
contain large molecular components. We argue that if they are indeed molecular
states, in the way that the deuteron is a bound state of proton and neutron,
then molecular states of three or more hadrons are likely, in the sense that
atomic nuclei are bound states of nucleons. Following this conjecture, we study
the likely existence of $DDK$, $D\bar{D}K$, and $D\bar{D}^{*}K$ molecular
states. We show that within the theoretical uncertainties of the two-body
interactions deduced, they most likely exist. Furthermore, we predict their
strong decays to help guide future experimental searches. In addition, we show
that the same approach can indeed reproduce some of the known three-body
systems from the two-body inputs, such as the deuteron-triton and the
$\Lambda(1405)$-$\bar{K}NN$ systems.
|
The integration of energy systems such as electricity and gas grids and power
and thermal grids can bring significant benefits in terms of system security,
reliability, and reduced emissions. Another alternative coupling of sectors
with large potential benefits is the power and transportation networks. This is
primarily due to the increasing use of electric vehicles (EV) and their demand
on the power grid. Moreover, the production and operating costs of EVs and
battery technologies are steadily decreasing, while tax credits for EV purchase
and usage are being offered to users in developed countries. The power grid is
also undergoing major upgrades and changes with the aim of ensuring
environmentally sustainable grids. These factors influence our work. We present
a new operating model for an integrated EV-grid system that incorporates a set
of aggregators (owning a fleet of EVs) with partial access to the distribution
grid. Cooperative game theory is then used to model the behavior of the
system: the core describes the stability of the interaction between these
aggregators, and the Shapley value is used to assign costs to them. The
results obtained show the benefit of cooperation, which could lead to an
overall reduction in energy consumption, reduced operating costs for electric
vehicles and the distribution grid, and, in some cases, the additional monetary
budget available to reinforce the transmission and grid infrastructures.
|
We investigate classes of shear-free cosmological dust models with
irrotational fluid flows within the framework of $f(T)$ gravity. In particular,
we use the $1 + 3$ covariant formalism and present the covariant linearised
evolution and constraint equations describing such models. We then derive the
integrability conditions describing a consistent evolution of the linearised
field equations of these quasi-Newtonian universes in the $f(T)$ gravitational
theory. Finally, we derive the evolution equations for the density and velocity
perturbations of the quasi-Newtonian universe. We explore the behaviour of the
matter density contrast for two models - $f(T)= \mu T_{0}(T/T_{0})^{n}$ and the
more generalised case, where $f(T)= T+ \mu T_{0} (T/T_{0})^{n}$, with and
without the application of the quasi-static approximation. Our numerical
solutions show that these $f(T)$ theories can be suitable alternatives to study
the background dynamics, whereas the growth of energy density fluctuations
changes dramatically from the expected $\Lambda$CDM behaviour even for small
deviations away from the general relativistic limits of the underlying $f(T)$
theory. Moreover, applying the so-called quasi-static approximation yields
exact-solution results that are orders of magnitude different from the
numerically integrated solutions of the full system, suggesting that these
approximations are not applicable here.
|
Dual-energy X-ray tomography is considered in a context where the target
under imaging consists of two distinct materials. The materials are assumed to
be possibly intertwined in space, but at any given location there is only one
material present. Further, two X-ray energies are chosen so that there is a
clear difference in the spectral dependence of the attenuation coefficients of
the two materials. A novel regularizer is presented for the inverse problem of
reconstructing separate tomographic images for the two materials. The
combination of (a) a non-negativity constraint and (b) a penalty term
containing the inner product between the two material images promotes the
presence of at most one material in a given pixel. A preconditioned interior point method is
derived for the minimization of the regularization functional. Numerical tests
with digital phantoms suggest that the new algorithm outperforms the baseline
method, Joint Total Variation regularization, in terms of correctly
material-characterized pixels. While the method is tested only in a
two-dimensional setting with two materials and two energies, the approach
readily generalizes to three dimensions and more materials. The number of
materials just needs to match the number of energies used in imaging.
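In symbols, one plausible way to write the variational problem described above
(our notation, not necessarily the paper's exact formulation) is

$$\min_{f_1,\,f_2 \ \ge\ 0} \ \mathrm{DataFit}(f_1, f_2) \;+\; \alpha\,\langle f_1, f_2\rangle,$$

where $f_1, f_2$ are the two material images and $\langle f_1, f_2\rangle =
\sum_j f_1[j]\,f_2[j]$. Under the non-negativity constraint, this inner product
vanishes exactly when no pixel contains both materials, which is how the
regularizer promotes one material per pixel.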
|
We consider random Hermitian matrices with independent upper triangular
entries. Wigner's semicircle law says that under certain additional
assumptions, the empirical spectral distribution converges to the semicircle
distribution. We characterize convergence to the semicircle law in terms of the
variances of the entries, under natural assumptions such as the Lindeberg
condition. The result extends to certain matrices with entries having infinite
second moments. As a corollary, another characterization of semicircle
convergence is given in terms of convergence in distribution of the row sums to
the standard normal distribution.
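For intuition, the convergence can be checked numerically; a minimal sketch
(standard Gaussian entries, just one of many ensembles satisfying the
assumptions) compares the empirical spectral distribution of a scaled Wigner
matrix to the semicircle density:

    import numpy as np

    n = 2000
    rng = np.random.default_rng(1)
    A = rng.normal(size=(n, n))
    H = np.triu(A) + np.triu(A, 1).T           # symmetric, i.i.d. upper triangle
    eigs = np.linalg.eigvalsh(H) / np.sqrt(n)  # scale so the support is [-2, 2]
    hist, edges = np.histogram(eigs, bins=50, range=(-2.0, 2.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    semicircle = np.sqrt(np.maximum(4 - centers**2, 0.0)) / (2 * np.pi)
    print(np.abs(hist - semicircle).max())     # small for large n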
|
With strong marketing advocacy of the benefits of cannabis use for improved
mental health, cannabis legalization is a priority among legislators. However,
preliminary scientific research does not conclusively associate cannabis with
improved mental health. In this study, we explore the relationship between
depression and consumption of cannabis in a targeted social media corpus
involving personal use of cannabis with the intent to derive its potential
mental health benefit. We use tweets that contain an association among three
categories annotated by domain experts: Reason, Effect, and Addiction.
State-of-the-art Natural Language Processing techniques fall short in
extracting these relationships between cannabis phrases and depression
indicators. We seek to address the limitation by using domain knowledge;
specifically, the Drug Abuse Ontology for addiction augmented with Diagnostic
and Statistical Manual of Mental Disorders lexicons for mental health. Because
of the lack of annotations due to the limited availability of the domain
experts' time, we use supervised contrastive learning in conjunction with GPT-3
trained on a vast corpus to achieve improved performance even with limited
supervision. Experimental results show that our method can extract
cannabis-depression relationships significantly better than the
state-of-the-art relation extractor. High-quality annotations can be provided
using a nearest-neighbor approach on the learned representations, which the
scientific community can use to better understand the association between
cannabis and depression.
|
Sequential recommendation aims to leverage users' historical behaviors to
predict their next interaction. Existing works have not yet addressed two main
challenges in sequential recommendation. First, user behaviors in their rich
historical sequences are often implicit and noisy preference signals that
cannot sufficiently reflect users' actual preferences. In addition, users'
dynamic preferences often change rapidly over time, and hence it is difficult
to capture user patterns in their historical sequences. In this work, we
propose a graph neural network model called SURGE (short for SeqUential
Recommendation with Graph neural nEtworks) to address these two issues.
Specifically, SURGE integrates different types of preferences in long-term user
behaviors into clusters in the graph by re-constructing loose item sequences
into tight item-item interest graphs based on metric learning. This helps
explicitly distinguish users' core interests, by forming dense clusters in the
interest graph. Then, we perform cluster-aware and query-aware graph
convolutional propagation and graph pooling on the constructed graph. It
dynamically fuses and extracts users' current activated core interests from
noisy user behavior sequences. We conduct extensive experiments on both public
and proprietary industrial datasets. Experimental results demonstrate
significant performance gains of our proposed method compared to
state-of-the-art methods. Further studies on sequence length confirm that our
method can model long behavioral sequences effectively and efficiently.
|
The multiple lobes of high-order Hermite-Gaussian (HG) laser modes differ in
terms of shape, size, and optical energy distribution. Here, we introduce a
generic numerical method that redistributes optical energy among the lobes of
high-order HG modes such that, in a controlled manner, the identical
low-intensity lobes become moderate- or high-intensity lobes and vice versa.
Further, modes whose multiple lobes exhibit only two types of intensity
distribution are transformed into modes with all high-intensity lobes.
Furthermore, in some cases, moderate-intensity lobes together with
high-intensity lobes become high-intensity lobes, and moderate-intensity lobes
together with low-intensity lobes become high-intensity lobes. Such controlled
modulation of optical energy may offer efficient and selective utilization of
each lobe of HG modes in applications like particle manipulation and optical
lithography, and the method can be used in other fields like nonlinear
frequency conversion and the shaping of ultrafast optical pulses.
|
The Internet of Things (IoT) is now omnipresent in all aspects of life and
provides a large number of potentially critical services. For this, the IoT
relies on the data collected by objects, so data integrity is essential.
Unfortunately, this integrity is threatened by a type of attack known as a
False Data Injection Attack, in which an attacker injects fabricated data into
a system to modify its behaviour. In this work, we dissect and present a method
that uses a Domain-Specific Language (DSL) to generate altered data, allowing
these attacks to be simulated and tested.
|
We consider the problem of empirical risk minimization given a database,
using the gradient descent algorithm. We note that the function to be optimized
may be non-convex and contain saddle points that impede the convergence of
the algorithm. A perturbed gradient descent algorithm is typically employed to
escape these saddle points. We show that this algorithm, that perturbs the
gradient, inherently preserves the privacy of the data. We then employ the
differential privacy framework to quantify the privacy hence achieved. We also
analyze the change in privacy with varying parameters such as problem dimension
and the distance between the databases.
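A minimal sketch of the kind of perturbed gradient descent discussed follows;
the parameter choices are hypothetical, and the Gaussian perturbation near
flat regions is the ingredient whose privacy the differential-privacy analysis
quantifies:

    import numpy as np

    def perturbed_gd(grad, x0, lr=1e-2, sigma=1e-2, thresh=1e-3, steps=10000):
        """Gradient descent that injects Gaussian noise when the gradient is
        small (i.e., near a saddle point or minimum), helping escape saddles;
        the same noise is the source of the privacy guarantee."""
        x = np.asarray(x0, dtype=float)
        rng = np.random.default_rng(0)
        for _ in range(steps):
            g = grad(x)
            if np.linalg.norm(g) < thresh:
                g = g + rng.normal(scale=sigma, size=x.shape)
            x = x - lr * g
        return x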
|
In situ generation of a high-energy, high-current, spin-polarized electron
beam is an outstanding scientific challenge to the development of plasma-based
accelerators for high-energy colliders. In this Letter we show how such a
spin-polarized relativistic beam can be produced by ionization injection of
electrons of certain atoms with a circularly polarized laser field into a
beam-driven plasma wakefield accelerator, providing a much desired one-step
solution to this challenge. Using time-dependent Schr\"odinger equation (TDSE)
simulations, we show that the propensity rule of spin-dependent ionization of xenon
atoms can be reversed in the strong-field multi-photon regime compared with the
non-adiabatic tunneling regime, leading to high total spin-polarization.
Furthermore, three-dimensional particle-in-cell (PIC) simulations are
incorporated with TDSE simulations, providing start-to-end simulations of
spin-dependent strong-field ionization of xenon atoms and subsequent trapping,
acceleration, and preservation of electron spin-polarization in lithium plasma.
We show the generation of a high-current (0.8 kA),
ultra-low-normalized-emittance (~37 nm), and high-energy (2.7 GeV) electron
beam within just 11 cm distance, with up to ~31% net spin polarization. Higher
current, energy, and net spin-polarization beams are possible by optimizing
this concept, thus solving a long-standing problem facing the development of
plasma accelerators.
|
Angular path integration is the ability of a system to estimate its own
heading direction from potentially noisy angular velocity (or increment)
observations. Non-probabilistic algorithms for angular path integration, which
rely on a summation of these noisy increments, do not appropriately take into
account the reliability of such observations, which is essential for properly
weighing one's current heading direction estimate against incoming
information. In a probabilistic setting, angular path integration can
be formulated as a continuous-time nonlinear filtering problem (circular
filtering) with observed state increments. The circular symmetry of heading
direction makes this inference task inherently nonlinear, thereby precluding
the use of popular inference algorithms such as Kalman filters and rendering
the problem analytically inaccessible. Here, we derive an approximate solution
to circular continuous-time filtering, which integrates state increment
observations while maintaining a fixed representation through both state
propagation and observational updates. Specifically, we extend the established
projection-filtering method to account for observed state increments and apply
this framework to the circular filtering problem. We further propose a
generative model for continuous-time angular-valued direct observations of the
hidden state, which we integrate seamlessly into the projection filter.
Applying the resulting scheme to a model of probabilistic angular path
integration, we derive an algorithm for circular filtering, which we term the
circular Kalman filter. Importantly, this algorithm is analytically accessible,
interpretable, and outperforms an alternative filter based on a Gaussian
approximation.
|
Semidefinite programs (SDPs) can be solved in polynomial time by interior
point methods. However, when the dimension of the problem gets large, interior
point methods become impractical both in terms of computational time and memory
requirements. First order methods, such as Alternating Direction Methods of
Multipliers (ADMMs), turned out to be suitable algorithms to deal with large
scale SDPs and gained growing attention during the past decade. In this paper,
we focus on an ADMM designed for SDPs in standard form and extend it to deal
with inequalities when solving SDPs in general form. This allows us to handle
SDP relaxations of classical combinatorial problems, such as the graph coloring
problem and the maximum clique problem, which we consider in our extensive
numerical experiments. The numerical results compare the proposed method,
equipped with a post-processing procedure, with the state-of-the-art solver
SDPNAL+.
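For concreteness, a toy ADMM for a standard-form SDP, min <C,X> s.t. A(X)=b,
X PSD, can be sketched as below; this is a simplified textbook-style splitting
for illustration, not the paper's algorithm or its inequality extension:

    import numpy as np

    def proj_psd(M):
        # Euclidean projection onto the PSD cone via eigendecomposition.
        w, V = np.linalg.eigh((M + M.T) / 2)
        return (V * np.maximum(w, 0)) @ V.T

    def admm_sdp(C, A_list, b, rho=1.0, iters=500):
        # Split the affine constraints (on X) from the PSD cone (on Z), X = Z.
        n = C.shape[0]
        A = np.stack([Ai.ravel() for Ai in A_list])  # m x n^2, assumed full rank
        AAt = A @ A.T
        Z = np.zeros((n, n)); U = np.zeros((n, n))
        for _ in range(iters):
            # X-update: the minimizer of <C,X> + rho/2 ||X - Z + U||^2 subject
            # to A(X) = b is the projection of Z - U - C/rho onto {A(X) = b}.
            v = (Z - U - C / rho).ravel()
            x = v + A.T @ np.linalg.solve(AAt, b - A @ v)
            X = x.reshape(n, n); X = (X + X.T) / 2
            Z = proj_psd(X + U)   # Z-update: PSD projection
            U = U + X - Z         # scaled dual update
        return Z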
|
Non-Gaussian continuous variable states play a central role both in the
foundations of quantum theory and for emergent quantum technologies. In
particular, "cat states", i.e., two-component macroscopic quantum
superpositions, embody quantum coherence in an accessible way and can be
harnessed for fundamental tests and quantum information tasks alike. Degenerate
optical parametric oscillators can naturally produce single-mode cat states and
thus represent a promising platform for their realization and harnessing. We
show that a dissipative coupling between degenerate optical parametric
oscillators extends this to two-mode entangled cat states, which are naturally
produced under such a coupling.
While overcoming single-photon loss still represents a major challenge towards
the realization of sufficiently pure single-mode cat states in degenerate
optical parametric oscillators, we show that the generation of two-mode
entangled cat states under such dissipative coupling can then be achieved
without additional hurdles. We numerically explore the parameter regime for the
successful generation of transient two-mode entangled cat states in two
dissipatively coupled degenerate optical parametric oscillators. To certify the
cat-state entanglement, we employ a tailored, variance-based entanglement
criterion, which can robustly detect cat-state entanglement under realistic
conditions.
|
The interpolation of spatial data can be of tremendous value in various
applications, such as forecasting weather from only a few measurements of
meteorological or remote sensing data. Existing methods for spatial
interpolation, such as variants of kriging and spatial autoregressive models,
tend to suffer from at least one of the following limitations: (a) the
assumption of stationarity, (b) the assumption of isotropy, and (c) the
trade-off between modelling local and global spatial interaction. Addressing
these issues in this work, we propose the use of Markov reward processes (MRPs)
as a spatial interpolation method, and we introduce three variants thereof: (i)
a basic static discount MRP (SD-MRP), (ii) an accurate but mostly theoretical
optimised MRP (O-MRP), and (iii) a transferable weight prediction MRP (WP-MRP).
All variants of MRP interpolation operate locally, while also implicitly
accounting for global spatial relationships in the entire system through
recursion. Additionally, O-MRP and WP-MRP no longer assume stationarity and are
robust to anisotropy. We evaluated our proposed methods by comparing the mean
absolute errors of their interpolated grid cells to those of 7 common
baselines, selected from models based on spatial autocorrelation, (spatial)
regression, and deep learning.
We performed detailed evaluations on two publicly available datasets (local
GDP values, and COVID-19 patient trajectory data). The results from these
experiments clearly show the competitive advantage of MRP interpolation, which
achieved significantly lower errors than the existing methods in 23 out of 40
experimental conditions, or 35 out of 40 when including O-MRP.
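To give intuition for the simplest variant, a minimal sketch of
static-discount MRP (SD-MRP) interpolation on a regular grid could look as
follows; this is our simplified reading of the idea, and the parameter names
are ours:

    import numpy as np

    def sd_mrp(grid, known_mask, gamma=0.9, iters=200):
        """Known cells keep their observed value; every unknown cell is
        repeatedly set to gamma times the mean of its four neighbours, so
        global structure propagates through purely local recursive updates."""
        v = np.where(known_mask, grid, 0.0)
        for _ in range(iters):
            padded = np.pad(v, 1, mode="edge")
            neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
            v = np.where(known_mask, grid, gamma * neigh)
        return v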
|
Metallography is crucial for a proper assessment of a material's properties.
It mainly involves the investigation of the spatial distribution of grains and
the occurrence and characteristics of inclusions or precipitates. This work
presents a holistic artificial intelligence model for anomaly detection that
automatically quantifies the degree of anomaly of impurities in alloys. We
suggest the following examination process: (1) Deep semantic segmentation is
performed on the inclusions (based on a suitable metallographic database of
alloys and corresponding tags of inclusions), producing inclusions masks that
are saved into a separated database. (2) Deep image inpainting is performed to
fill the removed inclusions parts, resulting in 'clean' metallographic images,
which contain the background of grains. (3) Grains' boundaries are marked using
deep semantic segmentation (based on another metallographic database of
alloys), producing boundaries that are ready for further inspection on the
distribution of grains' size. (4) Deep anomaly detection and pattern
recognition are performed on the inclusion masks to determine spatial, shape,
and area anomalies of the inclusions. Finally, the system recommends areas of
interest to an expert for further examination. The performance of the
model is presented and analyzed based on a few representative cases. Although the
models presented here were developed for metallography analysis, most of them
can be generalized to a wider set of problems in which anomaly detection of
geometrical objects is desired. All models, as well as the datasets that were
created for this work, are publicly available at
https://github.com/Scientific-Computing-Lab-NRCN/MLography.
|
The intensity of the Cosmic UV background (UVB), coming from all sources of
ionising photons such as star-forming galaxies and quasars, determines the
thermal evolution and ionization state of the intergalactic medium (IGM) and
is, therefore, a critical ingredient for models of cosmic structure formation.
Most of the previous estimates are based on the comparison between observed and
simulated Lyman-$\alpha$ forest. We present the results of an independent
method to constrain the product of the UVB photoionisation rate and the
covering fraction of Lyman limit systems (LLSs) by searching for the
fluorescent Lyman-$\alpha$ emission produced by self-shielded clouds. Because
the expected surface brightness is well below current sensitivity limits for
direct imaging, we developed a new method based on three-dimensional stacking
of the IGM around Lyman-$\alpha$ emitting galaxies (LAEs) between 2.9<z<6.6
using deep MUSE observations. Combining our results with covering fractions of
LLSs obtained from mock cubes extracted from the EAGLE simulation, we obtain
new and independent constraints on the UVB at z>3 that are consistent with
previous measurements, with a preference for relatively low UVB intensities at
z=3, and which suggest a non-monotonic decrease of $\Gamma_{\rm HI}$ with
increasing redshift between 3<z<5. This could indicate a possible tension
between some UVB models and current observations, which, however, requires
deeper and wider observations in Lyman-$\alpha$ emission and absorption to be
confirmed. Assuming instead a value of the UVB from current models, our results
constrain the covering fraction of LLSs at 3<z<4.5 to be less than 25% within
150 kpc from LAEs.
|
The ability to accurately detect and localize objects is recognized as one of
the most important requirements for the perception of self-driving cars. From
2D to 3D object detection, the most difficult task is to determine the distance
from the ego-vehicle to objects. Expensive technology like LiDAR can provide
precise and accurate depth information, so most studies have tended to focus on
this sensor, showing a performance gap between LiDAR-based and camera-based
methods. Although many authors have investigated how to fuse LiDAR with RGB
cameras, as far as we know there are no studies fusing LiDAR and stereo in a
deep neural network for the 3D object detection task. This paper presents
SLS-Fusion, a new approach to fuse data from 4-beam LiDAR and a stereo camera
via a neural network for depth estimation to achieve better dense depth maps
and thereby improves 3D object detection performance. Since 4-beam LiDAR is
cheaper than the well-known 64-beam LiDAR, this approach is also classified as
a low-cost sensors-based method. Through evaluation on the KITTI benchmark, it
is shown that the proposed method significantly improves depth estimation
performance compared to a baseline method. Also, when applied to 3D object
detection, a new state of the art among low-cost sensor-based methods is achieved.
|
We experimentally demonstrate that when three single photons are transmitted
through two polarization channels, in a well-defined pre- and postselected
ensemble, no two photons are found in the same polarization channel under
weak-strength measurement, a counter-intuitive quantum counting effect called
the quantum pigeonhole paradox. We further show that this effect breaks down in
second-order measurement. These results establish the existence of the quantum
pigeonhole paradox and its operating regime.
|
HCI and NLP traditionally focus on different evaluation methods. While HCI
involves a small number of people directly and deeply, NLP traditionally relies
on standardized benchmark evaluations that involve a larger number of people
indirectly. We present five methodological proposals at the intersection of HCI
and NLP and situate them in the context of ML-based NLP models. Our goal is to
foster interdisciplinary collaboration and progress in both fields by
emphasizing what the fields can learn from each other.
|
We study distribution-dependent stochastic differential equations driven by a
continuous process, without any specification of its law, following the
approach initiated in [16]. We provide several criteria for existence and
uniqueness of solutions which go beyond the classical globally Lipschitz
setting. In particular we show well-posedness of the equation, as well as
almost sure convergence of the associated particle system, for drifts
satisfying either Osgood-continuity, monotonicity, local Lipschitz or Sobolev
differentiability type assumptions.
|
We demonstrate Raman sideband thermometry of single carbyne chains confined
in double-walled carbon nanotubes. Our results show that carbyne's record-high
Raman scattering cross section enables anti-Stokes Raman measurements at the
single chain level. Using laser irradiation as a heating source, we exploit the
temperature dependence of the anti-Stokes/Stokes ratio for local temperature
sensing. Due to its molecular size and large Raman cross section, carbyne is
an efficient probe for local temperature monitoring, with applications ranging
from nanoelectronics to biology.
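The temperature dependence exploited here is the textbook anti-Stokes/Stokes
intensity ratio; for a phonon of frequency $\omega_{ph}$ and a laser of
frequency $\omega_L$ it is commonly written as

$$\frac{I_{AS}}{I_{S}} = \left(\frac{\omega_L + \omega_{ph}}{\omega_L - \omega_{ph}}\right)^{4} \exp\!\left(-\frac{\hbar\omega_{ph}}{k_B T}\right),$$

so that measuring the ratio (after calibrating prefactor and cross-section
asymmetries) yields the local temperature $T$.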
|
In this paper we present our preliminary work on model-based behavioral
analysis of horse motion. Our approach is based on the SMAL model, a 3D
articulated statistical model of animal shape. We define a novel SMAL model for
horses based on a new template, skeleton and shape space learned from $37$
horse toys. We test the accuracy of our hSMAL model in reconstructing a horse
from 3D mocap data and images. We apply the hSMAL model to the problem of
lameness detection from video, where we fit the model to images to recover 3D
pose and train an ST-GCN network on pose data. A comparison with the same
network trained on mocap points illustrates the benefit of our approach.
|
This paper continues discussions in the author's previous paper about the
Misiurewicz polynomials defined for a family of degree $d \ge 2$ rational maps
with an automorphism group containing the cyclic group of order $d$. In
particular, we extend the sufficient conditions that the Misiurewicz
polynomials are irreducible over $\mathbb{Q}$. We also prove that the
Misiurewicz polynomials always have an irreducible factor of large degree.
|
If devices are physically accessible, optical fault injection attacks pose a
great threat, since both the data processed and the operation flow can be
manipulated. Successful physical attacks may lead not only to leakage of secret
information such as cryptographic private keys, but can also cause economic
damage, especially if, as a result of such a manipulation, a critical
infrastructure is successfully attacked. Laser-based attacks exploit the
sensitivity of CMOS technologies to electromagnetic radiation in the visible or
the infrared spectrum. It can be expected that radiation-hard designs,
specially crafted for space applications, are more robust not only against
high-energy particles and short electromagnetic waves but also against optical
fault injection attacks. In this work we investigated the sensitivity of
radiation-hard JICG shift registers to optical fault injection attacks. In our
experiments, we were able to trigger bit-set and bit-reset events repeatedly,
changing the data stored in single JICG flip-flops despite their high radiation
fault tolerance.
|
Geometrical chirality is a universal phenomenon that is encountered on many
different length scales ranging from geometrical shapes of various living
organisms to protein and DNA molecules. The interaction of chiral matter with
chiral light, that is, an electromagnetic field possessing a certain handedness,
underlies our ability to discriminate enantiomers of chiral molecules. In this
context, it is often desired to have an optical cavity that would efficiently
couple to only a specific (right or left) molecular enantiomer, and not couple
to the opposite one. Here, we demonstrate a single-handedness chiral optical
cavity supporting only an eigenmode of a given handedness without the presence
of modes of other helicity. Resonant excitation of the cavity with light of
appropriate handedness enables formation of a helical standing wave with a
uniform chirality density, while the opposite handedness does not cause any
resonant effects. Furthermore, only chiral emitters of the matching handedness
efficiently interact with such a chiral eigenmode, enabling
handedness-selective light-matter coupling. The proposed system
expands the set of tools available for investigations of chiral matter and
opens the door to studies of chiral electromagnetic vacuum.
|
In this article, we consider the estimation of the marginal distributions when
data are recorded in pairs, with unobserved order within each pair. New
estimators are proposed and their asymptotic properties are established by
proving a Glivenko-Cantelli theorem and a functional central limit result.
Results from a simulation study are included, and we illustrate the
applicability of the method on homologous chromosome data.
|
Pitting corrosion is a much-studied and technologically relevant subject.
However, the fundamental mechanisms responsible for the breakdown of the
passivating oxide layer are still subjects of debate. Chloride anions are known
to accelerate corrosion; relevant hypotheses include Cl insertion into
positively charged oxygen vacancies in the oxide film, and Cl adsorption on
passivating oxide surfaces, substituting for surface hydroxyl groups. In this
work, we conduct large-scale first principles modeling of explicit
metal/Al$_2$O$_3$ interfaces to investigate the energetics and electronic
structures associated with these hypotheses. The explicit interface models
allow electron transfer that mimics electrochemical events, and the
establishment of the relation between atomic structures at different interfaces
and the electronic band alignment. For multiple model interfaces, we find that
doubly charged oxygen vacancies, which are key ingredients of the point defect
model (PDM) often used to analyze corrosion data, can only occur in the
presence of a potential gradient that raises the voltage. Cl-insertion into
oxide films can be energetically favorable in some oxygen vacancy sites,
depending on the voltage. We also discuss the challenges associated with
explicit DFT modeling of these complex interfaces.
|
The LHC is undergoing a high luminosity upgrade, which is set to increase the
instantaneous luminosity by at least a factor of five, resulting in a higher
muon flux rate in the forward region, which will overwhelm the current trigger
system of the CMS experiment. The ME0, a gas electron multiplier detector, is
proposed for the Phase-2 Muon System Upgrade to help increase the muon
acceptance and to control the Level 1 muon trigger rate. To lower the
probability of HV discharges, the ME0 was designed with GEM foils that are
segmented on both sides. Initial testing of the ME0 showed substantial
crosstalk between readout sectors. Here, we investigate, characterize, and
quantify the crosstalk in the detector, and estimate the performance of the
chamber as a result of this crosstalk via simulation of the detector dead time,
efficiency loss, and frontend electronics response. The results of crosstalk
via signals produced by applying a square voltage pulse directly on the readout
strips of the detector with a pulser are summarized, and the efficacy of
various mitigation strategies is presented. The crosstalk is a result of
capacitive coupling between the readout strips on the readout board and between
the readout strips and the bottom of GEM3. The crosstalk also generally follows
a pattern where the largest magnitude of crosstalk is within the same azimuthal
readout segment in the detector and in the nearest horizontal segments. The use
of bypass capacitors and larger HV segments successfully reduce the crosstalk:
we observe a maximum decrease of crosstalk in sectors previously experiencing
crosstalk from $(1.66\pm0.03)\%$ to $(1.11\pm0.02)\%$ with all HV segments
connected in parallel on the bottom of GEM3, with an HV low-pass filter, and an
HV divider. These mitigation strategies slightly increase crosstalk
$\big(\hspace{-0.1cm}\lessapprox 0.4\%\big)$ in readout sectors farther away.
|
High-dimensional black-box optimisation remains an important yet notoriously
challenging problem. Despite the success of Bayesian optimisation methods on
continuous domains, domains that are categorical, or that mix continuous and
categorical variables, remain challenging. We propose a novel solution -- we
combine local optimisation with a tailored kernel design, effectively handling
high-dimensional categorical and mixed search spaces, whilst retaining sample
efficiency. We further derive a convergence guarantee for the proposed approach.
Finally, we demonstrate empirically that our method outperforms the current
baselines on a variety of synthetic and real-world tasks in terms of
performance, computational costs, or both.
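One common way to build a kernel over mixed categorical/continuous search
spaces, in the spirit of (though not identical to) the tailored kernel proposed
here, combines an RBF kernel on the continuous part with an overlap kernel on
the categorical part; a generic sketch:

    import numpy as np

    def mixed_kernel(x1, x2, c1, c2, ls=1.0, lam=0.5):
        # RBF kernel on the continuous coordinates x1, x2 ...
        k_cont = np.exp(-np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
                        / (2 * ls ** 2))
        # ... and an overlap (Hamming) kernel on the categorical ones c1, c2.
        k_cat = np.mean(np.asarray(c1) == np.asarray(c2))
        # Blend additive and multiplicative combinations with weight lam.
        return lam * (k_cont + k_cat) / 2 + (1 - lam) * k_cont * k_cat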
|
Tournaments are a widely used mechanism to rank alternatives in a noisy
environment. This paper investigates a fundamental issue of economics in
tournament design: what is the best usage of limited resources, that is, how
should the alternatives be compared pairwise to best approximate their true but
latent ranking. We consider various formats including knockout tournaments,
multi-stage championships consisting of round-robin groups followed by single
elimination, and the Swiss-system. They are evaluated via Monte-Carlo
simulations under six different assumptions on winning probabilities. Comparing
the same pair of alternatives multiple times turns out to be an inefficacious
policy. While seeding can increase the efficacy of the knockout and group-based
designs, its influence remains marginal unless one has an unrealistically good
estimate of the true ranking of the players. The Swiss-system is found to be
the most accurate among all these tournament formats, especially in its ability
to rank all participants. A possible explanation is that it does not eliminate
a player after a single loss, while it takes the history of the comparisons
into account. The results can be especially interesting for emerging esports,
where the tournament designs are not yet solidified.
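As a flavour of the simulation setup, a knockout bracket under one simple
winning-probability assumption (a Bradley-Terry-style model; the paper
evaluates six different assumptions) can be simulated as follows:

    import numpy as np

    def simulate_knockout(strengths, rng):
        # Single-elimination bracket; assumes a power-of-two field.
        players = list(range(len(strengths)))
        while len(players) > 1:
            nxt = []
            for a, b in zip(players[::2], players[1::2]):
                p_a = strengths[a] / (strengths[a] + strengths[b])
                nxt.append(a if rng.random() < p_a else b)
            players = nxt
        return players[0]  # the champion

    rng = np.random.default_rng(0)
    strengths = np.arange(8, 0, -1)  # player 0 is the strongest
    wins = sum(simulate_knockout(strengths, rng) == 0 for _ in range(10000))
    print(f"P(best player wins) ~ {wins / 10000:.3f}")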
|
Context: Software startups develop innovative, software-intensive products.
Given the uncertainty associated with such an innovative context,
experimentation is a valuable approach for these companies, especially in the
early stages of the development, when implementing unnecessary features
represents a higher risk for companies' survival. Nevertheless, researchers
have argued that the lack of clearly defined practices led to limited adoption
of experimentation. In this regard, the first step is to define the hypotheses
based on which teams will create experiments. Objective: We aim to develop a
systematic technique to identify hypotheses for early-stage software startups.
Methods: We followed a Design Science approach consisting of three cycles in
the construction phase, which involved seven startups in total, and an
evaluation of the final artifact within three startups. Results: We developed HyMap, a
hypotheses elicitation technique based on cognitive mapping. It consists of a
visual language to depict a cognitive map representing the founder's
understanding of the product, and a process to elicit this map consisting of a
series of questions the founder must answer. Our evaluation showed that the
artifacts are clear, easy to use, and useful, leading to hypotheses and
helping founders visualize their idea. Conclusion: Our study
contributes to both descriptive and prescriptive bodies of knowledge. Regarding
the first, it provides a better understanding of the guidance founders use to
develop their startups and, for the latter, a technique to identify hypotheses
in early-stage software startups.
|
We prove an algebraic version of the Hamilton-Tian Conjecture for all log
Fano pairs. More precisely, we show that any log Fano pair admits a canonical
two-step degeneration to a reduced uniformly Ding stable triple, which admits a
K\"ahler-Ricci soliton when the ground field $\mathbb{k}=\mathbb{C}$.
|
In the text processing context, most ML models are built on word embeddings.
These embeddings are themselves trained on some datasets, potentially
containing sensitive data. In some cases this training is done independently;
in other cases, it occurs as part of training a larger, task-specific model. In
either case, it is of interest to consider membership inference attacks based
on the embedding layer as a way of understanding sensitive information leakage.
But, somewhat surprisingly, membership inference attacks on word embeddings,
and their effect on other natural language processing (NLP) tasks that use
these embeddings, have remained relatively unexplored.
In this work, we show that word embeddings are vulnerable to black-box
membership inference attacks under realistic assumptions. Furthermore, we show
that this leakage persists through two other major NLP applications:
classification and text-generation, even when the embedding layer is not
exposed to the attacker. We show that our MI attack achieves high attack
accuracy against a classifier model and an LSTM-based language model. Indeed,
our attack is a cheaper membership inference attack on text-generative models,
which does not require the knowledge of the target model or any expensive
training of text-generative models as shadow models.
|
In this work we study the time complexity for the search of local minima in
random graphs whose vertices have i.i.d. cost values. We show that, for
Erd\"os-R\'enyi graphs with connection probability given by $\lambda/n^\alpha$
(with $\lambda > 0$ and $0 < \alpha < 1$), a family of local algorithms that
approximate a gradient descent find local minima faster than the full gradient
descent. Furthermore, we find a probabilistic representation for the running
time of these algorithms leading to asymptotic estimates of the mean running
times.
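For intuition, the full gradient descent baseline on such a graph, which the
local algorithms approximate and outperform, amounts to repeatedly moving to
the cheapest neighbour; a sketch using networkx, with illustrative parameter
values:

    import numpy as np
    import networkx as nx

    def local_descent(G, costs, start):
        """Move to the cheapest neighbour until no neighbour is cheaper,
        i.e., until a local minimum of the vertex costs is reached."""
        v, steps = start, 0
        while True:
            nbrs = list(G.neighbors(v))
            if not nbrs:
                return v, steps
            best = min(nbrs, key=lambda u: costs[u])
            if costs[best] >= costs[v]:
                return v, steps  # v is a local minimum
            v, steps = best, steps + 1

    n, lam, alpha = 1000, 1.0, 0.5
    rng = np.random.default_rng(0)
    G = nx.gnp_random_graph(n, lam / n**alpha, seed=0)
    costs = rng.random(n)  # i.i.d. vertex costs
    print(local_descent(G, costs, start=0))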
|
Spatially separating electrons of different spins and efficiently generating
spin currents are crucial steps towards building practical spintronics devices.
Transverse magnetic focusing is a potential technique to accomplish both those
tasks. In a material where there is significant Rashba spin-orbit interaction,
electrons of different spins will traverse different paths in the presence of
an external magnetic field. Experiments have demonstrated the viability of this
technique by measuring conductance spectra that indicate the separation of
spin-up and spin-down electrons. However, the effect that the geometry of the
leads has on these measurements is not well understood. We show that the
resolution of features in the conductance spectra is affected by the shape,
separation and width of the leads. Furthermore, the number of subbands occupied
by the electrons in the leads affects the ratio between the amplitudes of the
spin-split peaks in the spectra. We simulated devices with random onsite
potentials and observed that transverse magnetic focusing devices are sensitive
to disorder. Ultimately we show that careful choice and characterisation of
device geometry is crucial for correctly interpreting the results of transverse
magnetic focusing experiments.
|
Predicting materials properties from composition or structure is of great
interest to the materials science community. Deep learning has recently
garnered considerable interest in materials predictive tasks with low model
errors when dealing with large materials data. However, deep learning models
suffer in the small data regime that is common in materials science. Here we
leverage the transfer learning concept and the graph network deep learning
framework and develop the AtomSets machine learning framework for consistently
high model accuracy on both small and large materials data. The AtomSets models
can work with both compositional and structural materials data. By combining
with transfer learned features from graph networks, they can achieve
state-of-the-art accuracy from small compositional data (<400) to large
structural data (>130,000). The AtomSets models show much lower errors than the
state-of-the-art graph network models at small data limits and the classical
machine learning models at large data limits. They also transfer better in the
simulated materials discovery process where the targeted materials have
property values out of the training data limits. The models require minimal
domain knowledge inputs and are free from feature engineering. The presented
AtomSets model framework opens new routes for machine learning-assisted
materials design and discovery.
|
We present an analytical model to identify thin discs in galaxies, and apply
this model to a sample of SDSS MaNGA galaxies. This model fits the velocity and
velocity dispersion fields of galaxies with regular kinematics. By introducing
two parameters $\zeta$ related to the comparison of the model's asymmetric
drift correction to the observed gas kinematics and $\eta$ related to the
dominant component of a galaxy, we classify the galaxies in the sample as
"disc-dominated", "non-disc-dominated", or "disc-free" indicating galaxies with
a dominating thin disc, a non-dominating thin disc, or no thin disc detection
with our method, respectively. The dynamical mass resulting from our model
correlates with stellar mass, and we investigate discrepancies by including gas
mass and variation of the initial mass function. As expected, most spiral
galaxies in the sample are disc-dominated, while ellipticals are predominantly
disc-free. Lenticular galaxies show a dichotomy in their kinematic
classification, which is related to their different star formation rates and
gas fractions. We propose two possible scenarios to explain these results. In
the first scenario, disc-free lenticulars formed in more violent processes than
disc-dominated ones, while in the second scenario, the quenching processes in
lenticulars lead to a change in their kinematic structures as disc-dominated
lenticulars evolve to disc-free ones.
|
Motivated by applications from computer vision to bioinformatics, the field
of shape analysis deals with problems where one wants to analyze geometric
objects, such as curves, while ignoring actions that preserve their shape, such
as translations, rotations, or reparametrizations. Mathematical tools have been
developed to define notions of distances, averages, and optimal deformations
for geometric objects. One such framework, which has proven to be successful in
many applications, is based on the square root velocity (SRV) transform, which
allows one to define a computable distance between spatial curves regardless of
how they are parametrized. This paper introduces a supervised deep learning
framework for the direct computation of SRV distances between curves, which
usually requires an optimization over the group of reparametrizations that act
on the curves. The benefits of our approach in terms of computational speed and
accuracy are illustrated via several numerical experiments.
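For reference, the SRV transform of a discretized curve, and the
(reparametrization-naive) L2 distance between two transforms, can be computed
as below; the expensive step that the network replaces is the additional
minimization over reparametrizations, which this sketch omits:

    import numpy as np

    def srv(curve):
        # curve: (n, d) array of points sampled uniformly on [0, 1].
        n = len(curve)
        deriv = np.gradient(curve, 1.0 / (n - 1), axis=0)
        speed = np.linalg.norm(deriv, axis=1)
        # q(t) = c'(t) / sqrt(|c'(t)|), with a guard against zero speed.
        return deriv / np.sqrt(np.maximum(speed, 1e-12))[:, None]

    def srv_l2(c1, c2):
        q1, q2 = srv(c1), srv(c2)
        dt = 1.0 / (len(c1) - 1)
        return np.sqrt(np.sum((q1 - q2) ** 2) * dt)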
|
Perhaps the most explored hypotheses for the accelerated cosmic expansion
rate arise in the context of extra fields or modifications to General
Relativity. A prevalent approach is to parameterise the expansion history
through the equation of state, $\omega(z)$. We present a parametric form for
$\omega(z)$ that can reproduce the generic behaviour of the most widely used
physical models for accelerated expansion with infrared corrections. The
present proposal has at most 3 free parameters which can be mapped back to
specific archetypal models for dark energy. We analyze in detail how different
combinations of data can constrain the specific cases embedded in our form for
$\omega(z)$. We implement our parametric equation for $\omega(z)$ to
observations from CMB, luminous distance of SNeIa, cosmic chronometers, and
baryon acoustic oscillations identified in galaxies and in the Lyman-$\alpha$
forest. We find that the parameters can be well constrained by using different
observational data sets. Our findings point to an oscillatory behaviour which
is consistent with an $f(R)$-like model or an unknown combination of scalar
fields. When we let the three parameters vary freely, we find an EOS which
oscillates around the phantom divide line, and, with over 99$\%$
confidence, the cosmological constant solution is disfavored.
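For context, any parametric equation of state $\omega(z)$ enters the
background expansion history in the standard way (flat universe, radiation
neglected),

$$H^2(z) = H_0^2\left[\Omega_m (1+z)^3 + \Omega_{DE}\,\exp\!\left(3\int_0^z \frac{1+\omega(z')}{1+z'}\,dz'\right)\right],$$

which is the relation through which the parameters of $\omega(z)$ are
confronted with the distance and expansion-rate data listed above.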
|
We introduce the Macaulay2 package $\mathtt{LinearTruncations}$ for finding
and studying the truncations of a multigraded module over a standard
multigraded ring that have linear resolutions.
|
As a critical component of online advertising and marketing, click-through
rate (CTR) prediction has drawn a lot of attention from both industry and
academia. Recently, deep learning has become the mainstream methodological
choice for CTR. Despite sustained efforts, existing approaches still pose
several challenges. On the one hand, high-order interaction between the
features is under-explored. On the other hand, high-order interactions may
neglect the semantic information from the low-order fields. In this paper, we
propose a novel prediction method, named FINT, that employs a Field-aware
INTeraction layer which captures high-order feature interactions while
retaining the low-order field information. To empirically investigate the
effectiveness and robustness of FINT, we perform extensive experiments on three
real-world datasets: KDD2012, Criteo, and Avazu. The obtained results
demonstrate that FINT can significantly improve performance compared to the
existing methods, without increasing the amount of computation required.
Moreover, the proposed method brought about a 2.72\% increase in the
advertising revenue of a big online video app through A/B testing. To better
promote research in the CTR field, we release our code as well as a reference
implementation at: https://github.com/zhishan01/FINT.
|
In this paper, the zero-forcing (ZF) precoder with max-min power allocation
is proposed for cell-free millimeter wave (mmWave) massive multiple-input
multiple-output (MIMO) systems using low-resolution digital-to-analog
converters (DACs) with limited-capacity fronthaul links. The proposed power
allocation aims to achieve max-min fairness on the achievable rate lower bounds
of the users obtained by the additive quantization noise model (AQNM), which
mimics the effect of low-resolution DACs. To solve the max-min power allocation
problem, an alternating optimization (AO) method is proposed, which is
guaranteed to converge because the global optima of the subproblems that
constitute the original problem are attained at each AO iteration. The
performance of cell-free and small-cell systems is explored in the simulation
results, which suggest that not-too-small fronthaul capacity suffices for
cell-free systems to outperform small-cell systems.
|
We consider scattering theory of the Laplace-Beltrami operator on
differential forms on a Riemannian manifold that is Euclidean near infinity.
Allowing for compact boundaries of low regularity, we prove a Birman-Krein
formula on the space of co-closed differential forms. In the case of dimension
three this reduces to a Birman-Krein formula in Maxwell scattering.
|
The evolution by horizontal mean curvature flow (HMCF) is a partial
differential equation in a sub-Riemannian setting with application in IT and
neurogeometry (see Citti-Franceschiello-Sanguinetti-Sarti, 2016).
Unfortunately, this equation is difficult to study, since the horizontal normal
is not always well defined. To overcome this problem, the Riemannian
approximation was introduced. In this article we define a stochastic
representation of the solution of the approximated Riemannian mean curvature
flow and prove that it is a solution in the viscosity sense of the approximated
mean curvature flow, generalizing the result of Dirr-Dragoni-von Renesse, 2010.
|
The advent of the transformer has sparked a quick growth in the size of
language models, far outpacing hardware improvements. (Dense) transformers are
expected to reach the trillion-parameter scale in the near future, for which
training requires thousands or even tens of thousands of GPUs. We investigate
the challenges of training at this scale and beyond on commercially available
hardware. In particular, we analyse the shortest possible training time for
different configurations of distributed training, leveraging empirical scaling
laws for language models to estimate the optimal (critical) batch size.
Contrary to popular belief, we find no evidence for a memory wall, and instead
argue that the real limitation -- other than the cost -- lies in the training
duration.
In addition to this analysis, we introduce two new methods, \textit{layered
gradient accumulation} and \textit{modular pipeline parallelism}, which
together cut the shortest training time by half. The methods also reduce data
movement, lowering the network requirement to a point where a fast InfiniBand
connection is not necessary. This increased network efficiency also improves on
the methods introduced with the ZeRO optimizer, reducing the memory usage to a
tiny fraction of the available GPU memory.
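For reference, plain gradient accumulation, the baseline that layered gradient
accumulation reorders to overlap communication with computation, looks like
this in PyTorch (a generic sketch with a dummy model, not the paper's
implementation):

    import torch

    model = torch.nn.Linear(16, 1)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    accum = 4  # micro-batches accumulated per optimizer step

    for step in range(100):
        opt.zero_grad()
        for _ in range(accum):
            x, y = torch.randn(8, 16), torch.randn(8, 1)
            loss = torch.nn.functional.mse_loss(model(x), y) / accum
            loss.backward()  # gradients accumulate across micro-batches
        opt.step()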
|
We investigated the out-of-plane transport properties of parent and
chemically substituted BaFe$_{2}$As$_{2}$ for various types of substitution.
Based on the studies of Hall coefficient and chemical-substitution effect, we
have clarified the origin for the unusual temperature dependence of
out-of-plane resistivity $\rho_c(T)$ in the high-temperature
paramagnetic-tetragonal phase. Electron (hole) carriers have an incoherent
(coherent) character, which is responsible for non-metallic (metallic)
$\rho_c(T)$. Although the electron and hole contributions are almost
comparable, a slightly larger contribution comes from electrons at high
temperatures and from holes at low temperatures, resulting in a maximum in
$\rho_c(T)$. In the low-temperature antiferromagnetic-orthorhombic phase, the
major effect of substitution is to increase the residual-resistivity component,
as in the case for the in-plane transport. In particular, Co atoms substituted
for Fe give rise to strong scattering with large $\mathit{ac}$ anisotropy. We
found that K substitution induces a non-metallic behavior in $\rho_c(T)$ at low
temperatures, which is likely due to a weakly localized nature along the
$c$-axis direction.
|
We study the problem of determining elements of the Selberg class from
information on the coefficients of the Dirichlet series at the squares of
primes, or from information about the zeros of the functions.
|
Clinicians often do not sufficiently adhere to evidence-based clinical
guidelines in a manner sensitive to the context of each patient. It is
important to detect such deviations, typically including redundant or missing
actions, even when the detection is performed retrospectively, so as to inform
both the attending clinician and policy makers. Furthermore, it would be
beneficial to detect such deviations in a manner proportional to the level of
the deviation, and not to simply use arbitrary cut-off values. In this study,
we introduce a new approach for automated guideline-based quality assessment of
the care process, the bidirectional knowledge-based assessment of compliance
(BiKBAC) method. Our BiKBAC methodology assesses the degree of compliance when
applying clinical guidelines, with respect to multiple different aspects of the
guideline (e.g., the guideline's process and outcome objectives). The
assessment is performed through a highly detailed, automated quality-assessment
retrospective analysis, which compares a formal representation of the guideline
and of its process and outcome intentions (we use the Asbru language for that
purpose) with the longitudinal electronic medical record of its continuous
application over a significant time period, using both a top-down and a
bottom-up approach, which we explain in detail. Partial matches of the data to
the process and to the outcome objectives are resolved using fuzzy temporal
logic. We also introduce the DiscovErr system, which implements the BiKBAC
approach, and present its detailed architecture. The DiscovErr system was
evaluated in a separate study in the type 2 diabetes management domain, by
comparing its performance to a panel of three clinicians, with highly
encouraging results with respect to the completeness and correctness of its
comments.
|
The presence of a small concentration of in-plane Fe dopants in
La$_{1.87}$Sr$_{0.13}$Cu$_{0.99}$Fe$_{0.01}$O$_4$ is known to enhance
stripe-like spin and charge density wave (SDW and CDW) order, and suppress the
superconducting $T_c$. Here, we show that it also induces highly
two-dimensional (2D) superconducting correlations that have been argued to be
signatures of a new form of superconducting order, so-called pair-density-wave
(PDW) order. In addition, using resonant soft x-ray scattering, we find
that the 2D superconducting fluctuation is strongly associated with the CDW
stripe. In particular, the PDW signature first appears when the correlation
length of the CDW stripe grows over eight times the lattice unit ($\sim$ 8$a$).
These results provide critical conditions for the formation of PDW order.
|
Hyperbolic metamaterials (HMMs) are highly anisotropic optical materials that
behave as metals or as dielectrics depending on the direction of propagation of
light. They are becoming essential for a plethora of applications, ranging from
aerospace to automotive, from wireless to medical and IoT. These applications
often work in harsh environments or may sustain remarkable external stresses.
This calls for materials that show enhanced optical properties as well as
tailorable mechanical properties. Depending on their specific use, both hard
and ultrasoft materials could be required, although the combination with
optical hyperbolic response is rarely addressed. Here, we demonstrate the
possibility to combine optical hyperbolicity and tunable mechanical properties
in the same (meta)material, focusing on the case of extreme mechanical
hardness. Using high-throughput calculations from first principles and
effective medium theory, we explored a large class of layered materials with
hyperbolic optical activity in the near-IR and visible range, and we identified
a small number of ultrasoft and hard HMMs among more than 1800 combinations
of transition-metal rocksalt crystals. Once validated experimentally, this
new class of metamaterials may foster previously unexplored optical/mechanical
applications.
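
For illustration, the hyperbolicity screening described above can be reduced to a standard effective-medium check: a metal/dielectric multilayer is hyperbolic when the real parts of its in-plane and out-of-plane effective permittivities have opposite signs. Below is a minimal Python sketch of that criterion; the permittivity values are hypothetical placeholders, whereas the paper obtains them from first-principles calculations.

```python
import numpy as np

# Minimal effective-medium sketch for a metal/dielectric multilayer;
# permittivities here are made-up placeholders, not the paper's data.
def effective_permittivity(eps_m, eps_d, fill):
    """Effective tensor components for metal filling fraction `fill`."""
    eps_par = fill * eps_m + (1.0 - fill) * eps_d            # in-plane
    eps_perp = 1.0 / (fill / eps_m + (1.0 - fill) / eps_d)   # out-of-plane
    return eps_par, eps_perp

def is_hyperbolic(eps_par, eps_perp):
    # Hyperbolic dispersion: the two real parts have opposite signs.
    return np.real(eps_par) * np.real(eps_perp) < 0

# Example with a TiN-like metal and a generic rocksalt dielectric.
eps_par, eps_perp = effective_permittivity(-5 + 1j, 6 + 0j, fill=0.5)
print(is_hyperbolic(eps_par, eps_perp))  # True
```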
|
Interpretability of learning algorithms is crucial for applications involving
critical decisions, and variable importance is one of the main interpretation
tools. Shapley effects are now widely used to interpret both tree ensembles and
neural networks, as they can efficiently handle dependence and interactions in
the data, as opposed to most other variable importance measures. However,
estimating Shapley effects is a challenging task, because of the computational
complexity and the conditional expectation estimates. Accordingly, existing
Shapley algorithms have flaws: a costly running time, or a bias when input
variables are dependent. Therefore, we introduce SHAFF, SHApley eFfects via
random Forests, a fast and accurate Shapley effect estimate, even when input
variables are dependent. We show SHAFF efficiency through both a theoretical
analysis of its consistency, and the practical performance improvements over
competitors with extensive experiments. An implementation of SHAFF in C++ and R
is available online.
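
To make the estimation challenge concrete, the sketch below implements a generic permutation-sampling Shapley-effect estimator with a pick-freeze value function. It is only valid under independent inputs and is not the SHAFF algorithm itself (which relies on projected random forests and importance sampling to handle dependence efficiently); `predict`, `X`, and `X_fresh` are assumed placeholders.

```python
import numpy as np

# Permutation-sampling Shapley effects (Castro et al. style) with a
# pick-freeze value function; assumes independent inputs, which is
# precisely the restriction SHAFF removes.
def pick_freeze_value(predict, X, X_fresh, subset):
    """Estimate Var(E[f | X_S]) by sharing the columns in `subset`
    between two otherwise independent samples."""
    B = X_fresh.copy()
    B[:, subset] = X[:, subset]
    fa, fb = predict(X), predict(B)
    return np.mean(fa * fb) - fa.mean() * fb.mean()

def shapley_effects(predict, X, X_fresh, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order, prev, subset = rng.permutation(d), 0.0, []
        for j in order:
            subset.append(j)
            cur = pick_freeze_value(predict, X, X_fresh, subset)
            phi[j] += cur - prev        # marginal contribution of variable j
            prev = cur
    return phi / (n_perm * predict(X).var())  # normalized Shapley effects
```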
|
In the present work, we report the dynamics and geometrical features of the
plasma plume formed by the laser ablation of copper and graphite (carbon)
targets in the presence of transverse magnetic fields of different strengths.
This work emphasizes the effect of the atomic mass of the plume species on the
diamagnetic behaviour and the geometrical aspects of the plasma plume expanding
in the magnetic field. A time-resolved analysis of simultaneously captured
images along two directions orthogonal to the expansion axis is carried out for
a comparative study of the projected three-dimensional structures of the copper
and carbon plasma plumes. In the presence of the magnetic field, sharp
differences are observed between the copper and carbon plumes in the formation
of the diamagnetic cavity and of other structures. An elliptical cavity-like
structure is observed for the copper plasma plume, which attains a sharp
conical shape with increasing time delay or magnetic field strength. The split
carbon plasma plume, on the other hand, appears as a Y-shaped structure in the
presence of the magnetic field, and no cavity-like structure is observed for
the considered times and field strengths. Based on a modified energy-balance
relation for elliptic cylindrical geometry, we have also simulated the dynamics
of the plume, which are in close agreement with the observed plasma expansion
in the diamagnetic and non-diamagnetic regions.
|
Large labeled data sets are one of the essential basics of modern deep
learning techniques. Therefore, there is an increasing need for tools that
allow large amounts of data to be labeled as intuitively as possible. In this paper,
we introduce SALT, a tool to semi-automatically annotate RGB-D video sequences
to generate 3D bounding boxes for full six Degrees of Freedom (DoF) object
poses, as well as pixel-level instance segmentation masks for both RGB and
depth. Besides bounding box propagation through various interpolation
techniques, as well as algorithmically guided instance segmentation, our
pipeline also provides built-in pre-processing functionalities to facilitate
the data set creation process. By making full use of SALT, annotation time can
be reduced by a factor of up to 33.95 for bounding box creation and 8.55 for
RGB segmentation without compromising the quality of the automatically
generated ground truth.
|
Several language applications often require word semantics as a core part of
their processing pipeline, either as precise meaning inference or semantic
similarity. Multi-sense embeddings (M-SE) can be exploited for this important
requirement. M-SE seeks to represent each word by its distinct senses in
order to resolve the conflation of meanings of words as used in different
contexts. Previous works usually approach this task by training a model on a
large corpus and often ignore the effect and usefulness of the semantic
relations offered by lexical resources. However, even with large training data,
coverage of all possible word senses is still an issue. In addition, a
considerable fraction of contextual semantic knowledge is never learned,
because a huge number of possible distributional semantic structures are never
explored. In this paper, we leverage the rich semantic structures in WordNet
using a graph-theoretic walk technique over word senses to enhance the quality
of multi-sense embeddings. This algorithm composes enriched texts from the
original texts. Furthermore, we derive new distributional semantic similarity
measures for M-SE from prior ones. We adapt these measures to the word sense
disambiguation (WSD) aspect of our experiments. We report evaluation results on
11 benchmark datasets involving WSD and Word Similarity tasks and show that our
method for enhancing distributional semantic structures improves embedding
quality over the baselines. Despite the small training data, it achieves
state-of-the-art performance on some of the datasets.
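
As a concrete illustration of a sense walk over WordNet, here is a hedged Python sketch using NLTK; the relation set, walk length, and composition strategy are simplifications of ours, not necessarily the paper's exact procedure.

```python
import random
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

# Simplified sense walk: for each token, pick a sense and take a few random
# steps over WordNet relations, appending the visited lemmas to the text.
def enrich(tokens, steps=2, seed=0):
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        out.append(tok)
        senses = wn.synsets(tok)
        if not senses:
            continue
        node = rng.choice(senses)
        for _ in range(steps):
            nbrs = node.hypernyms() + node.hyponyms() + node.also_sees()
            if not nbrs:
                break
            node = rng.choice(nbrs)
            out.append(node.lemmas()[0].name())  # surface form of the sense
    return out

print(enrich("the bank approved the loan".split()))
```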
|
Score-based generative models provide state-of-the-art quality for image and
audio synthesis. Sampling from these models is performed iteratively, typically
employing a discretized series of noise levels and a predefined scheme. In this
note, we first overview three common sampling schemes for models trained with
denoising score matching. Next, we focus on one of them, consistent annealed
sampling, and study its hyper-parameter boundaries. We then highlight a
possible formulation of such hyper-parameter that explicitly considers those
boundaries and facilitates tuning when using few or a variable number of steps.
Finally, we highlight some connections of the formulation with other sampling
schemes.
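
For context, the sketch below shows the annealed Langevin scheme that such notes usually take as a starting point, following the common step-size rule proportional to sigma squared; the score network is a placeholder. Consistent annealed sampling differs in taking a single step per noise level, with the injected noise rescaled so the iterate stays exactly on the sigma schedule.

```python
import numpy as np

# Annealed Langevin dynamics over a decreasing noise ladder `sigmas`.
# `score(x, sigma)` stands in for a network trained with denoising score
# matching; eps and n_steps are the usual tuning knobs.
def annealed_langevin(score, shape, sigmas, eps=2e-5, n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape) * sigmas[0]
    for sigma in sigmas:                          # high noise -> low noise
        alpha = eps * (sigma / sigmas[-1]) ** 2   # per-level step size
        for _ in range(n_steps):
            z = rng.normal(size=shape)
            x = x + 0.5 * alpha * score(x, sigma) + np.sqrt(alpha) * z
    return x
```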
|
Larger language models have higher accuracy on average, but are they better
on every single instance (datapoint)? Some work suggests larger models have
higher out-of-distribution robustness, while other work suggests they have
lower accuracy on rare subgroups. To understand these differences, we
investigate these models at the level of individual instances. However, one
major challenge is that individual predictions are highly sensitive to
randomness in training. We develop statistically rigorous methods to
address this, and after accounting for pretraining and finetuning noise, we
find that our BERT-Large is worse than BERT-Mini on at least 1-4% of instances
across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of
2-10%. We also find that finetuning noise increases with model size and that
instance-level accuracy has momentum: improvement from BERT-Mini to BERT-Medium
correlates with improvement from BERT-Medium to BERT-Large. Our findings
suggest that instance-level predictions provide a rich source of information;
we therefore recommend that researchers supplement model weights with model
predictions.
|
We consider an architecture of confidential cloud-based control synthesis
based on Homomorphic Encryption (HE). Our study is motivated by the recent
surge of data-driven control such as deep reinforcement learning, whose heavy
computational requirements often necessitate outsourcing to a third-party
server. To achieve more flexibility than Partially Homomorphic Encryption (PHE)
and less computational overhead than Fully Homomorphic Encryption (FHE), we
consider a Reinforcement Learning (RL) architecture over Leveled Homomorphic
Encryption (LHE). We first show that the impact of the encryption noise under
the Cheon-Kim-Kim-Song (CKKS) encryption scheme on the convergence of the
model-based tabular Value Iteration (VI) can be analytically bounded. We also
consider secure implementations of TD(0), SARSA(0) and Z-learning algorithms
over the CKKS scheme, where we numerically demonstrate that the effects of the
encryption noise on these algorithms are also minimal.
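
The flavor of the tabular result can be reproduced with a toy simulation in which every value iteration update is perturbed by bounded noise standing in for the CKKS encryption error; with discount factor gamma < 1, the resulting value error stays of order noise/(1 - gamma). The transition tensor and rewards below are placeholders.

```python
import numpy as np

# Toy model: value iteration with bounded additive noise per update,
# mimicking the effect of CKKS encryption noise on the iterates.
def noisy_value_iteration(P, R, gamma=0.9, noise=1e-3, iters=200, seed=0):
    """P: (S, A, S) transition tensor, R: (S, A) rewards."""
    rng = np.random.default_rng(seed)
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        Q = R + gamma * np.einsum("saj,j->sa", P, V)
        V = Q.max(axis=1) + rng.uniform(-noise, noise, size=V.shape)
    return V
```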
|
Recently, surface electromyography (sEMG) emerged as a novel biometric
authentication method. Since EMG system parameters, such as the feature
extraction methods and the number of channels, have been known to affect system
performances, it is important to investigate these effects on the performance
of the sEMG-based biometric system to determine optimal system parameters. In
this study, three robust feature extraction methods, Time-domain (TD) feature,
Frequency Division Technique (FDT), and Autoregressive (AR) feature, and their
combinations were investigated while the number of channels varied from one to
eight. For these system parameters, the performance of sixteen static wrist and
hand gestures was systematically investigated in two authentication modes:
verification and identification. The results from 24 participants showed that
the TD features significantly (p<0.05) and consistently outperformed FDT and AR
features for all channel numbers. The results also showed that the performance
of a four-channel setup was not significantly different from setups with a
higher number of channels. The average equal error rate (EER) for a four-channel sEMG
verification system was 4% for TD features, 5.3% for FDT features, and 10% for
AR features. For an identification system, the average Rank-1 error (R1E) for a
four-channel configuration was 3% for TD features, 12.4% for FDT features, and
36.3% for AR features. The electrode position on the flexor carpi ulnaris (FCU)
muscle had a critical contribution to the authentication performance. Thus, the
combination of the TD feature set and a four-channel sEMG system with one of
the electrodes positioned on the FCU is recommended for optimal authentication
performance.
|
Let "Faulhaber's formula" refer to an expression for the sum of powers of
integers written with terms in n(n+1)/2. Initially, the author used Faulhaber's
formula to explain why odd Bernoulli numbers are equal to zero. Next, Cereceda
gave alternate proofs of that result and then proved the converse: if odd
Bernoulli numbers are equal to zero, then we can derive Faulhaber's formula.
Here, the original author will give a new proof of the converse.
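
For readers unfamiliar with the convention, the following standard identities (stated here as well-known facts, not as the paper's derivation) illustrate what an expression written "with terms in n(n+1)/2" looks like:

```latex
% Writing T = n(n+1)/2, sums of odd powers are polynomials in T:
\sum_{k=1}^{n} k^{3} = T^{2}, \qquad
\sum_{k=1}^{n} k^{5} = \frac{4T^{3} - T^{2}}{3}, \qquad T = \frac{n(n+1)}{2}.
```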
|
Following the recent discovery of the Lauricella string scattering amplitudes
(LSSA) and their associated exact SL(K+3,C) symmetry, we give a brief comment
on the Gross conjecture regarding "High energy symmetry of string theory".
|
A method to compute optimal collision avoidance maneuvers for short-term
encounters is presented. The maneuvers are modeled as multiple impulses to
handle impulsive cases and to approximate finite burn arcs associated either
with short alert times or the use of low-thrust propulsion. The maneuver design
is formulated as a sequence of convex optimization problems solved in
polynomial time by state-of-the-art primal-dual interior-point algorithms. The
proposed approach calculates optimal solutions without assumptions about the
thrust arc structure and thrust direction. The execution time is a fraction of a
second for an optimization problem with hundreds of variables and constraints,
making it suitable for autonomous calculations.
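
A single convex subproblem of such a sequence can be sketched with an off-the-shelf modeling tool as below; the state-transition maps `Phi`, the nominal miss vector `d0`, and the linearized keep-out constraint are illustrative placeholders, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

# One convexified subproblem: minimize total delta-v of N impulses subject to
# a linearized miss-distance constraint at the time of closest approach (TCA).
N = 20                                            # impulse opportunities
Phi = [np.random.randn(3, 3) for _ in range(N)]   # dv -> position-at-TCA maps
d0 = np.array([80.0, 20.0, 10.0])                 # nominal relative position [m]
n_hat = d0 / np.linalg.norm(d0)                   # linearization direction
R = 500.0                                         # required miss distance [m]

dv = cp.Variable((N, 3))
d_tca = d0 + sum(Phi[i] @ dv[i] for i in range(N))
cost = cp.sum(cp.norm(dv, 2, axis=1))             # sum of impulse magnitudes
prob = cp.Problem(cp.Minimize(cost), [n_hat @ d_tca >= R])
prob.solve()                                      # interior-point SOCP solve
print(prob.status, cost.value)
```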
|
Microgrids with energy storage systems and distributed renewable energy
sources play a crucial role in reducing the consumption from traditional power
sources and the emission of $CO_2$. Connecting multiple microgrids to a
distribution power grid can facilitate more robust and reliable operation and
increase the security and privacy of the system. The proposed model consists of
three layers: a smart grid layer, an independent system operator (ISO) layer,
and a power grid layer. Each layer aims to maximise its own benefit. To achieve these
objectives, an intelligent multi-microgrid energy management method is proposed
based on the multi-objective reinforcement learning (MORL) techniques, leading
to a Pareto optimal set. A non-dominated solution is selected to implement a
fair design in order not to favour any particular participant. The simulation
results demonstrate the performance of the MORL and verify the viability of the
proposed approach.
|
At ambient pressure, lithium molybdenum purple bronze (Li$_{0.9}$Mo$_6$O$_{17}$) is a
quasi-one-dimensional solid in which the anisotropic crystal structure and the
linear dispersion of the underlying bands produced by electronic correlations
possibly bring about a rare experimental realization of Tomonaga-Luttinger
liquid physics. It is also the sole member of the broader purple molybdenum
bronzes family where a Peierls instability has not been identified at low
temperatures. The present study reports a pressure-induced series of phase
transitions between 0 and 12 GPa. These transitions are strongly reflected in
infrared spectroscopy, Raman spectroscopy, and x-ray diffraction. The most
dramatic effect seen in optical conductivity is the metallization of the
c-axis, concomitant to the decrease of conductivity along the b-axis. This
indicates that high pressure drives the material away from its quasi-one-dimensional
behavior at ambient pressure. While the first pressure-induced
structure of the series is resolved, the identification of the underlying
mechanisms driving the dimensional change in the physics remains a challenge.
|
Optimizing the confinement and transport of fast ions is an important
consideration in the design of modern fusion reactors. For spherical tokamaks
in particular, fast ions can significantly influence global plasma behavior
because their large drift orbits often sample both core and scrape-off-layer
(SOL) plasma conditions. Their Larmor radii are also comparable to the SOL
width, rendering the commonly chosen guiding center approximations
inappropriate. Accurately modeling the behavior of fast ions therefore requires
retaining a complete description of the fast ion orbit including its Larmor
motion. Here, we introduce the Scrape-Off-Layer Fast Ion (SOLFI) code, which is
a new and versatile full-orbit Monte Carlo particle tracer being developed to
follow fast ion orbits inside and outside the separatrix. We benchmark SOLFI in
a simple straight mirror geometry and show that the code (i) conserves particle
energy and magnetic moment, (ii) obtains the correct passing boundary for
particles moving in a magnetic mirror field with an imposed electrostatic field,
and (iii) correctly observes equal ion and electron current at the ambipolar
potential predicted from analytical theory.
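
For context, the standard building block of a full-orbit tracer of this kind is the Boris push, which preserves energy exactly in a static magnetic field. The sketch below is a generic reference implementation, not necessarily SOLFI's integrator.

```python
import numpy as np

# Generic Boris full-orbit pusher for a charged particle; E(x) and B(x)
# are callables returning field vectors at position x.
def boris_push(x, v, q_over_m, E, B, dt, n_steps):
    for _ in range(n_steps):
        e, b = E(x), B(x)
        v_minus = v + 0.5 * dt * q_over_m * e      # first half electric kick
        t = 0.5 * dt * q_over_m * b                # rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)    # magnetic rotation
        v = v_plus + 0.5 * dt * q_over_m * e       # second half electric kick
        x = x + dt * v
    return x, v
```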
|
Automated testing tools typically create test cases that are different from
what human testers create. This often makes the tools less effective, the
created tests harder to understand, and thus results in tools providing less
support to human testers. Here, we propose a framework based on cognitive
science and, in particular, an analysis of approaches to problem-solving, for
identifying cognitive processes of testers. The framework helps map test design
steps and criteria used in human test activities and thus to better understand
how effective human testers perform their tasks. Ultimately, our goal is to be
able to mimic how humans create test cases and thus to design more human-like
automated test generation systems. We posit that such systems can better
augment and support testers in a way that is meaningful to them.
|
We obtain a type of Ay\'{o}n-Beato-Garc\'{\i}a (ABG) related black hole
solutions with five parameters: the mass $m$, the charge $q$, and three
dimensionless parameters $\alpha$, $\beta$ and $\gamma$ associated with
nonlinear electrodynamics. We find that this type of black hole is regular
under the conditions: $\alpha \gamma \geqslant 6$, $\beta \gamma \geqslant 8$,
and $\gamma >0$. Here we focus on the saturated case: $\alpha={6}/{\gamma}$ and
$\beta ={8}/{\gamma }$, such that only three parameters $m$, $q$ and $\gamma$
remain, which leads to a new family of ABG black holes. For such a family of
black holes, we investigate the influence of the charge $q$ and the parameter
$\gamma$ on the horizon radius and the Hawking temperature. In addition, we
calculate the quasinormal mode frequencies of massless scalar field
perturbations by using the sixth-order WKB approximation method and the
unstable circular null geodesic method in the eikonal limit. We also compute
the shadow radius for the new family of ABG black holes and use the shadow data
of the $M87^{*}$ black hole detected by the Event Horizon Telescope to provide
an upper limit on the charge $q$ of the new black holes. We find that, as the
parameter $\gamma$ increases and tends to infinity, this upper limit first
grows rapidly and then slowly, but never exceeds the mass of the $M87^{*}$
black hole; the data also restrict the frequency range of the fundamental mode
with $l=1$ to $1.4\times 10^{-6}\,\mathrm{Hz}$--$1.9\times 10^{-6}\,\mathrm{Hz}$.
|
This paper describes the development needed to support the functional and
teaching requirements of iRead, a 4-year EU-funded project which produced an
award-winning serious game utilising lexical and syntactical game content. The
main functional requirement was that the game should retain different profiles
for each student, encapsulating both the respective language model (which
language features should be taught/used in the game first, before moving on to
more advanced ones) and the user model (mastery level for each feature, as
reported by the student's performance in the game). In addition, researchers
and stakeholders stated further requirements related to learning objectives and
strategies for making the game more interesting and successful. These were
implemented as a set of selection rules which take into account the mastery
level for each feature, respect the priorities set by teachers, help avoid
repetition of content and features, and maintain a balance between new content
and revision of already mastered features, giving students a sense of progress
while also reinforcing learning.
|
Surface ion traps are among the most promising technologies for scaling up
quantum computing machines, but their complicated multi-electrode geometry can
make some tasks, including compensation for stray electric fields, challenging
both at the level of modeling and of practical implementation. Here we
demonstrate the compensation of stray electric fields using a gradient descent
algorithm and a machine learning technique based on a trained deep neural
network. We demonstrate automated dynamical compensation, tested against induced
electric charging from UV laser light hitting the chip trap surface. The
results show improvement in compensation using gradient descent and the machine
learner over manual compensation. This improvement is inferred from an increase
of the fluorescence rate of 78% and 96% respectively, for a trapped
$^{171}$Yb$^+$ ion driven by a laser detuned by $-7.8$ MHz from the
$^2$S$_{1/2}\leftrightarrow^2$P$_{1/2}$ Doppler cooling transition at 369.5 nm.
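
The gradient-descent half of the scheme can be sketched as a finite-difference ascent on the measured fluorescence rate over the compensation voltages; `fluorescence` is a placeholder for the photon-count readout, and the step sizes are illustrative.

```python
import numpy as np

# Finite-difference gradient ascent on the ion fluorescence rate as a
# function of the compensation electrode voltages.
def compensate(fluorescence, volts, lr=0.05, delta=0.01, n_iter=100):
    v = np.asarray(volts, dtype=float)
    for _ in range(n_iter):
        grad = np.zeros_like(v)
        for k in range(v.size):                # symmetric finite differences
            dv = np.zeros_like(v); dv[k] = delta
            grad[k] = (fluorescence(v + dv) - fluorescence(v - dv)) / (2 * delta)
        v += lr * grad                         # ascend the count rate
    return v
```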
|
Active learning is usually applied to acquire labels of informative data
points in supervised learning, to maximize accuracy in a sample-efficient way.
However, maximizing the accuracy is not the end goal when the results are used
for decision-making, for example in personalized medicine or economics. We
argue that when acquiring samples sequentially, separating learning and
decision-making is sub-optimal, and we introduce an active learning strategy
which takes the down-the-line decision problem into account. Specifically, we
introduce a novel active learning criterion which maximizes the expected
information gain on the posterior distribution of the optimal decision. We
compare our targeted active learning strategy to existing alternatives on both
simulated and real data, and show improved performance in decision-making
accuracy.
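
In hedged notation of ours (not necessarily the paper's), the criterion can be written as selecting the design whose expected label most reduces the entropy of the posterior over the optimal decision $d^{*}$:

```latex
x^{\star} = \arg\max_{x}\; \mathbb{E}_{y \sim p(y \mid x, \mathcal{D})}
\Big[ H\big(p(d^{*} \mid \mathcal{D})\big)
    - H\big(p(d^{*} \mid \mathcal{D} \cup \{(x, y)\})\big) \Big].
```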
|
We show that there is a red-blue colouring of $[N]$ with no blue 3-term
arithmetic progression and no red arithmetic progression of length $e^{C(\log
N)^{3/4}(\log \log N)^{1/4}}$. Consequently, the two-colour van der Waerden
number $w(3,k)$ is bounded below by $k^{b(k)}$, where $b(k) = c \big(
\frac{\log k}{\log\log k} \big)^{1/3}$. Previously it had been speculated,
supported by data, that $w(3,k) = O(k^2)$.
|
We prove that, for $g\geq19$, the mapping class group of a nonorientable
surface of genus $g$, $\textrm{Mod}(N_g)$, can be generated by two elements,
one of which is of order $g$. We also prove that, for $g\geq26$,
$\textrm{Mod}(N_g)$ can be generated by three involutions.
|
Advances in both instrumentation and data analysis software are now enabling
the first ultra-high-resolution microcalorimeter gamma spectrometers designed
for implementation in nuclear facilities and analytical laboratories. With
approximately ten times better energy resolution than high-purity germanium
detectors, these instruments can overcome important uncertainty limits.
Microcalorimeter gamma spectroscopy is intended to provide nondestructive
isotopic analysis capabilities with sufficient precision and accuracy to reduce
the need for sampling, chemical separations, and mass spectrometry to meet
safeguards and security goals. Key milestones were the development of the SOFIA
instrument (Spectrometer Optimized for Facility Integrated Applications) and
the SAPPY software (Spectral Analysis Program in PYthon). SOFIA is a compact
instrument that combines advances in large multiplexed transition-edge sensor
arrays with optimized cryogenic performance to overcome many practical
limitations of previous systems. With a 256-pixel multiplexed detector array
capable of 5000 counts per second, measurement time can be comparable to
high-purity germanium detectors. SAPPY was developed to determine isotopic
ratios in data from SOFIA and other microcalorimeter instruments with an
approach similar to the widely used FRAM software. SAPPY provides a flexible
framework with rigorous uncertainty analysis for both microcalorimeter and HPGe
data, allowing direct comparison. We present current results from the SOFIA
instrument, preliminary isotopic analysis using SAPPY, and describe how the
technology is being used to explore uncertainty limits of nondestructive
isotopic characterization, inform safeguards models, and extract improved
nuclear data including gamma-ray branching ratios.
|
This dissertation studies the quantum anomalous effects on the description of
high energy electrodynamics. We argue that at temperatures comparable to
the electroweak scale, characteristic of the early Universe and of objects like
neutron stars, the description of electromagnetic fields in conductive plasmas
needs to be extended to include the effects of the chiral anomaly. It is
demonstrated that chiral effects can have a significant influence on the
evolution of magnetic fields, tending to produce exponential amplification,
creation of magnetic helicity from initially non-helical fields, and can lead
to an inverse energy transfer. We further discuss the modified
magnetohydrodynamic equations around the electroweak transition. The obtained
solutions demonstrate that the asymmetry between right-handed and left-handed
charged fermions of negligible mass typically grows with time when approaching
the electroweak crossover from higher temperatures, until it undergoes a fast
decrease at the transition, and then eventually gets damped at lower
temperatures in the broken phase. At the same time, the dissipation of magnetic
fields gets slower due to the chiral effects. We furthermore report some first
analytical attempts in the study of chiral magnetohydrodynamic turbulence.
Using the analysis of simplified regimes and qualitative arguments, it is shown
that anomalous effects can strongly support turbulent inverse cascade and lead
to a faster growth of the correlation length, when compared to the evolution
predicted by the non-chiral magnetohydrodynamics. Finally, the discussion of
relaxation towards minimal energy states in the chiral magnetohydrodynamic
turbulence is also presented.
|
The Covid-19 pandemic introduces new challenges and constraints for return to
work business planning. We describe a space allocation problem that
incorporates social distancing constraints while optimising the number of
available safe workspaces in a return to work scenario. We propose and
demonstrate a graph based approach that solves the optimisation problem via
modelling as a bipartite graph of disconnected components over a graph of
constraints. We compare results obtained with a constrained random walk and a
linear programming approach.
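
A toy version of the underlying graph model, under the simplifying assumption that safety reduces to pairwise distance, can be sketched as follows; the coordinates and the 2 m threshold are made-up inputs, and the paper's bipartite-component formulation is richer.

```python
import itertools, math
import networkx as nx

# Desks are nodes; an edge joins two desks closer than the distancing
# threshold, so any independent set is a safe allocation.
desks = {i: xy for i, xy in enumerate([(0, 0), (1, 0), (2, 0), (0, 1.5), (2, 1.5)])}
G = nx.Graph()
G.add_nodes_from(desks)
for a, b in itertools.combinations(desks, 2):
    if math.dist(desks[a], desks[b]) < 2.0:      # 2 m social distancing
        G.add_edge(a, b)

safe = nx.maximal_independent_set(G, seed=42)    # randomized greedy baseline
print(sorted(safe))
```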
|
We provide a new analysis technique to measure the effect of the isotropic
polarization rotation, induced by e.g. the isotropic cosmic birefringence from
axion-like particles and a miscalibration of CMB polarization angle, via mode
coupling in the cosmic microwave background (CMB). Several secondary effects
such as gravitational lensing and CMB optical-depth anisotropies lead to mode
coupling in observed CMB anisotropies, i.e., non-zero off-diagonal elements in
the observed CMB covariance. To derive the mode coupling, however, we usually
assume no parity violation in the observed CMB anisotropies. We first derive a
new contribution to the CMB mode coupling arising from parity violation in
observed CMB. Since the isotropic polarization rotation leads to parity
violation in the observed CMB anisotropies, we then discuss the use of the new
mode coupling for constraining the isotropic polarization angle. We find that
constraints on the isotropic polarization angle obtained by measuring the new
mode-coupling contribution are comparable to those using the $EB$ cross-power
spectrum in future high-sensitivity polarization experiments such as CMB-S4 and
LiteBIRD. Thus, this technique can be used to cross-check results obtained by
the use of the $EB$ cross-power spectrum.
|
What breathes life into an embodied agent or avatar? While body motions such
as facial expressions, speech and gestures have been well studied, relatively
little attention has been applied to subtle changes due to underlying
physiology. We argue that subtle pulse signals are important for creating more
lifelike and less disconcerting avatars. We propose a method for animating
blood flow patterns, based on a data-driven physiological model that can be
used to directly augment the appearance of synthetic avatars and
photo-realistic faces. While the changes are difficult for participants to
"see", participants select faces with blood flow as more anthropomorphic and
animated significantly more frequently than faces without blood flow. Furthermore, by
manipulating the frequency of the heart rate in the underlying signal we can
change the perceived arousal of the character.
|
During software evolution, inexperienced developers may introduce design
anti-patterns when they modify their software systems to fix bugs or to add new
functionalities based on changes in requirements. Developers may also use
design patterns to promote software quality or as a possible cure for some
design anti-patterns. Thus, design patterns and design anti-patterns are
introduced, removed, and mutated from one another by developers.
Many studies investigated the evolution of design patterns and design
anti-patterns and their impact on software development. However, they
investigated design patterns or design anti-patterns in isolation and did not
consider their mutations and the impact of these mutations on software quality.
Therefore, we report our study of bidirectional mutations between design
patterns and design anti-patterns and the impacts of these mutations on
software change- and fault-proneness.
We analyzed snapshots of seven Java software systems with diverse sizes,
evolution histories, and application domains. We built Markov models to capture
the probability of occurrences of the different design patterns and design
anti-patterns mutations. Results from our study show that (1) design patterns
and design anti-patterns mutate into other design patterns and/or design
anti-patterns. They also show that (2) some change types primarily trigger
mutations of design patterns and design anti-patterns (renaming and changes to
comments, declarations, and operators), and (3) some mutations of design
anti-patterns and design patterns are more faulty in specific contexts. These
results provide important insights into the evolution of design patterns and
design anti-patterns and its impact on the change- and fault-proneness of
software systems.
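
The Markov-model step can be illustrated with a minimal sketch that estimates transition probabilities between (anti-)pattern labels observed across snapshots; the class histories below are toy data, not the study's.

```python
from collections import Counter, defaultdict

# Toy label histories per class across consecutive snapshots.
histories = {
    "Order": ["Singleton", "Singleton", "Blob", "Blob"],
    "Cart":  ["None", "Blob", "Blob", "Singleton"],
}

# Count observed transitions between consecutive snapshots.
counts = defaultdict(Counter)
for labels in histories.values():
    for src, dst in zip(labels, labels[1:]):
        counts[src][dst] += 1

# Normalize counts into a Markov transition matrix.
P = {src: {dst: c / sum(row.values()) for dst, c in row.items()}
     for src, row in counts.items()}
print(P["Blob"])   # e.g. probability that Blob persists vs. mutates
```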
|
To study regional economic development and urban expansion from the
perspective of night-light remote sensing images, researchers use
NOAA-provided night-light remote sensing image data from 1992 to 2013,
together with ArcGIS software, to process the image information and obtain
basic pixel-level data for specific areas of the images. These data are
analyzed in the space-time domain to present the trend of regional economic
development in China in recent years and to explore the urbanization effect
brought about by the rapid development of China's economy. The analysis shows
that urbanization in China is still developing at its peak speed and has great
potential and room for growth, but attention must also be paid to the
imbalance of regional development.
|
Specific heat and linear thermal expansivity are fundamental thermodynamic
quantities and have proven to be interesting relaxing quantities to investigate
in the glass transition and the glassy state. However, they have been much less
exploited than mechanical and dielectric susceptibilities because of the
limited spectroscopy bandwidth. This work reports on simultaneous spectroscopy
of the two by making use of ultrafast time-resolved thermal lens (TL)
spectroscopy. Detailed modeling of the thermoelastic transients of a relaxing
system subjected to ultrashort laser heating is presented to describe the TL
response. The model has been applied to analyze a set of experimentally
recorded TL waveforms, allowing the determination of relaxation strength and
relaxation frequency from sub-kilohertz to sub-100 MHz and in a wide
temperature range from 200 K to 280 K.
|
This paper proposes an efficient video summarization framework that will give
a gist of the entire video in a few key-frames or video skims. Existing video
summarization frameworks are based on algorithms that utilize computer vision
low-level feature extraction or high-level domain level extraction. However,
being the ultimate user of the summarized video, humans remain the most
neglected aspect. Therefore, the proposed paper considers human's role in
summarization and introduces human visual attention-based summarization
techniques. To understand human attention behavior, we have designed and
performed experiments with human participants using electroencephalogram (EEG)
and eye-tracking technology. The EEG and eye-tracking data obtained from the
experimentation are processed simultaneously and used to segment frames
containing useful information from a considerable video volume. Thus, the frame
segmentation primarily relies on the cognitive judgments of human beings. Using
our approach, a video is summarized by 96.5% while maintaining high precision
and recall. The comparison with state-of-the-art techniques
demonstrates that the proposed approach yields ceiling-level performance with
reduced computational cost in summarising the videos.
|
Let $\mathbf{X}\in\mathbb{C}^{m\times n}$ ($m\geq n$) be a random matrix with
independent rows each distributed as complex multivariate Gaussian with zero
mean and {\it single-spiked} covariance matrix $\mathbf{I}_n+ \eta
\mathbf{u}\mathbf{u}^*$, where $\mathbf{I}_n$ is the $n\times n$ identity
matrix, $\mathbf{u}\in\mathbb{C}^{n}$ is an arbitrary vector with a
unit Euclidean norm, $\eta\geq 0$ is a non-random parameter, and $(\cdot)^*$
represents conjugate-transpose. This paper investigates the distribution of the
random quantity $\kappa_{\text{SC}}^2(\mathbf{X})=\sum_{k=1}^n
\lambda_k/\lambda_1$, where $0<\lambda_1<\lambda_2<\ldots<\lambda_n<\infty$ are
the ordered eigenvalues of $\mathbf{X}^*\mathbf{X}$ (i.e., single-spiked
Wishart matrix). This random quantity is intimately related to the so called
{\it scaled condition number} or the Demmel condition number (i.e.,
$\kappa_{\text{SC}}(\mathbf{X})$) and the minimum eigenvalue of the fixed trace
Wishart-Laguerre ensemble (i.e., $\kappa_{\text{SC}}^{-2}(\mathbf{X})$). In
particular, we use an orthogonal polynomial approach to derive an exact
expression for the probability density function of
$\kappa_{\text{SC}}^2(\mathbf{X})$ which is amenable to asymptotic analysis as
matrix dimensions grow large. Our asymptotic results reveal that, as
$m,n\to\infty$ such that $m-n$ is fixed and when $\eta$ scales on the order of
$1/n$, $\kappa_{\text{SC}}^2(\mathbf{X})$ scales on the order of $n^3$. In this
respect we establish simple closed-form expressions for the limiting
distributions.
|
Millimeter wave multiple-input multiple-output (mmWave-MIMO) systems with
a small number of radio-frequency (RF) chains have limited multiplexing gain.
Spatial path index modulation (SPIM) is helpful in improving this gain by
utilizing additional signal bits modulated by the indices of spatial paths. In
this paper, we introduce model-based and model-free frameworks for beamformer
design in multi-user SPIM-MIMO systems. We first design the beamformers via
model-based manifold optimization algorithm. Then, we leverage federated
learning (FL) with dropout learning (DL) to train a learning model on the local
dataset of users, who estimate the beamformers by feeding the model with their
channel data. The DL randomly selects a different set of model parameters during
training, thereby further reducing the transmission overhead compared to
conventional FL. Numerical experiments show that the proposed framework
exhibits higher spectral efficiency than the state-of-the-art SPIM-MIMO methods
and mmWave-MIMO, which relies on the strongest propagation path. Furthermore,
the proposed FL approach provides at least 10 times lower transmission overhead
than the centralized learning techniques.
|
Out-of-town recommendation is designed for those users who leave their
home-town areas and visit the areas they have never been to before. It is
challenging to recommend Point-of-Interests (POIs) for out-of-town users since
the out-of-town check-in behavior is determined by not only the user's
home-town preference but also the user's travel intention. Besides, the user's
travel intentions are complex and dynamic, which makes it difficult to
understand such intentions precisely. In this paper, we propose a
TRAvel-INtention-aware Out-of-town Recommendation framework, named TRAINOR. The
proposed TRAINOR framework distinguishes itself from existing out-of-town
recommenders in three aspects. First, graph neural networks are explored to
represent users' home-town check-in preference and geographical constraints in
out-of-town check-in behaviors. Second, a user-specific travel intention is
formulated as an aggregation combining home-town preference and generic travel
intention together, where the generic travel intention is regarded as a mixture
of inherent intentions that can be learned by Neural Topic Model (NTM). Third,
a non-linear mapping function, as well as a matrix factorization method, are
employed to transfer users' home-town preference and estimate out-of-town POI's
representation, respectively. Extensive experiments on real-world data sets
validate the effectiveness of the TRAINOR framework. Moreover, the learned
travel intention can deliver meaningful explanations for understanding a user's
travel purposes.
|
Cavity optomechanical systems have become a popular playground for studies of
controllable nonlinear interactions between light and motion. Owing to the
large speed of light, realizing cavity optomechanics in the microwave frequency
range requires cavities up to several mm in size, hence making it hard to embed
several of them on the same chip. An alternative scheme with much smaller
footprint is provided by magnomechanics, where the electromagnetic cavity is
replaced by a magnet undergoing ferromagnetic resonance, and the optomechanical
coupling originates from magnetic shape anisotropy. Here, we consider the
magnomechanical interaction occurring in a suspended magnetic beam -- a scheme
in which both magnetic and mechanical modes physically overlap and can also be
driven individually. We show that a sizable interaction can be produced if the
beam has some initial static deformation, as is often the case due to unequal
strains in the constituent materials. We also show how the magnetism affects
the magnetomotive detection of the vibrations, and how the magnomechanical
interaction can be used in microwave signal amplification. Finally, we discuss
experimental progress towards realizing the scheme.
|
This paper studies offline Imitation Learning (IL) where an agent learns to
imitate an expert demonstrator without additional online environment
interactions. Instead, the learner is presented with a static offline dataset
of state-action-next state transition triples from a potentially less
proficient behavior policy. We introduce Model-based IL from Offline data
(MILO): an algorithmic framework that utilizes the static dataset to solve the
offline IL problem efficiently both in theory and in practice. In theory, even
if the behavior policy is highly sub-optimal compared to the expert, we show
that as long as the data from the behavior policy provides sufficient coverage
on the expert state-action traces (and with no necessity for a global coverage
over the entire state-action space), MILO can provably combat the covariate
shift issue in IL. Complementing our theory results, we also demonstrate that a
practical implementation of our approach mitigates covariate shift on benchmark
MuJoCo continuous control tasks. We demonstrate that with behavior policies
whose performances are less than half of that of the expert, MILO still
successfully imitates with an extremely low number of expert state-action pairs
while traditional offline IL methods such as behavior cloning (BC) fail
completely. Source code is provided at https://github.com/jdchang1/milo.
|
This position paper examines potential pitfalls on the way towards achieving
human-AI co-creation with generative models in a way that is beneficial to the
users' interests. In particular, we collected a set of nine potential pitfalls,
based on the literature and our own experiences as researchers working at the
intersection of HCI and AI. We illustrate each pitfall with examples and
suggest ideas for addressing it. Reflecting on all pitfalls, we discuss and
conclude with implications for future research directions. With this
collection, we hope to contribute to a critical and constructive discussion on
the roles of humans and AI in co-creative interactions, with an eye on related
assumptions and potential side-effects for creative practices and beyond.
|
We give an explicit description of the generator of finitely presented
objects of the coslice of a locally finitely presentable category under a given
object, as consisting of all pushouts of finitely presented maps under this
object. Then we prove that the comma category under the direct image part of a
morphism of locally finitely presentable categories is still locally finitely
presentable, and we again give an explicit description of its generator of
finitely presented objects. We finally deduce that the 2-category $\LFP$ has
comma objects, computed in $\Cat$.
|
Genome-wide association studies (GWAS) have identified thousands of genetic
variants associated with complex traits. Many complex traits are found to have
shared genetic etiology. Genetic covariance is defined as the underlying
covariance of genetic values and can be used to measure the shared genetic
architecture. The data of two outcomes may be collected from the same group or
different groups of individuals and the outcomes can be of different types or
collected based on different study designs. This paper proposes a unified
approach to robust estimation and inference for genetic covariance of general
outcomes that may be associated with genetic variants nonlinearly. We provide
the asymptotic properties of the proposed estimator and show that our proposal
is robust under certain model mis-specification. Our method under linear
working models provides robust inference for the narrow-sense genetic
covariance, even when both linear models are mis-specified. Various numerical
experiments are performed to support the theoretical results. Our method is
applied to an outbred mice GWAS data set to study the overlapping genetic
effects between the behavioral and physiological phenotypes. The real data
results demonstrate the robustness of the proposed method and reveal
interesting genetic covariance among different mice developmental traits.
|
We study unirationality of a Del Pezzo surface of degree two over a given
(non-algebraically closed) field, under the assumption that it admits at least
one rational double point over an algebraic closure of the base field. As
corollaries of our main results, we find that over a finite field, it is
unirational if the cardinality of the field is greater than or equal to nine
and we also find that over an infinite field, which is not necessarily perfect,
it is unirational if and only if the rational points are Zariski dense over the
field.
|
Input perturbation methods occlude parts of an input to a function and
measure the change in the function's output. Recently, input perturbation
methods have been applied to generate and evaluate saliency maps from
convolutional neural networks. In practice, neutral baseline images are used
for the occlusion, such that the baseline image's impact on the classification
probability is minimal. However, in this paper we show that arguably neutral
baseline images still impact the generated saliency maps and their evaluation
with input perturbations. We also demonstrate that many choices of
hyperparameters lead to the divergence of saliency maps generated by input
perturbations. We experimentally reveal inconsistencies among a selection of
input perturbation methods and find that they lack robustness for generating
saliency maps and for evaluating saliency maps as saliency metrics.
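
For reference, a minimal occlusion-based saliency map of the kind analyzed here can be sketched as follows; `predict` is any classifier returning a class probability, and the `baseline` argument is exactly the supposedly neutral value whose influence the paper investigates.

```python
import numpy as np

# Slide a baseline-valued patch over the image and record the drop in the
# class probability; accumulate drops per pixel and average over coverage.
def occlusion_map(predict, image, baseline=0.5, patch=8, stride=4):
    H, W = image.shape[:2]
    p0 = predict(image)
    sal = np.zeros((H, W)); hits = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = p0 - predict(occluded)
            sal[y:y + patch, x:x + patch] += drop
            hits[y:y + patch, x:x + patch] += 1
    return sal / np.maximum(hits, 1)
```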
|
We introduce a novel fine-grained dataset and benchmark, the Danish Fungi
2020 (DF20). The dataset, constructed from observations submitted to the Atlas
of Danish Fungi, is unique in its taxonomy-accurate class labels, small number
of errors, highly unbalanced long-tailed class distribution, rich observation
metadata, and well-defined class hierarchy. DF20 has zero overlap with
ImageNet, allowing unbiased comparison of models fine-tuned from publicly
available ImageNet checkpoints. The proposed evaluation protocol enables
testing the ability to improve classification using metadata -- e.g., precise
geographic location, habitat, and substrate -- facilitates classifier
calibration testing, and finally allows studying the impact of device settings
on the classification performance. Experiments using Convolutional Neural Networks
(CNN) and the recent Vision Transformers (ViT) show that DF20 presents a
challenging task. Interestingly, ViT achieves results superior to CNN baselines
with 80.45% accuracy and 0.743 macro F1 score, reducing the CNN error by 9% and
12% respectively. A simple procedure for including metadata into the decision
process improves the classification accuracy by more than 2.95 percentage
points, reducing the error rate by 15%. The source code for all methods and
experiments is available at https://sites.google.com/view/danish-fungi-dataset.
|
In the present work, the European option pricing SWIFT method is extended for
Heston model calibration. The computation of the option price gradient is
simplified thanks to the knowledge of the characteristic function in closed
form. The proposed calibration machinery appears to be extremely fast, in
particular for a single expiry and multiple strikes, outperforming the
state-of-the-art method we compare with. Further, the a priori knowledge of
SWIFT parameters makes possible a reliable and practical implementation of the
presented calibration method. A wide set of stress, speed and convergence
numerical experiments is carried out, with deep in-the-money, at-the-money and
deep out-of-the-money options for very short and very long maturities.
|
We note that a strongly minimal Steiner $k$-system $(M,R)$ from
(Baldwin-Paolini 2020) can be `coordinatized' in the sense of (Ganter-Werner
1975) by a quasigroup if $k$ is a prime power. But for the basic construction
this coordinatization is never definable in $(M,R)$. Nevertheless, by refining
the construction, if $k$ is a prime power there is a $(2,k)$-variety of
quasigroups which is strongly minimal and definably coordinatizes a Steiner
$k$-system.
|
The chromosphere is a partially ionized layer of the solar atmosphere, the
transition between the photosphere where the gas motion is determined by the
gas pressure and the corona dominated by the magnetic field. We study the
effect of partial ionization for 2D wave propagation in a gravitationally
stratified, magnetized atmosphere with properties similar to the solar
chromosphere. We adopt an oblique uniform magnetic field in the plane of
propagation with a strength suitable for a quiet Sun region. The theoretical
model used is a single-fluid magnetohydrodynamic approximation, where
ion-neutral interaction is modeled by the ambipolar diffusion term. Magnetic
energy can be converted into internal energy through the dissipation of the
electric current produced by the drift between ions and neutrals. We use
numerical simulations where we continuously drive fast waves at the bottom of
the atmosphere. The collisional coupling between ions and neutrals decreases
as the density decreases, and the ambipolar effect becomes important.
Fast waves excited at the base of the atmosphere reach the equipartition layer
and reflect or transmit as slow waves. While the waves propagate through the
atmosphere and the density drops, the waves steepen into shocks. The main
effect of ambipolar diffusion is damping of the waves. We find that for the
parameters chosen in this work, the ambipolar diffusion affects the fast wave
before it is reflected, with damping being more pronounced for waves which are
launched in a direction perpendicular to the magnetic field. Slow waves are
less affected by ambipolar effects. The damping increases for shorter periods
and larger magnetic field strengths. Small scales produced by the nonlinear
effects and the superposition of different types of waves created at the
equipartition height are efficiently damped by ambipolar diffusion.
|
In this paper we obtain a parametric solution of the hitherto unsolved
diophantine equation $(x_1^5+x_2^5)(x_3^5+x_4^5)=(y_1^5+y_2^5)(y_3^5+y_4^5)$.
Further, we show, using elliptic curves, that there exist infinitely many
parametric solutions of the aforementioned diophantine equation, and they can
be effectively computed.
|