In this paper, we propose an enhanced approach to creating a dedicated corpus
of Algerian Arabic newspaper comments. The approach improves on an existing one
by enriching the available corpus and adding an annotation step that follows
the Model Annotate Train Test Evaluate Revise (MATTER) methodology. A corpus is
created by collecting comments from the websites of three well-known Algerian
newspapers. Three classifiers, support vector machines, na{\"i}ve Bayes, and
k-nearest neighbors, were used to classify comments into positive and negative
classes. To identify the influence of stemming on the obtained results, the
classification was tested with and without stemming. The results show that
stemming does not considerably improve the classification, owing to the nature
of Algerian comments, which are tied to the Algerian Arabic dialect. The
promising results motivate us to improve our approach, in particular in dealing
with non-Arabic sentences, notably dialectal and French ones.
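To make the experimental setup concrete, here is a minimal sketch (ours, not the authors' code) of the with/without-stemming comparison using scikit-learn; the tiny inline dataset and the `light_stem` function are hypothetical stand-ins for the collected corpus and an Arabic light stemmer.

```python
# Minimal sketch of the with/without-stemming sentiment comparison.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def light_stem(token):
    # Hypothetical Arabic light stemmer: strip a few common prefixes.
    for prefix in ("ال", "و", "ب"):
        if token.startswith(prefix) and len(token) > len(prefix) + 2:
            return token[len(prefix):]
    return token

def tokenize(text, stem=False):
    tokens = text.split()
    return [light_stem(t) for t in tokens] if stem else tokens

# Toy placeholder comments (positive=1, negative=0), not the real corpus.
comments = ["جميل جدا", "خدمة رائعة", "مقال ممتاز", "فكرة جيدة",
            "سيء للغاية", "خبر محزن", "تصرف مخجل", "قرار فاشل"]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

for stem in (False, True):
    vec = TfidfVectorizer(tokenizer=lambda t, s=stem: tokenize(t, s),
                          token_pattern=None, lowercase=False)
    for name, clf in [("SVM", LinearSVC()), ("NB", MultinomialNB()),
                      ("kNN", KNeighborsClassifier(n_neighbors=3))]:
        score = cross_val_score(make_pipeline(vec, clf), comments, labels, cv=2).mean()
        print(f"stemming={stem} {name}: {score:.2f}")
```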
|
We describe a technique to measure photon pair joint spectra by detecting the
time-correlation beat note when non-degenerate photon pairs interfere at a
beamsplitter. The technique implements a temporal analog of the Ghosh-Mandel
effect with one photon counter and a time-resolved Hong-Ou-Mandel interference
with two. It is well suited to characterize pairs of photons, each of which can
interact with a single atomic species, as required to study recently predicted
photon-photon interaction in sub-wavelength atomic arrays. With this technique,
we characterize photon pairs from cavity-enhanced parametric downconversion
with a bandwidth $\approx$ 5 MHz and frequency separation of $\sim$ 200 MHz
near the D$_1$ line of atomic Rb.
|
We consider the problem of linear regression from strategic data sources with
a public good component, i.e., when data is provided by strategic agents who
seek to minimize an individual provision cost for increasing their data's
precision while benefiting from the model's overall precision. In contrast to
previous works, our model tackles the case where there is uncertainty on the
attributes characterizing the agents' data -- a critical aspect of the problem
when the number of agents is large. We provide a characterization of the game's
equilibrium, which reveals an interesting connection with optimal design.
Subsequently, we focus on the asymptotic behavior of the covariance of the
linear regression parameters estimated via generalized least squares as the
number of data sources becomes large. We provide upper and lower bounds for
this covariance matrix and we show that, when the agents' provision costs are
superlinear, the model's covariance converges to zero but at a slower rate
relative to virtually all learning problems with exogenous data. On the other
hand, if the agents' provision costs are linear, this covariance fails to
converge. This shows that even the basic property of consistency of generalized
least squares estimators is compromised when the data sources are strategic.
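As a numerical illustration (our sketch, not part of the paper), the GLS covariance is $(X^\top\Sigma^{-1}X)^{-1}$; the snippet below tracks how its norm shrinks as the number of data sources grows, with the strategic precision choices replaced by a fixed synthetic rule.

```python
# Sketch: decay of the GLS covariance (X^T Sigma^{-1} X)^{-1} with the number
# of data sources n, for i.i.d. synthetic attributes (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d = 3  # number of regression parameters

for n in (10, 100, 1000, 10000):
    X = rng.normal(size=(n, d))          # agents' attributes
    sigma2 = 1.0 + rng.random(n)         # per-agent noise variances
    XtSinvX = X.T @ (X / sigma2[:, None])
    cov = np.linalg.inv(XtSinvX)         # GLS covariance of the parameters
    print(n, np.linalg.norm(cov, 2))     # spectral norm decays ~ 1/n here
```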
|
The supernova remnant (SNR) 3C 397 is thought to originate from a Type Ia
supernova (SN Ia) explosion of a near-Chandrasekhar-mass ($M_{\rm Ch}$)
progenitor, based on the enhanced abundances of Mn and Ni revealed by a
previous X-ray study with Suzaku. Here we report follow-up XMM-Newton observations of
this SNR, conducted with the aim of investigating the detailed spatial
distribution of the Fe-peak elements. We have discovered an ejecta clump with
extremely high abundances of Ti and Cr, in addition to Mn, Fe, and Ni, in the
southern part of the SNR. The Fe mass of this ejecta clump is estimated to be
$\sim$ 0.06 $M_{\odot}$, under the assumption of a typical Fe yield for SNe Ia
(i.e., $\sim$ 0.8 $M_{\odot}$). The observed mass ratios among the Fe-peak
elements and Ti require substantial neutronization that is achieved only in the
innermost regions of a near-$M_{\rm Ch}$ SN Ia with a central density of
$\rho_c \sim 5 \times 10^9$ g cm$^{-3}$, significantly higher than typically
assumed for standard near-$M_{\rm Ch}$ SNe Ia ($\rho_c \sim 2 \times 10^9$ g
cm$^{-3}$). The overproduction of the neutron-rich isotopes (e.g., $^{50}$Ti
and $^{54}$Cr) is significant in such high-$\rho_c$ SNe Ia, with respect to the
solar composition. Therefore, if 3C 397 is a typical high-$\rho_c$ near-$M_{\rm
Ch}$ SN Ia remnant, the solar abundances of these isotopes could be reproduced
by the mixture of the high- and low-$\rho_c$ near-$M_{\rm Ch}$ and sub-$M_{\rm
Ch}$ Type Ia events, with $\lesssim$ 20 % being high-$\rho_c$ near-$M_{\rm
Ch}$.
|
We present results on the X-ray flaring activity of 1ES 1959+650 during
October 25-26, 2017, using AstroSat observations. The source was variable in
X-rays. We investigated the evolution of the X-ray spectral properties of the
source by dividing the total observation period ($\sim 130$ ks) into time
segments of 5 ks, and fitting the SXT and LAXPC spectra for each segment.
Synchrotron emission from a broken power-law particle density provided a
better fit than the log-parabola model. The X-ray flux and the normalised
particle density at energies below the break were found to anti-correlate with
the index before the break. However, a stronger correlation between the
density and the index was obtained when a delay of $\sim 60$ ks was
introduced. The amplitude of the normalised particle density variation,
$|\Delta n_\gamma/n_\gamma| \sim 0.1$, was found to be smaller than that of the
index, $\Delta \Gamma \sim 0.5$. We model the amplitudes and the time delay in
a scenario where the particle acceleration time-scale varies on a time-scale
comparable to itself. In this framework, the rest-frame acceleration time-scale
is estimated to be $\sim 1.97\times10^{5}$ s and the emission region size to
be $\sim 6.73\times 10^{15}$ cm.
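For reference, a broken power-law particle density of the kind fitted here can be sketched as follows (our illustration; the parameter names are ours):

```python
# Sketch of a broken power-law particle density n(gamma):
# index p1 below the break gamma_b, index p2 above it.
import numpy as np

def broken_power_law(gamma, n0, gamma_b, p1, p2):
    gamma = np.asarray(gamma, dtype=float)
    return np.where(gamma < gamma_b,
                    n0 * (gamma / gamma_b) ** (-p1),
                    n0 * (gamma / gamma_b) ** (-p2))

gammas = np.logspace(2, 6, 5)
print(broken_power_law(gammas, n0=1.0, gamma_b=1e4, p1=2.2, p2=3.2))
```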
|
In this paper we consider the problem of reconstructing a $k$-uniform
hypergraph from its line graph. In general this problem is hard; we solve it
when the number of hyperedges containing any pair of vertices is bounded. Given
an integer sequence, constructing a $k$-uniform hypergraph with that sequence
as its degree sequence is NP-complete. Here we show that for constant integer
sequences the question can be answered in polynomial time using Baranyai's
theorem.
|
We quantitatively investigate the dependence of central galaxy HI mass
($M_{\rm HI}$) on the stellar mass ($M_\ast$), halo mass ($M_{\rm h}$), star
formation rate (SFR), and central stellar surface density within 1 kpc
($\Sigma_1$), taking advantage of the HI spectra stacking technique using both
the Arecibo Fast Legacy ALFA Survey and the Sloan Digital Sky Survey. We find
that the shapes of $M_{\rm HI}$-$M_{\rm h}$ and $M_{\rm HI}$-$M_\ast$ relations
are remarkably similar for both star-forming and quenched galaxies, with
massive quenched galaxies having consistently lower HI masses, by around 0.6 dex.
This similarity strongly suggests that neither halo mass nor stellar mass is
the direct cause of quenching, but rather the depletion of HI reservoir. While
the HI reservoir for low-mass galaxies of $M_\ast<10^{10.5}M_\odot$ strongly
increases with $M_{\rm h}$, more massive galaxies show no significant
dependence of $M_{\rm HI}$ on $M_{\rm h}$, indicating that the halo regulates
the smooth accretion of cold gas. We find that the star formation and
quenching of central galaxies are directly regulated by the available HI
reservoir, with an average relation of ${\rm SFR}\propto M_{\rm
HI}^{2.75}/M_\ast^{0.40}$, implying a quasi-steady state of star formation. We
further confirm that galaxies are depleted of their HI reservoir once they drop
off the star-formation main sequence and there is a very tight and consistent
correlation between $M_{\rm HI}$ and $\Sigma_1$ in this phase, with $M_{\rm
HI}\propto\Sigma_1^{-2}$. This result is consistent with the
compaction-triggered quenching scenario, with galaxies going through three
evolutionary phases of cold gas accretion, compaction and post-compaction, and
quenching.
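As a quick numerical illustration (ours) of the quoted scaling ${\rm SFR}\propto M_{\rm HI}^{2.75}/M_\ast^{0.40}$, with an arbitrary normalization:

```python
# Evaluate the quoted scaling SFR ∝ M_HI^2.75 / M_*^0.40 (normalization arbitrary).
import numpy as np

def sfr_scaling(m_hi, m_star, norm=1.0):
    return norm * m_hi**2.75 / m_star**0.40

# Halving the HI reservoir at fixed stellar mass suppresses SFR by 2^2.75 ≈ 6.7x.
print(sfr_scaling(1.0, 1.0) / sfr_scaling(0.5, 1.0))
```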
|
The competent programmer hypothesis states that most programmers are
competent enough to create correct or almost correct source code. Because this
implies that bugs should usually manifest through small variations of the
correct code, the competent programmer hypothesis is one of the fundamental
assumptions of mutation testing. Unfortunately, it is still unclear if the
competent programmer hypothesis holds and past research presents contradictory
claims. In this article, we provide a new perspective on the competent
programmer hypothesis and its relation to mutation testing. We try to re-create
real-world bugs through chains of mutations to understand if there is a direct
link between mutation testing and bugs. The lengths of these paths help us to
understand if the source code is really almost correct, or if large variations
are required. Our results indicate that while the competent programmer
hypothesis seems to be true, mutation testing is missing important operators to
generate representative real-world bugs.
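To illustrate the idea of re-creating a bug through a chain of mutations (our sketch; the operators are simplified examples, not the paper's operator set):

```python
# Sketch: search for a chain of simple mutations transforming correct code
# into a buggy variant; the chain length indicates how "almost correct" the
# buggy version is relative to the fix.

MUTATIONS = [("<=", "<"), ("<", "<="), ("+", "-"), ("-", "+"), ("==", "!=")]

def mutants(code):
    for old, new in MUTATIONS:
        idx = code.find(old)
        while idx != -1:
            yield code[:idx] + new + code[idx + len(old):]
            idx = code.find(old, idx + 1)

def mutation_distance(correct, buggy, max_depth=3):
    frontier = {correct}
    for depth in range(1, max_depth + 1):
        frontier = {m for c in frontier for m in mutants(c)}
        if buggy in frontier:
            return depth
    return None  # not reachable within max_depth mutations

print(mutation_distance("if a <= b: a + 1", "if a < b: a - 1"))  # -> 2
```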
|
Demanding that charged Nariai black holes in (quasi-)de Sitter space decay
without becoming super-extremal implies a lower bound on the masses of charged
particles, known as the Festina Lente (FL) bound. In this paper we fix the
$\mathcal{O}(1)$ constant in the bound and elucidate various aspects of it, as
well as extensions to $d>4$ and to situations with scalar potentials and
dilatonic couplings. We also discuss phenomenological implications of FL
including an explanation of why the Higgs potential cannot have a local minimum
at the origin, thus explaining why the weak force must be broken. For
constructions of meta-stable dS involving anti-brane uplift scenarios, even
though the throat region is consistent with FL, the bound implies that we
cannot have any light charged matter fields coming from any far-away region in
the compactified geometry, even though such fields are typically expected to
arise in these scenarios. This strongly suggests that the introduction
of warped anti-branes in the throat cannot be decoupled from the bulk dynamics
as is commonly assumed. Finally, we provide some evidence that in certain
situations the FL bound can have implications even with gravity decoupled and
illustrate this in the context of non-compact throats.
|
From small screenshots to large videos, documents take up a large share of
space on a modern smartphone. Documents on a phone can accumulate from various
sources, and with the high storage capacity of mobile devices, hundreds of
documents are accumulated within a short period. However, searching or managing
documents remains
an onerous task, since most search methods depend on meta-information or only
text in a document. In this paper, we showcase that a single modality is
insufficient for classification and present a novel pipeline to classify
documents on-device, thus preventing the transfer of any private user data to a server.
For this task, we integrate an open-source library for Optical Character
Recognition (OCR) and our novel model architecture in the pipeline. We optimise
the model for size, a necessary metric for on-device inference. We benchmark
our classification model on FOOD-101, a standard multimodal dataset, and
showcase results competitive with the previous state of the art at 30% model
compression.
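A minimal late-fusion sketch (ours; the real pipeline uses OCR output and a bespoke architecture, and the feature extractors below are deliberately simple stand-ins) showing how text and image features can be combined for document classification:

```python
# Sketch: late fusion of text (e.g., OCR output) and image features.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

def image_features(img):
    # Stand-in visual descriptor: a coarse grayscale intensity histogram.
    hist, _ = np.histogram(img, bins=16, range=(0, 255), density=True)
    return hist

texts = ["invoice total due 42.00", "boarding pass gate b12"]   # OCR text
images = [np.random.default_rng(i).integers(0, 256, (64, 64)) for i in range(2)]
labels = [0, 1]

text_vec = HashingVectorizer(n_features=256).transform(texts).toarray()
img_vec = np.stack([image_features(im) for im in images])
fused = np.hstack([text_vec, img_vec])                          # late fusion

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.predict(fused))
```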
|
We consider a novel formulation of the dynamic pricing and demand learning
problem, where the evolution of demand in response to posted prices is governed
by a stochastic variant of the popular Bass model with parameters $\alpha,
\beta$ that are linked to the so-called "innovation" and "imitation" effects.
Unlike the more commonly used i.i.d. and contextual demand models, in this
model the posted price not only affects the demand and the revenue in the
current round but also the future evolution of demand, and hence the fraction
of potential market size $m$ that can be ultimately captured. In this paper, we
consider the more challenging incomplete information problem where dynamic
pricing is applied in conjunction with learning the unknown parameters, with
the objective of optimizing the cumulative revenues over a given selling
horizon of length $T$. Equivalently, the goal is to minimize the regret which
measures the revenue loss of the algorithm relative to the optimal expected
revenue achievable under the stochastic Bass model with market size $m$ and
time horizon $T$. Our main contribution is the development of an algorithm that
satisfies a high probability regret guarantee of order $\tilde O(m^{2/3})$,
where the market size $m$ is known a priori. Moreover, we show that no
algorithm can incur smaller order of loss by deriving a matching lower bound.
Unlike most regret analysis results, in the present problem the market size $m$
is the fundamental driver of the complexity; our lower bound, in fact, indicates
that for any fixed $\alpha, \beta$, most non-trivial instances of the problem
have constant $T$ and large $m$. We believe that this insight sets the problem
of dynamic pricing under the Bass model apart from the typical i.i.d. setting
and multi-armed bandit based models for dynamic pricing, which typically focus
only on the asymptotics with respect to time horizon $T$.
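For intuition, a stochastic Bass demand simulation in the spirit of the model (our sketch; the multiplicative price term is a simplified placeholder, not the paper's price response):

```python
# Sketch: discrete-time stochastic Bass diffusion. The adoption hazard mixes
# an "innovation" rate alpha and an "imitation" rate beta over market size m.
import numpy as np

def simulate_bass(m=10000, alpha=0.01, beta=0.3, T=50, price=1.0, seed=0):
    rng = np.random.default_rng(seed)
    adopted = 0
    sales = []
    for _ in range(T):
        hazard = (alpha + beta * adopted / m) * np.exp(-price)  # placeholder price term
        new = rng.binomial(m - adopted, min(hazard, 1.0))
        adopted += new
        sales.append(new)
    return np.array(sales), adopted

sales, total = simulate_bass()
print(total, sales[:10])
```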
|
We focus on the realistic maximization of the uplink minimum
signal-to-interference-plus-noise ratio (SINR) of a general multiple-input
single-output (MISO) system assisted by an intelligent reflecting surface (IRS)
in the large system limit, accounting for hardware impairments (HIs). In
particular, we introduce HIs at both the IRS (IRS-HIs) and the transceiver
(AT-HIs), which are usually neglected despite their inevitable impact.
Specifically, the deterministic
equivalent analysis enables the derivation of the asymptotic weighted
maximum-minimum SINR with HIs by jointly optimizing the HIs-aware receiver, the
transmit power, and the reflect beamforming matrix (RBM). Notably, we obtain
the optimal power allocation and reflect beamforming matrix with low overhead
instead of their frequent necessary computation in conventional MIMO systems
based on the instantaneous channel information. Monte Carlo simulations verify
the analytical results, which show the insightful interplay among the key
parameters and the performance degradation due to HIs.
|
It was suggested that a programmable matter system (composed of multiple
computationally weak mobile particles) should remain connected at all times
since otherwise, reconnection is difficult and may be impossible. At the same
time, it was not clear that allowing the system to disconnect carried a
significant advantage in terms of time complexity. We demonstrate for a
fundamental task, that of leader election, an algorithm where the system
disconnects and then reconnects automatically in a non-trivial way (particles
can move far away from their former neighbors and later reconnect to others).
Moreover, the runtime of the temporarily disconnecting deterministic leader
election algorithm is linear in the diameter. Hence, the disconnecting --
reconnecting algorithm is as fast as previous randomized algorithms. When
comparing to previous deterministic algorithms, we note that some of the
previous work assumed weaker schedulers. Still, the runtime of all the previous
deterministic algorithms that did not assume special shapes of the particle
system (shapes with no holes) was at least quadratic in $n$, where $n$ is the
number of particles in the system. (Moreover, the new algorithm is even faster
in some parameters than the deterministic algorithms that did assume special
initial shapes.)
Since leader election is an important module in algorithms for various other
tasks, the presented algorithm can be useful for speeding up other algorithms
under the assumption of a strong scheduler. This leaves open the question: "can
a deterministic algorithm be as fast as the randomized ones also under weaker
schedulers?"
|
We discuss the prospects of gravitational lensing of gravitational waves
(GWs) coming from core-collapse supernovae (CCSN). As the CCSN GW signal can
only be detected from within our own Galaxy and the local group by current and
upcoming ground-based GW detectors, we focus on microlensing. We introduce a
new technique based on analysis of the power spectrum and association of peaks
of the power spectrum with the peaks of the amplification factor to identify
lensed signals. We validate our method by applying it to CCSN-like mock
signals lensed by a point-mass lens. We find that the lensed and unlensed
signal can be differentiated using the association of peaks by more than one
sigma for lens masses larger than 150 solar masses. We also study the
correlation integral between the power spectra and corresponding amplification
factor. This statistical approach is able to differentiate between unlensed and
lensed signals for lenses as small as 15 solar masses. Further, we demonstrate
that this method can be used to estimate the mass of a lens in case the signal
is lensed. The power spectrum based analysis is general and can be applied to
any broad band signal and is especially useful for incoherent signals.
|
Recently, superconductivity was discovered in the Kagome metal AV$_3$Sb$_5$ (A
= K, Rb, and Cs), which has an ideal Kagome lattice of vanadium. These V-based
superconductors also host a charge density wave (CDW) and a topologically
nontrivial band structure. Here we report ultralow-temperature thermal
conductivity and high-pressure resistance measurements on CsV$_3$Sb$_5$ with
$T_c$ = 2.5 K, the highest among AV$_3$Sb$_5$. A finite residual linear term of
the thermal conductivity at zero magnetic field and its rapid increase in
fields suggest nodal superconductivity. Under pressure, the $T_c$ of
CsV$_3$Sb$_5$ first increases, then decreases to below 0.3 K at 11.4 GPa,
showing a clear first superconducting dome peaked around 0.8 GPa. Above 11.4
GPa, superconductivity re-emerges, suggesting a second superconducting dome.
Both the nodal superconductivity and the two superconducting domes point to
unconventional superconductivity in this V-based superconductor. While our
finding of nodal superconductivity puts a strong constraint on the pairing
state of the first dome, which should be related to the CDW instability, the
superconductivity of the second dome may represent another exotic pairing state
in this ideal Kagome lattice of vanadium.
|
Scientific journals are currently the primary medium used by researchers to
report their research findings. The transformation of print journals into
e-journals has simplified the process of submission and widened access.
Journals are usually published by commercial publishers, learned societies,
and universities, and the number of journals published varies across countries.
This paper attempts to explore whether the number of journals published from a
country influences its research output. The Scopus master journal list is
analysed to identify journals published from 50 selected countries with
significant volumes of research output. The following relationships are
analysed: (a) the number of journals from a country and its research output,
(b) the growth rates of journals and research output for different countries,
(c) the global shares of journals and research output for different countries,
and (d) the subject area-wise number of journals and research output in that
subject area for different countries. Factors like journal packing density are
also analysed. The results show that for the majority of countries, the number
of journals is positively correlated with research output volume, though other
factors also play a role in the growth of research output. The study concludes
with a discussion of the analytical outcomes and provides useful suggestions on
policy perspectives for different countries.
|
Superluminal tunneling of light through a barrier has attracted broad
interest in the last several decades. Despite the observation of such phenomena
in various systems, it has been under intensive debate whether the transmitted
light truly carries the information of the original pulse. Here we report the
observation of an anomalous time response for terahertz electromagnetic pulses
passing through thin metal films, with the pulse shape of the transmitted beam
faithfully resembling that of the incident beam. A causal theoretical analysis
is developed to explain the experiments, though the theory of Special
Relativity may confront a challenge in this exceptional circumstance. These
findings may facilitate future applications in high-speed optical communication
or signal transmission, and may reshape our fundamental understanding of the
tunneling of light.
|
Deep Neural Networks have achieved unprecedented success in the field of face
recognition such that any individual can crawl the data of others from the
Internet without their explicit permission for the purpose of training
high-precision face recognition models, creating a serious violation of
privacy. Recently, a well-known system named Fawkes (published in USENIX
Security 2020) claimed this privacy threat can be neutralized by uploading
cloaked user images instead of their original images. In this paper, we present
Oriole, a system that combines the advantages of data poisoning attacks and
evasion attacks, to thwart the protection offered by Fawkes, by training the
attacker face recognition model with multi-cloaked images generated by Oriole.
Consequently, the face recognition accuracy of the attack model is maintained
and the weaknesses of Fawkes are revealed. Experimental results show that our
proposed Oriole system is able to effectively interfere with the performance of
the Fawkes system to achieve promising attacking results. Our ablation study
highlights multiple principal factors that affect the performance of the Oriole
system, including the DSSIM perturbation budget, the ratio of leaked clean user
images, and the number of multi-cloaks for each uncloaked image. We also
identify and discuss at length the vulnerabilities of Fawkes. We hope that the
new methodology presented in this paper will inform the security community of a
need to design more robust privacy-preserving deep learning models.
|
Quantum correlations, in particular those which enable the violation of a Bell
inequality \cite{BELL}, open a way to advantages in certain communication
tasks. However, the main difficulty in harnessing quantumness is its fragility
to, e.g., noise or loss of particles. We study the persistency of Bell
correlations of GHZ-based mixtures and Dicke states. For the former, we
consider the quantum communication complexity reduction (QCCR) scheme and
propose new Bell inequalities (BIs) which can be used in that scheme for higher
persistency in the limit of a large number of particles $N$. In the case of
Dicke states, we show that persistency can reach $0.482N$, significantly more
than reported in previous studies.
|
Neural network modules conditioned by known priors can be effectively trained
and combined to represent systems with nonlinear dynamics. This work explores a
novel formulation for data-efficient learning of deep control-oriented
nonlinear dynamical models by embedding local model structure and constraints.
The proposed method consists of neural network blocks that represent input,
state, and output dynamics with constraints placed on the network weights and
system variables. For handling partially observable dynamical systems, we
utilize a state observer neural network to estimate the states of the system's
latent dynamics. We evaluate the performance of the proposed architecture and
training methods on system identification tasks for three nonlinear systems: a
continuous stirred tank reactor, a two-tank interacting system, and an
aerodynamic body. Models optimized with a few thousand system state
observations accurately represent system dynamics in open loop simulation over
thousands of time steps from a single set of initial conditions. Experimental
results demonstrate an order of magnitude reduction in open-loop simulation
mean squared error for our constrained, block-structured neural models when
compared to traditional unstructured and unconstrained neural network models.
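A minimal sketch (ours, with hypothetical dimensions) of a block-structured neural state-space model in PyTorch, with separate input, state, and output blocks; the weight-norm penalty is a simplified stand-in for the constraints described above.

```python
# Sketch: block-structured neural state-space model x' = fx(x) + fu(u), y = fy(x).
import torch
import torch.nn as nn

class BlockSSM(nn.Module):
    def __init__(self, nx=8, nu=2, ny=1):
        super().__init__()
        self.fx = nn.Linear(nx, nx)                            # state block
        self.fu = nn.Sequential(nn.Linear(nu, nx), nn.Tanh())  # input block
        self.fy = nn.Linear(nx, ny)                            # output block

    def forward(self, x, u_seq):
        ys = []
        for u in u_seq:                      # roll out over the horizon
            x = self.fx(x) + self.fu(u)
            ys.append(self.fy(x))
        return torch.stack(ys), x

model = BlockSSM()
x0 = torch.zeros(1, 8)
u_seq = torch.randn(20, 1, 2)                # 20-step input sequence
y_pred, _ = model(x0, u_seq)
penalty = model.fx.weight.norm()             # soft constraint on the dynamics block
loss = y_pred.pow(2).mean() + 1e-3 * penalty
loss.backward()
print(y_pred.shape)
```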
|
Third-generation gravitational wave detectors, such as the Einstein Telescope
and Cosmic Explorer, will detect a large number of gravitational-wave (GW)
signals originating from the coalescence of binary neutron star (BNS) and
binary black hole (BBH) systems out to high redshifts, $z\sim 5-10$. There is a
potential concern that some of the GW signals detected at a high statistical
significance eventually overlap with each other, and the parameter estimation
of such an overlapping system can differ from the one expected from a single
event. Also, there are certainly overlapping systems in which one of the
overlapping events has a low signal-to-noise ratio $\lesssim 4$, and is thus
unable to be clearly detected. Such systems may be misidentified as single GW
events, and the estimated binary parameters can be biased. We estimate the
occurrence rate of these overlapping events. We find
that the numbers of overlapping events are $\sim 200$ per day for BNSs and a
few per hour for BBHs. Then we study the statistical impacts of these
overlapping GWs on a parameter estimation based on the Fisher matrix analysis.
Our finding is that the overlapping signals produce neither large statistical
errors nor serious systematic biases on the parameters of binary systems,
unless the coalescence time and the redshifted chirp masses of the two
overlapping GWs are very close to each other, i.e.,
$|\mathcal{M}_{z1}-\mathcal{M}_{z2}|\lesssim10^{-4} \,(10^{-1})\,M_\odot$ and
$|t_{\rm c1}-t_{\rm c2}|\lesssim10^{-2}\,(10^{-1})$\,s for BNSs (BBHs). The
occurrence rate of such a closely overlapping event is shown to be much smaller
than one per year with the third-generation detectors.
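As a reminder of the machinery, a toy Fisher-matrix error estimate via numerical derivatives (our sketch; the signal model is a simple chirp-like stand-in, not a BNS template, and a white-noise inner product is assumed):

```python
# Sketch: Fisher matrix F_ij = (dh/dtheta_i, dh/dtheta_j) with a white-noise
# inner product; parameter errors follow from the inverse Fisher matrix.
import numpy as np

t = np.linspace(0.0, 1.0, 4096)

def signal(amp, f0, fdot):
    return amp * np.sin(2 * np.pi * (f0 + 0.5 * fdot * t) * t)

def fisher(theta, eps=1e-6):
    grads = []
    for i in range(len(theta)):
        dp = np.array(theta, dtype=float); dp[i] += eps
        dm = np.array(theta, dtype=float); dm[i] -= eps
        grads.append((signal(*dp) - signal(*dm)) / (2 * eps))
    G = np.stack(grads)
    return G @ G.T  # white-noise inner product (up to normalization)

F = fisher([1.0, 30.0, 5.0])
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # 1-sigma statistical errors
print(errors)
```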
|
We present the concept of a magnetless Reflective Gyrotropic Spatial Isolator
(RGSI) metasurface. This is a birefringent metasurface that reflects vertically
polarized incident waves into horizontally polarized waves and absorbs
horizontally polarized incident waves, hence providing isolation between the
two orthogonal polarizations. We first synthesize the metasurface using surface
susceptibility-based Generalized Sheet Transition Conditions~(GSTCs). We then
propose a mirror-backed metaparticle implementation of this metasurface, where
transistor-loaded resonators provide the desired magnetless nonreciprocal
response. Finally, we demonstrate the metasurface by full-wave simulation
results. The proposed RGSI metasurface may be used in various electromagnetic
applications, and may also serve as a step towards more sophisticated
magnetless nonreciprocal metasurface systems.
|
In this article, we aim to recover locally conservative and $H(div)$
conforming fluxes for the linear Cut Finite Element Solution with Nitsche's
method for Poisson problems with Dirichlet boundary condition. The computation
of the conservative flux in the Raviart-Thomas space is completely local and
does not require solving any mixed problem. The $L^2$-norm of the difference
between the numerical flux and the recovered flux can then be used as an a
posteriori error estimator in an adaptive mesh refinement procedure.
Theoretically, we prove global reliability and local efficiency. The
theoretical results are verified numerically; moreover, we also observe an
optimal convergence rate for the flux error.
|
We perform explorative analyses of the 3D gluon content of the proton via a
study of (un)polarized twist-2 gluon TMDs, calculated in a spectator model for
the parent nucleon. Our approach encodes a flexible parametrization for the
spectator-mass density, suited to describe both moderate and small-$x$ effects.
All these prospective developments are relevant to the investigation of the
gluon dynamics inside nucleons and nuclei, which constitutes one of the major
goals of new-generation colliders such as the EIC, the HL-LHC, and NICA.
|
The strong alignment of small-scale turbulent Alfv\'enic motions with the
direction of the local magnetic field that percolates the small-scale eddies
is a property that follows from MHD theory and the theory of turbulent
reconnection. The Alfv\'enic eddies
mix magnetic fields perpendicular to the direction of the local magnetic field,
and this type of motion is used to trace magnetic fields with the velocity
gradient technique (VGT). The other type of turbulent motion, fast modes,
induces anisotropies orthogonal to Alfv\'enic eddies and interferes with the
tracing of the magnetic field with the VGT. We report a new effect: in a
magnetically dominated low-$\beta$ subsonic medium, fast modes are very
intermittent and, within a volume of small filling factor, dominate other
turbulent motions. We identify these localized regions as the
cause of the occasional change of direction of gradients in our synthetic
observations. We show that the new technique of measuring the gradients of
gradient amplitudes suppresses the contribution from the fast-mode-dominated
regions, improving the magnetic field tracing. In addition, we show that the
distortion of the gradient measurements by fast modes is also applicable to the
synchrotron intensity gradients, but the effect is reduced compared to the VGT.
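A minimal numpy sketch (ours) of the "gradients of gradient amplitudes" operation described here, applied to a synthetic 2D map:

```python
# Sketch: gradient amplitudes of a 2D map, then gradients of those amplitudes,
# the quantity used to suppress fast-mode-dominated regions in field tracing.
import numpy as np

rng = np.random.default_rng(1)
field = rng.normal(size=(128, 128))          # stand-in for an intensity map

gy, gx = np.gradient(field)
grad_amp = np.hypot(gx, gy)                  # first-order gradient amplitude

ggy, ggx = np.gradient(grad_amp)             # gradients of gradient amplitudes
gga_angle = np.arctan2(ggy, ggx)             # orientation used for tracing
print(grad_amp.mean(), gga_angle.std())
```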
|
This paper presents an approach to improve the forecast of computational
fluid dynamics (CFD) simulations of urban air pollution using deep learning,
and more specifically adversarial training. This adversarial approach aims to
reduce the divergence of the forecasts from the underlying physical model. Our
two-step method integrates a Principal Component Analysis (PCA) based
adversarial autoencoder (PC-AAE) with adversarially trained Long Short-Term
Memory (LSTM) networks. Once the reduced-order model (ROM) of the CFD solution
is obtained via PCA, an adversarial autoencoder is used on the
principal-component time series. Subsequently, an LSTM is adversarially trained
on the latent space produced by the PC-AAE to make forecasts. Once trained, the
adversarially trained LSTM outperforms an LSTM trained in a classical way. The
study area is a busy traffic junction in South London, and the data comprise
three-dimensional velocity vectors.
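A condensed sketch (ours) of the two-step idea: PCA reduction followed by LSTM forecasting in the reduced space. The adversarial components (the AAE and the adversarial training loop) are omitted for brevity, and the data are synthetic.

```python
# Sketch: PCA reduced-order model + LSTM one-step-ahead forecasting.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

snapshots = np.random.default_rng(0).normal(size=(500, 1000))  # time x grid points
pca = PCA(n_components=8)
latent = pca.fit_transform(snapshots)                          # ROM trajectories

X = torch.tensor(latent[:-1], dtype=torch.float32).unsqueeze(1)  # (seq, batch, feat)
Y = torch.tensor(latent[1:], dtype=torch.float32).unsqueeze(1)

lstm = nn.LSTM(8, 32)
head = nn.Linear(32, 8)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)

for _ in range(100):
    out, _ = lstm(X)
    loss = nn.functional.mse_loss(head(out), Y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))
```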
|
Promising searches for new physics beyond the current Standard Model (SM) of
particle physics are feasible through isotope-shift spectroscopy, which is
sensitive to a hypothetical fifth force between the neutrons of the nucleus and
the electrons of the shell. Such an interaction would be mediated by a new
particle which could in principle be associated with dark matter. In so-called
King plots, the mass-scaled frequency shifts of two optical transitions are
plotted against each other for a series of isotopes. Subtle deviations from the
expected linearity could reveal such a fifth force. Here, we study
experimentally and theoretically six transitions in highly charged ions of Ca,
an element with five stable isotopes of zero nuclear spin. Some of the
transitions are suitable for upcoming high-precision coherent laser
spectroscopy and optical clocks. Our results provide a sufficient number of
clock transitions for -- in combination with those of singly charged Ca$^+$ --
application of the generalized King plot method. This will allow future
high-precision measurements to remove higher-order SM-related nonlinearities
and open a new door to yet more sensitive searches for unknown forces and
particles.
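Schematically (our addition, in standard King-plot notation that may differ from the authors'), the linearity being tested relates the modified isotope shifts $\delta\tilde{\nu}^{AA'}=\delta\nu^{AA'}/\mu_{AA'}$ of two transitions $i,j$, with $\mu_{AA'}=1/m_A-1/m_{A'}$: $$\delta\tilde{\nu}_i^{AA'}=K_{ij}+F_{ij}\,\delta\tilde{\nu}_j^{AA'},$$ and a fifth force (or higher-order SM terms) would appear as a nonlinearity across isotope pairs.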
|
This paper and the accompanying Python and C++ framework are the product of
the authors' perceived problems with narrow (discrimination-based) artificial
intelligence (AI). The framework attempts to develop a genetic transfer of
experience through potential structural expressions, using a common
regulation/exchange value (energy) to create a model whereby the neural
architecture and all unit processes are co-dependently developed by genetic and
real-time signal-processing influences; successful routes are defined by the
stability of the spike distribution per epoch, which is influenced by
genetically encoded morphological development biases. These principles are
aimed at creating a diverse and robust network capable of adapting to general
tasks by training within a simulation designed for transfer learning to other
mediums at scale.
|
Relativistic runaway electron avalanches (RREAs) imply a large multiplication
of high energy electrons (~1 MeV). Two factors are necessary for this
phenomenon: a high electric field sustained over a large distance and an
energetic particle to serve as a seed. The former sustains particle energies as
they keep colliding and randomly losing energy, and the latter serves as a
multiplication starting point that promotes avalanches. RREAs are usually
invoked as a possible generation mechanism of both terrestrial gamma-ray
flashes (TGFs) and gamma-ray glows (also known as thunderstorm ground
enhancements, TGEs, when detected at ground level), but current knowledge does
not provide a clear relationship between these two classes of events beyond
their possible common source mechanism, as they have different
characteristics. In particular, their timescales differ by several
orders of magnitude. This work shows that chain reactions by TGF byproducts can
continue for the timescale of gamma-ray glows and even provide energetic
particles as seeds for RREAs of gamma-ray glows.
|
We generalize the correspondence between theories and monads with arities of
arXiv:1101.3064 to $\infty$-categories. Additionally, we introduce the notion
of complete theories that is unique to the $\infty$-categorical case and
provide a completion construction for a certain class of theories. Along the
way we also develop the necessary technical material related to the flagged
bicategory of correspondences and lax functors in the $\infty$-categorical
context.
|
We report on the experimental observation of mirror-enhanced directional
surface-enhanced Raman scattering (SERS) from a self-assembled monolayer of
molecules coupled to a nanowire-nanoparticle (NW-NP) junction on a mirror in a
remote-excitation configuration. Placing the NW-NP junction on a metallic
mirror generates multiple gap plasmon modes which have unique momentum-space
scattering signatures. We perform Fourier-plane imaging of SERS from the NW-NP
junction on a mirror to understand the effect of multiple hotspots on molecular
emission. We systematically study the effect of the ground plane on the
directionality of emission from the NW-NP junction and show that the presence
of a mirror drastically reduces the angular spread of emission. The effect of
multiple hotspots in the
geometry on directionality of molecular emission is studied using 3D numerical
simulations. The results presented here will have implications in understanding
plasmon hybridization in the momentum space and its effects on molecular
emission.
|
In steady-state contingency analysis, the traditional Newton-Raphson
method suffers from non-convergence issues when solving post-outage power flow
problems, which hinders the integrity and accuracy of security assessment. In
this paper, we propose a novel robust contingency analysis approach based on
holomorphic embedding (HE). The HE-based simulator guarantees convergence if
the true power flow solution exists, which is desirable because it avoids the
influence of numerical issues and provides a credible security assessment
conclusion. In addition, based on the multi-area characteristics of real-world
power systems, a partitioned HE (PHE) method is proposed with an
interface-based partitioning of HE formulation. The PHE method does not
undermine the numerical robustness of HE and significantly reduces the
computation burden in large-scale contingency analysis. The PHE method is
further enhanced by parallel or distributed computation to become parallel PHE
(P${}^\mathrm{2}$HE). Tests on a 458-bus system, a synthetic 419-bus system and
a large-scale 21447-bus system demonstrate the advantages of the proposed
methods in robustness and efficiency.
|
The $\alpha$-spectral radius of a connected graph $G$ is the spectral radius
of $A_\alpha$-matrix of $G$. In this paper, we discuss the methods for
comparing $\alpha$-spectral radius of graphs. As applications, we characterize
the graphs with the maximal $\alpha$-spectral radius among all unicyclic and
bicyclic graphs of order $n$ with diameter $d$, respectively. Finally, we
determine the unique graph with maximal signless Laplacian spectral radius
among bicyclic graphs of order $n$ with diameter $d$.
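For reference (our addition, following the convention introduced by Nikiforov), the $A_\alpha$-matrix is the convex combination $$A_\alpha(G)=\alpha D(G)+(1-\alpha)A(G),\qquad \alpha\in[0,1],$$ where $A(G)$ is the adjacency matrix and $D(G)$ the diagonal degree matrix of $G$; $\alpha=0$ recovers the adjacency spectral radius, and $\alpha=1/2$ gives half the signless Laplacian $Q(G)=D(G)+A(G)$.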
|
Machine learning models are known to be vulnerable to adversarial attacks,
namely perturbations of the data that lead to wrong predictions despite being
imperceptible. However, the existence of "universal" attacks (i.e., unique
perturbations that transfer across different data points) has only been
demonstrated for images to date. Part of the reason lies in the lack of a
common domain for geometric data such as graphs, meshes, and point clouds, in
which a universal perturbation can be defined. In this paper, we offer a change
in perspective and demonstrate the existence of universal attacks for geometric
data (shapes). We introduce a computational procedure that operates entirely in
the spectral domain, where the attacks take the form of small perturbations to
short eigenvalue sequences; the resulting geometry is then synthesized via
shape-from-spectrum recovery. Our attacks are universal, in that they transfer
across different shapes, different representations (meshes and point clouds),
and generalize to previously unseen data.
|
The large radii of many hot Jupiters can only be matched by models that have
hot interior adiabats, and recent theoretical work has shown that the interior
evolution of hot Jupiters has a significant impact on their atmospheric
structure. Due to its inflated radius, low gravity, and ultra-hot equilibrium
temperature, WASP-76b is an ideal case study for the impact of internal
evolution on observable properties. Hot interiors should most strongly affect
the non-irradiated side of the planet, and thus full phase curve observations
are critical to ascertain the effect of the interior on the atmospheres of hot
Jupiters. In this work, we present the first Spitzer phase curve observations
of WASP-76b. We find that WASP-76b has an ultra-hot dayside and a relatively
cold nightside, with brightness temperatures of $2471 \pm 27~\mathrm{K}$/$1518
\pm 61~\mathrm{K}$ at $3.6~\micron$ and $2699 \pm 32~\mathrm{K}$/$1259 \pm
44~\mathrm{K}$ at $4.5~\micron$, respectively. These results provide evidence
for a dayside thermal inversion. Both channels exhibit small phase offsets of
$0.68 \pm 0.48^{\circ}$ at $3.6~\micron$ and $0.67 \pm 0.2^{\circ}$ at
$4.5~\mu\mathrm{m}$. We compare our observations to a suite of general
circulation models (GCMs) that consider two end-members of interior temperature along
with a broad range of frictional drag strengths. Strong frictional drag is
necessary to match the small phase offsets and cold nightside temperatures
observed. From our suite of cloud-free GCMs, we find that only cases with a
cold interior can reproduce the cold nightsides and large phase curve amplitude
at $4.5~\micron$, hinting that the hot interior adiabat of WASP-76b does not
significantly impact its atmospheric dynamics or that clouds blanket its
nightside.
|
Many researchers have been concerned with whether social media has a negative
impact on the well-being of its audience. With the popularity of social
networking sites (SNS) steadily increasing, psychological and social sciences
have shown great interest in their effects and consequences on humans. In this
work, we investigate Facebook using the tools of HCI to find connections
between interface features and the concerns raised by these domains. Using an
empirical design analysis, we identify interface interferences impacting users'
online privacy. Through a subsequent survey (n=116), we find usage behaviour
changes due to increased privacy concerns and report individual cases of
addiction and mental health issues. These observations are the results of a
rapidly changing SNS creating a gap of understanding between users'
interactions with the platform and future consequences. We explore how HCI can
help close this gap and work towards more ethical user interfaces in the
future.
|
The full physics potential of the next-generation Deep Underground Neutrino
Experiment (DUNE) is still being explored. In particular, there have been some
recent studies on the possibility of improving DUNE's neutrino energy
reconstruction. The main motivation is that a better determination of the
neutrino energy on an event-by-event basis will translate into an improved
measurement of the Dirac $CP$ phase and other neutrino oscillation parameters.
To further motivate studies and improvements on the neutrino energy
reconstruction, we evaluate the impact of energy resolution at DUNE on an
illustrative new physics scenario, viz. non-standard interactions (NSI) of
neutrinos with matter. We show that a better energy resolution in comparison to
the ones given in the DUNE conceptual and technical design reports may
significantly enhance the experimental sensitivity to NSI, particularly when
degeneracies are present. While a better reconstruction of the first
oscillation peak helps disentangle standard $CP$ effects from those coming
from NSI, we find that the second oscillation peak also plays a nontrivial
role in improving DUNE's sensitivity.
|
Transition metal dichalcogenide (TMD) moir\'e heterostructures provide an
ideal platform to explore the extended Hubbard model, where long-range Coulomb
interactions play a critical role in determining strongly correlated electron
states. This has led to experimental observations of Mott insulator states at
half filling as well as a variety of extended Wigner crystal states at
different fractional fillings. Microscopic understanding of these emerging
quantum phases, however, is still lacking. Here we describe a novel scanning
tunneling microscopy (STM) technique for local sensing and manipulation of
correlated electrons in a gated WS2/WSe2 moir\'e superlattice that enables
experimental extraction of fundamental extended Hubbard model parameters. We
demonstrate that the charge state of local moir\'e sites can be imaged by their
influence on STM tunneling current, analogous to the charge-sensing mechanism
in a single-electron transistor. In addition to imaging, we are also able to
manipulate the charge state of correlated electrons. Discharge cascades of
correlated electrons in the moir\'e superlattice are locally induced by ramping
the STM bias, thus enabling the nearest-neighbor Coulomb interaction
($U_{\rm NN}$) to be estimated. 2D mapping of the moir\'e electron charge
states also enables us
to determine onsite energy fluctuations at different moir\'e sites. Our
technique should be broadly applicable to many semiconductor moir\'e systems,
offering a powerful new tool for microscopic characterization and control of
strongly correlated states in moir\'e superlattices.
|
The idea of coded caching for content distribution networks was introduced by
Maddah-Ali and Niesen, who considered the canonical $(N, K)$ cache network in
which a server with $N$ files satisfies the demands of $K$ users (equipped with
independent caches of size $M$ each). Among other results, their work provided
a characterization of the exact rate memory tradeoff for the problem when
$M\geq\frac{N}{K}(K-1)$. In this paper, we improve this result for large caches
with $M\geq \frac{N}{K}(K-2)$. For the case
$\big\lceil\frac{K+1}{2}\big\rceil\leq N \leq K$, we propose a new coded
caching scheme, and derive a matching lower bound to show that the proposed
scheme is optimal. This extends the characterization of the exact rate memory
tradeoff to the case $M\geq \frac{N}{K}\Big(K-2+\frac{(K-2+1/N)}{(K-1)}\Big)$.
For the case $1\leq N\leq \big\lceil\frac{K+1}{2}\big\rceil$, we derive a new
lower bound, which demonstrates that the scheme proposed by Yu et al. is
optimal and thus extend the characterization of the exact rate memory tradeoff
to the case $M\geq \frac{N}{K}(K-2)$.
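For context (our addition, standard in this literature), the Maddah-Ali-Niesen scheme achieves, at the memory points $M=tN/K$ with integer $t\in\{0,1,\dots,K\}$, the rate $$R(M)=\frac{K-t}{t+1},$$ with intermediate points obtained by memory sharing; the results above characterize where this tradeoff is exactly optimal in the large-cache regime.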
|
Segmentation-based methods are widely used for scene text detection due to
their superiority in describing arbitrary-shaped text instances. However, two
major problems still exist: 1) current label generation techniques are mostly
empirical and lack theoretical support, discouraging elaborate label design; 2)
as a result, most methods rely heavily on text kernel segmentation which is
unstable and requires deliberate tuning. To address these challenges, we
propose a human cognition-inspired framework, termed Conceptual Text Region
Network (CTRNet). The framework utilizes Conceptual Text Regions (CTRs), a
class of cognition-based tools with good mathematical properties that allow
for sophisticated label design. Another component of CTRNet is an
inference pipeline that, with the help of CTRs, completely omits the need for
text kernel segmentation. Compared with previous segmentation-based methods,
our approach is not only more interpretable but also more accurate.
Experimental results show that CTRNet achieves state-of-the-art performance on
benchmark CTW1500, Total-Text, MSRA-TD500, and ICDAR 2015 datasets, yielding
performance gains of up to 2.0%. Notably, to the best of our knowledge, CTRNet
is among the first detection models to achieve F-measures higher than 85.0% on
all four of the benchmarks, with remarkable consistency and stability.
|
Improving the image resolution and acquisition speed of magnetic resonance
imaging (MRI) is a challenging problem. There are mainly two strategies dealing
with the speed-resolution trade-off: (1) $k$-space undersampling with
high-resolution acquisition, and (2) a pipeline of lower resolution image
reconstruction and image super-resolution. However, these approaches either
have limited performance at high acceleration factors or suffer from error
accumulation in the two-step structure. In this paper, we combine the ideas of
MR reconstruction and image super-resolution, and work on recovering
high-resolution (HR) images directly from low-resolution under-sampled
$k$-space data. Particularly, the
SR-involved reconstruction can be formulated as a variational problem, and a
learnable network unrolled from its solution algorithm is proposed. A
discriminator is introduced to enhance the detail-refining performance.
Experimental results using in-vivo HR multi-coil brain data indicate that the
proposed SRR-Net is capable of recovering high-resolution brain images with
both good visual and perceptual quality.
|
In the absence of an initial seed, the Biermann battery term of a non-ideal
induction equation acts as a source that generates weak magnetic fields. These
fields are then amplified via a dynamo mechanism. The Kelvin-Helmholtz
instability is a fluid phenomenon that takes place in many astrophysical
scenarios and can trigger the action of the Biermann battery and dynamo
processes. We aim to investigate the effect that the ionisation degree of the
plasma and the interaction between the charged and neutral species have on the
generation and amplification of magnetic fields during the different stages of
the instability. We use the two-fluid model implemented in the numerical code
Mancha-2F. We perform 2D simulations starting from a configuration with no
initial magnetic field and which is unstable due to a velocity shear. We vary
the ionisation degree of the plasma and we analyse the role that the different
collisional terms included in the equations of the model play on the evolution
of the instability and the generation of magnetic field. We find that when no
collisional coupling is considered between the two fluids, the effect of the
Biermann battery mechanism does not depend on the ionisation degree. However,
when elastic collisions are taken into account, the generation of magnetic
field is increased as the ionisation degree is reduced. This behaviour is
slightly enhanced if the process of charge-exchange is also considered. We also
find a dependence on the total density of the plasma related to the dependence
on the coupling degree between the two fluids. As the total density is
increased, the results from the two-fluid model converge to the predictions of
single-fluid models.
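For reference (our addition; sign conventions vary across the literature), the Biermann battery source term in the generalized induction equation is $$\frac{\partial \mathbf{B}}{\partial t}\supset \frac{c\,\nabla p_e \times \nabla n_e}{e\,n_e^{2}},$$ which is nonzero only where the electron pressure and density gradients are misaligned, as occurs in the rolls of the Kelvin-Helmholtz instability.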
|
High-angular-resolution cosmic microwave background experiments provide a
unique opportunity to conduct a survey of time-variable sources at millimeter
wavelengths, a population which has primarily been understood through follow-up
measurements of detections in other bands. Here we report the first results of
an astronomical transient survey with the South Pole Telescope (SPT) using the
SPT-3G camera to observe 1500 square degrees of the southern sky. The
observations took place from March to November 2020 in three bands centered at
95, 150, and 220 GHz. This survey yielded the detection of fifteen transient
events from sources not previously detected by the SPT. The majority are
associated with variable stars of different types, expanding the number of such
detected flares by more than a factor of two. The stellar flares are
unpolarized and bright, in some cases exceeding 1 Jy, and have durations from a
few minutes to several hours. Another population of detected events last for
2--3 weeks and appear to be extragalactic in origin. Though data availability
at other wavelengths is limited, we find evidence for concurrent optical
activity for two of the stellar flares. Future data from SPT-3G and forthcoming
instruments will provide real-time detection of millimeter-wave transients on
timescales of minutes to months.
|
The Fermilab Muon $g-2$ collaboration recently announced the first result of
its measurement of the muon anomalous magnetic moment ($g-2$), which confirmed the
previous result at the Brookhaven National Laboratory and thus the discrepancy
with its Standard Model prediction. We revisit low-scale supersymmetric models
that are naturally capable of solving the muon $g-2$ anomaly, focusing on two
distinct scenarios: chargino-contribution dominated and pure-bino-contribution
dominated scenarios. It is shown that slepton pair-production searches have
excluded broad regions of parameter space for both scenarios, but have not
closed them yet. For the chargino-dominated scenario, models with
$m_{\tilde{\mu}_{\rm L}}\gtrsim m_{\tilde{\chi}^{\pm}_1}$ are still widely
allowed. For the bino-dominated scenario, we find that, although slightly
non-trivial, the region with low $\tan \beta$ and heavy higgsinos is
preferred. In the case of
universal slepton masses, the low mass regions with $m_{\tilde{\mu}}\lesssim
230$ GeV can explain the $g-2$ anomaly while satisfying the LHC constraints.
Furthermore, we check that stau-bino coannihilation works properly to realize
the bino thermal relic dark matter. We also investigate the heavy-stau case for
the bino-dominated scenario, where the parameter region that can explain the
muon $g-2$ anomaly extends up to $m_{\tilde{\mu}}\lesssim 1.3$ TeV.
|
Overparametrized neural networks, where the number of active parameters is
larger than the sample size, prove remarkably effective in modern deep learning
practice. From the classical perspective, however, far fewer parameters are
sufficient for optimal estimation and prediction, whereas overparametrization
can be harmful even in the presence of explicit regularization. To reconcile
this conflict, we present a generalization theory for overparametrized ReLU
networks by incorporating an explicit regularizer based on the scaled variation
norm. Interestingly, this regularizer is equivalent to the ridge from the angle
of gradient-based optimization, but is similar to the group lasso in terms of
controlling model complexity. By exploiting this ridge-lasso duality, we show
that overparametrization is generally harmless to two-layer ReLU networks. In
particular, the overparametrized estimators are minimax optimal up to a
logarithmic factor. By contrast, we show that overparametrized random feature
models suffer from the curse of dimensionality and thus are suboptimal.
|
Since planet occurrence and primordial atmospheric retention probability
increase with period, the occurrence-weighted median planets discovered by
transit surveys may bear little resemblance to the low-occurrence, short-period
planets sculpted by atmospheric escape ordinarily used to calibrate
mass--radius relations and planet formation models. An occurrence-weighted
mass--radius relation for the low-mass planets discovered so far by transit
surveys orbiting solar-type stars requires both occurrence-weighted median
Earth-mass and Neptune-mass planets to have a few percent of their masses in
hydrogen/helium (H/He) atmospheres. Unlike the Earth that finished forming long
after the protosolar nebula was dissipated, these occurrence-weighted median
Earth-mass planets must have formed early in their systems' histories. The
existence of significant H/He atmospheres around Earth-mass planets confirms an
important prediction of the core-accretion model of planet formation. It also
implies core masses $M_{\text{c}}$ in the range $2~M_{\oplus}\lesssim
M_{\text{c}}\lesssim 8~M_{\oplus}$ that can retain their primordial
atmospheres. If atmospheric escape is driven by photoevaporation due to extreme
ultraviolet (EUV) flux, then our observation requires a reduction in the
fraction of incident EUV flux converted into work usually assumed in
photoevaporation models. If atmospheric escape is core driven, then the
occurrence-weighted median Earth-mass planets must have large Bond albedos. In
contrast to Uranus and Neptune that have at least 10% of their masses in H/He
atmospheres, these occurrence-weighted median Neptune-mass planets are H/He
poor. The implication is that they experienced collisions or formed in much
shorter-lived and/or hotter parts of their parent protoplanetary disks than
Uranus and Neptune's formation location in the protosolar nebula.
|
Subgradient and Newton algorithms for nonsmooth optimization require
generalized derivatives to satisfy subtle approximation properties:
conservativity for the former and semismoothness for the latter. Though these
two properties originate in entirely different contexts, we show that in the
semi-algebraic setting they are equivalent. Both properties for a generalized
derivative simply require it to coincide with the standard directional
derivative on the tangent spaces of some partition of the domain into smooth
manifolds. An appealing byproduct is a new short proof that semi-algebraic maps
are semismooth relative to the Clarke Jacobian.
|
Lung nodule detection from 3D Computed Tomography scans plays a vital role in
efficient lung cancer screening. Despite the state-of-the-art (SOTA)
performance obtained by recent anchor-based detectors using CNNs for this task,
they require predetermined anchor parameters such as the size, number, and
aspect ratio of anchors, and have limited robustness when dealing with lung
nodules of a massive variety of sizes. To overcome these problems, we propose a
3D sphere representation-based center-points matching detection network
(SCPM-Net) that is anchor-free and automatically predicts the position, radius,
and offset of nodules without the manual design of nodule/anchor parameters.
The SCPM-Net
consists of two novel components: sphere representation and center points
matching. First, to match the nodule annotation in clinical practice, we
replace the commonly used bounding box with our proposed bounding sphere to
represent nodules with the centroid, radius, and local offset in 3D space. A
compatible sphere-based intersection over-union loss function is introduced to
train the lung nodule detection network stably and efficiently. Second, we make
the network anchor-free by designing a positive center-points selection and
matching (CPM) process, which naturally discards pre-determined anchor boxes. An
online hard example mining and re-focal loss subsequently enable the CPM
process to be more robust, resulting in more accurate point assignment and
mitigation of class imbalance. In addition, to better capture spatial
information and 3D context for the detection, we propose to fuse multi-level
spatial coordinate maps with the feature extractor and combine them with 3D
squeeze-and-excitation attention modules. Experimental results on the LUNA16
dataset showed that our proposed framework achieves superior performance
compared with existing anchor-based and anchor-free methods for lung nodule
detection.
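
To make the sphere-based overlap concrete, here is a minimal Python sketch of an IoU between two bounding spheres, computed from the exact lens volume of two intersecting spheres; a loss could then be taken as 1 - IoU. The exact-geometry formula is an assumption for illustration -- the actual SCPM-Net loss may use a differentiable approximation.

    import numpy as np

    def sphere_volume(r):
        return 4.0 / 3.0 * np.pi * r**3

    def sphere_intersection_volume(c1, r1, c2, r2):
        # Exact lens volume of two spheres with center distance d.
        d = np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float))
        if d >= r1 + r2:               # disjoint spheres
            return 0.0
        if d <= abs(r1 - r2):          # one sphere contains the other
            return sphere_volume(min(r1, r2))
        return (np.pi * (r1 + r2 - d)**2
                * (d**2 + 2.0 * d * (r1 + r2) - 3.0 * (r1 - r2)**2)) / (12.0 * d)

    def sphere_iou(c1, r1, c2, r2):
        inter = sphere_intersection_volume(c1, r1, c2, r2)
        union = sphere_volume(r1) + sphere_volume(r2) - inter
        return inter / union

    # Two unit spheres whose centers are one radius apart: IoU = 5/27.
    print(sphere_iou([0, 0, 0], 1.0, [1, 0, 0], 1.0))  # ~0.185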
|
The idea of possibly modifying the theory of gravity, whether in the Newtonian
or general relativistic premises, has been around for quite some time, and such
modifications have been invoked to address astrophysical and cosmological
problems. However, none of the modified Newtonian theories has been derived
from first principles. Here, we modify Poisson's equation and propose two
possible ways to modify the law of gravitation, both of which, however, reduce
to Newton's law far away from the center of the source. Based on these modified
Newton's laws, we attempt to solve problems related to white dwarfs. There is
observational evidence for significant violation of the Chandrasekhar
mass-limit: it could be sub- as well as super-Chandrasekhar. We show that
modified Newton's law, obtained by modifying either the LHS or the RHS of
Poisson's equation, can explain them.
|
Within the framework of canonical type-I seesaw, a feebly interacting massive
particle (FIMP) $\chi$ is introduced as a dark matter candidate. The
leptogenesis mechanism and dark matter relic density share a common origin via
decays of Majorana neutrinos $N$. Provided an additional species $\varphi$
whose energy density red-shifts as $\rho_{\varphi}\propto a^{-(4+n)}$, the
Hubble expansion rate is larger than in the standard scenario, i.e., the
Universe expands faster. The consequences of such a fast expanding Universe
(FEU) on leptogenesis as well as FIMP dark matter are investigated in detail.
We demonstrate a significant impact on the final baryon asymmetry and dark
matter abundance due to the existence of $\varphi$ in the strong washout
scenario, while for the weak washout scenario the effects of the FEU are
relatively small.
We introduce scale factors $F_L$ and $F_\chi$ to describe the corresponding
effects of FEU. A semi-analytical approach to derive the efficiency factors
$\eta_L$ and $\eta_\chi$ in FEU is also discussed. The viable parameter space
for successful thermal leptogenesis and correct FIMP DM relic density is obtained
for standard cosmology and FEU. Our results show that it is possible to
distinguish different cosmology scenarios for strong washout cases.
|
In this paper, we study a class of Schr\"{o}dinger systems with linear and
nonlinear couplings
\[
\begin{cases}
-\Delta u_1-\lambda_1 u_1=\mu_1 |u_1|^{p_1-2}u_1+r_1\beta
|u_1|^{r_1-2}u_1|u_2|^{r_2}+\kappa (x)u_2~\hbox{in}~\mathbb{R}^N,\\
-\Delta u_2-\lambda_2 u_2=\mu_2 |u_2|^{p_2-2}u_2+r_2\beta
|u_1|^{r_1}|u_2|^{r_2-2}u_2+\kappa (x)u_1~\hbox{in}~\mathbb{R}^N,\\
u_1\in H^1(\mathbb{R}^N),\ u_2\in H^1(\mathbb{R}^N),
\end{cases}
\]
with the constraints $$\int_{\mathbb{R}^N} u_1^2=a_1^2, \quad
\int_{\mathbb{R}^N} u_2^2=a_2^2,$$ where $N\geq 2$, $\mu_1,\mu_2,a_1,a_2>0$,
$\beta\in\mathbb{R}$, $2<p_1,p_2<2^*$, $2<r_1+r_2<2^*$, $\kappa(x)\in
L^{\infty}(\mathbb{R}^N)$ has fixed sign, and $\lambda_1,\lambda_2$ are
Lagrange multipliers. We use Ekeland's variational principle to prove that this
system has a normalized radially symmetric solution in the $L^2$-subcritical
case for $N\geq 2$, and use a minimax method to prove that it has a normalized
radially symmetric positive solution in the $L^2$-supercritical case for $N=3$,
$p_1=p_2=4,\ r_1=r_2=2$.
|
We present radiative hydrodynamic simulations of solar flares generated by
the RADYN and RH codes to study the perturbations induced in photospheric Fe I
lines by electron beam heating. We investigate how variations in the beam
parameters result in discernible differences in the induced photospheric
velocities. Line synthesis revealed a significant chromospheric contribution to
the line profiles, resulting in an apparent red asymmetry of as much as 40 m/s
close to the time of maximum beam heating, which was not reflective of the
upflow velocities that arose from the radiative hydrodynamic simulations at
those times. The apparent redshift to the overall line profile was produced by
significant chromospheric emission that was blueshifted by as much as 400 m/s
and filled in the blue side of the near-stationary photospheric absorption
profile. The velocity information that can be retrieved from photospheric line
profiles during flares must therefore be treated with care to mitigate the
effects of higher parts of the atmosphere providing an erroneous velocity
signal.
|
We develop a theory of vector spaces spanned by orbit-finite sets. Using this
theory, we give a decision procedure for equivalence of weighted register
automata, which are the common generalization of weighted automata and register
automata for infinite alphabets. The algorithm runs in exponential time, and in
polynomial time for a fixed number of registers. As a special case, we can
decide, with the same complexity, language equivalence for unambiguous register
automata, which improves previous results in three ways: (a) we allow for order
comparisons on atoms, and not just equality; (b) the complexity is
exponentially better; and (c) we allow automata with guessing.
|
In this paper, we present the submitted system for the third DIHARD Speech
Diarization Challenge from the DKU-Duke-Lenovo team. Our system consists of
several modules: voice activity detection (VAD), segmentation, speaker
embedding extraction, attentive similarity scoring, and agglomerative
hierarchical clustering. In addition, target-speaker VAD (TSVAD) is used for the phone
call data to further improve the performance. Our final submitted system
achieves a DER of 15.43% for the core evaluation set and 13.39% for the full
evaluation set on task 1, and a DER of 21.63% for the core evaluation set and
18.90% for the full evaluation set on task 2.
|
The local stability of ion-temperature gradient driven mode (ITG) in the
presence of a given radial electric field is investigated using gyrokinetic
theory and ballooning mode formalism with toroidal effect accounted for. It is
found that zero-frequency radial-electric-field-induced poloidal rotation can
significantly stabilize the ITG, while the associated density perturbation has
little effect on ITG stability due to the modification of the
finite-orbit-width effect. However, the parallel mode structure is slightly
affected by the evenly symmetric density modulation of the zero-frequency zonal
flow (ZFZF).
|
Despite the prominence of neural abstractive summarization models, we know
little about how they actually form summaries and how to understand where their
decisions come from. We propose a two-step method to interpret summarization
model decisions. We first analyze the model's behavior by ablating the full
model to categorize each decoder decision into one of several generation modes:
roughly, is the model behaving like a language model, is it relying heavily on
the input, or is it somewhere in between? After isolating decisions that do
depend on the input, we explore interpreting these decisions using several
different attribution methods. We compare these techniques based on their
ability to select content and reconstruct the model's predicted token from
perturbations of the input, thus revealing whether highlighted attributions are
truly important for the generation of the next token. While this machinery can
be broadly useful even beyond summarization, we specifically demonstrate its
capability to identify phrases the summarization model has memorized and
determine where in the training pipeline this memorization happened, as well as
study complex generation phenomena like sentence fusion on a per-instance
basis.
|
Over the past decade, many approaches have been introduced for traffic speed
prediction. However, providing fine-grained, accurate, time-efficient, and
adaptive traffic speed prediction for a growing transportation network where
the size of the network keeps increasing and new traffic detectors are
constantly deployed has not been well studied. To address this issue, this
paper presents DistTune based on Long Short-Term Memory (LSTM) and the
Nelder-Mead method. Whenever encountering an unprocessed detector, DistTune
decides if it should customize an LSTM model for this detector by comparing the
detector with other processed detectors in terms of the normalized traffic
speed patterns they have observed. If similarity is found, DistTune directly
shares an existing LSTM model with this detector to achieve time-efficient
processing. Otherwise, DistTune customizes an LSTM model for the detector to
achieve fine-grained prediction. To make DistTune even more time-efficient,
DistTune runs on a cluster of computing nodes in parallel. To achieve
adaptive traffic speed prediction, DistTune also provides LSTM re-customization
for detectors that suffer from unsatisfactory prediction accuracy due to, for
instance, traffic speed pattern changes. Extensive experiments based on traffic
data collected from freeway I5-N in California are conducted to evaluate the
performance of DistTune. The results demonstrate that DistTune provides
fine-grained, accurate, time-efficient, and adaptive traffic speed prediction
for a growing transportation network.
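
A minimal sketch of the model-sharing decision, assuming similarity is judged by a normalized Euclidean distance between standardized speed patterns; the metric, threshold, and function names are illustrative rather than DistTune's exact criterion.

    import numpy as np

    def normalize(series):
        # Zero-mean, unit-variance scaling of a speed time series.
        s = np.asarray(series, float)
        return (s - s.mean()) / (s.std() + 1e-12)

    def find_shareable_model(new_pattern, processed, threshold=0.5):
        # `processed` maps detector id -> speed pattern (equal length).
        # Returns an id whose LSTM model can be shared, or None to
        # trigger customization of a new model.
        q = normalize(new_pattern)
        best_id, best_d = None, np.inf
        for det_id, pattern in processed.items():
            d = np.linalg.norm(q - normalize(pattern)) / np.sqrt(len(q))
            if d < best_d:
                best_id, best_d = det_id, d
        return best_id if best_d < threshold else None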
|
This paper develops a fractional stochastic partial differential equation
(SPDE) to model the evolution of a random tangent vector field on the unit
sphere. The SPDE is governed by a fractional diffusion operator to model the
L\'{e}vy-type behaviour of the spatial solution, a fractional derivative in
time to depict the intermittency of its temporal solution, and is driven by
vector-valued fractional Brownian motion on the unit sphere to characterize its
temporal long-range dependence. The solution to the SPDE is presented in the
form of the Karhunen-Lo\`{e}ve expansion in terms of vector spherical
harmonics. Its covariance matrix function is established as a tensor field on
the unit sphere that is an expansion of Legendre tensor kernels. The variance
of the increments and approximations to the solutions are studied and
convergence rates of the approximation errors are given. It is demonstrated how
these convergence rates depend on the decay of the power spectrum and variances
of the fractional Brownian motion.
|
Since the advent of graphene ushered the era of two-dimensional materials,
many forms of hydrogenated graphene have been reported, exhibiting diverse
properties ranging from a tunable band gap to ferromagnetic ordering. Patterned
hydrogenated graphene with micron-scale patterns has been fabricated by
lithographic means. Here we report successful millimeter-scale synthesis of an
intrinsically honeycomb patterned form of hydrogenated graphene on Ru(0001) by
epitaxial growth followed by hydrogenation. Combining scanning tunneling
microscopy observations with density-functional-theory (DFT) calculations, we
reveal that an atomic-hydrogen layer intercalates between graphene and
Ru(0001). The result is a hydrogen honeycomb structure that serves as a
template for the final hydrogenation, which converts the graphene into graphane
only over the template, yielding honeycomb-patterned hydrogenated graphene
(HPHG). In effect, HPHG is a form of patterned graphane. DFT calculations find
that the unhydrogenated graphene regions embedded in the patterned graphane
exhibit spin-polarized edge states. This type of growth mechanism provides new
pathways for the fabrication of intrinsically patterned graphene-based
materials.
|
The interlayer van der Waals interaction in twisted bilayer graphene (tBLG)
induces both in-plane and out-of-plane atomic displacements showing complex
patterns that depend on the twist angle. In particular, for small twist angles,
within each graphene layer, the relaxations give rise to a vortex-like
displacement pattern which is known to affect the dispersion of the flat bands.
Here, we focus on yet another structural property, the chirality of the twisted
bilayer. We perform first-principles calculations based on density functional
theory to investigate the properties induced by twist chirality in both real
and momentum space. In real space, we study the interplay between twist
chirality and atomic relaxation patterns. In momentum space, we investigate the
spin textures around the $K$ points of the Brillouin zone, showing that
alternating vortex-like textures are correlated with the chirality of tBLG.
Interestingly, the helicity of each vortex is inverted by changing the
chirality while the different twist angles also modify the spin textures. We
discuss the origin of the spin textures by calculating the layer weights and
using plot regression models.
|
Super-resolution (SR) is a coveted image processing technique for mobile apps
ranging from the basic camera apps to mobile health. Existing SR algorithms
rely on deep learning models with significant memory requirements, so they have
yet to be deployed on mobile devices and instead operate in the cloud to
achieve feasible inference time. This shortcoming prevents existing SR methods
from being used in applications that require near real-time latency. In this
work, we demonstrate state-of-the-art latency and accuracy for on-device
super-resolution using a novel hybrid architecture called SplitSR and a novel
lightweight residual block called SplitSRBlock. The SplitSRBlock supports
channel-splitting, allowing the residual blocks to retain spatial information
while reducing the computation in the channel dimension. SplitSR has a hybrid
design consisting of standard convolutional blocks and lightweight residual
blocks, allowing people to tune SplitSR for their computational budget. We
evaluate our system on a low-end ARM CPU, demonstrating both higher accuracy
and up to 5 times faster inference than previous approaches. We then deploy our
model onto a smartphone in an app called ZoomSR to demonstrate the first-ever
instance of on-device, deep learning-based SR. We conducted a user study with
15 participants to have them assess the perceived quality of images that were
post-processed by SplitSR. Relative to bilinear interpolation -- the existing
standard for on-device SR -- participants showed a statistically significant
preference when looking at both images (Z=-9.270, p<0.01) and text (Z=-6.486,
p<0.01).
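
A hypothetical PyTorch reconstruction of a channel-splitting residual block in the spirit of SplitSRBlock: only a fraction of the channels passes through the convolutions while the rest is carried over unchanged, which preserves spatial information while reducing channel-dimension computation. The split ratio and layer layout are assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class SplitSRBlock(nn.Module):
        # Convolve only the first `split_ratio` fraction of channels;
        # the remaining channels pass through untouched.
        def __init__(self, channels, split_ratio=0.25):
            super().__init__()
            self.active = max(1, int(channels * split_ratio))
            self.body = nn.Sequential(
                nn.Conv2d(self.active, self.active, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(self.active, self.active, 3, padding=1),
            )

        def forward(self, x):
            a, b = x[:, :self.active], x[:, self.active:]
            return torch.cat([self.body(a) + a, b], dim=1)

    x = torch.randn(1, 64, 32, 32)
    print(SplitSRBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])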
|
We present GeoSP, a parallel method that creates a parcellation of the
cortical mesh based on a geodesic distance, in order to consider gyri and sulci
topology. The method represents the mesh with a graph and performs a K-means
clustering in parallel. It has two modes of use: by default, it performs the
geodesic cortical parcellation based on the boundaries of the anatomical
parcels provided by the Desikan-Killiany atlas. The other mode performs the
complete parcellation of the cortex. Results for both modes and with different
values for the total number of sub-parcels show homogeneous sub-parcels.
Furthermore, the execution time is 82 s for the whole cortex mode and 18 s for
the Desikan-Killiany atlas subdivision, for a parcellation into 350
sub-parcels. The proposed method will be available to the community to perform
the evaluation of data-driven cortical parcellations. As an example, we
compared GeoSP parcellation with Desikan-Killiany and Destrieux atlases in 50
subjects, obtaining more homogeneous parcels for GeoSP and minor differences in
structural connectivity reproducibility across subjects.
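
An illustrative Python sketch of geodesic K-means on a mesh graph, the core idea behind GeoSP (which additionally parallelizes the clustering and can restrict it to Desikan-Killiany parcels); the graph construction and medoid update below are assumptions for illustration.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra

    def geodesic_kmeans(vertices, edges, k, iters=10, seed=0):
        # vertices: (n, 3) coordinates; edges: list of (i, j) pairs.
        n = len(vertices)
        rng = np.random.default_rng(seed)
        i, j = np.asarray(edges).T
        w = np.linalg.norm(vertices[i] - vertices[j], axis=1)
        g = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
        seeds = rng.choice(n, size=k, replace=False)
        for _ in range(iters):
            dist = dijkstra(g, indices=seeds)      # (k, n) geodesic distances
            labels = dist.argmin(axis=0)           # nearest-seed assignment
            for c in range(k):                     # medoid update per parcel
                members = np.flatnonzero(labels == c)
                if len(members):
                    within = dijkstra(g, indices=members)[:, members]
                    seeds[c] = members[within.sum(axis=1).argmin()]
        return labels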
|
Neural network classifiers are vulnerable to misclassification of adversarial
samples, for which the current best defense trains classifiers with adversarial
samples. However, adversarial samples are not optimal for steering attack
convergence, based on the minimization at the core of adversarial attacks. The
minimization perturbation term can be minimized towards $0$ by replacing
adversarial samples in training with duplicated original samples, labeled
differently only for training. Using only original samples, Target Training
eliminates the need to generate adversarial samples for training against all
attacks that minimize perturbation. In low-capacity classifiers and without
using adversarial samples, Target Training exceeds both default CIFAR10
accuracy ($84.3$%) and current best defense accuracy (below $25$%) with $84.8$%
against CW-L$_2$($\kappa=0$) attack, and $86.6$% against DeepFool. Using
adversarial samples against attacks that do not minimize perturbation, Target
Training exceeds current best defense ($69.1$%) with $76.4$% against
CW-L$_2$($\kappa=40$) in CIFAR10.
|
Entity linking -- the task of identifying references in free text to relevant
knowledge base representations -- often focuses on single languages. We
consider multilingual entity linking, where a single model is trained to link
references to same-language knowledge bases in several languages. We propose a
neural ranker architecture, which leverages multilingual transformer
representations of text to be easily applied to a multilingual setting. We then
explore how a neural ranker trained in one language (e.g. English) transfers to
an unseen language (e.g. Chinese), and find a consistent but not large drop
in performance. How can this drop in performance be
alleviated? We explore adding an adversarial objective to force our model to
learn language-invariant representations. We find that using this approach
improves recall in several datasets, often matching the in-language
performance, thus alleviating some of the performance loss occurring from
zero-shot transfer.
|
We analyze Hegselmann-Krause opinion formation models with leadership in
the presence of time delay effects. In particular, we consider a model with a
pointwise time-varying delay and a model with a distributed delay. In
both cases we show that, when the delays satisfy suitable smallness conditions,
then the leader can control the system, leading the group to any prefixed
state. Some numerical tests illustrate our analytical results.
|
In this work we prove the uniqueness of solutions to the nonlocal linear
equation $L \varphi - c(x)\varphi = 0$ in $\mathbb{R}$, where $L$ is an
elliptic integro-differential operator, in the presence of a positive solution
or of an odd solution vanishing only at zero. As an application, we deduce the
nondegeneracy of layer solutions (bounded and monotone solutions) to the
semilinear problem $L u = f(u)$ in $\mathbb{R}$ when the nonlinearity is of
Allen-Cahn type. To our knowledge, this is the first work where such uniqueness
and nondegeneracy results are proven in the nonlocal framework when the
Caffarelli-Silvestre extension technique is not available. Our proofs are based
on a nonlocal Liouville-type method developed by Hamel, Ros-Oton, Sire, and
Valdinoci for nonlinear problems in dimension two.
|
Predicting the final folded structure of protein molecules and simulating
their folding pathways is of crucial importance for designing antiviral drugs and
studying diseases such as Alzheimer's at the molecular level. To this end, this
paper investigates the problem of protein conformation prediction under the
constraint of avoiding high-entropy-loss routes during folding. Using the
well-established kinetostatic compliance (KCM)-based nonlinear dynamics of a
protein molecule, this paper formulates the protein conformation prediction as
a pointwise optimal control synthesis problem cast as a quadratic program (QP).
It is shown that the KCM torques in the protein folding literature can be
utilized for defining a reference vector field for the QP-based control
generation problem. The resulting kinetostatic control torque inputs will be
close to the KCM-based reference vector field and guaranteed to be constrained
by a predetermined bound; hence, high-entropy-loss routes during folding are
avoided while the energy of the molecule is decreased.
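
When the only constraint is a box bound on the torques, the described QP decouples coordinatewise and its minimizer is a clipping of the KCM reference field; below is a minimal sketch under this simplified assumption (the paper's full QP may carry additional constraints and require a proper QP solver).

    import numpy as np

    def kinetostatic_control(tau_kcm, bound):
        # Pointwise-optimal torque: argmin ||tau - tau_kcm||^2 subject
        # to |tau_i| <= bound. With a pure box constraint the QP
        # decouples and the solution is coordinate-wise clipping.
        return np.clip(tau_kcm, -bound, bound)

    tau_ref = np.array([3.1, -0.4, 7.8])  # sample of the KCM reference field
    print(kinetostatic_control(tau_ref, bound=2.0))  # [ 2.  -0.4  2. ]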
|
In a recent Letter [T.~Dornheim \emph{et al.}, Phys.~Rev.~Lett.~\textbf{125},
085001 (2020)], we have presented the first \emph{ab initio} results for the
nonlinear density response of electrons in the warm dense matter regime. In the
present work, we extend these efforts by carrying out extensive new path
integral Monte Carlo (PIMC) simulations of a \emph{ferromagnetic} electron gas
that is subject to an external harmonic perturbation. This allows us to
unambiguously quantify the impact of spin-effects on the nonlinear density
response of the warm dense electron gas. In addition to their utility for the
description of warm dense matter in an external magnetic field, our results
further advance our current understanding of the uniform electron gas as a
fundamental model system, which is important in its own right.
|
This article presents a method that uses turn-by-turn beam position data and
k-modulation data to measure the calibration factors of beam position monitors
in high energy accelerators. In this method, new algorithms have been developed
to reduce the effect of coupling and other sources of uncertainty, allowing
accurate estimates of the calibration factors. Simulations with known sources
of errors indicate that calibration factors can be recovered with an accuracy
of 0.7% rms for arc beam position monitors and an accuracy of 0.4% rms for
interaction region beam position monitors. The calibration factors are also
obtained from LHC experimental data and are used to evaluate the effect this
calibration has on a quadrupole correction estimated with the action and phase
jump method for an interaction region of the LHC.
|
Point cloud registration is a common step in many 3D computer vision tasks
such as object pose estimation, where a 3D model is aligned to an observation.
Classical registration methods generalize well to novel domains but fail when
given a noisy observation or a bad initialization. Learning-based methods, in
contrast, are more robust but lack generalization capacity. We propose to
consider iterative point cloud registration as a reinforcement learning task
and, to this end, present a novel registration agent (ReAgent). We employ
imitation learning to initialize its discrete registration policy based on a
steady expert policy. Integration with policy optimization, based on our
proposed alignment reward, further improves the agent's registration
performance. We compare our approach to classical and learning-based
registration methods on both ModelNet40 (synthetic) and ScanObjectNN (real
data) and show that our ReAgent achieves state-of-the-art accuracy. The
lightweight architecture of the agent, moreover, enables reduced inference time
as compared to related approaches. In addition, we apply our method to the
object pose estimation task on real data (LINEMOD), outperforming
state-of-the-art pose refinement approaches.
|
We construct non-Abelian analogs for some KdV type equations, including the
(rational form of) exponential Calogero--Degasperis equation and
generalizations of the Schwarzian KdV equation. Equations and differential
substitutions under study contain arbitrary non-Abelian parameters.
|
1I/'Oumuamua (or 1I) and 2I/Borisov (or 2I), the first InterStellar Objects
(ISOs) discovered passing through the solar system, have opened up entirely new
areas of exobody research. Finding additional ISOs and planning missions to
intercept or rendezvous with these bodies will greatly benefit from knowledge
of their likely orbits and arrival rates. Here, we use the local velocity
distribution of stars from the Gaia Early Data Release 3 Catalogue of Nearby
Stars and a standard gravitational focusing model to predict the velocity
dependent flux of ISOs entering the solar system. With a 1I-type ISO number
density of $\sim$0.1 AU$^{-3}$, we predict that a total of $\sim$6.9 such
objects per year should pass within 1 AU of the Sun. There will be a fairly
large high-velocity tail to this flux, with half of the incoming ISOs predicted
to have a velocity at infinity, v$_{\infty}$, $>$ 40 km s$^{-1}$. Our model
predicts that $\sim$92\% of incoming ISOs will be residents of the galactic
thin disk, $\sim$6\% ($\sim$4 per decade) will be from the thick disk, $\sim$1
per decade will be from the halo and at most $\sim$3 per century will be
unbound objects, ejected from our galaxy or entering the Milky Way from another
galaxy. The rate of ISOs with very low v$_{\infty}$ $\lesssim$ 1.5 km s$^{-1}$
is so low in our model that any incoming very low velocity ISOs are likely to
be previously lost solar system objects. Finally, we estimate a cometary ISO
number density of $\sim$7 $\times$ 10$^{-5}$ AU$^{-3}$ for 2I type ISOs,
leading to discovery rates for these objects possibly approaching once per
decade with future telescopic surveys.
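
A back-of-the-envelope Python sketch of the gravitational-focusing rate estimate for a single velocity bin; summing such bins over a sampled stellar velocity distribution mirrors the paper's approach in spirit, though the single-bin usage here is purely illustrative.

    import numpy as np

    GM_SUN = 1.327e20   # m^3 s^-2
    AU = 1.496e11       # m
    YEAR = 3.156e7      # s

    def focused_cross_section(q, v_inf):
        # Cross section (m^2) for perihelion < q at speed-at-infinity
        # v_inf, enhanced by focusing: pi q^2 (1 + v_esc^2 / v_inf^2).
        return np.pi * q**2 * (1.0 + 2.0 * GM_SUN / (q * v_inf**2))

    def iso_rate(n_per_au3, v_inf_kms, q_au=1.0):
        # Objects per year passing within q_au for one velocity bin.
        n = n_per_au3 / AU**3
        v = v_inf_kms * 1e3
        return n * v * focused_cross_section(q_au * AU, v) * YEAR

    print(iso_rate(0.1, 30.0))  # ~6 per year for this single bin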
|
In game-theoretic learning, several agents are simultaneously following their
individual interests, so the environment is non-stationary from each player's
perspective. In this context, the performance of a learning algorithm is often
measured by its regret. However, no-regret algorithms are not created equal in
terms of game-theoretic guarantees: depending on how they are tuned, some of
them may drive the system to an equilibrium, while others could produce cyclic,
chaotic, or otherwise divergent trajectories. To account for this, we propose a
range of no-regret policies based on optimistic mirror descent, with the
following desirable properties: i) they do not require any prior tuning or
knowledge of the game; ii) they all achieve $O(\sqrt{T})$ regret against
arbitrary, adversarial opponents; and iii) they converge to the best response
against convergent opponents. Also, if employed by all players, then iv) they
guarantee $O(1)$ social regret; while v) the induced sequence of play converges
to Nash equilibrium with $O(1)$ individual regret in all variationally stable
games (a class of games that includes all monotone and convex-concave zero-sum
games).
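
A bare-bones Python sketch of optimistic mirror descent on the simplex with the entropic mirror map, using the previous loss vector as the optimistic prediction; the fixed step size here is an assumption, whereas the proposed policies adapt it to avoid any prior tuning.

    import numpy as np

    def optimistic_hedge(loss_stream, n_actions, eta=0.1):
        # Entropic mirror map on the simplex; the previous loss vector
        # serves as the optimistic prediction of the next one.
        cum = np.zeros(n_actions)
        prev = np.zeros(n_actions)
        plays = []
        for loss in loss_stream:
            logits = -eta * (cum + prev)     # optimistic extra step
            x = np.exp(logits - logits.max())
            x /= x.sum()
            plays.append(x)
            cum += loss                      # then observe the true loss
            prev = loss
        return plays

    # Alternating losses between two actions: the iterates stabilize.
    stream = [np.array([1.0, 0.0]) if t % 2 else np.array([0.0, 1.0])
              for t in range(6)]
    for x in optimistic_hedge(stream, 2):
        print(np.round(x, 3))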
|
Intelligent robots provide a new insight into efficiency improvement in
industrial and service scenarios, replacing human labor. However, these
scenarios include dense and dynamic obstacles that make motion planning of
robots challenging. Traditional algorithms like A* can plan collision-free
trajectories in static environments, but their performance degrades and their
computational cost increases steeply in dense and dynamic scenarios.
Optimal-value reinforcement learning (RL) algorithms can address these problems
but suffer from slow and unstable network convergence. Policy-gradient RL
networks converge quickly in Atari games, where the action space is discrete
and finite, but little work has been done on problems that require continuous
actions over a large action space. In this paper, we modify the existing
advantage actor-critic algorithm and adapt it to complex motion planning, so
that optimal speeds and directions of the robot are generated. Experimental
results demonstrate that our algorithm converges faster and more stably than
optimal-value RL. It achieves a higher success rate in motion planning with
less processing time for the robot to reach its goal.
|
Thermodynamic uncertainty relation (TUR) provides a stricter bound for
entropy production (EP) than that of the thermodynamic second law. This
stricter bound can be utilized to infer the EP and derive other trade-off
relations. Though the validity of the TUR has been verified in various
stochastic systems, its application to general Langevin dynamics has not been
successful in a unified way, especially for underdamped Langevin dynamics,
where odd parity variables in time-reversal operation such as velocity get
involved. Previous TURs for underdamped Langevin dynamics are neither
experimentally accessible nor reduced to the original form of the overdamped
Langevin dynamics in the zero-mass limit. Here, we find an operationally
accessible TUR for underdamped Langevin dynamics with an arbitrary
time-dependent protocol. We show that the original TUR is a consequence of our
underdamped TUR in the zero-mass limit. This indicates that the TUR formulation
presented here can be regarded as the universal form of the TUR for general
Langevin dynamics. The validity of our result is examined and confirmed for
three prototypical underdamped Langevin systems and their zero-mass limits:
free diffusion dynamics, a charged Brownian particle in a magnetic field, and a
molecular refrigerator.
|
The use of Bayesian information criterion (BIC) in the model selection
procedure is under the assumption that the observations are independent and
identically distributed (i.i.d.). However, in practice, we do not always have
i.i.d. samples. For example, clustered observations tend to be more similar
within the same group, and longitudinal data is collected by measuring the same
subject repeatedly. In these scenarios, the assumption in BIC is not satisfied.
We bring up the concept of effective sample size and define an improved BIC
by replacing the sample size in the original BIC expression with the effective
sample size, which gives a better theoretical foundation in circumstances
involving mixed-effects models. Numerical experiment results
are also given by comparing the performance of our new BIC with other widely
used BICs.
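
One common instantiation of this idea for clustered data is the design-effect formula n_eff = n / (1 + (m - 1) * ICC); the sketch below plugs it into the BIC penalty. This specific choice of n_eff is an assumption for illustration -- the definition used for general mixed-effects models may differ.

    import numpy as np

    def effective_sample_size(cluster_sizes, icc):
        # Design-effect formula: n_eff = n / (1 + (m_bar - 1) * icc),
        # with m_bar the mean cluster size and icc the intra-cluster
        # correlation.
        n = np.sum(cluster_sizes)
        m_bar = np.mean(cluster_sizes)
        return n / (1.0 + (m_bar - 1.0) * icc)

    def improved_bic(loglik, n_params, cluster_sizes, icc):
        # BIC with the raw sample size replaced by the effective one.
        n_eff = effective_sample_size(cluster_sizes, icc)
        return -2.0 * loglik + n_params * np.log(n_eff)

    # 20 clusters of 10 observations, moderate within-cluster correlation.
    print(improved_bic(loglik=-512.3, n_params=5,
                       cluster_sizes=[10] * 20, icc=0.3))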
|
Based on relative energy estimates, we study the stability of solutions to
the Cahn-Hilliard equation with concentration dependent mobility with respect
to perturbations. As a by-product of our analysis, we obtain a weak-strong
uniqueness principle on the continuous level under realistic regularity
assumptions on strong solutions. We then show that the stability estimates can
be further inherited almost verbatim by appropriate Galerkin approximations in
space and time. This allows us to derive sharp bounds for the discretization
error in terms of certain projection errors and to establish order-optimal
a-priori error estimates for semi- and fully discrete approximation schemes.
|
Performing imperfect or noisy measurements on a quantum system both impacts
the measurement outcome and the state of the system after the measurement. In
this paper we are concerned with imperfect calorimetric measurements. In
calorimetric measurements one typically measures the energy of a thermal
environment to extract information about the system. The measurement is
imperfect in the sense that we simultaneously measure the energy of the
calorimeter and an additional noise bath. Under weak coupling assumptions, we
find that the presence of the noise bath manifests itself by modifying the jump
rates of the reduced system dynamics. We study an example of a driven qubit
interacting with a calorimeter of resonant bosons and find that increasing the
noise leads to a reduction in the power flowing from the qubit to the
calorimeter and thus an apparent heating up of the calorimeter.
|
Let $\mathcal{G}$ be a connected reductive almost simple group over the Witt
ring $W(\mathbb{F})$ for $\mathbb{F}$ a finite field of characteristic $p$. Let
$R$ and $R'$ be complete noetherian local $W(\mathbb{F})$-algebras with
residue field $\mathbb{F}$. Under a mild condition on $p$ in relation to
structural constants of $\mathcal{G}$, we show the following results: (1) Every
closed subgroup $H$ of $\mathcal{G}(R)$ with full residual image
$\mathcal{G}(\mathbb{F})$ is a conjugate of a group $\mathcal{G}(A)$ for
$A\subset R$ a closed subring that is local and has residue field
$\mathbb{F}$. (2) Every surjective homomorphism $\mathcal{G}(R)\to\mathcal{G}(R')$ is, up
to conjugation, induced from a ring homomorphism $R\to R'$. (3) The identity
map on $\mathcal{G}(R)$ represents the universal deformation of the
representation of the profinite group $\mathcal{G}(R)$ given by the reduction
map $\mathcal{G}(R)\to\mathcal{G}(\mathbb{F})$. This generalizes results of
Dorobisz and Eardley-Manoharmayum and of Manoharmayum, and in addition provides
an abstract classification result for closed subgroups of $\mathcal{G}(R)$ with
residually full image.
We provide an axiomatic framework to study this type of question, also for
slightly more general $\mathcal{G}$, and we study in the case at hand in great
detail what conditions on $\mathbb{F}$ or on $p$ in relation to $\mathcal{G}$
are necessary for the above results to hold.
|
Optical isolators are indispensable components in nearly all photonic systems
as they help ensure unidirectionality and provide crucial protection from
undesirable reflections. While commercial isolators are exclusively built on
magneto-optic (MO) principles, they are not readily implemented within photonic
integrated circuits due to the need for specialized materials. Importantly, the
MO effect is generally weak, especially at shorter wavelengths. These
challenges as a whole have motivated extensive research on non-MO alternatives.
To date, however, no alternative technology has managed to simultaneously
combine linearity (i.e. no frequency shift), linear response (i.e. input-output
scaling), ultralow insertion loss, and large directional contrast on-chip. Here
we demonstrate an optical isolator design that leverages the unbeatable
transparency of a short, high quality dielectric waveguide, with the
near-perfect attenuation from a critically-coupled absorber. Our design concept
is implemented using a lithium niobate racetrack resonator in which phonon
mediated Autler-Townes splitting (ATS) breaks the chiral symmetry of the
resonant modes. We demonstrate on-chip optical isolators at wavelengths one
octave apart near 1550 nm and 780 nm, fabricated from the same lithium
niobate-on-insulator wafer. Linear optical isolation is demonstrated with
simultaneously <1 dB insertion loss, >39 dB contrast, and bandwidth as wide as
the optical mode that is used. Our results outperform the current best-in-class
MO isolator on-chip on both insertion loss and isolator figures-of-merit, and
demonstrate a lithographically defined wavelength adaptability that cannot yet
be achieved with any MO isolator.
|
The High Energy Rapid Modular Ensemble of Satellites (HERMES) Technological
and Scientific pathfinder is a space-borne mission based on a constellation of
LEO nanosatellites. The payloads of these CubeSats consist of miniaturized
detectors designed for bright high-energy transients such as Gamma-Ray Bursts
(GRBs). This platform aims to impact GRB science and enhance the detection of
Gravitational Wave (GW) electromagnetic counterparts. This
goal will be achieved with a field of view of several steradians, arcmin
precision and state-of-the-art timing accuracy. The localization capability of
the whole constellation improves with the number of components and with the
average baseline between them, and is therefore expected to increase as more
satellites join the constellation. In this paper we describe the Payload Data
Handling Unit (PDHU) for the HERMES-TP and HERMES SP mission. The PDHU is the
main interface between the payload and the satellite bus. The PDHU is also in
charge of the on-board control and monitoring of the scintillating crystal
detectors. We will explain the TM/TC design and the distinct modes of
operation. We also discuss the on-board data processing carried out by the PDHU
and its impact on the output data of the detector.
|
For an inverse temperature $\beta>0$, we define the $\beta$-circular Riesz
gas on $\mathbb{R}^d$ as any microscopic thermodynamic limit of Gibbs particle
systems on the torus interacting via the Riesz potential $g(x) = \Vert x
\Vert^{-s}$. We focus on the non-integrable case $d-1<s<d$. Our main result
ensures, for any dimension $d\ge 1$ and inverse temperature $\beta>0$, the
existence of a $\beta$-circular Riesz gas which is not number-rigid. Recall
that a point process is said to be number-rigid if the number of points in a
bounded Borel set $\Delta$ is a function of the point configuration outside
$\Delta$. This is the first time that non-number-rigidity has been proved for a
Gibbs point process interacting via a non-integrable potential. We follow a statistical
physics approach based on the canonical DLR equations. It is inspired by
Dereudre-Hardy-Lebl\'e and Ma\"ida (2021) where the authors prove the
number-rigidity of the $\text{Sine}_\beta$ process.
|
We consider Morrey's open question whether rank-one convexity already implies
quasiconvexity in the planar case. For some specific families of energies,
there are precise conditions known under which rank-one convexity even implies
polyconvexity. We will extend some of these findings to the more general family
of energies $W:\operatorname{GL}^+(n)\rightarrow\mathbb{R}$ with an additive
volumetric-isochoric split, i.e. \[
W(F)=W_{\rm iso}(F)+W_{\rm vol}(\det F)=\widetilde W_{\rm
iso}\bigg(\frac{F}{\sqrt{\det F}}\bigg)+W_{\rm vol}(\det F)\,, \] which is the
natural finite extension of isotropic linear elasticity. Our approach is based
on a condition for rank-one convexity which was recently derived from the
classical two-dimensional criterion by Knowles and Sternberg and consists of a
family of one-dimensional coupled differential inequalities. We identify a
number of \enquote{least} rank-one convex energies and, in particular, show
that for planar volumetric-isochorically split energies with a concave
volumetric part, the question of whether rank-one convexity implies
quasiconvexity can be reduced to the open question of whether the rank-one
convex energy function \[
W_{\rm magic}^+(F)=\frac{\lambda_{\rm max}}{\lambda_{\rm
min}}-\log\frac{\lambda_{\rm max}}{\lambda_{\rm min}}+\log\det
F=\frac{\lambda_{\rm max}}{\lambda_{\rm min}}-2\log\lambda_{\rm min} \] is
quasiconvex. In addition, we demonstrate that under affine boundary conditions,
$W_{\rm magic}^+(F)$ allows for non-trivial inhomogeneous deformations with the
same energy level as the homogeneous solution, and show a surprising connection
to the work of Burkholder and Iwaniec in the field of complex analysis.
|
The Uhlmann process is built on the density matrix of a mixed quantum state
and offers a way to characterize topological properties at finite temperatures.
We analyze an ideal spin-j quantum paramagnet in a magnetic field undergoing an
Uhlmann process and derive general formulae of the Uhlmann phase and Loschmidt
amplitude for arbitrary j as the system traverses a great circle in the
parameter space. A quantized jump of the Uhlmann phase signifies a topological
quantum phase transition (TQPT) of the underlying process, which is accompanied
by a zero of the Loschmidt amplitude. The exact results of j=1/2 and j=1
systems show topological regimes that only survive at finite temperatures but
not at zero temperature, and the number of TQPTs is associated with the winding
number in the parameter space. Our results pave the way for future studies on
finite-temperature topological properties, and possible experimental protocols
and implications for atomic simulators and digital simulations are discussed.
|
Processing astronomical data often comes with huge challenges with regard to
data management as well as data processing. The MeerKAT telescope is one of the
precursor telescopes of the world's largest observatory, the Square Kilometre
Array. So far, MeerKAT data has been processed using the South African
computing facility, i.e. IDIA, and exploited to make ground-breaking
discoveries. However, processing MeerKAT data on the UK's IRIS computing
facility requires a new implementation of the MeerKAT pipeline. This paper
focuses on how to transfer MeerKAT data from the South African site to the UK's
IRIS systems for processing. We discuss our RapifXfer data transfer framework
for transferring the MeerKAT data from South Africa to the UK, and the MeerKAT
job processing framework pertaining to the UK's IRIS resources.
|
We study analytic properties of "$q$-deformed real numbers", a notion
recently introduced by two of us. A $q$-deformed positive real number is a
power series with integer coefficients in one formal variable~$q$. We study the
radius of convergence of these power series assuming that $q \in \mathbb{C}$. Our main
conjecture, which can be viewed as a $q$-analogue of Hurwitz's Irrational
Number Theorem, provides a lower bound for these radii, given by the radius of
convergence of the $q$-deformed golden ratio. The conjecture is proved in
several particular cases and confirmed by a number of computer experiments. For
an interesting sequence of "Pell polynomials", we obtain stronger bounds.
|
Abstractive summarization, the task of generating a concise summary of input
documents, requires: (1) reasoning over the source document to determine the
salient pieces of information scattered across the long document, and (2)
composing a cohesive text by reconstructing these salient facts into a shorter
summary that faithfully reflects the complex relations connecting these facts.
In this paper, we adapt TP-TRANSFORMER (Schlag et al., 2019), an architecture
that enriches the original Transformer (Vaswani et al., 2017) with the
explicitly compositional Tensor Product Representation (TPR), for the task of
abstractive summarization. The key feature of our model is a structural bias
that we introduce by encoding two separate representations for each token to
represent the syntactic structure (with role vectors) and semantic content
(with filler vectors) separately. The model then binds the role and filler
vectors into the TPR as the layer output. We argue that the structured
intermediate representations enable the model to take better control of the
contents (salient facts) and structures (the syntax that connects the facts)
when generating the summary. Empirically, we show that our TP-TRANSFORMER
outperforms the Transformer and the original TP-TRANSFORMER significantly on
several abstractive summarization datasets based on both automatic and human
evaluations. On several syntactic and semantic probing tasks, we demonstrate
the emergent structural information in the role vectors and improved syntactic
interpretability in the TPR layer outputs. Code and models are available at
https://github.com/jiangycTarheel/TPT-Summ.
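
Conceptually, the role-filler binding can be pictured as an outer product of the two per-token vectors, as in the PyTorch sketch below; the actual TP-TRANSFORMER layer binds roles and fillers in a compressed form inside attention, so this is an illustration of the representation, not the exact layer.

    import torch

    def tpr_bind(roles, fillers):
        # roles: (batch, seq, d_r); fillers: (batch, seq, d_f).
        # Outer-product binding, flattened per token.
        tpr = torch.einsum('bsr,bsf->bsrf', roles, fillers)
        return tpr.flatten(start_dim=2)   # (batch, seq, d_r * d_f)

    roles = torch.randn(2, 7, 8)
    fillers = torch.randn(2, 7, 16)
    print(tpr_bind(roles, fillers).shape)  # torch.Size([2, 7, 128])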
|
We generalize Mertens' product theorem to Chebotarev sets of prime ideals in
Galois extensions of number fields. Using work of Rosen, we extend an argument
of Williams from cyclotomic extensions to this more general case. Additionally,
we compute these products for Chebotarev sets in abelian extensions, $S_3$
sextic extensions, and sets of primes represented by some quadratic forms.
|
The problem of quantifying uncertainty about the locations of multiple change
points by means of confidence intervals is addressed. The asymptotic
distribution of the change point estimators obtained as the local maximisers of
moving sum statistics is derived, where the limit distributions differ
depending on whether the corresponding size of changes is local, i.e. tends to
zero as the sample size increases, or fixed. A bootstrap procedure for
confidence interval generation is proposed which adapts to the unknown
magnitude of changes and guarantees asymptotic validity both for local and
fixed changes. Simulation studies show good performance of the proposed
bootstrap procedure, and some discussion of how it can be extended to
serially dependent errors is provided.
|
We propose a nonlinear acoustic echo cancellation system, which aims to model
the echo path from the far-end signal to the near-end microphone in two parts.
Inspired by the physical behavior of modern hands-free devices, we first
introduce a novel neural network architecture that is specifically designed to
model the nonlinear distortions these devices induce between receiving and
playing the far-end signal. To account for variations between devices, we
construct this network with trainable memory length and nonlinear activation
functions that are not parameterized in advance, but are rather optimized
during the training stage using the training data. Second, the network is
succeeded by a standard adaptive linear filter that constantly tracks the echo
path between the loudspeaker output and the microphone. During training, the
network and filter are jointly optimized to learn the network parameters. This
system requires 17 thousand parameters that consume 500 Million floating-point
operations per second and 40 Kilo-bytes of memory. It also satisfies hands-free
communication timing requirements on a standard neural processor, which renders
it adequate for embedding on hands-free communication devices. Using 280 hours
of real and synthetic data, experiments show advantageous performance compared
to competing methods.
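
For the second stage, a standard normalized LMS (NLMS) filter is a canonical choice of adaptive linear echo-path tracker; a minimal Python sketch follows, with illustrative hyperparameters (the deployed filter and its joint training with the network are more involved).

    import numpy as np

    def nlms(far_end, mic, n_taps=256, mu=0.5, eps=1e-8):
        # Adapt an FIR filter w so that w * far_end tracks the echo in
        # the microphone signal; returns the echo-reduced error signal.
        w = np.zeros(n_taps)
        out = np.zeros(len(mic))
        for n in range(n_taps, len(mic)):
            x = far_end[n - n_taps:n][::-1]   # most recent sample first
            e = mic[n] - w @ x                # residual after echo removal
            w += mu * e * x / (x @ x + eps)   # normalized LMS update
            out[n] = e
        return out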
|
We investigate a possibility to describe the non-Debye relaxation processes
using the Volterra-type equations with kernels given by the Prabhakar functions
with the upper parameter $\nu$ being negative. Proposed integro-differential
equations mimic the fading memory effects and are explicitly solved using the
umbral calculus and the Laplace transform methods. Both approaches lead to the
same results, valid in the admissible domain of the parameters $\alpha$, $\mu$ and
$\nu$ characterizing the Prabhakar function. For the special case $\alpha\in
(0,1]$, $\mu=0$ and $\nu=-1$ we recover the Cole-Cole model, in general having
a residual polarization. We also show that our scheme gives results equivalent
to those obtained using the stochastic approach to relaxation phenomena merged
with integral equations involving kernels given by the Prabhakar functions with
the positive upper parameter.
|
Billions of X-ray images are taken worldwide each year. Machine learning, and
deep learning in particular, has shown potential to help radiologists triage
and diagnose images. However, deep learning requires large datasets with
reliable labels. The CheXpert dataset was created with the participation of
board-certified radiologists, resulting in the strong ground truth needed to
train deep learning networks. Following the structured format of Datasheets for
Datasets, this paper expands on the original CheXpert paper and other sources
to show the critical role played by radiologists in the creation of reliable
labels and to describe the different aspects of the dataset composition in
detail. Such structured documentation intends to increase the awareness in the
machine learning and medical communities of the strengths, applications, and
evolution of CheXpert, thereby advancing the field of medical image analysis.
Another objective of this paper is to put forward this dataset datasheet as an
example to the community of how to create detailed and structured descriptions
of datasets. We believe that clearly documenting the creation process, the
contents, and applications of datasets accelerates the creation of useful and
reliable models.
|
Knowledge distillation aims to transfer representation ability from a teacher
model to a student model. Previous approaches focus on either individual
representation distillation or inter-sample similarity preservation. We argue,
however, that the inter-sample relation conveys abundant information and needs
to be distilled in a more effective way. In this paper, we propose a novel
knowledge distillation method, namely Complementary Relation Contrastive
Distillation (CRCD), to transfer the structural knowledge from the teacher to
the student. Specifically, we estimate the mutual relation in an anchor-based
way and distill the anchor-student relation under the supervision of its
corresponding anchor-teacher relation. To make it more robust, mutual relations
are modeled by two complementary elements: the feature and its gradient.
Furthermore, the lower bound on the mutual information between the anchor-teacher
relation distribution and the anchor-student relation distribution is maximized
via relation contrastive loss, which can distill both the sample representation
and the inter-sample relations. Experiments on different benchmarks demonstrate
the effectiveness of our proposed CRCD.
|
Error-bounded lossy compression is a critical technique for significantly
reducing scientific data volumes. With ever-emerging heterogeneous
high-performance computing (HPC) architecture, GPU-accelerated error-bounded
compressors (such as cuSZ and cuZFP) have been developed. However, they suffer
from either low performance or low compression ratios. To this end, we propose
cuSZ+ to target both high compression ratios and throughputs. We identify that
data sparsity and data smoothness are key factors for high compression
throughputs. Our key contributions in this work are fourfold: (1) We propose an
efficient compression workflow to adaptively perform run-length encoding and/or
variable-length encoding. (2) We derive Lorenzo reconstruction in decompression
as multidimensional partial-sum computation and propose a fine-grained Lorenzo
reconstruction algorithm for GPU architectures. (3) We carefully optimize each
of cuSZ+ kernels by leveraging state-of-the-art CUDA parallel primitives. (4)
We evaluate cuSZ+ using seven real-world HPC application datasets on V100 and
A100 GPUs. Experiments show cuSZ+ improves the compression throughputs and
ratios by up to 18.4X and 5.3X, respectively, over cuSZ on the tested datasets.
|
In the high energy limit of hadron collisions, the evolution of the gluon
density in the longitudinal momentum fraction can be deduced from the Balitsky
hierarchy of equations or, equivalently, from the nonlinear
Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner (JIMWLK) equation. The
solutions of the latter can be studied numerically by using its reformulation
in terms of a Langevin equation. In this paper, we present a comprehensive
study of systematic effects associated with the numerical framework, in
particular the ones related to the inclusion of the running coupling. We
consider three proposed ways in which the running of the coupling constant can
be included: "square root" and "noise" prescriptions and the recent proposal by
Hatta and Iancu. We implement them both in position and momentum spaces and we
investigate and quantify the differences in the resulting evolved gluon
distributions. We find that the systematic differences associated with the
implementation technicalities can be of a similar magnitude as differences in
running coupling prescriptions in some cases, or much smaller in other cases.
|
We prove a formula for the polar degree of projective hypersurfaces in terms
of the Milnor data of the singularities, extending to 1-dimensional
singularities the Dimca-Papadima result for isolated singularities. We discuss
the semi-continuity of the polar degree in deformations, and we classify the
homaloidal cubic surfaces with 1-dimensional singular locus. Some open
questions are pointed out along the way.
|
The positive semidefinite (PSD) cone is the cone of positive semidefinite
matrices, and is the object of interest in semidefinite programming (SDP). A
computationally efficient approximation of the PSD cone is the $k$-PSD closure,
$1 \leq k < n$: the cone of $n\times n$ real symmetric matrices such that all
of their $k\times k$ principal submatrices are positive semidefinite. For $k=1$,
one obtains a polyhedral approximation, while $k=2$ yields a second order conic
(SOC) approximation of the PSD cone. These approximations of the PSD cone have
been used extensively in real-world applications such as AC Optimal Power Flow
(ACOPF) to address computational inefficiencies where SDP relaxations are
utilized for convexification of the non-convexities. In a recent series of
articles, Blekherman et al. provided bounds on the quality of these
approximations. In this work, we revisit some of their results and also propose
a new dominant bound on the quality of the $k$-PSD closure approximation of the PSD
cone. In addition, we characterize the extreme rays of the $2$-PSD closure.
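
Membership in the $k$-PSD closure can be checked directly by testing all $k\times k$ principal submatrices, as in the Python sketch below; this brute-force check is only for intuition, since it enumerates exponentially many blocks, whereas solvers impose the small blocks as cheap conic constraints.

    import itertools
    import numpy as np

    def in_k_psd_closure(A, k, tol=1e-10):
        # Test every k x k principal submatrix for positive
        # semidefiniteness (exponentially many, so small n only).
        n = A.shape[0]
        for idx in itertools.combinations(range(n), k):
            sub = A[np.ix_(idx, idx)]
            if np.linalg.eigvalsh(sub).min() < -tol:
                return False
        return True

    # In the 2-PSD closure (all 2x2 blocks PSD) yet not PSD overall.
    A = np.array([[1.0, 1.0, -1.0],
                  [1.0, 1.0, 1.0],
                  [-1.0, 1.0, 1.0]])
    print(in_k_psd_closure(A, 2), np.linalg.eigvalsh(A).min() >= 0)  # True False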
|
Despite many of the most common chaotic dynamical systems being continuous in
time, it is through discrete time mappings that much of the understanding of
chaos is formed. Henri Poincar\'e first made this connection by tracking
consecutive iterations of the continuous flow with a lower-dimensional,
transverse subspace. The mapping that iterates the dynamics through consecutive
intersections of the flow with the subspace is now referred to as a Poincar\'e
map, and it is the primary method available for interpreting and classifying
chaotic dynamics. Unfortunately, in all but the simplest systems, an explicit
form for such a mapping remains outstanding. This work proposes a method for
obtaining explicit Poincar\'e mappings by using deep learning to construct an
invertible coordinate transformation into a conjugate representation where the
dynamics are governed by a relatively simple chaotic mapping. The invertible
change of variable is based on an autoencoder, which allows for dimensionality
reduction, and has the advantage of classifying chaotic systems using the
equivalence relation of topological conjugacies. Indeed, the enforcement of
topological conjugacies is the critical neural network regularization for
learning the coordinate and dynamics pairing. We provide expository
applications of the method to low-dimensional systems such as the R\"ossler and
Lorenz systems, while also demonstrating the utility of the method on
infinite-dimensional systems, such as the Kuramoto--Sivashinsky equation.
|
Multimodal generative models should be able to learn a meaningful latent
representation that enables a coherent joint generation of all modalities
(e.g., images and text). Many applications also require the ability to
accurately sample modalities conditioned on observations of a subset of the
modalities. Often not all modalities may be observed for all training data
points, so semi-supervised learning should be possible. In this study, we
propose a novel product-of-experts (PoE) based variational autoencoder that
has these desired properties. We benchmark it against a mixture-of-experts
(MoE) approach and an approach of combining the modalities with an additional
encoder network. An empirical evaluation shows that the PoE based models can
outperform the contrasted models. Our experiments support the intuition that
PoE models are more suited for a conjunctive combination of modalities.
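
The PoE aggregation of Gaussian experts has a closed form -- precisions add and means are precision-weighted -- which is also why unobserved modalities can simply be dropped; a minimal Python sketch (with the prior optionally included as an extra expert):

    import numpy as np

    def product_of_gaussian_experts(mus, logvars):
        # Precisions add; the joint mean is precision-weighted.
        precisions = np.exp(-np.asarray(logvars))   # 1 / sigma_m^2
        var = 1.0 / precisions.sum(axis=0)
        mu = var * (precisions * np.asarray(mus)).sum(axis=0)
        return mu, np.log(var)

    # Two modality experts over a 3-d latent; the confident expert
    # dominates the coordinates where it has high precision.
    mus = [np.array([1.0, 0.0, -1.0]), np.array([0.0, 0.0, 0.0])]
    logvars = [np.array([-2.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0])]
    print(product_of_gaussian_experts(mus, logvars))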
|
We explore the interplay of New Physics (NP) effects in $(g-2)_\ell$ and $h
\to \ell^+ \ell^-$ within the Standard Model Effective Field Theory (SMEFT)
framework, including one-loop Renormalization Group (RG) evolution of the
Wilson coefficients as well as matching to the observables below the
electroweak symmetry breaking scale. We include both the leading dimension six
chirality flipping operators including a Higgs and $SU(2)_L$ gauge bosons as
well as four-fermion scalar and tensor operators, forming a closed operator set
under the SMEFT RG equations. We compare present and future experimental
sensitivity to different representative benchmark scenarios. We also consider
two simple UV completions, a Two Higgs Doublet Model and a single scalar
LeptoQuark extension of the SM, and show how tree level matching to SMEFT
followed by the one-loop RG evolution down to the electroweak scale can
reproduce with high accuracy the $(g-2)_\ell$ and $h \to \ell^+ \ell^-$
contributions obtained by the complete one- and even two-loop calculations in
the full models.
|