Astrophysical observations show complex organic molecules (COMs) in the gas
phase of protoplanetary disks. X-rays emitted from the central young stellar
object (YSO) irradiate the interstellar ices in the disk and can cause the
ejection of molecules into the gas phase; this is a possible route to explain the
abundances observed in the cold regions. This process, known as X-ray
photodesorption, needs to be quantified for methanol-containing ices. This
paper (Paper I) focuses on the case of X-ray photodesorption from pure methanol
ices. We aim to experimentally measure the X-ray photodesorption yields of
methanol and its photo-products from pure CH$_3$OH ices, and to shed light on the mechanisms
responsible for the desorption process. We irradiated methanol ices at 15 K
with X-rays in the 525-570 eV range. The release of species into the gas phase
was monitored by quadrupole mass spectrometry, and photodesorption yields were
derived. Under our experimental conditions, the CH$_3$OH X-ray photodesorption
yield from pure methanol ice is 10$^{-2}$ molecule/photon at 564 eV.
Photo-products such as CH$_4$, H$_2$CO, H$_2$O, CO$_2$, and CO also desorb, with
increasing efficiency. X-ray photodesorption of larger COMs, which can be
attributed to ethanol, dimethyl ether, and/or formic acid, is also detected.
The physical mechanisms at play are discussed and most likely involve the
thermalization of Auger electrons in the ice, indicating that the ice
composition plays an important role. Finally, we provide desorption yields
applicable to protoplanetary disk environments for astrochemical models.
X-rays are shown to be a potential candidate for explaining the gas-phase
abundances of methanol in disks. However, more representative desorption
yields, derived from experiments on mixed ices, are needed to properly
constrain the role played by X-rays in the nonthermal desorption of methanol
(see Paper II).
|
We consider global Morrey-type spaces with variable exponents and a general
function defining these spaces. In the case of unbounded sets, we prove the
boundedness of the Hardy-Littlewood maximal operator and of potential-type
operators in these spaces.
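For reference, the classical Hardy-Littlewood maximal operator whose boundedness is established here is defined, for locally integrable $f$, by
$$ Mf(x)=\sup_{r>0}\frac{1}{|B(x,r)|}\int_{B(x,r)}|f(y)|\,dy; $$
the variable-exponent Morrey-type norms in which this boundedness is proved follow the paper's own definitions and are not reproduced here.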
|
The Galactic black hole transient GRS1915+105 is famous for its markedly
variable X-ray and radio behaviour, and for being the archetypal galactic
source of relativistic jets. It entered an X-ray outburst in 1992 and has been
active ever since. Since 2018 GRS1915+105 has declined into an extended
low-flux X-ray plateau, occasionally interrupted by multi-wavelength flares.
Here we report the radio and X-ray properties of GRS1915+105 collected in this
new phase, and compare the recent data to historic observations. We find that
while the X-ray emission remained unprecedentedly low for most of the time
following the decline in 2018, the radio emission shows a clear mode change
halfway through the extended X-ray plateau in 2019 June: from low flux (~3 mJy)
and limited variability, to marked flaring with fluxes two orders of magnitude
larger. GRS1915+105 appears to have entered a low-luminosity canonical hard
state, and then transitioned to an unusual accretion phase, characterised by
heavy X-ray absorption/obscuration. Hence, we argue that a local absorber hides
from the observer the accretion processes feeding the variable jet responsible
for the radio flaring. The radio-X-ray correlation suggests that the current
low X-ray flux state may be a signature of a super-Eddington state akin to the
X-ray binaries SS433 or V404 Cyg.
|
We show that for any finite set $A$ and an arbitrary $\varepsilon>0$ there is
$k=k(\varepsilon)$ such that the higher energy ${\mathsf{E}}_k(A)$ is at most
$|A|^{k+\varepsilon}$ unless $A$ has a very specific structure. As an
application we obtain that any finite subset $A$ of the real numbers or the
prime field either contains an additive Sidon--type subset of size
$|A|^{1/2+c}$ or a multiplicative Sidon--type subset of size $|A|^{1/2+c}$.
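For orientation, the higher energy referred to here is, in the convention common in the additive-combinatorics literature (the paper's normalization may differ),
$$ \mathsf{E}_k(A)=\big|\{(a_1,\dots,a_k,a'_1,\dots,a'_k)\in A^{2k}\,:\,a_1-a'_1=\dots=a_k-a'_k\}\big|=\sum_x |A\cap(A+x)|^k, $$
so that $\mathsf{E}_2(A)$ recovers the usual additive energy.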
|
Skin blemishes and diseases have attracted increasing research interest in
recent decades, due to their growing frequency of occurrence and the severity
of related diseases. Various laser treatment approaches have been introduced
for the alleviation and removal of skin pigmentation. The effects of these
treatments depend strongly on the experience and prognosis of the operator, yet
the operation process lacks real-time feedback that directly reflects the
extent of the treatment. In this manuscript, we report a photoacoustic-guided
laser treatment method and a feasibility study, specifically for laser
treatment targeting tattoo removal. Experiments on phantoms and ex vivo pig
skin samples validated the feasibility of the proposed method.
|
Rate-Splitting Multiple Access (RSMA) is an emerging, flexible, and powerful
multiple access scheme for downlink multi-antenna networks. In this paper, we introduce
the concept of RSMA into short-packet downlink communications. We design
optimal linear precoders that maximize the sum rate with Finite Blocklength
(FBL) constraints. The relations between the sum rate and blocklength of RSMA
are investigated for a wide range of network loads and user deployments.
Numerical results demonstrate that RSMA can achieve the same transmission rate
as Non-Orthogonal Multiple Access (NOMA) and Space Division Multiple Access
(SDMA) with shorter blocklengths (and therefore lower latency), especially in
overloaded multi-antenna networks. Hence, we conclude that RSMA is a promising
multiple access scheme for low-latency communications.
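For context, FBL analyses of this kind typically build on the normal approximation of the maximal achievable rate (Polyanskiy et al.), which for blocklength $n$ and block error probability $\epsilon$ reads
$$ R(n,\epsilon)\approx C-\sqrt{\frac{V}{n}}\,Q^{-1}(\epsilon), $$
where $C$ is the channel capacity, $V$ the channel dispersion, and $Q^{-1}$ the inverse Gaussian $Q$-function; the paper's precoder optimization under this constraint is not reproduced here.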
|
Due to a lack of treatments and a universal vaccine, early forecasts of Dengue
are an important tool for disease control. Neural networks are powerful
predictive models that have made contributions to many areas of public health.
In this systematic review, we provide an introduction to the neural networks
relevant to Dengue forecasting and review their applications in the literature.
The objective is to help inform model design for future work. Following the
PRISMA guidelines, we conduct a systematic search of studies that use neural
networks to forecast Dengue in human populations. We summarize the relative
performance of neural networks and comparator models, model architectures and
hyper-parameters, as well as choices of input features. Nineteen papers were
included. Most studies implement shallow neural networks using historical
Dengue incidence and meteorological input features. Prediction horizons tend to
be short. Building on the strengths of neural networks, most studies use
granular observations at the city or sub-national level. Performance of neural
networks relative to comparators such as Support Vector Machines varies across
study contexts. The studies suggest that neural networks can provide good
predictions of Dengue and should be included in the set of candidate models.
The use of convolutional, recurrent, or deep networks is relatively unexplored
but offers promising avenues for further research, as does the use of a broader
set of input features such as social media or mobile phone data.
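To make the dominant design concrete, the following minimal sketch implements a shallow feed-forward forecaster of the kind most reviewed studies use; the data, lag structure, and layer size are placeholders of our own choosing, not taken from any reviewed paper.

```python
# Illustrative sketch: a shallow network forecasting next-week Dengue
# incidence from lagged case counts and meteorological covariates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical design matrix: 4 lagged incidence values + temperature + rainfall
X = rng.random((200, 6))
y = rng.random(200)                      # next-week incidence (placeholder data)

model = MLPRegressor(hidden_layer_sizes=(8,),  # one small hidden layer = "shallow"
                     max_iter=2000, random_state=0)
model.fit(X[:150], y[:150])
print(model.predict(X[150:155]))         # short-horizon forecasts
```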
|
Quantum uncertainty relations are typically analyzed for a pair of
incompatible observables; however, the concept naturally extends to
situations with more than two observables. In this work, we obtain tripartite
quantum memory-assisted entropic uncertainty relations and show that the lower
bounds of these relations have three terms that depend on the complementarity
of the observables, the conditional von Neumann entropies, the Holevo
quantities, and the mutual information. The saturation of these inequalities is
analyzed.
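For context, these tripartite bounds extend the well-known bipartite quantum-memory-assisted relation of Berta et al.,
$$ S(X|B)+S(Z|B)\ \ge\ \log_2\frac{1}{c}+S(A|B), $$
where $c=\max_{i,j}|\langle x_i|z_j\rangle|^2$ quantifies the complementarity of the observables $X$ and $Z$ measured on system $A$, and $B$ is the quantum memory; the three-term lower bounds obtained here refine this structure for three parties.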
|
A new technique that utilizes surface integrals to find the force, torque and
potential energy between two non-spherical, rigid bodies is presented. The
method is relatively fast, and allows us to solve the full rigid two-body
problem for pairs of spheroids and ellipsoids with 12 degrees of freedom. We
demonstrate the method with two dimensionless test scenarios, one where
tumbling motion develops, and one where the motion of the bodies resembles
spinning tops. We also test the method on the asteroid binary (66391) 1999 KW4,
where both components are modelled either as spheroids or ellipsoids. The two
different shape models have negligible effects on the eccentricity and
semi-major axis, but have a larger impact on the angular velocity along the
$z$-direction. In all cases, energy and total angular momentum are conserved,
and the simulation accuracy is kept at the machine accuracy level.
|
Online debates are often characterised by extreme polarisation and heated
discussions among users. The presence of hate speech online is becoming
increasingly problematic, making the development of appropriate
countermeasures necessary. In this work, we perform hate speech detection on a corpus of
more than one million comments on YouTube videos through a machine learning
model fine-tuned on a large set of hand-annotated data. Our analysis shows that
there is no evidence of the presence of "serial haters", understood as active
users who post exclusively hateful comments. Moreover, consistent with the echo
chamber hypothesis, we find that users skewed towards one of the two categories
of video channels (questionable, reliable) are more prone to use inappropriate,
violent, or hateful language within their opponents' community. Interestingly,
users loyal to reliable sources use, on average, more toxic language than their
counterparts. Finally, we find that the overall toxicity of the discussion
increases with its length, measured both in terms of number of comments and
time. Our results show that, in line with Godwin's law, online debates tend
to degenerate towards increasingly toxic exchanges of views.
|
Brain storm optimization (BSO) is a newly proposed population-based
optimization algorithm, which uses a logarithmic sigmoid transfer function to
adjust its search range during the convergence process. However, this
adjustment varies only with the current iteration number; it lacks flexibility
and variety, which leads to poor search efficiency and robustness in BSO. To alleviate
this problem, an adaptive step length structure together with a success memory
selection strategy is proposed to be incorporated into BSO. This proposed
method, adaptive step length based on memory selection BSO, namely ASBSO,
applies multiple step lengths to modify the generation process of new
solutions, thus providing a search that is flexible with respect to the
problem at hand and the convergence period. The novel memory mechanism, which
is capable of evaluating and storing the degree of improvement of solutions, is used to
determine the selection possibility of step lengths. A set of 57 benchmark
functions are used to test ASBSO's search ability, and four real-world problems
are adopted to show its practical value. All these test results indicate a
remarkable improvement in the solution quality, scalability, and robustness of
ASBSO.
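The idea behind the memory-guided step-length selection can be sketched as follows; this is our own illustrative simplification (toy objective, simplified update rule), not the authors' exact ASBSO scheme.

```python
# Step lengths that recently improved solutions are chosen more often.
import numpy as np

rng = np.random.default_rng(1)
steps = np.array([0.05, 0.2, 0.8, 3.2])   # candidate step lengths
memory = np.ones_like(steps)              # accumulated improvement per step length

def sphere(x):                            # toy objective to minimize
    return float(np.sum(x**2))

x = rng.normal(size=5)
fx = sphere(x)
for _ in range(500):
    i = rng.choice(len(steps), p=memory / memory.sum())  # memory-guided choice
    cand = x + steps[i] * rng.normal(size=x.size)        # perturb with chosen step
    fc = sphere(cand)
    if fc < fx:
        memory[i] += fx - fc              # store the degree of improvement
        x, fx = cand, fc
print(fx, memory / memory.sum())
```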
|
We construct almost holomorphic and holomorphic modular forms by considering
theta series for quadratic forms of signature $(n-1,1)$. We include homogeneous
and spherical polynomials in the definition of the theta series (generalizing a
construction of the second author) to obtain holomorphic, almost holomorphic
and modular theta series. We give a criterion for these series to coincide,
enabling us to construct almost holomorphic and holomorphic cusp forms on
congruence subgroups of the modular group. Further, we provide numerous
explicit examples.
|
We measure chemical abundances for over 20 elements of 15 N-rich field stars
with high resolution ($R \sim 30000$) optical spectra. We find that Na, Mg, Al,
Si, and Ca abundances of our N-rich field stars are mostly consistent with
those of stars from globular clusters (GCs). Seven stars are estimated to have
[Al/Fe$]>0.5$, which is not found in most GC "first generation" stars. On the
other hand, $\alpha$ element abundances (especially Ti) could show
distinguishable differences between in situ stars and accreted stars. We
discover that one interesting star, with consistently low [Mg/Fe], [Si/Fe],
[Ca/Fe], [Ti/Fe], [Sc/Fe], [V/Fe], and [Co/Fe], show similar kinematic and
[Ba/Eu] as other stars from the dissolved dwarf galaxy
"$Gaia$-Sausage-Enceladus". The $\alpha$-element abundances and the iron-peak
element abundances of the N-rich field stars with metallicities $-1.25 \le {\rm
[Fe/H]} \le -0.95$ show consistent values with Milky Way field stars rather
than stars from dwarf galaxies, indicating that they were formed in situ. In
addition, the neutron capture elements of N-rich field stars show that most of
them could be enriched by asymptotic giant branch (AGB) stars with masses
around $3 - 5\, M_{\odot}$.
|
Lithium niobate on insulator (LNOI), regarded as an important candidate
platform for optical integration due to its excellent nonlinear, electro-optic
and other physical properties, has become a research hotspot. A light source,
an essential component of an integrated optical system, is urgently needed. In
this paper, we report the realization of 1550-nm band on-chip LNOI
microlasers based on erbium-doped LNOI ring cavities with loaded quality
factors higher than one million, which were fabricated by using electron beam
lithography and inductively coupled plasma reactive ion etching processes.
These microlasers demonstrated a low pump threshold of ~20 {\mu}W and stable
performance under pumping by a 980-nm band continuous-wave laser. Comb-like laser
spectra spanning from 1510 nm to 1580 nm were observed in the high-pump-power
regime, which lays the foundation for the realization of pulsed lasers and
frequency combs on the rare-earth-ion-doped LNOI platform. This work has
effectively promoted the development of on-chip integrated active LNOI devices.
|
In systems where interactions couple a central degree of freedom and a bath,
one would expect signatures of the bath's phase to be reflected in the dynamics
of the central degree of freedom. This has been recently explored in connection
with many-body localized baths coupled with a central qubit or a single cavity
mode -- systems with growing experimental relevance in various platforms. Such
models also have an interesting connection with Floquet many-body localization
via quantizing the external drive, although this has been relatively
unexplored. Here we adapt the multilayer multiconfigurational time-dependent
Hartree (ML-MCTDH) method, a well-known tree tensor network algorithm, to
numerically simulate the dynamics of a central degree of freedom, represented
by a $d$-level system (qudit), coupled to a disordered interacting 1D spin
bath. ML-MCTDH allows us to reach $\approx 10^2$ lattice sites, a far larger
system size than what is feasible with exact diagonalization or kernel
polynomial methods. From the intermediate time dynamics, we find a well-defined
thermodynamic limit for the qudit dynamics upon appropriate rescaling of the
system-bath coupling. The spin system shows similar scaling collapse in the
Edwards-Anderson spin glass order parameter or the entanglement entropy at
relatively short times. At longer time scales, we see slow growth of the
entanglement, which may arise from dephasing mechanisms in the localized system
or long-range interactions mediated by the central degree of freedom. Similar
signs of localization are shown to appear as well with unscaled system-bath
coupling.
|
With the rapid growth of online services, the number of online accounts
proliferates. The security of a single user account no longer depends merely on
its own service provider but also on the accounts on other service platforms (we
refer to this online account environment as the Online Account Ecosystem). In this
paper, we first uncover the vulnerability of Online Account Ecosystem, which
stems from the defective multi-factor authentication (MFA), specifically the
ones with SMS-based verification, and dependencies among accounts on different
platforms. We propose the Chain Reaction Attack, which exploits the weakest point in
Online Account Ecosystem and can ultimately compromise the most secure
platform. Furthermore, we design and implement ActFort, a systematic approach
to detect the vulnerability of Online Account Ecosystem by analyzing the
authentication credential factors and sensitive personal information as well as
evaluating the dependency relationships among online accounts. We evaluate our
system on hundreds of representative online services listed in Alexa in
diversified fields. Based on the analysis from ActFort, we provide several
pragmatic insights into the current Online Account Ecosystem and propose
several feasible countermeasures including the online account exposed
information protection mechanism and the built-in authentication to fortify the
security of Online Account Ecosystem.
|
We propose Sliceable Monolith, a new methodology for developing microservice
architectures and performing their integration testing by leveraging most of the
simplicity of a monolith: a single codebase and a local execution environment
that simulates distribution. Then, a tool compiles a codebase for each
microservice and a cloud deployment configuration. The key enabler of our
approach is the technology-agnostic service definition language offered by
Jolie.
|
We introduce a proper display calculus for first-order logic, of which we
prove soundness, completeness, conservativity, subformula property and cut
elimination via a Belnap-style metatheorem. All inference rules are closed
under uniform substitution and are without side conditions.
|
Contextual multi-armed bandits have been shown to be an effective tool in
recommender systems. In this paper, we study a novel problem of multi-facet
bandits involving a group of bandits, each characterizing the users' needs from
one unique aspect. In each round, for the given user, we need to select one arm
from each bandit, such that the combination of all arms maximizes the final
reward. This problem can find immediate applications in E-commerce, healthcare,
etc. To address this problem, we propose a novel algorithm, named MuFasa, which
utilizes an assembled neural network to jointly learn the underlying reward
functions of multiple bandits. It estimates an Upper Confidence Bound (UCB)
linked with the expected reward to balance between exploitation and
exploration. Under mild assumptions, we provide the regret analysis of MuFasa.
It can achieve the near-optimal $\widetilde{ \mathcal{O}}((K+1)\sqrt{T})$
regret bound where $K$ is the number of bandits and $T$ is the number of played
rounds. Furthermore, we conduct extensive experiments to show that MuFasa
outperforms strong baselines on real-world data sets.
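For intuition, the UCB principle that MuFasa builds on can be illustrated in its textbook single-bandit form; this generic sketch does not reproduce MuFasa's assembled neural reward estimate or its cross-bandit arm combination.

```python
# Textbook UCB: pick the arm maximizing estimated reward plus an
# exploration bonus that shrinks as the arm is pulled more often.
import numpy as np

def ucb_pick(means, counts, t, c=1.0):
    """Return the index maximizing mean + c*sqrt(log t / n)."""
    bonus = c * np.sqrt(np.log(t) / np.maximum(counts, 1))
    bonus[counts == 0] = np.inf          # force each arm to be tried once
    return int(np.argmax(means + bonus))

rng = np.random.default_rng(0)
true = np.array([0.2, 0.5, 0.7])         # hidden arm reward probabilities (toy)
means, counts = np.zeros(3), np.zeros(3)
for t in range(1, 1001):
    a = ucb_pick(means, counts, t)
    r = rng.binomial(1, true[a])
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]   # running mean update
print(counts)                            # most pulls go to the best arm
```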
|
As is well known, there are various mass limits for compact stars. For
example, the maximum mass for non-rotating white dwarfs is given by the famous
Chandrasekhar limit of about $1.4 M_\odot$ (solar masses). Although the mass limit
for neutron stars is not so clear to date, one of the widely accepted values is
about $2.1 M_\odot\,$. Recently, challenges to these mass limits appeared.
Motivated by the super-Chandrasekhar mass white dwarfs with masses up to $2.4
\sim 2.8 M_\odot\,$, and compact objects (probably neutron stars) in the mass
gap (from $2.5 M_\odot$ or $3 M_\odot$ to $5 M_\odot$) inferred from
gravitational waves detected by LIGO/Virgo in the third observing run (O3), we
reconsider the mass limits for compact stars in the present work. Without
invoking strong magnetic fields and/or an exotic equation of state (EOS), we try to
increase the mass limits for compact stars in modified gravity theory. In this
work, we propose an inverse chameleon mechanism, and show that the fifth force
mediated by the scalar field can evade the stringent tests on Earth, in the
Solar System, and on cosmological scales, but manifests itself in compact stars such as white dwarfs
and neutron stars. The mass limits for compact stars in the inverse chameleon
mechanism can be easily increased to $3 M_\odot\,$, $5 M_\odot$ or even larger.
We argue that the inverse chameleon mechanism might be constrained by the
observations of exoplanets orbiting compact stars (such as white dwarfs and
neutron stars), and gravitational waves from the last stage of binary compact
star coalescence.
|
We study the connection between extreme events of thermal and kinetic energy
dissipation rates in the bulk of three-dimensional Rayleigh-B\'{e}nard
convection and the wall shear stress patterns at the top and the bottom planes
that enclose the layer. Zero points of this two-dimensional vector field stand
for detachments of strong thermal plumes. If their positions at the opposite
planes are close to each other at a given time, then they can be considered precursors for
high-amplitude bulk dissipation events triggered by plume collisions or close
passings. This scenario requires a breaking of the synchronicity of the
boundary layer dynamics at both plates which is found to be in line with a
transition of the bulk derivative statistics from Gaussian to intermittent. Our
studies are based on three-dimensional high-resolution direct numerical
simulations for moderate Rayleigh numbers between $Ra=10^4$ and $5\times 10^5$.
|
Virtual reality (VR) head-mounted displays (HMD) have recently been used to
provide an immersive, first-person vision/view in real-time for manipulating
remotely-controlled unmanned ground vehicles (UGVs). The real-time
teleoperation of UGVs can be challenging for operators. One major challenge is
for operators to quickly perceive the distances of objects around the UGV
while it is moving. In this research, we
explore the use of monoscopic and stereoscopic views and display types
(immersive and non-immersive VR) for operating vehicles remotely. We conducted
two user studies to explore their feasibility and advantages. Results show a
significantly better performance when using an immersive display with
stereoscopic view for dynamic, real-time navigation tasks that require avoiding
both moving and static obstacles. The use of stereoscopic view in an immersive
display in particular improved user performance and led to better usability.
|
We discuss various aspects of anisotropic gravity in $(d+D)$-dimensional
spacetime where $D$ dimensions are treated as extra dimensions. It is based on
the foliation preserving diffeomorphism invariance and anisotropic conformal
invariance. The anisotropy is embodied by introducing a factor $z$ which
discriminates the scaling degree of the extra $D$ dimensions against the
$d$-dimensional base spacetime and Weyl scalar field which mediates the
anisotropic scaling symmetry. There is no intrinsic scale but a physical scale
$M_*$ emerges as a consequence of spontaneous conformal symmetry breaking. Some
vacuum solutions are obtained and we discuss an issue of `size separation'
between the base spacetime and the extra dimensions. The size separation means
large hierarchy between the scales appearing in the base spacetime and the
extra dimensions respectively. We also discuss interesting theories obtained
from our model. In the $(4,1)$ case, we propose a resolution of the hierarchy
problem and compare it with the results of the brane-world model. In the
$(d,D)=(2,2)$ case, we suggest a UV-complete unitary quantum gravity which
might reduce to Einstein gravity in the IR. In a certain $(2,1)$ case, we
obtain the CGHS model.
|
In many mechanistic medical, biological, physical and engineered
spatiotemporal dynamic models the numerical solution of partial differential
equations (PDEs) can make simulations impractically slow. Biological models
require the simultaneous calculation of the spatial variation of concentration
of dozens of diffusing chemical species. Machine learning surrogates, neural
networks trained to provide approximate solutions to such complicated numerical
problems, can often provide speed-ups of several orders of magnitude compared
to direct calculation. PDE surrogates enable use of larger models than are
possible with direct calculation and can make including such simulations in
real-time or near-real time workflows practical. Creating a surrogate requires
running the direct calculation tens of thousands of times to generate training
data and then training the neural network, both of which are computationally
expensive. We use a Convolutional Neural Network to approximate the stationary
solution to the diffusion equation in the case of two equal-diameter, circular,
constant-value sources located at random positions in a two-dimensional square
domain with absorbing boundary conditions. To improve convergence during
training, we apply a training approach that uses roll-back to reject stochastic
changes to the network that increase the loss function. The trained neural
network approximation is about $10^3$ times faster than the direct calculation for
individual replicas. Because different applications will have different
criteria for acceptable approximation accuracy, we discuss a variety of loss
functions and accuracy estimators that can help select the best network for a
particular application.
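The roll-back training strategy described above can be sketched with a minimal loop that keeps a stochastic update only if it does not increase the loss; the model, data, and optimizer below are placeholders, not the authors' configuration.

```python
# Roll-back training: restore the previous weights if a step hurts the loss.
import copy
import torch

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()
X, y = torch.randn(256, 10), torch.randn(256, 1)    # placeholder data

best = loss_fn(model(X), y).item()
for _ in range(200):
    snapshot = copy.deepcopy(model.state_dict())    # save current weights
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    new = loss_fn(model(X), y).item()
    if new > best:                                  # reject steps that increase the loss
        model.load_state_dict(snapshot)
    else:
        best = new
print(best)
```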
|
Peer-to-peer (p2p) content delivery promises benefits such as
cost savings and scalable peak-demand handling in comparison with conventional
content delivery networks (CDNs) and complement the decentralized storage
networks such as Filecoin. However, reliable p2p delivery requires proper
enforcement of delivery fairness, i.e., the deliverers should be rewarded
according to their in-time delivery. Unfortunately, most existing studies on
delivery fairness are based on non-cooperative game-theoretic assumptions that
are arguably unrealistic in the ad-hoc p2p setting. We put forth, for the first
time, expressive yet minimalist security requirements for p2p content delivery,
and give two efficient blockchain-based solutions, FairDownload and FairStream,
for the p2p downloading and p2p streaming scenarios, respectively. Our designs
not only guarantee delivery fairness, ensuring that deliverers are paid
(nearly) proportionally to their in-time delivery, but also ensure that
content consumers and content providers are fairly treated. The fairness of each party can be
guaranteed when the other two parties collude to arbitrarily misbehave.
Moreover, the systems are efficient in the sense of attaining asymptotically
optimal on-chain costs and optimal deliverer communication. We implement the
protocols to build the prototype systems atop the Ethereum Ropsten network.
Extensive experiments done in LAN and WAN settings showcase their high
practicality.
|
We provide a variational approximation of Ambrosio-Tortorelli type for
brittle fracture energies of piecewise-rigid solids. Our result covers both the
case of geometrically nonlinear elasticity and that of linearised elasticity.
|
Inspired by natural evolution, evolutionary search algorithms have proven
remarkably capable due to their dual ability to explore broadly through
diverse populations and to converge under adaptive pressures. A large part of this
behavior comes from the selection function of an evolutionary algorithm, which
is a metric for deciding which individuals survive to the next generation. In
deceptive or hard-to-search fitness landscapes, greedy selection often fails,
thus it is critical that selection functions strike the correct balance between
gradient-exploiting adaptation and exploratory diversification. This paper
introduces Sel4Sel, or Selecting for Selection, an algorithm that searches for
high-performing neural-network-based selection functions through a
meta-evolutionary loop. Results on three distinct bitstring domains indicate
that Sel4Sel networks consistently match or exceed the performance of both
fitness-based selection and benchmarks explicitly designed to encourage
diversity. Analysis of the strongest Sel4Sel networks reveals a general
tendency to favor highly novel individuals early on, with a gradual shift
towards fitness-based selection as deceptive local optima are bypassed.
|
Recently, channel state information (CSI) at the physical layer has been
utilized to detect spoofing attacks in wireless communications. However, due to
hardware impairments and communication noise, the CSI cannot be estimated
accurately, which significantly degrades the attack detection performance.
Besides, the reliability of CSI based detection schemes is challenged by
time-varying scenarios. To address these issues, we propose an adaptive Kalman
based detection scheme. By utilizing knowledge of the predicted channel, we
eliminate the channel estimation error, especially the random phase error which
occurs due to the lack of synchronization between transmitter and receiver.
Furthermore, we define a Kalman residual based test statistic for attack
detection. Simulation results show that our proposed scheme makes the detection
more robust at low signal-to-noise ratio (SNR) and in dynamic scenarios.
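As an illustration of a residual-based test of this kind, the sketch below flags an attack when the normalized innovation exceeds a chi-squared threshold; it is a generic template with placeholder values, not the paper's adaptive Kalman scheme.

```python
# Generic innovation (residual) test for anomaly/attack detection.
import numpy as np
from scipy.stats import chi2

def residual_alarm(y, y_pred, S, alpha=0.01):
    """y: measured CSI, y_pred: Kalman-predicted CSI, S: innovation covariance."""
    r = y - y_pred
    stat = float(r.T @ np.linalg.inv(S) @ r)       # Mahalanobis norm of residual
    return stat > chi2.ppf(1 - alpha, df=len(y))   # compare to chi-squared threshold

S = np.eye(2) * 0.1                                # placeholder covariance
print(residual_alarm(np.array([0.9, 1.1]), np.array([1.0, 1.0]), S))
```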
|
In this work, we solve the problem of finding non-intersecting paths between
points on a plane with a new approach by borrowing ideas from geometric
topology, in particular, from the study of polygonal schema in mathematics. We
use a topological transformation on the 2-dimensional planar routing
environment that simplifies the routing problem into a problem of connecting
points on a circle with straight line segments that do not intersect in the
interior of the circle. These points are either the points that need to be
connected by non-intersecting paths or special `reference' points that
parametrize the topology of the original environment prior to the
transformation. When all the necessary points on the circle are fully
connected, the transformation is reversed such that the line segments combine
to become the non-intersecting paths that connect the start and end points in
the original environment. We interpret the transformed environment in which the
routing problem is solved as a new data structure where any routing problem can
be solved efficiently. We perform experiments and show that the routing time
and success rate of the new routing algorithm outperform those of the
A* algorithm.
|
Massive Open Online Courses (MOOCs) have been used by students as a low-cost
and low-touch educational credential in a variety of fields. Understanding the
grading mechanisms behind these course assignments is important for evaluating
MOOC credentials. A common approach to grading free-response assignments is
massive scale peer-review, especially used for assignments that are not easy to
grade programmatically. It is difficult to assess these approaches since the
responses typically require human evaluation. Here we link data from public
code repositories on GitHub and course grades for a large massive open online
course to study the dynamics of massive-scale peer review. This has important
implications for understanding the dynamics of difficult-to-grade assignments.
Since the research was not hypothesis-driven, we described the results in an
exploratory framework. We find three distinct clusters of repeated peer-review
submissions and use these clusters to study how grades change in response to
changes in code submissions. Our exploration also leads to an important
observation that massive scale peer-review scores are highly variable,
increase, on average, with repeated submissions, and changes in scores are not
closely tied to the code changes that form the basis for the re-submissions.
|
State estimators are crucial components of anomaly detectors that are used to
monitor cyber-physical systems. Many frequently used state estimators are
susceptible to model risk as they rely critically on the availability of an
accurate state-space model. Modeling errors make it more difficult to
distinguish whether deviations from expected behavior are due to anomalies or
simply a lack of knowledge about the system dynamics. In this research, we
account for model uncertainty through a multiplicative noise framework.
Specifically, we propose to use the multiplicative noise LQG based compensator
in this setting to hedge against the model uncertainty risk. The size of the
residual from the estimator can then be compared against a threshold to detect
anomalies. Finally, the proposed detector is validated using numerical
simulations. Extension of state-of-the-art anomaly detection in cyber-physical
systems to handle model uncertainty represents the main novel contribution of
the present work.
|
In this article, we prove that if $(\mathcal A ,\mathcal B,\mathcal C)$ is a
recollement of extriangulated categories, then torsion pairs in $\mathcal A$
and $\mathcal C$ can induce torsion pairs in $\mathcal B$, and the converse
holds under natural assumptions. Besides, we give mild conditions on a cluster
tilting subcategory on the middle category of a recollement of extriangulated
categories, for the corresponding abelian quotients to form a recollement of
abelian categories.
|
In this paper, we establish blow-up results for the semilinear wave equation
in generalized Einstein-de Sitter spacetime with nonlinearity of derivative
type. Our approach is based on the integral representation formula for the
solution to the corresponding linear problem in the one-dimensional case, that
we determine through Yagdjian's Integral Transform approach. As an upper
bound for the exponent of the nonlinear term, we discover a Glassey-type
exponent which depends both on the space dimension and on the Lorentzian metric
in the generalized Einstein-de Sitter spacetime.
|
The dairy industry uses clover and grass as fodder for cows. Accurate
estimation of grass and clover biomass yield enables smart decisions in
optimizing fertilization and seeding density, resulting in increased
productivity and positive environmental impact. Grass and clover are usually
planted together, since clover is a nitrogen-fixing plant that brings nutrients
to the soil. Adjusting the right percentages of clover and grass in a field
reduces the need for external fertilization. Existing approaches for estimating
the grass-clover composition of a field are expensive and time-consuming:
random samples of the pasture are clipped and then the components are
physically separated to weigh and calculate percentages of dry grass, clover
and weeds in each sample. There is growing interest in developing novel deep
learning based approaches to non-destructively extract pasture phenotype
indicators and biomass yield predictions of different plant species from
agricultural imagery collected from the field. Providing these indicators and
predictions from images alone remains a significant challenge. Heavy occlusions
in the dense mixture of grass, clover and weeds make it difficult to estimate
each component accurately. Moreover, although supervised deep learning models
perform well with large datasets, it is tedious to acquire large and diverse
collections of field images with precise ground truth for different biomass
yields. In this paper, we demonstrate that applying data augmentation and
transfer learning is effective in predicting multi-target biomass percentages
of different plant species, even with a small training dataset. The scheme
proposed in this paper uses a training set of only 261 images and provides
predictions of the biomass percentages of grass, clover, white clover, red
clover, and weeds with mean absolute errors of 6.77%, 6.92%, 6.21%, 6.89%, and
4.80%, respectively.
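To make the transfer-learning setup concrete, a minimal sketch of such a model might look as follows; the backbone, frozen layers, and regression head are our own assumptions, not the paper's exact architecture.

```python
# Pretrained backbone with a 5-output regression head for the five
# biomass percentages (grass, clover, white clover, red clover, weeds).
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
for p in model.parameters():
    p.requires_grad = False                        # freeze pretrained features
model.fc = torch.nn.Linear(model.fc.in_features, 5)  # new multi-target head

x = torch.randn(4, 3, 224, 224)                    # a batch of field images (placeholder)
print(model(x).shape)                              # torch.Size([4, 5])
```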
|
With the increasing scale and diversification of interaction behaviors in
E-commerce, more and more researchers pay attention to multi-behavior
recommender systems that utilize interaction data of other auxiliary behaviors
such as view and cart. To address the challenges of such heterogeneous scenarios,
non-sampling methods have shown superiority over negative sampling methods.
However, two observations are usually ignored in existing state-of-the-art
non-sampling methods based on binary regression: (1) users have different
preference strengths for different items, so they cannot be measured simply by
binary implicit data; (2) the dependency across multiple behaviors varies for
different users and items. To tackle the above issues, we propose a novel
non-sampling learning framework named \underline{C}riterion-guided
\underline{H}eterogeneous \underline{C}ollaborative \underline{F}iltering
(CHCF). CHCF introduces both upper and lower bounds to indicate selection
criteria, which will guide user preference learning. Besides, CHCF integrates
criterion learning and user preference learning into a unified framework, which
can be trained jointly for the interaction prediction on target behavior. We
further theoretically demonstrate that the optimization of Collaborative Metric
Learning can be approximately achieved by the CHCF learning framework in a
non-sampling form. Extensive experiments on two real-world datasets
show that CHCF outperforms the state-of-the-art methods in heterogeneous
scenarios.
|
Using econometric models, this paper addresses the ability of Albanian
Small and Medium-sized Enterprises (SMEs) to identify the risks they face. To
write this paper, we studied SMEs operating in the Gjirokastra region. First,
qualitative data were gathered through a questionnaire and measured on a
5-level Likert scale. Then the data were processed with the
statistical software SPSS version 21, using the binary logistic regression
model, which reveals the probability of occurrence of an event when all
independent variables are included. Logistic regression belongs to a category
of statistical models called Generalized Linear Models. It is used to analyze
problems in which one or more independent variables influence a dichotomous
dependent variable; in such cases, the latter is treated as a random variable
that depends on them. To evaluate whether Albanian SMEs can identify risks, we
analyzed the factors that SMEs perceive as directly affecting the risks they
face. At the end of the paper, we conclude that Albanian SMEs can identify risks.
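For reference, the binary logistic regression model used here estimates the probability of the event as
$$ P(Y=1\mid x_1,\dots,x_k)=\frac{1}{1+e^{-(\beta_0+\beta_1 x_1+\cdots+\beta_k x_k)}}, $$
where $x_1,\dots,x_k$ are the independent variables and $\beta_0,\dots,\beta_k$ are the coefficients estimated from the data (here, in SPSS).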
|
The Patterson-Sullivan construction is proved almost surely to recover a
Bergman function from its values on a random discrete subset sampled with the
determinantal point process induced by the Bergman kernel on the unit ball
$\mathbb{D}_d$ in $\mathbb{C}^d$. For super-critical weighted Bergman spaces,
the interpolation is uniform when the functions range over the unit ball of the
weighted Bergman space. As main results, we obtain a necessary and sufficient
condition for interpolation of a fixed pluriharmonic function in the complex
hyperbolic space of arbitrary dimension (cf. Theorem 1.4 and Theorem 4.11);
optimal simultaneous uniform interpolation for weighted Bergman spaces (cf.
Theorem 1.8, Proposition 1.9 and Theorem 4.13); strong simultaneous uniform
interpolation for weighted harmonic Hardy spaces (cf. Theorem 1.11 and Theorem
4.15); and establish the impossibility of the uniform simultaneous
interpolation for the Bergman space $A^2(\mathbb{D}_d)$ on $\mathbb{D}_d$ (cf.
Theorem 1.12 and Theorem 6.7).
|
The exponential growth in the number of multihomed mobile devices is changing
the way we connect to the Internet. Our mobile devices demand ever more
network resources, in terms of traffic volume and QoS requirements.
Unfortunately, it is very hard for a multihomed device to be simultaneously
connected to the network through multiple links. The current work enhances the
network access of multihomed devices agnostically to the deployed access
technologies. This enhancement is achieved by simultaneously using all of the
mobile device's interfaces, and by routing each individual data flow through the
most convenient access technology. The proposed solution is only deployed at
the network side and it extends Proxy Mobile IPv6 with flow mobility in a
completely transparent way to mobile nodes. In fact, it gives particular
attention to the handover mechanisms, by improving the detection and attachment
of nodes in the network, with the inclusion of the IEEE 802.21 standard in the
solution. This provides the necessary implementation and integration details to
extend a network topology with femtocell devices. Each femtocell is equipped
with various network interfaces supporting a diverse set of access
technologies. There is also a decision entity that manages individually each
data flow according to its QoS/QoE requirements. The proposed solution has been
developed and extensively tested with a real prototype. Evaluation results
evidence that the overhead of using the solution is negligible compared to
the advantages it offers: support for flow mobility, fulfilment of VoIP
functional requirements, session continuity in spite of flow mobility, low
overhead, high scalability, and complete transparency of the proposed
solution to the user terminals.
|
Brain-inspired machine learning is gaining increasing consideration,
particularly in computer vision. Several studies investigated the inclusion of
top-down feedback connections in convolutional networks; however, it remains
unclear how and when these connections are functionally helpful. Here we
address this question in the context of object recognition under noisy
conditions. We consider deep convolutional networks (CNNs) as models of
feed-forward visual processing and implement Predictive Coding (PC) dynamics
through feedback connections (predictive feedback) trained for reconstruction
or classification of clean images. To directly assess the computational role of
predictive feedback in various experimental situations, we optimize and
interpret the hyper-parameters controlling the network's recurrent dynamics.
That is, we let the optimization process determine whether top-down connections
and predictive coding dynamics are functionally beneficial. Across different
model depths and architectures (3-layer CNN, ResNet18, and EfficientNetB0) and
against various types of noise (CIFAR100-C), we find that the network
increasingly relies on top-down predictions as the noise level increases; in
deeper networks, this effect is most prominent at lower layers. In addition,
the accuracy of the network implementing PC dynamics significantly increases
over time-steps, compared to its equivalent forward network. All in all, our
results provide novel insights relevant to Neuroscience by confirming the
computational role of feedback connections in sensory systems, and to Machine
Learning by revealing how these can improve the robustness of current vision
models.
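Schematically, predictive-coding dynamics of this kind update each layer's representation $r_\ell$ over time-steps by mixing a feed-forward drive, a top-down prediction, a memory term, and an error-correction term, e.g.
$$ r_\ell^{t+1}=\beta\,\mathrm{FF}(r_{\ell-1}^{t})+\lambda\,\mathrm{FB}(r_{\ell+1}^{t})+(1-\beta-\lambda)\,r_\ell^{t}-\alpha\,\nabla_{r_\ell}\varepsilon_{\ell}^{t}, $$
where $\varepsilon_\ell^t$ is a reconstruction error and $\beta$, $\lambda$, $\alpha$ are hyper-parameters of the kind optimized in the text; this is a schematic form, not necessarily the paper's exact update rule.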
|
The induced surface charges appear to diverge when dielectric particles form
close contacts. Resolving this singularity numerically is prohibitively
expensive because high spatial resolution is needed. We show that the strength
of this singularity is logarithmic in both inter-particle separation and
dielectric permittivity. A regularization scheme is proposed to isolate this
singularity, and to calculate the exact cohesive energy for clusters of
contacting dielectric particles. The results indicate that polarization energy
stabilizes clusters of open configurations when permittivity is high, in
agreement with the behavior of conducting particles, but stabilizes the compact
configurations when permittivity is low.
|
We introduce the dueling teams problem, a new online-learning setting in
which the learner observes noisy comparisons of disjoint pairs of $k$-sized
teams from a universe of $n$ players. The goal of the learner is to minimize
the number of duels required to identify, with high probability, a Condorcet
winning team, i.e., a team which wins against any other disjoint team (with
probability at least $1/2$). Noisy comparisons are linked to a total order on
the teams. We formalize our model by building upon the dueling bandits setting
(Yue et al., 2012) and provide several algorithms, both for stochastic and
deterministic settings. For the stochastic setting, we provide a reduction to
the classical dueling bandits setting, yielding an algorithm that identifies a
Condorcet winning team within $\mathcal{O}((n + k \log (k)) \frac{\max(\log\log
n, \log k)}{\Delta^2})$ duels, where $\Delta$ is a gap parameter. For
deterministic feedback, we additionally present a gap-independent algorithm
that identifies a Condorcet winning team within $\mathcal{O}(nk\log(k)+k^5)$
duels.
|
Detecting which parts of a sentence contribute to that sentence's toxicity --
rather than providing a sentence-level verdict of hatefulness -- would increase
the interpretability of models and allow human moderators to better understand
the outputs of the system. This paper presents our team's (UTNLP) methodology
and results in the SemEval-2021 shared task 5 on toxic spans detection. We test
multiple models and contextual embeddings and report the best-performing
setting. The experiments start with keyword-based models and are followed by
attention-based, named entity-based, transformers-based, and ensemble models.
Our best approach, an ensemble model, achieves an F1 of 0.684 in the
competition's evaluation phase.
|
We continue the constructive program about tensor field theory through the
next natural model, namely the rank five tensor theory with quartic melonic
interactions and propagator inverse of the Laplacian on $U(1)^5$. We make a
first step towards its construction by establishing its power counting,
identifying the divergent graphs, and performing a careful study of (a slight
modification of) its RG flow. We thus give strong evidence that this
just-renormalizable tensor field theory is non-perturbatively asymptotically free.
|
Distributed delay equations have been used to model situations in which there
is some sort of delay whose duration is uncertain. However, the interpretation
of a distributed delay equation is actually very different from that of a delay
differential equation with a random delay. This work explicitly highlights this
distinction as it is an important consideration to make when modeling delayed
systems in which the delay can take on several values.
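To make the distinction explicit: in one common form, a distributed delay equation averages over delays inside the dynamics,
$$ x'(t)=f\!\left(\int_0^{\infty}x(t-\tau)\,k(\tau)\,d\tau\right), $$
with a delay kernel $k$, whereas a delay differential equation with a random delay is the ensemble of equations $x'(t)=f(x(t-\tau))$ with $\tau$ drawn once from the distribution $k$, so that every realization evolves with a single fixed delay. (The placement of the integral and the form of $f$ are illustrative; conventions vary.)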
|
We characterize the kinematic and chemical properties of 589 Galactic
Anticenter Substructure Stars (GASS) with K-/M- giants in Integrals-of-Motion
space. These stars likely include members of previously identified
substructures such as Monoceros, A13, and the Triangulum-Andromeda cloud
(TriAnd). We show that these stars are on nearly circular orbits on both sides
of the Galactic plane. We can see velocity($V_{Z}$) gradient along Y-axis
especially for the south GASS members. Our GASS members have similar energy and
angular momentum distributions to thin disk stars. Their location in
[$\alpha$/M] vs. [M/H] space is more metal poor than typical thin disk stars,
with [$\alpha$/M] \textbf{lower} than the thick disk. We infer that our GASS
members are part of the outer metal-poor disk stars, and the outer-disk extends
to 30 kpc. Considering the distance range and $\alpha$-abundance features, GASS
could be formed after the thick disk was formed due to the molecular cloud
density decreased in the outer disk where the SFR might be less efficient than
the inner disk.
|
Database management has become an essential tool for on-demand content
distribution services, providing required information and custom services to
the user. It also plays a major role in helping platforms manage their data in
such a way that data redundancy is minimized. This paper emphasizes improving
the user experience on such platforms by efficiently managing data.
Keeping in mind new-age requirements, especially the sudden surge in
subscriptions after COVID-19, stakeholders have been led to try new approaches
to lead the OTT market. With the collection of shows as the root of the tree,
this paper improves the currently existing branches via various tables and
suggests new features for how the collected data can be utilized to introduce
new and much-needed query results for the consumer.
|
The ratio of the $B^0_s$ and $B^0$ fragmentation fractions, $f_s/f_d$, in
proton-proton collisions at the LHC, is obtained as a function of $B$-meson
transverse momentum and collision centre-of-mass energy from the combined
analysis of different $B$-decay channels measured by the LHCb experiment. The
results are described by a linear function of the meson transverse momentum, or
with a function inspired by Tsallis statistics. Precise measurements of the
branching fractions of the $B^0_s \to J/\psi \phi$ and $B^0_s \to D^-_s \pi^+$
decays are performed, reducing their uncertainty by about a factor of two with
respect to previous world averages. Numerous $B^0_s$ decay branching fractions,
measured at the LHCb experiment, are also updated using the new values of
$f_s/f_d$ and branching fractions of normalisation channels. These results
reduce a major source of systematic uncertainty in several searches for new
physics performed through measurements of $B^0_s$ branching fractions.
|
We present the Mathematica package QMeS-Derivation. It derives symbolic
functional equations from a given master equation. The latter include
functional renormalisation group equations, Dyson-Schwinger equations,
Slavnov-Taylor and Ward identities and their modifications in the presence of
momentum cutoffs. The modules allow the user to derive the functional equations, take
functional derivatives, trace over field space, apply a given truncation
scheme, and do momentum routings while keeping track of prefactors and signs
that arise from fermionic commutation relations. The package furthermore
contains an installer as well as Mathematica notebooks with showcase examples.
|
The pion structure is represented by generalized parton distribution
functions (GPDs). The momentum transfer dependence of the pion GPDs was
obtained on the basis of the form of the nucleon GPDs in the framework of
the high energy generalized structure (HEGS) model. To this end, different
forms of pion PDFs from various collaborations were examined, taking
into account the available experimental data on the pion form factors. As a
result, the electromagnetic and gravitomagnetic form factors of the pion were
calculated. They were used in the framework of the HEGS model with the
electromagnetic and gravitomagnetic form factors of the proton for describing
pion-nucleon elastic scattering in a wide energy and momentum transfer region
with a minimum of fitting parameters. The properties of the obtained scattering
amplitude were analyzed.
|
Non-linear dimensionality reduction can be performed by \textit{manifold
learning} approaches, such as Stochastic Neighbour Embedding (SNE), Locally
Linear Embedding (LLE) and Isometric Feature Mapping (ISOMAP). These methods
aim to produce two or three latent embeddings, primarily to visualise the data
in intelligible representations. This manuscript proposes extensions of
Student's t-distributed SNE (t-SNE), LLE and ISOMAP, for dimensionality
reduction and visualisation of multi-view data. Multi-view data refers to
multiple types of data generated from the same samples. The proposed multi-view
approaches provide more comprehensible projections of the samples compared to
the ones obtained by visualising each data-view separately. Commonly,
visualisation is used to identify underlying patterns within the samples.
By incorporating the obtained low-dimensional embeddings from the multi-view
manifold approaches into the K-means clustering algorithm, it is shown that
clusters of the samples are accurately identified. Through the analysis of real
and synthetic data the proposed multi-SNE approach is found to have the best
performance. We further illustrate the applicability of the multi-SNE approach
for the analysis of multi-omics single-cell data, where the aim is to visualise
and identify cell heterogeneity and cell types in biological tissues relevant
to health and disease.
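For contrast, the simplest multi-view baseline, which the proposed methods are designed to improve upon, concatenates the data-views before embedding and clustering; a minimal sketch with placeholder data follows.

```python
# Naive baseline (not the proposed multi-SNE): concatenate views,
# embed with t-SNE, then cluster the embedding with K-means.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
view1 = rng.normal(size=(100, 20))       # e.g., gene expression (placeholder)
view2 = rng.normal(size=(100, 15))       # e.g., methylation (placeholder)

X = np.hstack([view1, view2])            # simplest multi-view combination
emb = TSNE(n_components=2, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
print(labels[:10])
```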
|
The paper considers the system of pressureless gas dynamics in one space
dimension. The question of solvability of the initial-boundary value problem is
addressed. Using the method of generalized potentials and characteristic
triangles, extended to the boundary value case, an explicit way of constructing
measure-valued solutions is presented. The prescription of boundary data is
shown to depend on the behavior of the generalized potentials at the boundary.
We show that the constructed solution satisfies an entropy condition and it
conserves mass, whereby mass may accumulate at the boundary. Conservation of
momentum again depends on the behavior of the generalized boundary potentials.
There is a large amount of literature where the initial value problem for the
pressureless gas dynamics model has been studied. To our knowledge, this paper
is the first one which considers the initial-boundary value problem.
|
Recently, deep-learning-based object detection has proven to be vulnerable to
adversarial patch attacks. Attackers holding a specially crafted patch can
hide themselves from state-of-the-art person detectors, e.g., YOLO, even in
the physical world. This kind of attack can pose serious security threats,
such as escaping from surveillance cameras. In this paper, we explore in depth
the problem of detecting adversarial patch attacks on object detection. First,
we identify a leverageable signature of existing
adversarial patches from the viewpoint of visualization explanations. A fast
signature-based defense method is proposed and demonstrated to be effective.
Second, we design an improved patch generation algorithm to reveal the risk
that the signature-based way may be bypassed by the techniques emerging in the
future. The newly generated adversarial patches can successfully evade the
proposed signature-based defense. Finally, we present a novel
signature-independent detection method based on internal content semantic
consistency rather than any attack-specific prior knowledge. The fundamental
intuition is that the adversarial object can appear locally but disappear
globally in an input image. The experiments demonstrate that the
signature-independent method can effectively detect the existing and improved
attacks. It has also proven to be a general method by detecting unforeseen and
even other types of attacks without any attack-specific prior knowledge. The
two proposed detection methods can be adopted in different scenarios, and we
believe that combining them can offer comprehensive protection.
|
Highly dynamic environments, with moving objects such as cars or humans, can
pose a performance challenge for LiDAR SLAM systems that assume largely static
scenes. To overcome this challenge and support the deployment of robots in real
world scenarios, we propose a complete solution for a dynamic object aware
LiDAR SLAM algorithm. This is achieved by leveraging a real-time capable neural
network that can detect dynamic objects, thus allowing our system to deal with
them explicitly. To efficiently generate the necessary training data which is
key to our approach, we present a novel end-to-end occupancy grid based
pipeline that can automatically label a wide variety of arbitrary dynamic
objects. Our solution can thus generalize to different environments without the
need for expensive manual labeling and at the same time avoids assumptions
about the presence of a predefined set of known objects in the scene. Using
this technique, we automatically label over 12000 LiDAR scans collected in an
urban environment with a large number of pedestrians and use these data to train
a neural network, achieving an average segmentation IoU of 0.82. We show that
explicitly dealing with dynamic objects can improve the LiDAR SLAM odometry
performance by 39.6% while yielding maps which better represent the
environments. A supplementary video as well as our test data are available
online.
|
Let $r>0, n\in\mathbb{N}, {\bf k}\in\mathbb{N}$. Consider the delay
differential equation $$ x'(t)=g(x(t-d_1(Lx_t)),\ldots,x(t-d_{{\bf k}}(Lx_t)))
$$ for $g:(\mathbb{R}^n)^{{\bf k}}\supset V\to\mathbb{R}^n$ continuously
differentiable, $L$ a continuous linear map from $C([-r,0],\mathbb{R}^n)$ into
a finite-dimensional vector space $F$, each $d_k:F\supset W\to[0,r]$,
$k=1,\ldots,{\bf k}$, continuously differentiable, and $x_t(s)=x(t+s)$. The
solutions define a semiflow of continuously differentiable solution operators
on the submanifold $X_f\subset C^1([-r,0],\mathbb{R}^n)$ which is given by the
compatibility condition $\phi'(0)=f(\phi)$ with $$
f(\phi)=g(\phi(-d_1(L\phi)),\ldots,\phi(-d_{{\bf k}}(L\phi))). $$ We prove that
$X_f$ has a finite atlas of at most $2^{{\bf k}}$ manifold charts, whose
domains are almost graphs over $X_0$. The size of the atlas depends solely on
the zero sets of the delay functions $d_k$.
|
We elaborate on the structure of the conformal anomaly effective action up to
4-th order, in an expansion in the gravitational fluctuations $(h)$ of the
background metric, in the flat spacetime limit. For this purpose we discuss the
renormalization of 4-point functions containing insertions of stress-energy
tensors (4T), in conformal field theories in four spacetime dimensions with the
goal of identifying the structure of the anomaly action. We focus on a
separation of the correlator into its transverse/traceless and longitudinal
components, applied to the trace and conservation Ward identities (WI) in
momentum space. These are sufficient to identify, from their hierarchical
structure, the anomaly contribution, without the need to proceed with a
complete determination of all of its independent form factors. Renormalization
induces sequential bilinear graviton-scalar mixings on single, double and
multiple trace terms, corresponding to $R\square^{-1}$ interactions of the
scalar curvature, with intermediate virtual massless exchanges. These
dilaton-like terms couple to the conformal anomaly, as for the chiral anomalous
WIs. We show that at 4T level a new traceless component appears after
renormalization. We comment on future extensions of this result to more general
backgrounds, with possible applications to non local cosmologies.
|
Interfacial segregation can stabilize grain structures and even lead to grain
boundary complexion transitions. However, understanding of the complexity of
such phenomena in polycrystalline materials is limited, as most studies focus
on bicrystal geometries. In this work, we investigate interfacial segregation
and subsequent complexion transitions in polycrystalline Cu-Zr alloys using
hybrid Monte Carlo/molecular dynamics simulations. No significant change in the
grain size or structure is observed upon Zr dopant addition to a pure Cu
polycrystal at moderate temperature, where grain boundary segregation is the
dominant behavior. Segregation within the boundary network is inhomogeneous,
with some boundaries having local concentrations that are an order of magnitude
larger than the global value and others having almost no segregation, and
changes to physical parameters such as boundary free volume and energy are
found to correlate with dopant concentration. Further, another alloy sample is
investigated at a higher temperature to probe the occurrence of widespread
transitions in interfacial structure, where a significant fraction of the
originally ordered boundaries transition to amorphous complexions,
demonstrating the coexistence of multiple complexion types, each with their own
distribution of boundary chemical composition. Overall, this work highlights
that interfacial segregation and complexion structure can be diverse in a
polycrystalline network. The findings shown here complement existing
computational and experimental studies of individual interfaces and help pave
the way for unraveling the complexity of interfacial structure in realistic
microstructures.
|
Performance of neural models for named entity recognition degrades over time,
becoming stale. This degradation is due to temporal drift, the change in our
target variables' statistical properties over time. This issue is especially
problematic for social media data, where topics change rapidly. In order to
mitigate the problem, data annotation and retraining of models is common.
Despite its usefulness, this process is expensive and time-consuming, which
motivates new research on efficient model updating. In this paper, we propose
an intuitive approach to measure the potential trendiness of tweets and use
this metric to select the most informative instances to use for training. We
conduct experiments on three state-of-the-art models on the Temporal Twitter
Dataset. Our approach shows larger increases in prediction accuracy with less
training data than the alternatives, making it an attractive, practical
solution.
|
As important data carriers, the drastically increasing number of multimedia
videos often brings many duplicate and near-duplicate videos into the top
search results. Near-duplicate video retrieval (NDVR) can cluster and filter out the
redundant contents. In this paper, the proposed NDVR approach extracts the
frame-level video representation based on convolutional neural network (CNN)
features from fully-connected layer and aggregated intermediate convolutional
layers. Unsupervised metric learning is used for similarity measurement and
feature matching. An efficient re-ranking algorithm combined with k-nearest
neighborhood fuses the retrieval results from two levels of features and
further improves the retrieval performance. Extensive experiments on the widely
used CC\_WEB\_VIDEO dataset show that the proposed approach exhibits superior
performance over the state-of-the-art.
|
Accurate cosmological parameter estimates using polarization data of the
cosmic microwave background (CMB) put stringent requirements on map
calibration, as highlighted in the recent results from the Planck satellite. In
this paper, we point out that a model-dependent determination of polarization
calibration can be achieved by the joint fit of the TE and EE CMB power
spectra. This provides a valuable cross-check to band-averaged polarization
efficiency measurements determined using other approaches. We demonstrate that,
in $\Lambda$CDM, the combination of the TE and EE spectra constrains polarization
calibration with sub-percent uncertainty for Planck data and 2% uncertainty
with SPTpol data. We arrive at similar conclusions when extending $\Lambda$CDM
to include the amplitude of lensing $A_{\rm L}$, the number of relativistic
species $N_{\rm eff}$, or the sum of the neutrino masses $\sum m_{\nu}$. The
uncertainties on cosmological parameters are minimally impacted when
marginalizing over polarization calibration, except, as can be expected, for
the uncertainty on the amplitude of the primordial scalar power spectrum
$\ln(10^{10} A_{\rm s})$, which increases by $20-50$%. However, this
information can be fully recovered by adding TT data. For current and future
ground-based experiments, SPT-3G and CMB-S4, we forecast the cosmological
parameter uncertainties to be minimally degraded when marginalizing over
polarization calibration parameters. In addition, CMB-S4 could constrain its
polarization calibration at the level of $\sim$0.2% by combining TE and EE, and
reach $\sim$0.06% by also including TT. We therefore conclude that relying on
calibrating against Planck polarization maps, whose statistical uncertainty is
limited to $\sim$0.5%, would be insufficient for upcoming experiments.
|
We investigate the dynamics of kicked pseudo-spin-1/2 Bose-Einstein
condensates (BECs) with spin-orbit coupling (SOC) in a tightly confined
toroidal trap. The system exhibits different dynamical behaviors depending on
the competition among SOC, kick strength, kick period and interatomic
interaction. For weak kick strength, with the increase of SOC the density
profiles of two components evolve from overlapped symmetric distributions into
staggered antisymmetric distributions, and the evolution of energy experiences
a transition from quasiperiodic motion to modulated quantum beating. For large
kick strength, when the SOC strength increases, the overlapped symmetric
density distributions become staggered irregular patterns, and the energy
evolution undergoes a transition from quasiperiodic motion to dynamical
localization. Furthermore, in the case of weak SOC, the increase of kick period
leads to a transition of the system from quantum beating to Rabi oscillation,
while for the case of strong SOC the system demonstrates complex quasiperiodic
motion.
|
Recent advances in Reinforcement Learning (RL) combined with Deep Learning
(DL) have demonstrated impressive performance in complex tasks, including
autonomous driving. The use of RL agents in autonomous driving leads to a
smooth human-like driving experience, but the limited interpretability of Deep
Reinforcement Learning (DRL) creates a verification and certification
bottleneck. Instead of relying on RL agents to learn complex tasks, we propose
HPRL - Hierarchical Program-triggered Reinforcement Learning, which uses a
hierarchy consisting of a structured program along with multiple RL agents,
each trained to perform a relatively simple task. The focus of verification
shifts to the master program under simple guarantees from the RL agents,
leading to a significantly more interpretable and verifiable implementation as
compared to a complex RL agent. The evaluation of the framework is demonstrated
on different driving tasks and NHTSA precrash scenarios using CARLA, an
open-source dynamic urban simulation environment.
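A minimal sketch of the program-triggered hierarchy described above, assuming a
Gym-style environment; the dispatch rules, agent names, and observation keys
are hypothetical:

def master_program(obs):
    # Hand-written, verifiable dispatch logic (illustrative only):
    # verification effort concentrates here rather than in the agents.
    if obs["obstacle_ahead"]:
        return "brake"
    if obs["lane_change_requested"]:
        return "lane_change"
    return "lane_follow"

def run_episode(env, agents, max_steps=1000):
    obs = env.reset()
    for _ in range(max_steps):
        action = agents[master_program(obs)].act(obs)  # chosen RL agent acts
        obs, reward, done, info = env.step(action)
        if done:
            break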
|
The DiCOVA challenge aims at accelerating research in diagnosing COVID-19
using acoustics (DiCOVA), a topic at the intersection of speech and audio
processing, respiratory health diagnosis, and machine learning. This challenge
is an open call for researchers to analyze a dataset of sound recordings
collected from COVID-19 infected and non-COVID-19 individuals for a two-class
classification. These recordings were collected via crowdsourcing from multiple
countries, through a website application. The challenge features two tracks,
one focusing on cough sounds, and the other on using a collection of breath,
sustained vowel phonation, and number counting speech recordings. In this
paper, we introduce the challenge and provide a detailed description of the
task, and present a baseline system for the task.
|
We study the evolution of six exoplanetary systems with the stellar
evolutionary code MESA and conclude that they will likely spin up the envelope
of their parent stars on the red giant branch (RGB) or later on the asymptotic
giant branch (AGB) to the degree that the mass loss process might become
non-spherical. We choose six observed exoplanetary systems where the semi-major
axis is ~1-2AU, and use the binary mode of MESA to follow the evolution of the
systems. In four systems the star engulfs the planet on the RGB, and in two
systems on the AGB, and the systems enter a common envelope evolution (CEE). In
two systems where the exoplanet masses are Mp~10MJ, where MJ is Jupiter mass,
the planet spins up the envelope to about 10% of the break-up velocity. Such
envelopes are likely to have significant non-spherical mass loss geometry. In
the other four systems where Mp~MJ, the planet spins up the envelope to values
of ~1-2% of break-up velocity. Magnetic activity in the envelope that
influences dust formation might lead to a small departure from spherical mass
loss even in these cases. In the two cases of CEE on the AGB the planet
deposits energy to the envelope that amounts to >10% of the envelope binding
energy. We expect this to cause a non-spherical mass loss that will shape an
elliptical planetary nebula in each case.
|
We modify the standard model of price competition with horizontally
differentiated products, imperfect information, and search frictions by
allowing consumers to flexibly acquire information about a product's match
value during their visits. We characterize a consumer's optimal search and
information acquisition protocol and analyze the pricing game between firms.
Notably, we establish that in search markets there are fundamental differences
between search frictions and information frictions, which affect market prices,
profits, and consumer welfare in markedly different ways. Although higher
search costs beget higher prices (and profits for firms), higher information
acquisition costs lead to lower prices and may benefit consumers. We discuss
implications of our findings for policies concerning disclosure rules and
hidden fees.
|
Pharmaceutical companies regularly need to make decisions about drug
development programs based on the limited knowledge from early stage clinical
trials. In this situation, eliciting the judgements of experts is an attractive
approach for synthesising evidence on the unknown quantities of interest. When
calculating the probability of success for a drug development program, multiple
quantities of interest - such as the effect of a drug on different endpoints -
should not be treated as unrelated.
We discuss two approaches for establishing a multivariate distribution for
several related quantities within the SHeffield ELicitation Framework (SHELF).
The first approach elicits experts' judgements about a quantity of interest
conditional on knowledge about another one. For the second approach, we first
elicit marginal distributions for each quantity of interest. Then, for each
pair of quantities, we elicit the concordance probability that both lie on the
same side of their respective elicited medians. This allows us to specify a
copula to obtain the joint distribution of the quantities of interest.
We show how these approaches were used in an elicitation workshop that was
performed to assess the probability of success of the registrational program of
an asthma drug. The judgements of the experts, which were obtained prior to
completion of the pivotal studies, were well aligned with the final trial
results.
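To make the second approach concrete, assume for illustration a Gaussian
copula: the bivariate-normal orthant identity gives the concordance probability
$q = 1/2 + \arcsin(\rho)/\pi$, so the elicited $q$ fixes the copula correlation
$\rho$. A Python sketch with hypothetical marginals:

import numpy as np
from scipy import stats

def rho_from_concordance(q):
    # Invert q = 1/2 + arcsin(rho)/pi for the Gaussian-copula correlation.
    return np.sin(np.pi * (q - 0.5))

def sample_joint(marginal_a, marginal_b, q, size=10000, seed=0):
    rho = rho_from_concordance(q)
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size)
    u = stats.norm.cdf(z)                      # Gaussian copula on [0,1]^2
    return marginal_a.ppf(u[:, 0]), marginal_b.ppf(u[:, 1])

# Hypothetical elicited marginals for two endpoint effects, with q = 0.7:
a, b = sample_joint(stats.norm(0.30, 0.10), stats.norm(0.20, 0.15), q=0.7)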
|
In this paper, we release an open-source library, called TextBox, to provide
a unified, modularized, and extensible text generation framework. TextBox aims
to support a broad set of text generation tasks and models. In our library, we
implement 21 text generation models on 9 benchmark datasets, covering the
categories of VAE, GAN, and pretrained language models. Meanwhile, our library
maintains sufficient modularity and extensibility by properly decomposing the
model architecture, inference, and learning process into highly reusable
modules, which allows users to easily incorporate new models into our
framework. The above features make TextBox especially suitable for researchers
and practitioners to quickly reproduce baseline models and develop new models.
TextBox is implemented based on PyTorch, and released under Apache License 2.0
at https://github.com/RUCAIBox/TextBox.
|
The detection of voiced speech, the estimation of the fundamental frequency,
and the tracking of pitch values over time are crucial subtasks for a variety
of speech processing techniques. Many different algorithms have been developed
for each of the three subtasks. We present a new algorithm that integrates the
three subtasks into a single procedure. The algorithm can be applied to
pre-recorded speech utterances in the presence of considerable amounts of
background noise. We combine a collection of standard metrics, such as the
zero-crossing rate, to formulate an unsupervised voicing
classifier. The estimation of pitch values is accomplished with a hybrid
autocorrelation-based technique. We propose a forward-backward Kalman filter to
smooth the estimated pitch contour. In experiments, we are able to show that
the proposed method compares favorably with current, state-of-the-art pitch
detection algorithms.
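The forward-backward smoothing step can be illustrated with a one-dimensional
random-walk state model; the noise variances below are illustrative, not the
paper's settings:

import numpy as np

def rts_smooth(z, q=10.0, r=40.0):
    """Forward Kalman filter plus backward (RTS) pass over a pitch
    contour z in Hz, assuming a random-walk pitch state."""
    n = len(z)
    xf, pf = np.zeros(n), np.zeros(n)   # filtered mean / variance
    xp, pp = np.zeros(n), np.zeros(n)   # one-step predictions
    xf[0], pf[0] = z[0], r
    for t in range(1, n):
        xp[t], pp[t] = xf[t-1], pf[t-1] + q       # predict
        k = pp[t] / (pp[t] + r)                   # Kalman gain
        xf[t] = xp[t] + k * (z[t] - xp[t])        # update
        pf[t] = (1.0 - k) * pp[t]
    xs = xf.copy()
    for t in range(n - 2, -1, -1):                # backward smoothing pass
        c = pf[t] / pp[t+1]
        xs[t] = xf[t] + c * (xs[t+1] - xp[t+1])
    return xs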
|
Stream graphs are a very useful mode of representation for temporal network
data, whose richness offers a wide range of possible approaches. The various
methods aimed at generalising the classical approaches applied to static
networks are constantly being improved. In this paper, we describe a framework
that extends to stream graphs the iterative weighted-rich-club characterisation for
static networks proposed in [1]. The general principle is that we no longer
consider the membership of a node to one of the weighted-rich-clubs for the
whole time period, but each node is associated with a temporal profile which is
the concatenation of the successive memberships of the node to the
weighted-rich-clubs that appear, disappear and change all along the period. A
clustering of these profiles gives the possibility to establish a reduced list
of typical temporal profiles and so a more in-depth understanding of the
temporal structure of the network. This approach is tested on real world data
produced by recording the interactions between different students within their
respective schools. [1] M. Djellabi, B. Jouve, and F. Amblard. Dense and sparse
vertex connectivity in networks. Journal of Complex Networks, 8(3), 2020.
|
We post-process galaxies in the IllustrisTNG simulations with SKIRT radiative
transfer calculations to make predictions for the rest-frame near-infrared
(NIR) and far-infrared (FIR) properties of galaxies at $z\geq 4$. The
rest-frame $K$- and $z$-band galaxy luminosity functions from TNG are overall
consistent with observations, despite a $\sim 0.4\,\mathrm{dex}$
underprediction at $z=4$ for $M_{\rm z}\lesssim -24$. Predictions for the JWST
MIRI observed galaxy luminosity functions and number counts are given. We show
that the next-generation survey conducted by JWST can detect 500 (30) galaxies
in F1000W in a survey area of $500\,{\rm arcmin}^{2}$ at $z=6$ ($z=8$). As
opposed to the consistency in the UV, optical and NIR, we find that TNG,
combined with our dust modelling choices, significantly underpredicts the
abundance of most dust-obscured and thus most luminous FIR galaxies. As a
result, the obscured cosmic star formation rate density (SFRD) and the SFRD
contributed by optical/NIR dark objects are underpredicted. The discrepancies
discovered here could provide new constraints on the sub-grid feedback models,
or the dust contents, of simulations. Meanwhile, although the TNG predicted
dust temperature and its relations with IR luminosity and redshift are
qualitatively consistent with observations, the peak dust temperature of $z\geq
6$ galaxies is overestimated by about $20\,{\rm K}$. This could be related to
the limited mass resolution of our simulations to fully resolve the porosity of
the interstellar medium (or specifically its dust content) at these redshifts.
|
We compute the leading corrections to the differential cross section for
top-pair production via gluon fusion due to dimension-six operators at leading
order in QCD. The Standard Model fields are assumed to couple only weakly to
the hypothetical new sector. A systematic approach then suggests treating
single insertions of the operator class containing gluon field strength tensors
on the same footing as explicitly loop suppressed contributions from
four-fermion operators. This is in particular the case for the chromomagnetic
operator $Q_{(uG)}$ and the purely bosonic operators $Q_{(G)}$ and $Q_{(\varphi
G)}$. All leading order dimension-six contributions are consequently suppressed
with a loop factor $1/16\pi^2$.
|
Finding out the differences and commonalities between the knowledge of two
parties is an important task. Such a comparison becomes necessary when one
party wants to determine how much it is worth acquiring the knowledge of the
second party, or similarly when two parties try to determine whether a
collaboration could be beneficial. When these two parties cannot trust each
other (for example, because they are competitors), performing such a comparison
is challenging as neither of them would be willing to share any of their
assets. This paper addresses this problem for knowledge graphs, without a need
for non-disclosure agreements nor a third party during the protocol.
During the protocol, the intersection between the two knowledge graphs is
determined in a privacy preserving fashion. This is followed by the computation
of various metrics, which give an indication of the potential gain from
obtaining the other party's knowledge graph, while still keeping the actual
knowledge graph contents secret. The protocol makes use of blind signatures and
(counting) Bloom filters to reduce the amount of leaked information. Finally,
the party who wants to obtain the other's knowledge graph can get a part of
such in a way that neither party is able to know beforehand which parts of the
graph are obtained (i.e., they cannot choose to only get or share the good
parts). After inspection of the quality of this part, the Buyer can decide to
proceed with the transaction.
The analysis of the protocol indicates that the developed protocol is secure
against malicious participants. Further experimental analysis shows that the
resource consumption scales linearly with the number of statements in the
knowledge graph.
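To give a flavor of the Bloom-filter step, here is a heavily simplified Python
sketch of estimating the overlap between two statement sets; the actual
protocol additionally blinds and signs the hashed statements so raw contents
are never revealed, and the parameters below are illustrative:

import hashlib

def positions(statement, m=1 << 16, k=4):
    # k filter positions derived from one SHA-256 digest.
    d = hashlib.sha256(statement.encode()).digest()
    return [int.from_bytes(d[4*i:4*i + 4], "big") % m for i in range(k)]

def build_filter(statements, m=1 << 16, k=4):
    bits = bytearray(m)
    for s in statements:
        for i in positions(s, m, k):
            bits[i] = 1
    return bits

def estimated_overlap(other_filter, own_statements, m=1 << 16, k=4):
    # Count own statements whose positions are all set in the other
    # party's filter; false positives slightly inflate the count.
    return sum(all(other_filter[i] for i in positions(s, m, k))
               for s in own_statements)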
|
Glacier calving front position (CFP) is an important glaciological variable.
Traditionally, delineating the CFPs has been carried out manually, which was
subjective, tedious and expensive. Automating this process is crucial for
continuously monitoring the evolution and status of glaciers. Recently, deep
learning approaches have been investigated for this application. However, the
current methods are challenged by a severe class-imbalance problem. In this
work, we propose to mitigate the class-imbalance between the calving front
class and the non-calving front class by reformulating the segmentation problem
into a pixel-wise regression task. A Convolutional Neural Network gets
optimized to predict the distance values to the glacier front for each pixel in
the image. The resulting distance map localizes the CFP and is further
post-processed to extract the calving front line. We propose three
post-processing methods, one method based on statistical thresholding, a second
method based on conditional random fields (CRF), and finally the use of a
second U-Net. The experimental results confirm that our approach significantly
outperforms the state-of-the-art methods and produces accurate delineation. The
second U-Net obtains the best performance, with an average improvement of about
21% in the dice coefficient.
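A minimal sketch of the regression-target construction and the simplest
(thresholding) post-processing step, assuming a binary front mask and using
SciPy's Euclidean distance transform; the threshold is illustrative:

import numpy as np
from scipy import ndimage

def front_distance_map(front_mask):
    # Distance (in pixels) from every pixel to the nearest calving-front
    # pixel: invert the mask since the transform measures distance to zeros.
    return ndimage.distance_transform_edt(~front_mask.astype(bool))

def extract_front(distance_pred, thresh=2.0):
    # Statistical-thresholding variant of the post-processing:
    # keep pixels predicted to lie close to the front.
    return distance_pred < thresh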
|
We present HandGAN (H-GAN), a cycle-consistent adversarial learning approach
implementing multi-scale perceptual discriminators. It is designed to translate
synthetic images of hands to the real domain. Synthetic hands provide complete
ground-truth annotations, yet they are not representative of the target
distribution of real-world data. We strive to provide the perfect blend of a
realistic hand appearance with synthetic annotations. Relying on image-to-image
translation, we improve the appearance of synthetic hands to approximate the
statistical distribution underlying a collection of real images of hands. H-GAN
tackles not only the cross-domain tone mapping but also structural differences
in localized areas such as shading discontinuities. Results are evaluated on a
qualitative and quantitative basis improving previous works. Furthermore, we
relied on a hand classification task to show that our generated hands are
statistically similar to the real domain of hands.
|
Ideals in BIT speciale varieties are characterized. In particular, it is
proved that, for any finitary BIT speciale variety, there is a finite set of
ideal terms determining ideals. Several ideal term sets of this kind are given.
For the variety of groups one of these sets consists of the terms $y_1y_2$,
$xyx^{-1}$, $x^{-1}y^{-1}x$ (and $y$ which can be ignored), while for rings it
consists of the terms $y_1+y_2$, $-y$, $xy$, $yx$. For each of the following
varieties -- groups with multiple operators, semi-loops, and divisible
involutory groupoids -- one of the term sets found in this paper almost
coincides with the term sets found earlier for these particular varieties by
resp. Higgins, B\v{e}lohl\'{a}vek and Chajda, and Hala\v{s}. The coincidence is
precise in the case of divisible involutory groupoids. For loops (resp. loops
with operators), the intersection of one of the term sets found in this paper
with the term sets found earlier for loops (resp. for loops with operators) by
Bruck (resp. by Higgins) forms the major part of both sets.
|
Counting the number of nodes in Anonymous Dynamic Networks is enticing from
an algorithmic perspective: an important computation in a restricted platform
with promising applications. Starting with Michail, Chatzigiannakis, and
Spirakis [19], a flurry of papers sped up the running time guarantees from
doubly-exponential to polynomial [16]. There is a common theme across all those
works: a distinguished node is assumed to be present, because Counting cannot
be solved deterministically without at least one. In the present work we study
challenging questions that naturally follow: how to efficiently count with more
than one distinguished node, or how to count without any distinguished node.
More importantly, what is the minimal information needed about these
distinguished nodes and what is the best we can aim for (count precision,
stochastic guarantees, etc.) without any? We present negative and positive
results to answer these questions. To the best of our knowledge, this is the
first work that addresses them.
|
Convolutional networks (ConvNets) have shown impressive capability to solve
various vision tasks. Nevertheless, the trade-off between performance and
efficiency is still a challenge for a feasible model deployment on
resource-constrained platforms. In this paper, we introduce a novel concept
termed multi-path fully connected pattern (MPFC) to rethink the
interdependencies of topology pattern, accuracy and efficiency for ConvNets.
Inspired by MPFC, we further propose a dual-branch module named dynamic clone
transformer (DCT) where one branch generates multiple replicas from inputs and
another branch reforms those clones through a series of difference vectors
conditional on inputs itself to produce more variants. This operation allows
the self-expansion of channel-wise information in a data-driven way with little
computational cost while providing sufficient learning capacity, which is a
potential unit to replace computationally expensive pointwise convolution as an
expansion layer in the bottleneck structure.
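A PyTorch-style sketch of the dual-branch idea; the pooling and
1x1-convolution choices are our assumptions for illustration, not the paper's
exact design:

import torch.nn as nn

class DynamicCloneTransformer(nn.Module):
    """One branch replicates the input channels; the other predicts
    input-conditioned difference vectors that diversify the clones."""
    def __init__(self, channels, expansion=4):
        super().__init__()
        self.expansion = expansion
        self.diff = nn.Sequential(           # cheap difference-vector branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels * expansion, kernel_size=1),
        )

    def forward(self, x):
        clones = x.repeat(1, self.expansion, 1, 1)  # (b, c*e, h, w) replicas
        deltas = self.diff(x)                       # (b, c*e, 1, 1) variants
        return clones + deltas                      # expanded, diversified

Used in place of a pointwise-convolution expansion layer, the only learned
weights sit in the 1x1 convolution over pooled features, which is where the
claimed cost saving would come from.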
|
An exciting recent development is the uptake of deep learning in many
scientific fields, where the objective is seeking novel scientific insights and
discoveries. To interpret a learning outcome, researchers perform hypothesis
testing for explainable features to advance scientific domain knowledge. In
such a situation, testing for a blackbox learner poses a severe challenge
because of intractable models, unknown limiting distributions of parameter
estimates, and high computational constraints. In this article, we derive two
consistent tests for the feature relevance of a blackbox learner. The first one
evaluates a loss difference with perturbation on an inference sample, which is
independent of an estimation sample used for parameter estimation in model
fitting. The second further splits the inference sample into two but does not
require data perturbation. Also, we develop their combined versions by
aggregating the order statistics of the $p$-values based on repeated sample
splitting. To estimate the splitting ratio and the perturbation size, we
develop adaptive splitting schemes for suitably controlling the Type I
error subject to computational constraints. By deflating the
\textit{bias-sd-ratio}, we establish asymptotic null distributions of the test
statistics and their consistency in terms of statistical power. Our theoretical
power analysis and simulations indicate that the one-split test is more
powerful than the two-split test, though the latter is easier to apply for
large datasets. Moreover, the combined tests are more stable while compensating
for a power loss by repeated sample splitting. Numerically, we demonstrate the
utility of the proposed tests on two benchmark examples. Accompanying this
paper is our Python library {\tt dnn-inference}
https://dnn-inference.readthedocs.io/en/latest/ that implements the proposed
tests.
|
Fast radio bursts (FRBs) can be scattered by ionized gas in their local
environments, host galaxies, intervening galaxies along their lines-of-sight,
the intergalactic medium, and the Milky Way. The relative contributions of
these different media depend on their geometric configuration and the internal
properties of the gas. When these relative contributions are well understood,
FRB scattering is a powerful probe of density fluctuations along the
line-of-sight. The precise scattering measurements for FRB 121102 and FRB
180916 allow us to place an upper limit on the amount of scattering contributed
by the Milky Way halo to these FRBs. The scattering time $\tau\propto(\tilde{F}
\times {\rm DM}^2) A_\tau$, where ${\rm DM}$ is the dispersion measure,
$\tilde{F}$ quantifies electron density variations with $\tilde{F}=0$ for a
smooth medium, and the dimensionless constant $A_\tau$ quantifies the
difference between the mean scattering delay and the $1/e$ scattering time
typically measured. A likelihood analysis of the observed scattering and halo
DM constraints finds that $\tilde{F}$ is at least an order of magnitude smaller
in the halo than in the Galactic disk. The maximum pulse broadening from the
halo is $\tau\lesssim12$ $\mu$s at 1 GHz. We compare our analysis of the Milky
Way halo with other galaxy haloes by placing limits on the scattering
contributions from haloes intersecting the lines-of-sight to FRB 181112 and FRB
191108. Our results are consistent with haloes making negligible or very small
contributions to the scattering times of these FRBs.
|
Using the Feynman-Kac formula, a work fluctuation theorem for a Brownian
particle in a nonconfining potential, e.g., a potential well with finite depth,
is derived. The theorem yields an inequality that puts a lower bound on the
average work needed to change the potential in time. In comparison to the
Jarzynski equality, which holds for confining potentials, an additional
term describing a form of energy related to the never-ending diffusive
expansion appears.
|
A necessary first step for dust removal in protoplanetary disc winds is the
delivery of dust from the disc to the wind. In the case of ionized winds, the
disc and wind are sharply delineated by a narrow ionization front where the gas
density and temperature vary by more than an order of magnitude. Using a novel
method that is able to model the transport of dust across the ionization front
in the presence of disc turbulence, we revisit the problem of dust delivery.
Our results show that the delivery of dust to the wind is determined by the
vertical gas flow through the disc induced by the mass loss, rather than
turbulent diffusion (unless the turbulence is strong, i.e. $\alpha \gtrsim
0.01$). Using these results we provide a simple relation between the maximum
size of particle that can be delivered to the wind and the local mass-loss rate
per unit area from the wind. This relation is independent of the physical
origin of the wind and predicts typical sizes in the 0.01 -- $1\,\mu m$ range
for EUV or X-ray driven winds. These values are a factor $\sim 10$ smaller than
those obtained when considering only whether the wind is able to carry away the
grains.
|
Exoplanetary science continues to excite and surprise with its rich
diversity. We discuss here some key aspects potentially influencing the range
of exoplanetary terrestrial-type atmospheres which could exist in nature. We
are motivated by newly emerging observations, refined approaches to address
data degeneracies, improved theories for key processes affecting atmospheric
evolution and a new generation of atmospheric models which couple physical
processes from the deep interior through to the exosphere and consider the
planetary-star system as a whole. Using the Solar System as our guide we first
summarize the main processes which sculpt atmospheric evolution then discuss
their potential interactions in the context of exoplanetary environments. We
summarize key uncertainties and consider a diverse range of atmospheric
compositions discussing their potential occurrence in an exoplanetary context.
|
In this paper, we propose methods for improving the modeling performance of a
Transformer-based non-autoregressive text-to-speech (TNA-TTS) model. Although
the text encoder and audio decoder handle different types and lengths of data
(i.e., text and audio), the TNA-TTS models are not designed considering these
variations. Therefore, to improve the modeling performance of the TNA-TTS model,
we propose a hierarchical Transformer structure-based text encoder and audio
decoder that are designed to accommodate the characteristics of each module.
For the text encoder, we constrain each self-attention layer so the encoder
focuses on a text sequence from the local to the global scope. Conversely, the
audio decoder constrains its self-attention layers to focus in the reverse
direction, i.e., from global to local scope. Additionally, we further improve
the pitch modeling accuracy of the audio decoder by providing sentence and
word-level pitch as conditions. Various objective and subjective evaluations
verified that the proposed method outperformed the baseline TNA-TTS.
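The scope constraints can be realized as banded self-attention masks whose
window grows with encoder depth and shrinks with decoder depth; the sequence
lengths and window sizes below are illustrative assumptions:

import numpy as np

def band_mask(seq_len, window):
    # True where position i may attend to position j (|i - j| <= window).
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

# Text encoder: local -> global as depth grows;
# audio decoder: global -> local.
encoder_masks = [band_mask(128, w) for w in (4, 8, 16, 32)]
decoder_masks = [band_mask(512, w) for w in (256, 64, 16, 4)]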
|
Lattice sums of cuboidal lattices, which connect the face-centered with the
mean-centered and the body-centered cubic lattices through parameter dependent
lattice vectors, are evaluated by decomposing them into two separate lattice
sums related to a scaled cubic lattice and a scaled Madelung constant. Using
theta functions we were able to derive fast converging series in terms of
Bessel functions. Analytical continuations of these lattice sums are discussed
in detail.
|
An Integral Equation (IE) based electromagnetic field solver using
metasurface susceptibility tensors is proposed and validated using a variety of
numerical examples in 2D. The proposed method solves for fields generated by
the metasurface which are represented as spatial discontinuities satisfying the
Generalized Sheet Transition Conditions (GSTCs), and described using tensorial
surface susceptibility densities, $\bar{\bar{\chi}}$. For the first time, the
complete tensorial representation of susceptibilities is incorporated in this
integrated IE-GSTC framework, where the normal surface polarizabilities and
their spatial derivatives along the metasurface are rigorously taken into
account. The proposed field equation formulation further utilizes a local
co-ordinate system which enables modeling metasurfaces with arbitrary
orientations and geometries. The proposed 2D BEM-GSTC framework is successfully
tested using a variety of examples, including infinite and finite sized
metasurfaces, periodic metasurfaces and complex shaped structures, showing
comparisons with both analytical results and a commercial full-wave solver. It
is shown that the zero-thickness sheet model with complete tensorial
susceptibilities can very accurately reproduce the macroscopic fields,
accounting for their angular field scattering response and the edge diffraction
effects in finite-sized surfaces.
|
We study the electronic structures of chiral, Eshelby-twisted van der Waals
atomic layers with a particular focus on a chiral twisted graphite (CTG), a
graphene stack with a constant twist angle $\theta$ between successive layers.
We show that each CTG can host infinitely many resonant states which arise from
the interaction between the degenerate monolayer states of the constituent
layers. Each resonant state has a screw rotational symmetry, and may have a
smaller reduced Brillouin zone than other non-resonant states in the same
structure. Each CTG can have resonant states with up to four different
screw symmetries. We derive the energies and wave functions of the resonant
states in a universal form of a one-dimensional chain regardless of $\theta$,
and show that these states exhibit a clear optical selection rule for
circularly polarized light. Finally, we discuss the uniqueness and existence of
the exact center of the lattice and the self-similarity of the wave amplitudes
of the resonant states.
|
We present the second public data release of the Dark Energy Survey, DES DR2,
based on optical/near-infrared imaging by the Dark Energy Camera mounted on the
4-m Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. DES
DR2 consists of reduced single-epoch and coadded images, a source catalog
derived from coadded images, and associated data products assembled from 6
years of DES science operations. This release includes data from the DES
wide-area survey covering ~5000 deg2 of the southern Galactic cap in five broad
photometric bands, grizY. DES DR2 has a median delivered point-spread function
full-width at half maximum of g= 1.11, r= 0.95, i= 0.88, z= 0.83, and Y= 0.90
arcsec; photometric uniformity with a standard deviation of < 3 mmag with
respect to Gaia DR2 G-band; a photometric accuracy of ~10 mmag; and a median
internal astrometric precision of ~27 mas.
a 1.95 arcsec diameter aperture at S/N= 10 is g= 24.7, r= 24.4, i= 23.8, z=
23.1 and Y= 21.7 mag. DES DR2 includes ~691 million distinct astronomical
objects detected in 10,169 coadded image tiles of size 0.534 deg2 produced from
76,217 single-epoch images. After a basic quality selection, benchmark galaxy
and stellar samples contain 543 million and 145 million objects, respectively.
These data are accessible through several interfaces, including interactive
image visualization tools, web-based query clients, image cutout servers and
Jupyter notebooks. DES DR2 constitutes the largest photometric data set to date
at the achieved depth and photometric precision.
|
We recently proposed DOVER-Lap, a method for combining overlap-aware speaker
diarization system outputs. DOVER-Lap improved upon its predecessor DOVER by
using a label mapping method based on globally-informed greedy search. In this
paper, we analyze this label mapping in the framework of a maximum orthogonal
graph partitioning problem, and present three inferences. First, we show that
DOVER-Lap label mapping is exponential in the input size, which poses a
challenge when combining a large number of hypotheses. We then revisit the
DOVER label mapping algorithm and propose a modification which performs similar
to DOVER-Lap while being computationally tractable. We also derive an
approximation bound for the algorithm in terms of the maximum number of
hypothesis speakers. Finally, we describe a randomized local search algorithm
which provides a near-optimal $(1-\epsilon)$-approximate solution to the
problem with high probability. We empirically demonstrate the effectiveness of
our methods on the AMI meeting corpus. Our code is publicly available:
https://github.com/desh2608/dover-lap.
|
In this work, we study the computational complexity of determining whether a
machine learning model that perfectly fits the training data will generalize
to unseen data. In particular, we study the power of a malicious agent whose
goal is to construct a model g that fits its training data and nothing else,
but is indistinguishable from an accurate model f. We say that g strongly
spoofs f if no polynomial-time algorithm can tell them apart. If instead we
restrict to algorithms that run in $n^c$ time for some fixed $c$, we say that g
c-weakly spoofs f. Our main results are:
1. Under cryptographic assumptions, strong spoofing is possible, and
2. For any $c > 0$, c-weak spoofing is possible unconditionally.
While the assumption of a malicious agent is an extreme scenario (hopefully
companies training large models are not malicious), we believe that it sheds
light on the inherent difficulties of blindly trusting large proprietary models
or data.
|
With the advancement of machine learning (ML) and its growing awareness, many
organizations who own data but not ML expertise (data owner) would like to pool
their data and collaborate with those who have expertise but need data from
diverse sources to train truly generalizable models (model owner). In such
collaborative ML, the data owner wants to protect the privacy of its training
data, while the model owner desires the confidentiality of the model and the
training method which may contain intellectual properties. However, existing
private ML solutions, such as federated learning and split learning, cannot
meet the privacy requirements of both data and model owners at the same time.
This paper presents Citadel, a scalable collaborative ML system that protects
the privacy of both data owner and model owner in untrusted infrastructures
with the help of Intel SGX. Citadel performs distributed training across
multiple training enclaves running on behalf of data owners and an aggregator
enclave on behalf of the model owner. Citadel further establishes a strong
information barrier between these enclaves by means of zero-sum masking and
hierarchical aggregation to prevent data/model leakage during collaborative
training. Compared with the existing SGX-protected training systems, Citadel
enables better scalability and stronger privacy guarantees for collaborative
ML. Cloud deployment with various ML models shows that Citadel scales to a
large number of enclaves with less than 1.73X slowdown caused by SGX.
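The zero-sum masking idea can be sketched in a few lines: each training enclave
adds a random mask to its update, the masks cancel on aggregation, and only the
aggregate is revealed. This is our illustration, not Citadel's code:

import numpy as np

def zero_sum_masks(n_parties, shape, rng):
    # Masks summing exactly to zero: the last one cancels the rest.
    masks = [rng.normal(size=shape) for _ in range(n_parties - 1)]
    masks.append(-np.sum(masks, axis=0))
    return masks

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]   # true local updates
masked = [u + m for u, m in zip(updates, zero_sum_masks(3, 4, rng))]
# Individual masked updates reveal nothing, but the sum is preserved:
assert np.allclose(np.sum(masked, axis=0), np.sum(updates, axis=0))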
|
With updated experimental data and improved theoretical calculations, several
significant deviations are being observed between the Standard Model
predictions and the experimental measurements of the branching ratios of
$\bar{B}_{(s)}^0\to D_{(s)}^{(*)+} L^-$ decays, where $L$ is a light meson from
the set $\{\pi,\rho,K^{(\ast)}\}$. Especially for the two channels
$\bar{B}^0\to D^{+}K^-$ and $\bar{B}_{s}^0\to D_{s}^{+}\pi^-$, both of which
are free of the weak annihilation contribution, the deviations observed can
even reach 4-5$\sigma$. Here we exploit possible new-physics effects in these
class-I non-leptonic $B$-meson decays within the framework of QCD
factorization. Firstly, we perform a model-independent analysis of the effects
from twenty linearly independent four-quark operators that can contribute,
either directly or through operator mixing, to the quark-level $b\to c\bar{u}
d(s)$ transitions. It is found that, under the combined constraints from the
current experimental data, the deviations observed could be well explained at
the $1\sigma$ level by the new-physics four-quark operators with
$\gamma^{\mu}(1-\gamma_5)\otimes\gamma_{\mu} (1-\gamma_5)$ structure, and also
at the $2\sigma$ level by the operators with $(1+\gamma_5)\otimes(1-\gamma_5)$
and $(1+\gamma_5)\otimes(1+\gamma_5)$ structures. However, the new-physics
four-quark operators with other Dirac structures fail to provide a consistent
interpretation, even at the $2\sigma$ level. Then, as two specific examples of
model-dependent considerations, we discuss the case where the new-physics
four-quark operators are generated by either a colorless charged gauge boson or
a colorless charged scalar, with their masses both fixed at $1$ TeV.
Constraints on the effective coefficients describing the couplings of these
mediators to the relevant quarks are obtained by fitting to the current
experimental data.
|
Motivation: Cryo-Electron Tomography (cryo-ET) is a 3D bioimaging tool that
visualizes the structural and spatial organization of macromolecules at a
near-native state in single cells, which has broad applications in life
science. However, the systematic structural recognition and recovery of
macromolecules captured by cryo-ET are difficult due to high structural
complexity and imaging limits. Deep learning based subtomogram classification
methods have played critical roles in such tasks. As supervised approaches,
however, their performance relies on sufficient and laborious annotation of a
large training dataset.
Results: To alleviate this major labeling burden, we proposed a Hybrid Active
Learning (HAL) framework for querying subtomograms for labelling from a large
unlabeled subtomogram pool. Firstly, HAL adopts uncertainty sampling to select
the subtomograms that have the most uncertain predictions. Moreover, to
mitigate the sampling bias caused by this strategy, a discriminator is
introduced to judge whether a given subtomogram is labeled or unlabeled, and
the model then queries the subtomograms that have higher probabilities of being
unlabeled.
improve the diversity of the query set, so that the information overlap is
decreased between the queried batches and the algorithmic efficiency is
improved. Our experiments on subtomogram classification tasks using both
simulated and real data demonstrate that we can achieve comparable testing
performance (on average only 3% accuracy drop) by using less than 30% of the
labeled subtomograms, which is a very promising result for the subtomogram
classification task with limited labeling resources.
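Our reading of the query-selection loop, as a Python sketch; the scoring
combination and pool size are assumptions for illustration:

import numpy as np

def select_queries(probs, unlabeled_scores, batch, pool_factor=3, seed=0):
    """probs: (n, classes) softmax outputs; unlabeled_scores: (n,)
    discriminator scores ('looks unlabeled'). Rank by entropy plus
    discriminator score, then subsample from a pool for diversity."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    score = entropy + unlabeled_scores               # combined acquisition
    pool = np.argsort(score)[-batch * pool_factor:]  # top candidates
    rng = np.random.default_rng(seed)
    return rng.choice(pool, size=batch, replace=False)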
|
Motivated by the complexity of network data, we propose a directed hybrid
random network that mixes preferential attachment (PA) rules with uniform
attachment (UA) rules. When a new edge is created, with probability $p\in
[0,1]$, it follows the PA rule. Otherwise, this new edge is added between two
uniformly chosen nodes. Such mixture makes the in- and out-degrees of a fixed
node grow at a slower rate, compared to the pure PA case, thus leading to
lighter distributional tails. Useful inference methods for the proposed hybrid
model are then provided and applied to both synthetic and real datasets. We see
that with extra flexibility given by the parameter $p$, the hybrid random
network provides a better fit to real-world scenarios, where lighter tails from
in- and out-degrees are observed.
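A simplified undirected growth simulation of the mixture rule (our sketch,
adding one node per step; the paper's directed model is more general):

import random

def grow_hybrid(n_steps, p, seed=0):
    random.seed(seed)
    edges = [(0, 1)]
    endpoints = [0, 1]            # multiset for degree-proportional draws
    for new in range(2, n_steps + 2):
        if random.random() < p:
            target = random.choice(endpoints)      # PA rule
        else:
            target = random.choice(range(new))     # UA rule: uniform node
        edges.append((new, target))
        endpoints.extend((new, target))
    return edges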
|
The sine-Gordon expansion method, which is based on a transformation of the
sine-Gordon equation, has been applied to the (3+1)-dimensional potential-YTSF
equation and the reaction-diffusion equation. Using this method, we obtain new
soliton solutions of these equations in the form of hyperbolic, complex, and
trigonometric functions. We plot 2D and 3D graphics of these solutions using
symbolic software.
|
We consider a gravitational theory with an additional non-minimal coupling
between baryonic matter fields and geometry. The coupling is second order in
the energy momentum tensor and can be seen as a generalization of the
energy-momentum squared gravity model. We will add a constraint through a
Lagrange multiplier to ensure the conservation of the energy-momentum tensor.
Background cosmological implications, together with a dynamical system
analysis, will be investigated in detail. We will also consider the growth of
matter perturbations at first order, and estimate the model parameter from
observations of $H$ and $f\sigma_8$. We will show that the model parameter
should be small and positive at the 2$\sigma$ confidence level. The theory is
shown to be in good agreement with observational data.
|
The advancement in the world of global satellite systems has enhanced the
accuracy and reliability between various constellations. However, these
enhancements have created numerous challenges for receiver designers. For
instance, the acquisition and tracking of Galileo signals become relatively
complex processes with the use of the Alternate Binary Offset Carrier (AltBOC)
modulation scheme. This paper presents an efficient and unique method for the
baseband signal processing of the complex receiver structure of the Galileo E5
AltBOC signal. More specifically, data demodulation is attained by comparing
noisy satellite data sets with clean data sets. Moreover, the paper presents
the implementation of signal acquisition, code tracking, multipath noise
characterization, and carrier tracking for various satellite datasets after
the eradication of noise. The results obtained in the paper are promising and
provide a thorough treatment of the problem.
|
The simultaneous detection of organic molecules of the form
C$_2$H$_{\text{n}}$O, such as ketene (CH$_2$CO), acetaldehyde (CH$_3$CHO), and
ethanol (CH$_3$CH$_2$OH), toward early star-forming regions offers hints of
shared chemical history. Several reaction routes have been proposed and
experimentally verified under various interstellar conditions to explain the
formation pathways involved. Most noticeably, the non-energetic processing of
C$_2$H$_2$ ice with OH-radicals and H-atoms was shown to provide formation
routes to ketene, acetaldehyde, ethanol, and vinyl alcohol (CH$_2$CHOH) along
the H$_2$O formation sequence on grain surfaces. In this work, the
non-energetic formation scheme is extended with laboratory measurements
focusing on the energetic counterpart, induced by cosmic rays penetrating the
H$_2$O-rich ice mantle. The focus here is on the H$^+$ radiolysis of
interstellar C$_2$H$_2$:H$_2$O ice analogs at 17 K. Ultra-high vacuum
experiments were performed to investigate the 200 keV H$^+$ radiolysis
chemistry of predeposited C$_2$H$_2$:H$_2$O ices, both as mixed and layered
geometries. Fourier-transform infrared spectroscopy was used to monitor in situ
newly formed species as a function of the accumulated energy dose (or H$^+$
fluence). The infrared (IR) spectral assignments are further confirmed in
isotope labeling experiments using H$_2$$^{18}$O. The energetic processing of
C$_2$H$_2$:H$_2$O ice not only results in the formation of (semi-) saturated
hydrocarbons (C$_2$H$_4$ and C$_2$H$_6$) and polyynes as well as cumulenes
(C$_4$H$_2$ and C$_4$H$_4$), but it also efficiently forms O-bearing COMs,
including vinyl alcohol, ketene, acetaldehyde, and ethanol, for which the
reaction cross-section and product composition are derived. A clear composition
transition of the product, from H-poor to H-rich species, is observed as a
function of the accumulated energy dose.
|
We propose a new approach to obtain the momentum expectation value of an
electron in a high-intensity laser, including multiple photon emissions and
loops. We find a recursive formula that allows us to obtain the
$\mathcal{O}(\alpha^n)$ term from $\mathcal{O}(\alpha^{n-1})$, which can also
be expressed as an integro-differential equation. In the classical limit we
obtain the solution to the Landau-Lifshitz equation to all orders. We show how
spin-dependent quantum radiation reaction can be obtained by resumming both the
energy expansion as well as the $\alpha$ expansion.
|
In this paper the determination of material properties such as Sieverts'
constant (solubility) and diffusivity (transport rate) via so-called gas
release experiments is discussed. In order to simulate the time-dependent
hydrogen fluxes and concentration profiles efficiently, we make use of an
analytical method, namely we provide an analytical solution for the
corresponding diffusion equations on a cylindrical specimen and a cylindrical
container for three boundary conditions. These conditions occur in three phases
-- loading phase, evacuation phase and gas release phase. In the loading phase
the specimen is charged with hydrogen assuring a constant partial pressure of
hydrogen. Then the gas will be quickly removed by a vacuum pump in the second
phase, and finally in the third time interval, the hydrogen is released from
the specimen to the gaseous phase, where the pressure increase will be measured
by equipment attached to the cylindrical container. The
investigated diffusion equation in each phase is a simple homogeneous equation,
but due to the complex time-dependent boundary conditions which include the
Sieverts' constant and the pressure, we transform the homogeneous equations to
the non-homogeneous ones with a zero Dirichlet boundary condition. Compared
with time-consuming numerical methods, our analytical approach has the
advantage that the flux of desorbed hydrogen can be given explicitly and
therefore can be evaluated efficiently. Our analytical solution also assures
that the time-dependent boundary conditions are exactly satisfied and
furthermore that the interaction between specimen and container is correctly
taken into account.
|
Vehicular Ad hoc Network (VANET) is a new sort of wireless ad-hoc network.
Vehicle-to-Vehicle (V2V) communication is one of the main communication
paradigms that provide a level of safety and convenience to drivers and
passengers on the road. In such an environment, routing data packets is
challenging due to frequent changes of network topology because of the highly
dynamic nature of vehicles. Thus, routing in VANETs requires efficient
protocols that guarantee message transmission among vehicles. Numerous routing
protocols and algorithms have been proposed or enhanced to solve the
aforementioned problems. Many position-based routing protocols have been
developed for routing messages that have been identified to be appropriate for
VANETs. This work explores the performances of selected unicast non-delay
tolerant overlay position-based routing protocols. The evaluation has been
conducted in highway and urban environments in two different scenarios. The
evaluation metrics that are used are Packet Delivery Ratio (PDR), Void Problem
Occurrence (VPO), and Average Hop Count (AHC).
|
A Lifshitz black brane at generic dynamical critical exponent $z > 1$, with
non-zero linear momentum along the boundary, provides a holographic dual
description of a non-equilibrium steady state in a quantum critical fluid, with
Lifshitz scale invariance but without boost symmetry. We consider moving
Lifshitz branes in Einstein-Maxwell-Dilaton gravity and obtain the
non-relativistic stress tensor complex of the dual field theory via a suitable
holographic renormalisation procedure. The resulting black brane hydrodynamics
and thermodynamics are a concrete holographic realization of a Lifshitz perfect
fluid with a generic dynamical critical exponent.
|