Predictive uncertainty estimation is an essential next step for the reliable
deployment of deep object detectors in safety-critical tasks. In this work, we
focus on estimating predictive distributions for bounding box regression output
with variance networks. We show that in the context of object detection,
training variance networks with negative log likelihood (NLL) can lead to high
entropy predictive distributions regardless of the correctness of the output
mean. We propose to use the energy score as a non-local proper scoring rule and
find that when used for training, the energy score leads to better calibrated
and lower entropy predictive distributions than NLL. We also address the
widespread use of non-proper scoring metrics for evaluating predictive
distributions from deep object detectors by proposing an alternate evaluation
approach founded on proper scoring rules. Using the proposed evaluation tools,
we show that although variance networks can be used to produce high quality
predictive distributions, ad-hoc approaches used by seminal object detectors
for choosing regression targets during training do not provide wide enough data
support for reliable variance learning. We hope that our work helps shift
evaluation in probabilistic object detection to better align with predictive
uncertainty evaluation in other machine learning domains. Code for all models,
evaluation, and datasets is available at:
https://github.com/asharakeh/probdet.git.
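A minimal numerical sketch of the two scoring rules discussed above, assuming an independent Gaussian predictive distribution over the four box coordinates; the function names and toy numbers are ours, not the paper's code:

```python
import numpy as np

def gaussian_nll(mu, var, y):
    """Negative log likelihood of y under an independent Gaussian N(mu, var)."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def energy_score(mu, var, y, n_samples=256, rng=None):
    """Monte Carlo estimate of the energy score ES = E||X - y|| - 0.5 E||X - X'||,
    a proper, non-local scoring rule (lower is better)."""
    rng = rng or np.random.default_rng(0)
    std = np.sqrt(var)
    x = mu + std * rng.standard_normal((n_samples, mu.size))   # samples X ~ P
    x2 = mu + std * rng.standard_normal((n_samples, mu.size))  # independent copies X'
    return (np.mean(np.linalg.norm(x - y, axis=1))
            - 0.5 * np.mean(np.linalg.norm(x - x2, axis=1)))

# Toy box (x1, y1, x2, y2): compare a low-entropy and a high-entropy prediction
mu, y = np.array([10., 10., 50., 50.]), np.array([12., 11., 52., 49.])
for var in (np.full(4, 1.0), np.full(4, 100.0)):
    print(f"var={var[0]:6.1f}  NLL={gaussian_nll(mu, var, y):7.2f}  "
          f"ES={energy_score(mu, var, y):6.2f}")
```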
|
Numerous missions planned for the next decade are likely to target a handful
of small sites of interest on the Moon's surface, creating risks of crowding and
interference at these locations. The Moon presents finite and scarce areas with
rare topography or concentrations of resources of special value. Locations of
interest to science, notably for astronomy, include the Peaks of Eternal Light,
the coldest of the cold traps and smooth areas on the far side. Regions richest
in physical resources could also be uniquely suited to settlement and commerce.
Such sites of interest are both few and small. Typically, there are fewer than
ten key sites of each type, each site spanning a few kilometres across. We
survey the implications for different kinds of mission and find that the diverse
actors pursuing incompatible ends at these sites could soon crowd and interfere
with each other, leaving almost all actors worse off. Without proactive
measures to prevent these outcomes, lunar actors are likely to experience
significant losses of opportunity. We highlight the legal, policy, and ethical
ramifications. Insights from research on comparable sites on Earth present a
path toward managing lunar crowding and interference grounded in ethical and
practical near-term considerations. This article is part of a discussion
meeting issue 'Astronomy from the Moon: the next decades'.
|
To improve online search results, clarification questions can be used to
elucidate the information need of the user. This research aims to predict the
user engagement with the clarification pane as an indicator of relevance based
on the lexical information: query, question, and answers. Subsequently, the
predicted user engagement can be used as a feature to rank the clarification
panes. Regression and classification are applied for predicting user engagement
and compared to naive heuristic baselines (e.g. mean) on the new MIMICS dataset
[20]. An ablation study is carried out using a RankNet model to determine
whether the predicted user engagement improves clarification pane ranking
performance. The prediction models were able to improve significantly upon the
naive baselines, and the predicted user engagement feature significantly
improved the RankNet results in terms of NDCG and MRR. This research
demonstrates the potential for ranking clarification panes based on lexical
information only and can serve as a first neural baseline for future research
to improve on. The code is available online.
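For orientation, a generic sketch of the engagement-prediction setup on synthetic features (the MIMICS features, labels, and models used in the paper are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))        # stand-in for lexical features of query/question/answers
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * rng.standard_normal(1000)  # synthetic engagement signal
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = np.full_like(y_te, y_tr.mean())  # naive heuristic baseline: predict the mean
model = Ridge().fit(X_tr, y_tr)
print("mean baseline MSE:", mean_squared_error(y_te, baseline))
print("regression MSE:   ", mean_squared_error(y_te, model.predict(X_te)))
```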
|
The NLP community has witnessed steep progress in a variety of tasks across
the realms of monolingual and multilingual language processing recently. These
successes, in conjunction with the proliferating mixed language interactions on
social media have boosted interest in modeling code-mixed texts. In this work,
we present CodemixedNLP, an open-source library with the goals of bringing
together the advances in code-mixed NLP and opening it up to a wider machine
learning community. The library consists of tools to develop and benchmark
versatile model architectures that are tailored for mixed texts, methods to
expand training sets, techniques to quantify mixing styles, and fine-tuned
state-of-the-art models for 7 tasks in Hinglish. We believe this work has the
potential to foster a distributed yet collaborative and sustainable ecosystem
in an otherwise dispersed space of code-mixing research. The toolkit is
designed to be simple, easily extensible, and resourceful to both researchers
as well as practitioners.
|
In this work we use integral formulas for calculating the monodromy data for
the Painlev\'e-2 equation. The perturbation theory for the auxiliary linear
system is constructed and formulas for the variation of the monodromy data are
obtained. We also derive a formula for solving the linearized Painlev\'e-2
equation based on the Fourier-type integral of the squared solutions of the
auxiliary linear system of equations.
|
A liquid droplet impacting on a solvophobic surface normally rebounds. The
rebound is suppressed by a small amount of dissolved polymer. In this work,
using multi-body dissipative particle dynamics simulations, two anti-rebound
mechanisms, the slow-retraction and the slow-hopping mechanisms, are
identified. Which of them dominates depends on the polymer-surface attraction
strength. However, these two mechanisms are not mutually exclusive but may
coexist. During the droplet rebound, the surface-adsorbed polymer acts in two
ways: the adsorbed beads mediate solvent-surface interactions, and highly
stretched unadsorbed polymer segments exert a retraction force on the liquid.
Both actions increase the friction against retraction and the resistance
against hopping. We also investigate the effects of the molecular weight and
the concentration of the polymer additive, the droplet size, and the impact
velocity on the rebound tendency. As the first work to provide a microscopic
explanation of the anti-rebound mechanism by polymer additives, this study
allows better understanding of wetting behavior by polymer-solution droplets.
|
The successful deployment of artificial intelligence (AI) in many domains
from healthcare to hiring requires their responsible use, particularly in model
explanations and privacy. Explainable artificial intelligence (XAI) provides
more information to help users to understand model decisions, yet this
additional knowledge exposes additional risks for privacy attacks. Hence,
providing explanations harms privacy. We study this risk for image-based model
inversion attacks and identify several attack architectures with increasing
performance to reconstruct private image data from model explanations. We have
developed several multi-modal transposed CNN architectures that achieve
significantly higher inversion performance than using the target model
prediction only. These XAI-aware inversion models were designed to exploit the
spatial knowledge in image explanations. To understand which explanations have
higher privacy risk, we analyzed how various explanation types and factors
influence inversion performance. Even when the target model itself does not
provide explanations, we further demonstrate increased inversion performance by
exploiting explanations of surrogate models
through attention transfer. This method first inverts an explanation from the
target prediction, then reconstructs the target image. These threats highlight
the urgent and significant privacy risks of explanations and call attention to
the need for new privacy preservation techniques that balance the dual requirements of
AI explainability and privacy.
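A rough sketch of the attack-model shape only; the layer sizes, two-branch fusion, and image resolution below are hypothetical and not the authors' architecture:

```python
import torch
import torch.nn as nn

class XaiInversion(nn.Module):
    """Toy multi-modal inverter: consumes the target prediction vector and a
    spatial explanation map, and decodes a reconstruction of the private input."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.from_pred = nn.Linear(n_classes, 16 * 8 * 8)            # prediction branch
        self.from_expl = nn.Conv2d(1, 16, kernel_size=3, padding=1)  # explanation branch
        self.decode = nn.Sequential(                                 # transposed CNN decoder
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, pred, expl):                 # expl: (batch, 1, 8, 8) saliency map
        a = self.from_pred(pred).view(-1, 16, 8, 8)
        b = self.from_expl(expl)
        return self.decode(torch.cat([a, b], dim=1))   # (batch, 1, 32, 32) reconstruction

model = XaiInversion()
img = model(torch.softmax(torch.randn(4, 10), dim=1), torch.rand(4, 1, 8, 8))
print(img.shape)
```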
|
This paper explores supply chain viability through empirical network-level
analysis of supplier reachability under various scenarios. Specifically, this
study investigates the effect of multi-tier random failures across different
scales, as well as intelligent attacks on the global supply chain of medical
equipment, an industry whose supply chain's viability was put under a crucial
test during the COVID-19 pandemic. The global supply chain data was mined and
analyzed from about 45,000 firms with about 115,000 intertwined relationships
spanning across 10 tiers of the backward supply chain of medical equipment.
This complex supply chain network was analyzed at four scales, namely: firm,
country-industry, industry, and country. A notable contribution of this study
is the application of a supply chain tier optimization tool to identify the
lowest tier of the supply chain that can provide adequate resolution for the
study of the supply chain pattern in the medical equipment sector. We also
developed data-driven tools to identify the thresholds for the breakdown and
fragmentation of the medical equipment supply chain when faced with random
failures, or different intelligent attack scenarios. The novel network analysis
tools utilized in the study can be applied to the study of supply chain
reachability and viability in other industries.
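A toy illustration of the kind of reachability experiment described above, on a small synthetic directed network rather than the mined 45,000-firm data; function names and parameters are ours:

```python
import random
import networkx as nx

def reachable_suppliers(g, focal):
    """Number of upstream firms that can still reach the focal firm."""
    return len(nx.ancestors(g, focal))

g = nx.gnp_random_graph(500, 0.01, seed=1, directed=True)   # toy supply network
focal = 0
for strategy in ("random", "targeted"):
    nodes = [n for n in g if n != focal]
    if strategy == "targeted":                    # intelligent attack: remove hubs first
        nodes.sort(key=lambda n: g.degree(n), reverse=True)
    else:                                         # random multi-tier failures
        random.Random(7).shuffle(nodes)
    for frac in (0.05, 0.10, 0.20):
        h = g.copy()
        h.remove_nodes_from(nodes[: int(frac * len(nodes))])
        print(strategy, frac, reachable_suppliers(h, focal))
```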
|
We revisit the dynamic relationship between stock market and domestic
economic policy uncertainty (EPU) with the symmetric thermal optimal path
(TOPS) method. We observe totally different interaction patterns in emerging and
developed markets. Economic policy uncertainty can drive the stock market in China,
while the stock market plays a leading role in the UK and the US. Meanwhile, the
lead-lag relationships of the three countries react significantly when extreme
events happen. Our findings have important implications for investors and
policy makers.
|
We provide necessary and sufficient conditions for generic n-qubit states to
be equivalent under Stochastic Local Operations with Classical Communication
(SLOCC) using a single polynomial entanglement measure. SLOCC operations may be
represented geometrically by M\"obius transformations on the roots of the
entanglement measure on the Bloch sphere. Moreover, we show how the roots of
the 3-tangle measure classify 4-qubit generic states and propose a method to
obtain the normal form of a 4-qubit state which bypasses the possibly infinite
iterative procedure.
|
One of the leading single-channel speech separation (SS) models is based on a
TasNet with a dual-path segmentation technique, where the size of each segment
remains unchanged throughout all layers. In contrast, our key finding is that
multi-granularity features are essential for enhancing contextual modeling and
computational efficiency. We introduce a self-attentive network with a novel
sandglass-shape, namely Sandglasset, which advances the state-of-the-art (SOTA)
SS performance at significantly smaller model size and computational cost.
Moving forward through the blocks of Sandglasset, the temporal granularity of the
features gradually becomes coarser until reaching half of the network blocks,
and then successively turns finer towards the raw signal level. We also find
that residual connections between features with the same granularity are
critical for preserving information after passing through the bottleneck layer.
Experiments show our Sandglasset with only 2.3M parameters has achieved the
best results on two benchmark SS datasets -- WSJ0-2mix and WSJ0-3mix, where the
SI-SNRi scores have been improved by absolute 0.8 dB and 2.4 dB, respectively,
compared to the prior SOTA results.
|
The anomalous magnetic and electric dipole moments in spin motion equation
acquire pseudoscalar corrections if $T(CP)$-noninvariance is admitted. This
allows one to explain the discrepancy between experimental and theoretical values
of the muon $(g-2)$ factor under the assumption that the pseudoscalar correction is the
dominant source of this discrepancy.
|
Predicting a molecular graph structure ($W$) given a 2D image of a
chemical compound ($U$) is a challenging problem in machine learning. We are
interested in learning $f: U \rightarrow W$ where we have a fully mediating
representation $V$ such that $f$ factors into $U \rightarrow V \rightarrow W$.
However, observing $V$ requires detailed and expensive labels. We propose a graph
aligning approach that generates rich or detailed labels given normal labels
$W$. In this paper we investigate the scenario of domain adaptation from the
source domain where we have access to the expensive labels $V$ to the target
domain where only normal labels $W$ are available. Focusing on the problem of
predicting chemical compound graphs from 2D images, the fully mediating layer is
represented using the planar embedding of the chemical graph structure we are
predicting. The use of a fully mediating layer implies some assumptions on the
mechanism of the underlying process. However, if the assumptions are correct, it
should allow the machine learning model to be more interpretable, generalize
better and be more data efficient at training time. The empirical results show
that, using only 4000 data points, we obtain up to 4x improvement of
performance after domain adaptation to the target domain compared to a model
pretrained only on the source domain. After domain adaptation, the model is even
able to detect atom types that were never seen in the original source domain.
Finally, on the Maybridge data set the proposed self-labeling approach reached
higher performance than the current state of the art.
|
The paper is an introduction to intuitionistic mathematics.
|
As machine learning black boxes are increasingly being deployed in critical
domains such as healthcare and criminal justice, there has been a growing
emphasis on developing techniques for explaining these black boxes in a post
hoc manner. In this work, we analyze two popular post hoc interpretation
techniques: SmoothGrad which is a gradient based method, and a variant of LIME
which is a perturbation based method. More specifically, we derive explicit
closed form expressions for the explanations output by these two methods and
show that they both converge to the same explanation in expectation, i.e., when
the number of perturbed samples used by these methods is large. We then
leverage this connection to establish other desirable properties, such as
robustness, for these techniques. We also derive finite sample complexity
bounds for the number of perturbations required for these methods to converge
to their expected explanation. Finally, we empirically validate our theory
using extensive experimentation on both synthetic and real world datasets.
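A hedged numerical illustration of the convergence claim on a toy differentiable model (our own example, not the paper's experiments): SmoothGrad and a Gaussian-perturbation LIME variant agree as the number of perturbed samples grows.

```python
import numpy as np

def f(x):                       # toy "model": scalar output of a 3-D input
    return np.sin(x[..., 0]) + x[..., 1] ** 2 + 0.5 * x[..., 2]

def grad_f(x):                  # analytic gradient used by SmoothGrad
    return np.stack([np.cos(x[..., 0]),
                     2 * x[..., 1],
                     0.5 * np.ones_like(x[..., 2])], axis=-1)

def smoothgrad(x, sigma, n, rng):
    eps = sigma * rng.standard_normal((n, x.size))
    return grad_f(x + eps).mean(axis=0)

def lime_gaussian(x, sigma, n, rng):
    eps = sigma * rng.standard_normal((n, x.size))
    y = f(x + eps) - f(x)
    w, *_ = np.linalg.lstsq(eps, y, rcond=None)   # local linear surrogate coefficients
    return w

rng = np.random.default_rng(0)
x, sigma = np.array([0.3, -1.0, 2.0]), 0.2
for n in (100, 10_000, 1_000_000):                # explanations converge as n grows
    print(n, smoothgrad(x, sigma, n, rng).round(3), lime_gaussian(x, sigma, n, rng).round(3))
```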
|
The purpose of this note is twofold: firstly to characterize all the
sequences of orthogonal polynomials $(P_n)_{n\geq 0}$ such that $$
\frac{\triangle}{{\bf \triangle} x(s-1/2)}P_{n+1}(x(s-1/2))=c_n(\triangle
+2\,\mathrm{I})P_n(x(s-1/2)), $$ where $\mathrm{I}$ is the identity operator,
$x$ defines a class of lattices with, generally, nonuniform step-size, and
$\triangle f(s)=f(s+1)-f(s)$; and secondly to present, in a friendly way, a
method to deal with these kinds of problems.
|
The addition of an external starshade to the {\it Nancy Grace Roman Space
Telescope} will enable the direct imaging of Earth-radius planets orbiting at
$\sim$1 AU. Classification of any detected planets as Earth-like requires both
spectroscopy to characterize their atmospheres and multi-epoch imaging to trace
their orbits. We consider here the ability of the Starshade Rendezvous Probe to
constrain the orbits of directly imaged Earth-like planets. The target list for
this proposed mission consists of the 16 nearby stars best suited for direct
imaging. The field of regard for a starshade mission is constrained by solar
exclusion angles, resulting in four observing windows during a two-year
mission. We find that for habitable-zone planetary orbits that are detected at
least three times during the four viewing opportunities, their semi-major axes
are measured with a median precision of 7 mas, or a median fractional precision
of 3\%. Habitable-zone planets can be correctly identified as such 96.7\% of
the time, with a false positive rate of 2.8\%. If a more conservative criterion
is used for habitable-zone classification (95\% probability), the false
positive rate drops close to zero, but with only 81\% of the truly Earth-like
planets correctly classified as residing in the habitable zone.
|
We consider the incompressible Euler equations in $R^2$ when the initial
vorticity is bounded, radially symmetric and non-increasing in the radial
direction. Such a radial distribution is stationary, and we show that the
monotonicity produces stability in some weighted norm related to the angular
impulse. For instance, it covers the cases of circular vortex patches and
Gaussian distributions. Our stability does not depend on $L^\infty$-bound or
support size of perturbations. The proof is based on the fact that such a
radial monotone distribution minimizes the impulse of functions having the same
level set measure.
|
This paper presents a novel method for clustering surfaces. The proposal
involves first using basis functions in a tensor product to smooth the data and
thus reduce the dimension to a finite number of coefficients, and then using
these estimated coefficients to cluster the surfaces via the k-means algorithm.
An extension of the algorithm to clustering tensors is also discussed. We show
that the proposed algorithm exhibits the property of strong consistency, with
or without measurement errors, in correctly clustering the data as the sample
size increases. Simulation studies suggest that the proposed method outperforms
the benchmark k-means algorithm which uses the original vectorized data. In
addition, a real EGG data example is considered to illustrate the practical
application of the proposal.
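A compact sketch of the pipeline under simplifying assumptions (polynomial tensor-product basis, synthetic surfaces, no measurement-error modelling), not the paper's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def tensor_basis(grid, degree=3):
    """Tensor product of 1-D polynomial bases evaluated on a square grid."""
    b1 = np.vander(grid, degree + 1, increasing=True)          # (m, degree+1)
    return np.einsum("ip,jq->ijpq", b1, b1).reshape(len(grid) ** 2, -1)

m, rng = 20, np.random.default_rng(0)
grid = np.linspace(0, 1, m)
xx, yy = np.meshgrid(grid, grid, indexing="ij")
# two families of noisy surfaces with different underlying shapes
surfaces = [np.sin(np.pi * xx) * yy + 0.05 * rng.standard_normal((m, m)) for _ in range(10)]
surfaces += [xx ** 2 + yy + 0.05 * rng.standard_normal((m, m)) for _ in range(10)]

B = tensor_basis(grid)                                          # design matrix (m*m, p)
coefs = np.array([np.linalg.lstsq(B, s.ravel(), rcond=None)[0] for s in surfaces])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coefs)
print(labels)   # the two shape families should separate into two clusters
```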
|
Acoustic transparency is the capability of a medium to transmit mechanical
waves to adjacent media, without scattering. This characteristic can be
achieved by carefully engineering the acoustic impedance of the medium -- a
combination of wave speed and density, to match that of the surroundings. Owing
to the strong correlation between acoustic wave speed and static stiffness, it
is challenging to design acoustically transparent materials in a fluid, while
maintaining their high structural rigidity. In this work, we propose a method
to design architected lattices with independent control of the elastic wave
speed at a chosen frequency, the mass density, and the static stiffness, along
a chosen loading direction. We provide a sensitivity analysis to optimize these
properties with respect to design parameters of the structure, which include
localized masses at specific positions. We demonstrate the method on five
different periodic, three dimensional lattices, to calculate bounds on the
longitudinal wave speed as a function of their density and stiffness. We then
perform experiments on 3-D printed structures, to validate our numerical
simulations. The tools developed in this work can be used to design lightweight
and stiff materials with optimized acoustic impedance for a plethora of
applications, including ultrasound imaging, wave filtering and waveguiding.
|
We present a short review of possible applications of the Wheeler-De Witt
equation to cosmological models based on the low-energy string effective
action, and characterised by an initial regime of asymptotically flat, low
energy, weak coupling evolution. Considering in particular a class of
duality-related (but classically disconnected) background solutions, we shall
discuss the possibility of quantum transitions between the phases of pre-big
bang and post-big bang evolution. We will show that it is possible, in such a
context, to represent the birth of our Universe as a quantum process of
tunneling or "anti-tunneling" from an initial state asymptotically approaching
the string perturbative vacuum.
|
Finite-difference (FD) modeling of seismic waves in the vicinity of dipping
interfaces gives rise to artifacts. Examples are phase and amplitude errors, as
well as staircase diffractions. Such errors can be reduced in two general ways.
In the first approach, the interface can be anti-aliased (i.e., with an
anti-aliased step-function, or a lowpass filter). Alternatively, the interface
may be replaced with an equivalent medium (i.e., using Schoenberg \& Muir (SM)
calculus or orthorhombic averaging). We test these strategies in acoustic,
elastic isotropic, and elastic anisotropic settings. Computed FD solutions are
compared to analytical solutions. We find that in acoustic media, anti-aliasing
methods lead to the smallest errors. Conversely, in elastic media, the SM
calculus provides the best accuracy. The downside of the SM calculus is that it
requires an anisotropic FD solver even to model an interface between two
isotropic materials. As a result, the computational cost increases compared to
when using isotropic FD solvers. However, since coarser grid spacings can be
used to represent the dipping interfaces, the two effects (an expensive FD
solver on a coarser FD grid) balance out. Hence, the SM calculus can provide an
efficient means to reduce errors, also in elastic isotropic media.
|
The singlemode condition is one of the most important design rules for
optical waveguides in guided-wave optics. The reason for following the singlemode
condition is that higher-order modes might be excited and thus introduce some
undesired mode-mismatching loss as well as inter-mode crosstalk when light
propagates along an optical waveguide beyond the singlemode regime. As a
result, multimode photonic waveguides are usually not allowed. In this paper,
we propose the concept of silicon photonics beyond the singlemode regime,
developed with low-loss and low-crosstalk light propagation in multimode
photonic waveguides with broadened silicon cores. In particular, silicon
photonic waveguides with a broadened core region have shown an ultra-low-loss
of ~0.1 dB/cm for the fundamental mode even without any special fabrication
process. A micro-racetrack resonator fabricated with standard 220-nm-SOI
MPW-foundry processes shows a record intrinsic Q-factor as high as 1.02×10^7 for
the first time, corresponding to ultra-low waveguide propagation loss of only
0.065 dB/cm. A high-performance microwave photonic filter on silicon is then
realized with an ultra-narrow 3-dB bandwidth of 20.6 MHz as well as a tuning
range of ~20 GHz for the first time. An on-chip 100-cm-long delayline is also
demonstrated by using the present broadened SOI photonic waveguides with
compact Euler-curve bends; the measured propagation loss is ~0.14 dB/cm. The
proposed concept of silicon photonics beyond the singlemode regime helps solve
the issue of high propagation loss and also significantly reduces the random
phase errors of light due to the random variations of waveguide dimensions. In
particular, it enables silicon photonic devices with enhanced performance,
which paves the way for new-generation silicon photonics realizing large-scale
photonic integration.
|
Useful materials must satisfy multiple objectives, where the optimization of
one objective is often at the expense of another. The Pareto front reports the
optimal trade-offs between competing objectives. Here we report a self-driving
laboratory, "Ada", that defines the Pareto front of conductivities and
processing temperatures for palladium films formed by combustion synthesis. Ada
identified previously untested combustion synthesis conditions that resulted in
the discovery of lower processing temperatures (below 200 {\deg}C) relative to
the prior art for this technique (250 {\deg}C), a temperature difference that
makes the coating of different commodity plastic materials possible (e.g.,
Nafion, polyethersulfone). These conditions enabled us to use combustion
synthesis to spray coat uniform palladium films with moderate conductivity (1.1
$\times$ 10$^5$ S m$^{-1}$) at 191 {\deg}C. Spray coating at 226 {\deg}C
yielded films with conductivities (2.0 $\times$ 10$^6$ S m$^{-1}$) comparable
to those of sputtered films (2.0 to 5.8 $\times$ 10$^6$ S m$^{-1}$). This work
shows how self-driving laboratories can discover materials satisfying multiple
objectives.
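As a generic aside, extracting a Pareto front from measured (temperature, conductivity) pairs can be done as below; the numbers are synthetic and Ada's closed-loop optimization is not reproduced:

```python
import numpy as np

def pareto_front(temps, conds):
    """Indices of non-dominated points: no other sample is both cooler and at least
    as conductive (objectives: minimize temperature, maximize conductivity)."""
    idx = []
    for i in range(len(temps)):
        dominated = np.any(((temps < temps[i]) & (conds >= conds[i]))
                           | ((temps <= temps[i]) & (conds > conds[i])))
        if not dominated:
            idx.append(i)
    return idx

rng = np.random.default_rng(0)
temps = rng.uniform(150, 300, 50)                                      # processing temperature, deg C
conds = 1e4 * np.exp((temps - 150) / 40) * rng.uniform(0.5, 1.5, 50)   # conductivity, S/m
for i in sorted(pareto_front(temps, conds), key=lambda i: temps[i]):
    print(f"{temps[i]:6.1f} degC   {conds[i]:.2e} S/m")
```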
|
Observations suggest that satellite quenching plays a major role in the
build-up of passive, low-mass galaxies at late cosmic times. Studies of
low-mass satellites, however, are limited by the ability to robustly
characterize the local environment and star-formation activity of faint
systems. In an effort to overcome the limitations of existing data sets, we
utilize deep photometry in Stripe 82 of the Sloan Digital Sky Survey, in
conjunction with a neural network classification scheme, to study the
suppression of star formation in low-mass satellite galaxies in the local
Universe. Using a statistically-driven approach, we are able to push beyond the
limits of existing spectroscopic data sets, measuring the satellite quenched
fraction down to satellite stellar masses of ${\sim}10^7~{\rm M}_{\odot}$ in
group environments (${M}_{\rm{halo}} = 10^{13-14}~h^{-1}~{\rm M}_{\odot}$). At
high satellite stellar masses ($\gtrsim 10^{10}~{\rm M}_{\odot}$), our analysis
successfully reproduces existing measurements of the quenched fraction based on
spectroscopic samples. Pushing to lower masses, we find that the fraction of
passive satellites increases, potentially signaling a change in the dominant
quenching mechanism at ${M}_{\star} \sim 10^{9}~{\rm M}_{\odot}$. Similar to
the results of previous studies of the Local Group, this increase in the
quenched fraction at low satellite masses may correspond to an increase in the
efficacy of ram-pressure stripping as a quenching mechanism in groups.
|
We are concerned with the global bifurcation analysis of positive solutions
to free boundary problems arising in plasma physics. We show that in general,
in the sense of domain variations, the following alternative holds: either the
shape of the branch of solutions resembles the monotone one of the model case
of the two-dimensional disk, or it is a continuous simple curve without
bifurcation points which ends up at a point where the boundary density
vanishes. On the other hand, we deduce a general criterion ensuring the
existence of a free boundary in the interior of the domain. An application to a
classic nonlinear eigenvalue problem is also discussed.
|
Flexible optical network is a promising technology to accommodate
high-capacity demands in next-generation networks. To ensure uninterrupted
communication, existing lightpath provisioning schemes are mainly done with the
assumption of worst-case resource under-provisioning and fixed channel spacing,
which preserves an excessive signal-to-noise ratio (SNR) margin. However, under
a resource over-provisioning scenario, the excessive SNR margin restricts the
transmission bit-rate or transmission reach, leading to physical layer resource
waste and stranded transmission capacity. To tackle this challenging problem,
we leverage an iterative feedback tuning algorithm to provide a just-enough SNR
margin, so as to maximize the network throughput. Specifically, the proposed
algorithm is implemented in three steps. First, starting from the high SNR
margin setup, we establish an integer linear programming model as well as a
heuristic algorithm to maximize the network throughput by solving the problem
of routing, modulation format, forward error correction, baud-rate selection,
and spectrum assignment. Second, we optimize the channel spacing of the
lightpaths obtained from the previous step, thereby increasing the available
physical layer resources. Finally, we iteratively reduce the SNR margin of each
lightpath until the network throughput cannot be increased. Through numerical
simulations, we confirm the throughput improvement in different networks and
with different baud-rates. In particular, we find that our algorithm enables
over 20\% relative gain when network resources are over-provisioned, compared to
the traditional method preserving an excessive SNR margin.
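A self-contained toy of the step-3 feedback loop only (the provisioning and spacing steps are collapsed into a stand-in throughput model; none of this is the paper's ILP or heuristic):

```python
def plan_throughput(margin_db):
    """Stand-in for steps 1-2 (provisioning plus channel-spacing optimization): less
    margin admits more bit-rate, until a too-small margin makes lightpaths fail."""
    if margin_db < 0.5:                  # below the just-enough margin, outages dominate
        return 0.0
    return 100.0 / (1.0 + margin_db)     # arbitrary model of the cost of excess margin

def tune_margin(start_db=3.0, step_db=0.25):
    margin, best = start_db, plan_throughput(start_db)
    while True:                          # step 3: iteratively reduce the SNR margin
        candidate = plan_throughput(margin - step_db)
        if candidate <= best:            # stop once throughput no longer improves
            return margin, best
        margin, best = margin - step_db, candidate

print(tune_margin())   # settles at a margin just above the failure threshold
```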
|
We prove limit equalities between the sharp constants in weighted
Nikolskii-type inequalities for multivariate polynomials on an $m$-dimensional
cube and ball and the corresponding constants for entire functions of
exponential type.
|
Developing the flocking behavior for a dynamic squad of fixed-wing UAVs is
still a challenge due to kinematic complexity and environmental uncertainty. In
this paper, we deal with the decentralized flocking and collision avoidance
problem through deep reinforcement learning (DRL). Specifically, we formulate a
decentralized DRL-based decision making framework from the perspective of every
follower, where a collision avoidance mechanism is integrated into the flocking
controller. Then, we propose a novel reinforcement learning algorithm PS-CACER
for training a shared control policy for all the followers. Besides, we design
a plug-n-play embedding module based on convolutional neural networks and the
attention mechanism. As a result, the variable-length system state can be
encoded into a fixed-length embedding vector, which makes the learned DRL
policy independent of the number and the order of followers. Finally,
numerical simulation results demonstrate the effectiveness of the proposed
method, and the learned policies can be directly transferred to semi-physical
simulation without any parameter finetuning.
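A minimal sketch of the length- and order-invariant embedding idea (a single attention-pooling step with random weights; the paper's convolutional embedding module is not reproduced):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_embedding(follower_states, w_key, w_val, query):
    """follower_states: (n_followers, state_dim), with arbitrary count and order."""
    keys = follower_states @ w_key              # (n, d)
    vals = follower_states @ w_val              # (n, d)
    weights = softmax(keys @ query)             # (n,) attention over followers
    return weights @ vals                       # (d,) fixed-length embedding

rng = np.random.default_rng(0)
state_dim, d = 6, 8
w_key = rng.standard_normal((state_dim, d))
w_val = rng.standard_normal((state_dim, d))
query = rng.standard_normal(d)
for n in (2, 5, 11):                            # squad size varies, output size does not
    emb = attention_embedding(rng.standard_normal((n, state_dim)), w_key, w_val, query)
    print(n, emb.shape)
```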
|
Motivated by the recent advances in the categorification of the cluster
structure on the coordinate rings of Grassmannians of $k$-subspaces in
$n$-space, we investigate a particular construction of root systems of type
$\mathsf{T}_{2,p,q}$, including the type $\mathsf{E}_n$. This construction
generalizes Manin's ``hyperbolic construction'' of $\mathsf{E}_8$ and reveals a
lot of otherwise hidden regularities in this family of root systems.
|
We study the evolutionary dynamics of a phenotypically structured population
in a changing environment, where the environmental conditions vary with a
linear trend but in an oscillatory manner. Such phenomena can be described by
parabolic Lotka-Volterra type equations with non-local competition and a time
dependent growth rate. We first study the long time behavior of the solution to
this problem. Next, using an approach based on Hamilton-Jacobi equations we
study asymptotically such long time solutions when the effects of the mutations
are small. We prove that, as the effect of the mutations vanishes, the
phenotypic density of the population concentrates on a single trait which
varies linearly with time, while the size of the population oscillates
periodically. In contrast with the case of an environment without linear shift,
such a dominant trait does not have the maximal growth rate in the averaged
environment and there is a cost on the growth rate due to the climate shift. We
also provide an asymptotic expansion for the average size of the population and
for the critical speed above which the population goes extinct, which is
closely related to the derivation of an asymptotic expansion for the Floquet
eigenvalue in terms of the diffusion rate. By means of a biological example,
this expansion allows us to show that fluctuations in the environment may help
the population to follow the climatic shift in a better way.
|
The paper was suggested by a brief note of the second author about the
application of the Hubbert curve to predict decay of resource exploitation. A
further suggestion came from the interpretation of the Hubbert curve in terms
of a specific Lotka Volterra (LV) equation. The link with population dynamics
was obvious as the logistic function and the LV equation were proposed within
the field of demography. Mathematical population dynamics has a history of
about two centuries. The first principle and model of population dynamics can
be regarded as the exponential law of Malthus. In the XIX century, the Malthusian
demographic model was first refined by Gompertz to include a mortality rate. In
the early XIX century the model was further refined by Verhulst by introducing
the standard logistic function. The previous models only concern the population
of a single species. In the early XX century, the American demographer Lotka
and the Italian mathematician Volterra proposed a pair of state equations which
describe the population dynamics of two competing species, the predator and the
prey. This paper is concerned with the single and two-species fundamental
equations: the logistic and LV equation. The paper starts with the generalized
logistic equation whose free response is derived together with equilibrium
points and stability properties. The parameter estimation of the logistic
function is applied to the raw data of the US crude oil production. The paper
proceeds with the Lotka Volterra equation of the competition between two
species, with the goal of applying it to resource exploitation. At the end, a
limiting version of the LV equation is studied since it describes a competition
model between the production rate of exploited resources and the relevant
capital stock employed in the exploitation.
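For concreteness, a sketch of the logistic free response and its least-squares parameter estimation on synthetic data (the actual US crude-oil series and the paper's fitting choices are not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Free response of the logistic equation dx/dt = r x (1 - x/K)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 200)
x_obs = logistic(t, K=200.0, r=0.12, t0=55.0) + 3.0 * rng.standard_normal(t.size)

(K, r, t0), _ = curve_fit(logistic, t, x_obs, p0=(150.0, 0.1, 50.0))
peak_rate = r * K / 4.0          # Hubbert peak: maximum of dx/dt, reached at t = t0
print(f"K={K:.1f}, r={r:.3f}, t0={t0:.1f}, peak production rate={peak_rate:.2f}")
```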
|
We conduct spectral observations of 138 superthin galaxies (STGs) with high
radial-to-vertical stellar disk scale ratios with the Dual Imaging Spectrograph
(DIS) on the 3.5m telescope at the Apache Point Observatory (APO) to obtain the
ionized gas rotation curves with R ~ 5000 resolution. We also performed near
infrared (NIR) H and Ks photometry for 18 galaxies with the NICFPS camera on
the 3.5m telescope. The spectra, the NIR photometry and published optical and
NIR photometry are used for modeling that utilizes the thickness of the stellar
disk and rotation curves simultaneously. The projection and dust extinction
effects are taken into account. We evaluate eight models that differ by their
free parameters and constraints. As a result, we estimated masses and scale
lengths of the galactic dark halos. We find systematic differences between the
properties of our red and blue STGs. The blue STGs have a large fraction of
dynamically under-evolved galaxies whose vertical velocity dispersion is low in
both gas and stellar disks. The dark halo-to-disk scale ratio is smaller in the
red STGs than in the blue ones, but in a majority of all STGs this ratio is
under 2. The optical color $(r-i)$ of the superthin galaxies correlates with
their rotation curve maximum, vertical velocity dispersion in stellar disks,
and mass of the dark halo. We conclude that there is a threshold central
surface density of 50 $M_{\odot}$\,pc$^{-2}$ below which we do not observe very
thin, rotationally supported galactic disks.
|
UV radiation has been used as a disinfection strategy to deactivate a wide
range of pathogens, but existing irradiation strategies do not ensure
sufficient exposure of all environmental surfaces and/or require long
disinfection times. We present a near-optimal coverage planner for mobile UV
disinfection robots. The formulation optimizes the irradiation time efficiency,
while ensuring that a sufficient dosage of radiation is received by each
surface. The trajectory and dosage plan are optimized taking collision and
light occlusion constraints into account. We propose a two-stage scheme to
approximate the solution of the induced NP-hard optimization, and, for
efficiency, perform key irradiance and occlusion calculations on a GPU.
Empirical results show that our technique achieves more coverage for the same
exposure time as strategies for existing UV robots, can be used to compare UV
robot designs, and produces near-optimal plans. This is an extended version of
the paper originally contributed to ICRA2021.
|
A good joint training framework is very helpful for improving the performance
of weakly supervised audio tagging (AT) and acoustic event detection (AED)
simultaneously. In this study, we propose three methods to improve the best
teacher-student framework of DCASE2019 Task 4 for both AT and AED tasks. A
frame-level target-events based deep feature distillation is first proposed; it
aims to leverage the potential of limited strong-labeled data in the weakly
supervised framework to learn better intermediate feature maps. Then we propose
an adaptive focal loss and two-stage training strategy to enable an effective
and more accurate model training, in which the contribution of
difficult-to-classify and easy-to-classify acoustic events to the total cost
function can be automatically adjusted. Furthermore, an event-specific
post-processing step is designed to improve the prediction of target event time-stamps.
Our experiments are performed on the public DCASE2019 Task4 dataset, and
results show that our approach achieves competitive performances in both AT
(49.8% F1-score) and AED (81.2% F1-score) tasks.
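For reference, a plain (non-adaptive) focal loss is sketched below; the paper's adaptive variant and its two-stage scheduling are not reproduced, and gamma here is just the standard focusing knob that down-weights easy-to-classify events:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss per frame/event: -(1 - p_t)^gamma * log(p_t)."""
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

p = np.array([0.95, 0.60, 0.10])       # predicted event probabilities
y = np.array([1, 1, 1])                # all frames contain the target event
for gamma in (0.0, 2.0):               # gamma = 0 recovers plain cross-entropy
    print(gamma, focal_loss(p, y, gamma).round(3))
```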
|
We present a novel large-scale dataset and accompanying machine learning
models aimed at providing a detailed understanding of the interplay between
visual content, its emotional effect, and explanations for the latter in
language. In contrast to most existing annotation datasets in computer vision,
we focus on the affective experience triggered by visual artworks and ask the
annotators to indicate the dominant emotion they feel for a given image and,
crucially, to also provide a grounded verbal explanation for their emotion
choice. As we demonstrate below, this leads to a rich set of signals for both
the objective content and the affective impact of an image, creating
associations with abstract concepts (e.g., "freedom" or "love"), or references
that go beyond what is directly visible, including visual similes and
metaphors, or subjective references to personal experiences. We focus on visual
art (e.g., paintings, artistic photographs) as it is a prime example of imagery
created to elicit emotional responses from its viewers. Our dataset, termed
ArtEmis, contains 439K emotion attributions and explanations from humans, on
81K artworks from WikiArt. Building on this data, we train and demonstrate a
series of captioning systems capable of expressing and explaining emotions from
visual stimuli. Remarkably, the captions produced by these systems often
succeed in reflecting the semantic and abstract content of the image, going
well beyond systems trained on existing datasets. The collected dataset and
developed methods are available at https://artemisdataset.org.
|
Finding the mean square averages of the Dirichlet $L$-functions over
Dirichlet characters $\chi$ of the same parity is an active problem in number
theory. Here we explicitly evaluate such averages of $L(3,\chi)$ and
$L(4,\chi)$ using certain trigonometric sums and Bernoulli polynomials and
express them in terms of the Euler totient function $\phi$ and the Jordan
totient function $J_s$.
|
Direct measurements of three-dimensional magnetic fields in the interstellar
medium (ISM) are not achievable. However, the anisotropic nature of
magnetohydrodynamic (MHD) turbulence provides a novel way of tracing the
magnetic fields. Guided by the advanced understanding of turbulence's
anisotropy in the Position-Position-Velocity (PPV) space, we extend the
Structure-Function Analysis (SFA) to measure both the three-dimensional
magnetic field orientation and Alfven Mach number $M_A$, which provides the
information on magnetic field strength. Following the theoretical framework
developed in Kandel et al. (2016), we find that the anisotropy in a given
velocity channel is affected by the inclination angle between the 3D magnetic
field direction and the line-of-sight as well as media magnetization. We
analyze the synthetic PPV cubes generated by incompressible and compressible
MHD simulations. We confirm that the PPV channel's intensity fluctuations
measured in various position angles reveal plane-of-the-sky magnetic field
orientation. We show that by varying the channel width, the anisotropies of the
intensity fluctuations in PPV space can be used to simultaneously estimate both
magnetic field inclination angle and strength of total magnetic fields.
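A generic illustration of measuring structure-function anisotropy in a single channel map, on a synthetic anisotropic field (not the authors' pipeline; the elongation direction here is an assumption of the toy setup):

```python
import numpy as np

def sf2(channel, dx, dy):
    """Second-order structure function at lag (dx, dy): mean of [I(r + l) - I(r)]^2."""
    shifted = np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
    return np.mean((channel - shifted) ** 2)

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(128), np.arange(128))
# synthetic intensity map with fluctuations elongated along the x direction
channel = np.sin(2 * np.pi * y / 8) + 0.3 * rng.standard_normal((128, 128))

lag = 4
for angle_deg in (0, 45, 90):                    # position angle of the lag vector
    a = np.deg2rad(angle_deg)
    dx, dy = int(round(lag * np.cos(a))), int(round(lag * np.sin(a)))
    print(angle_deg, round(sf2(channel, dx, dy), 3))   # smallest along the elongation axis
```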
|
We develop a general framework to significantly reduce the degree of
sum-of-squares proofs by introducing new variables. To illustrate the power of
this framework, we use it to speed up previous algorithms based on
sum-of-squares for two important estimation problems, clustering and robust
moment estimation. The resulting algorithms offer the same statistical
guarantees as the previous best algorithms but have significantly faster
running times. Roughly speaking, given a sample of $n$ points in dimension $d$,
our algorithms can exploit order-$\ell$ moments in time $d^{O(\ell)}\cdot
n^{O(1)}$, whereas a naive implementation requires time $(d\cdot n)^{O(\ell)}$.
Since for the aforementioned applications, the typical sample size is
$d^{\Theta(\ell)}$, our framework improves running times from $d^{O(\ell^2)}$
to $d^{O(\ell)}$.
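Unpacking the final running-time comparison under the stated sample-size assumption (our own arithmetic on the bounds quoted above, not a new result):

```latex
\[
n = d^{\Theta(\ell)} \;\Longrightarrow\;
(d\cdot n)^{O(\ell)} = \bigl(d\cdot d^{\Theta(\ell)}\bigr)^{O(\ell)} = d^{O(\ell^2)},
\qquad
d^{O(\ell)}\cdot n^{O(1)} = d^{O(\ell)}\cdot d^{\Theta(\ell)} = d^{O(\ell)}.
\]
```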
|
Let $\mathbf{P}$ be a parabolic subgroup with Levi $\mathbf{M}$ of a
connected reductive group defined over a locally compact non-archimedean field
$F$. Given a certain compact open subgroup $\Gamma$ of $\mathbf{P}(F)$, this
note proves that the Hecke algebra $\mathcal{H}(\mathbf{M}(F))$ of
$\mathbf{M}(F)$ with respect to $\Gamma\cap \mathbf{M}(F)$ is a left ring of
fractions of the Hecke algebra $\mathcal{H}(\mathbf{P}(F))$ of $\mathbf{P}(F)$
with respect to $\Gamma$. This leads to a characterization of
$\mathcal{H}(\mathbf{P}(F))$-modules that come from
$\mathcal{H}(\mathbf{M}(F))$-modules.
|
The presence of stars on retrograde orbits in disc galaxies is usually
attributed to accretion events, both via direct accretion, as well as through
the heating of the disc stars. Recent studies have shown that retrograde orbits
can also be produced via scattering by dense clumps, which are often present in
the early stages of a galaxy's evolution. However, so far it has been unclear
whether other internally-driven mechanisms, such as bars, are also capable of
driving retrograde motion. Therefore, in this paper, we investigate the
efficiencies with which bars and clumps produce retrograde orbits in disc
galaxies. We do this by comparing the retrograde fractions and the spatial
distributions of the retrograde populations in four $N$-body$+$smooth particle
hydrodynamics (SPH) simulations of isolated disc galaxies spanning a range of
evolutionary behaviours. We find that both bars and clumps are capable of
generating significant retrograde populations of order $\sim 10\%$ of all
stars. We also find that while clump-driven retrograde stars may be found at
large galactocentric radii, bar-driven retrograde stars remain in the vicinity
of the bar, even if the bar dissolves. Consequently, we find that retrograde
stars in the Solar Neighbourhood in the clumpy models are exclusively
clump-driven, but this is a trace population, constituting $0.01-0.04\%$ of the
total stellar population in this region. Finally, we find that neither bars
(including dissolving ones) nor clumps in the models are able to produce
rotationally supported counter-rotating discs.
|
A reinforcement learning (RL) policy trained in a nominal environment could
fail in a new/perturbed environment due to the existence of dynamic variations.
Existing robust methods try to obtain a fixed policy for all envisioned dynamic
variation scenarios through robust or adversarial training. These methods could
lead to conservative performance due to emphasis on the worst case, and often
involve tedious modifications to the training environment. We propose an
approach to robustifying a pre-trained non-robust RL policy with
$\mathcal{L}_1$ adaptive control. Leveraging the capability of an
$\mathcal{L}_1$ control law in the fast estimation of and active compensation
for dynamic variations, our approach can significantly improve the robustness
of an RL policy trained in a standard (i.e., non-robust) way, either in a
simulator or in the real world. Numerical experiments are provided to validate
the efficacy of the proposed approach.
|
The study of anisotropic harmonic flow coefficients $ v_{n}$(n=2,3,4) is
performed in Xe-Xe collisions at $\sqrt{s_{NN}}$ = 5.44 TeV within the Monte Carlo
HYDJET++ (HYDrodynamics plus JETs) model framework. Anisotropic flow of
identified particles and correlation between the azimuthal harmonic flow
amplitudes is presented. Here, we have considered body-body and tip-tip type of
geometrical configurations for Xe-Xe collision systems. The kinematic ranges
$|\eta|<0.8$, $0<p_{T}<5.0$ GeV/c, and $|\delta / \eta|> 2$ are considered. The
results have been shown for seven classes of centrality and compared with the
ALICE experimental data. The anisotropic flow of identified charged particles
shows a strong centrality dependence. Mass ordering is observed for
$v_{2},v_{3}$ and $v_{4}$. Mass ordering is different for different ranges of
transverse momentum $p_{T}$. Strong correlation is observed between
$v_{3}-v_{2}$, $v_{4}-v_{2}$, and $v_{4}-v_{3}$. Such correlation is centrality
dependent and is different in different centrality windows. The anisotropic
flow coefficients show a clear dependence on the total charged particle
multiplicity. The HYDJET++ model reproduces the experimental data reasonably well.
|
Here we investigate the temperature dependence of anomalous Hall effect in
Hf/GdFeCo/MgO sheet film and Hall bar device. The magnetic compensation
temperature ($T_{comp}$) for the sheet film and device is found to be ~240 K
and ~118 K, respectively. In sheet film, spin-flopping is witnessed at a
considerably lower field, 0.6 T, close to $T_{comp}$. The AHE hysteresis loops
in the sheet film have a single loop whereas in the Hall bar device, hystereses
consist of triple loops are observed just above the Tcomp. Moreover, the
temperature-dependent anomalous Hall resistance ($R_\mathrm{AHE}$) responds
unusually when a perpendicular magnetic field is applied while recording the
$R_\mathrm{AHE}$. The zero-field $R_\mathrm{AHE}$ scan suggests the Hall signal
originates solely from the FeCo moment. However, the behavior of the 3 T-field
$R_\mathrm{AHE}$ scan, in which $R_\mathrm{AHE}$ drops close to zero near
$T_{comp}$ and seems to follow the net magnetization response of the
device, is explained by considering the low-field spin-flopping around the
compensation temperature. The results presented here give important insight into
the complex AHE behavior of ferrimagnets for their spintronic
applications.
|
Significant efforts have been expended in the research and development of a
database management system (DBMS) that has a wide range of applications for
managing an enormous collection of multisource, heterogeneous, complex, or
growing data. Besides the primary function (i.e., create, delete, and update),
a practical and impeccable DBMS can interact with users through information
selection, that is, querying with their targets. Previous querying algorithms,
such as frequent itemset querying and sequential pattern querying (SPQ) have
focused on the measurement of frequency, which does not involve the concept of
utility, which is helpful for users to discover more informative patterns. To
apply the querying technology for wider applications, we incorporate utility
into target-oriented SPQ and formulate the task of targeted utility-oriented
sequence querying. To address the proposed problem, we develop a novel
algorithm, namely targeted high-utility sequence querying (TUSQ), based on two
novel upper bounds, suffix remain utility and terminated descendants utility, as
well as a vertical Last Instance Table structure. For further efficiency, TUSQ
relies on a projection technology utilizing a compact data structure called the
targeted chain. An extensive experimental study conducted on several real and
synthetic datasets shows that the proposed algorithm outperformed the designed
baseline algorithm in terms of runtime, memory consumption, and candidate
filtering.
|
Wikidata has been increasingly adopted by many communities for a wide variety
of applications, which demand high-quality knowledge to deliver successful
results. In this paper, we develop a framework to detect and analyze
low-quality statements in Wikidata by shedding light on the current practices
exercised by the community. We explore three indicators of data quality in
Wikidata, based on: 1) community consensus on the currently recorded knowledge,
assuming that statements that have been removed and not added back are
implicitly agreed to be of low quality; 2) statements that have been
deprecated; and 3) constraint violations in the data. We combine these
indicators to detect low-quality statements, revealing challenges with
duplicate entities, missing triples, violated type rules, and taxonomic
distinctions. Our findings complement ongoing efforts by the Wikidata community
to improve data quality, aiming to make it easier for users and editors to find
and correct mistakes.
|
Graph convolution networks (GCNs), which have recently become the state-of-the-art
method for graph node classification, recommendation and other applications,
have not yet been successfully applied to industrial-scale search engines. In
this proposal, we introduce our approach, namely SearchGCN, for embedding-based
candidate retrieval in one of the largest e-commerce search engines in the
world. Empirical studies demonstrate that SearchGCN learns better embedding
representations than existing methods, especially for long tail queries and
items. Thus, SearchGCN has been deployed into JD.com's search production since
July 2020.
|
We study the hop-constrained s-t path enumeration (HcPE) problem, which takes
a graph $G$, two distinct vertices $s,t$ and a hop constraint $k$ as input, and
outputs all paths from $s$ to $t$ whose length is at most $k$. The
state-of-the-art algorithms suffer from severe performance issues caused by the
costly pruning operations during enumeration for workloads with a large search
space. Consequently, these algorithms hardly meet the real-time
constraints of many online applications. In this paper, we propose PathEnum, an
efficient index-based algorithm towards real-time HcPE. For an input query,
PathEnum first builds a light-weight index aiming to reduce the number of edges
involved in the enumeration, and develops efficient index-based approaches for
enumeration, one based on depth-first search and the other based on joins. We
further develop a query optimizer based on a join-based cost model to optimize
the search order. We conduct experiments with 15 real-world graphs. Our
experiment results show that PathEnum outperforms the state-of-the-art
approaches by orders of magnitude in terms of the query time, throughput and
response time.
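A simplified, textbook-style sketch of index-pruned hop-constrained path enumeration (a BFS-distance prune only; PathEnum's actual index, join-based enumeration, and cost-based optimizer are not reproduced):

```python
from collections import deque

def bfs_dist_to(graph_rev, t):
    """Shortest hop distance from every vertex to t, computed on the reversed graph."""
    dist, q = {t: 0}, deque([t])
    while q:
        u = q.popleft()
        for v in graph_rev.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def hc_paths(graph, s, t, k):
    rev = {}
    for u, nbrs in graph.items():
        for v in nbrs:
            rev.setdefault(v, []).append(u)
    dist = bfs_dist_to(rev, t)
    out, path = [], [s]

    def dfs(u):
        if u == t:
            out.append(list(path))
            return
        for v in graph.get(u, []):
            # prune: even the shortest continuation from v would exceed the hop budget k
            if v not in path and len(path) + dist.get(v, k + 1) <= k:
                path.append(v)
                dfs(v)
                path.pop()

    dfs(s)
    return out

g = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
print(hc_paths(g, 0, 3, 3))   # all simple 0 -> 3 paths with at most 3 edges
```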
|
In this paper, we investigate two dimensional subsonic and subsonic-sonic
spiral flows outside a porous body. The existence and uniqueness of the
subsonic spiral flow are obtained via variational formulation. The optimal
decay rate at far fields is also derived by Kelvin's transformation and
some elliptic estimates. By extracting spiral subsonic solutions as the
approximate sequences, we obtain the spiral subsonic-sonic limit solution. The
main ingredients of our analysis are methods of calculus of variations, the
theory of second-order quasilinear equations and the compactness framework.
|
We study a non-Hermitian and non-unitary version of the two-dimensional
Chalker-Coddington network model with balanced gain and loss. This model
belongs to the class $D^\dagger$ with particle-hole symmetry$^\dagger$ and hosts both
the non-Hermitian skin effect as well as exceptional points. By calculating its
two-terminal transmission, we find a novel contact effect induced by the skin
effect, which results in a non-quantized transmission for chiral edge states.
In addition, the model exhibits an insulator to 'supermetal' transition, across
which the transmission changes from exponentially decaying with system size to
exponentially growing with system size. In the clean system, the critical point
separating insulator from supermetal is characterized by a non-Hermitian Dirac
point that produces a quantized critical transmission of 4, instead of the
value of 1 expected in Hermitian systems. This change in critical transmission
is a consequence of the balanced gain and loss. When adding disorder to the
system, we find a critical exponent for the divergence of the localization
length $\nu \approx 1$, which is the same as that characterizing the universality
class of two-dimensional Hermitian systems in class D. Our work provides a
novel way of exploring the localization behavior of non-Hermitian systems, by
using network models, which in the past proved versatile tools to describe
Hermitian physics.
|
Road accidents can be triggered by wet roads because wetness decreases skid
resistance. To prevent such accidents, detecting road surface abnormalities is
highly useful. In this paper, we propose a deep learning-based, cost-effective,
real-time anomaly detection architecture named the non-compression
auto-encoder (NCAE). The proposed architecture can reflect forward and backward
causality of time-series information via convolutional operations. Moreover,
experiments show that the proposed architecture achieves higher anomaly detection
performance than published anomaly detection models. We conclude that NCAE is a
cutting-edge model for road surface anomaly detection, with 4.20\% higher AUROC
and 2.99 times faster decisions than previous approaches.
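A minimal 1-D convolutional auto-encoder for reconstruction-based anomaly scoring is sketched below; the layer sizes and training setup are illustrative assumptions, not the NCAE configuration from the paper:

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, channels=1, hidden=16):
        super().__init__()
        # in the "non-compression" spirit, convolutions keep the temporal length unchanged
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):             # x: (batch, channels, time)
        return self.net(x)

model = ConvAE()
x = torch.sin(torch.linspace(0, 20, 256)).reshape(1, 1, -1)   # toy sensor signal
recon = model(x)
score = ((recon - x) ** 2).mean().item()                      # anomaly score = reconstruction error
print(recon.shape, score)
```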
|
We outline the construction of a molecular system that could, in principle,
implement a thermodynamically reversible Universal Turing Machine (UTM). By
proposing a concrete-albeit idealised-design and operational protocol, we
reveal fundamental challenges that arise when attempting to implement arbitrary
computations reversibly. Firstly, the requirements of thermodynamic
reversibility inevitably lead to an intricate design. Secondly,
thermodynamically reversible UTMs, unlike simpler devices, must also be
logically reversible. Finally, implementing multiple distinct computations in
parallel is necessary to take the cost of external control per computation to
zero, but this approach is complicated by the distinct halting times of different
computations.
|
Quantum annealing is an emerging technology with the potential to solve some
of the computational challenges that remain unresolved as we approach an era
beyond Moore's Law. In this work, we investigate the capabilities of the
quantum annealers of D-Wave Systems, Inc., for computing a certain type of
Boolean tensor decomposition called Boolean Hierarchical Tucker Network (BHTN).
Boolean tensor decomposition problems ask for finding a decomposition of a
high-dimensional tensor with categorical {true, false} values as a product
of smaller Boolean core tensors. As the BHTN decompositions are usually not
exact, we aim to approximate an input high-dimensional tensor by a product of
lower-dimensional tensors such that the difference between both is minimized in
some norm. We show that BHTN can be calculated as a sequence of optimization
problems suitable for the D-Wave 2000Q quantum annealer. Although current
technology is still fairly restricted in the problems it can address, we show
that a complex problem such as BHTN can be solved efficiently and accurately.
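A toy of the Boolean factorization objective only, solved by exhaustive search on a tiny matrix; the paper instead handles hierarchical Tucker networks and maps the subproblems to the annealer, none of which is reproduced here:

```python
import itertools
import numpy as np

def boolean_product(a, b):
    """Boolean matrix product: OR over AND terms."""
    return (a @ b > 0).astype(int)

def best_rank_r(x, r):
    """Brute-force Boolean rank-r factorization minimizing the Hamming distance."""
    n, m = x.shape
    best, best_err = None, np.inf
    for a_bits in itertools.product([0, 1], repeat=n * r):
        a = np.array(a_bits).reshape(n, r)
        for b_bits in itertools.product([0, 1], repeat=r * m):
            b = np.array(b_bits).reshape(r, m)
            err = np.sum(np.abs(x - boolean_product(a, b)))
            if err < best_err:
                best, best_err = (a, b), err
    return best, best_err

x = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
(a, b), err = best_rank_r(x, r=2)
print("reconstruction error:", err)
print(boolean_product(a, b))
```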
|
With the aim of understanding the role of outflows in star formation, we
performed a statistical study of the physical parameters of outflows in eleven
massive protoclusters associated with ultra-compact HII regions. A total of 106
outflow lobes are identified in these protoclusters using the ALMA CO (3-2),
HCN (4-3) and HCO+ (4-3) line observations. Although the position angles of
outflow lobes do not differ in these three tracers, HCN and HCO+ tend to detect
lower terminal velocity of the identified outflows compared to CO. The majority
of the outflows in our targets are young with typical dynamical time-scales of
10^2-10^4 years, and are mostly composed of low-mass outflows along with at
least one high-mass outflow in each target. An anti-correlation of outflow rate
with dynamical time-scale indicates that the outflow rate possibly decreases
with time. Also, a rising trend of dynamical time-scale with the mass of the
associated core hints that the massive cores might have longer accretion
histories than the low mass cores. Estimation of different energies in these
protoclusters shows that outflows studied here cannot account for the
generation of the observed turbulence, but can sustain the turbulence at the
current epoch as the energy injection rate from the outflows is similar to the
estimated dissipation rate.
|
In recent years, deep learning-based automated personality trait detection
has received a lot of attention, especially now, due to the massive digital
footprints of an individual. Moreover, many researchers have demonstrated that
there is a strong link between personality traits and emotions. In this paper,
we build on the known correlation between personality traits and emotional
behaviors, and propose a novel multitask learning framework, SoGMTL, that
simultaneously predicts both of them. We also empirically evaluate and discuss
different information-sharing mechanisms between the two tasks. To ensure the
high quality of the learning process, we adopt a MAML-like framework for model
optimization. Our more computationally efficient CNN-based multitask model
achieves state-of-the-art performance across multiple well-known personality
and emotion datasets, even outperforming language-model-based approaches.
|
Most of the recent deep reinforcement learning advances take an RL-centric
perspective and focus on refinements of the training objective. We diverge from
this view and show we can recover the performance of these developments not by
changing the objective, but by regularising the value-function estimator.
Constraining the Lipschitz constant of a single layer using spectral
normalisation is sufficient to elevate the performance of a Categorical-DQN
agent to that of a more elaborate Rainbow agent on the challenging Atari
domain. We conduct ablation studies to disentangle the various effects
normalisation has on the learning dynamics and show that it is sufficient to
modulate the parameter updates to recover most of the performance of spectral
normalisation. These findings hint towards the need to also focus on the neural
component and its learning dynamics to tackle the peculiarities of Deep
Reinforcement Learning.
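As a rough illustration of the regularisation being described, here is a minimal PyTorch sketch that applies spectral normalisation to a single layer of a generic value network; the architecture below is a placeholder for illustration, not the paper's actual agent.

    # Minimal sketch: spectral normalisation of one layer of a value network.
    # Assumes PyTorch; dimensions and layout are illustrative only.
    import torch
    import torch.nn as nn

    class ValueNetwork(nn.Module):
        def __init__(self, in_dim, hidden_dim, n_actions):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
            # Constrain the Lipschitz constant of this single layer only.
            self.penultimate = nn.utils.spectral_norm(nn.Linear(hidden_dim, hidden_dim))
            self.head = nn.Linear(hidden_dim, n_actions)

        def forward(self, x):
            h = torch.relu(self.penultimate(self.body(x)))
            return self.head(h)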
|
We revisit a supersymmetric string model for space-time foam, in which
bosonic open-string states, such as photons, can possess
quantum-gravity-induced velocity fluctuations in vacuum. We argue that the
suggested light-speed variation, with a lower bound inferred from gamma-ray burst
photon time delays, can serve as support for this string-inspired framework by
connecting the experimental finding with model predictions. We also
derive the value of the effective quantum-gravity mass in this framework, and
give a qualitative study on the model-dependent coefficients. Constraints from
birefringent effects and/or photon decays, including the novel $\gamma$-decay
constraint obtained here from the latest Tibet AS$\gamma$ near-PeV photon, are
also found to be consistent with predictions in such a quantum-gravity scheme.
Future observations that could further test the theory are suggested.
|
Few-shot object detection is an imperative and long-lasting problem due to
the inherent long-tail distribution of real-world data. Its performance is
largely affected by the data scarcity of novel classes. But the semantic
relation between the novel classes and the base classes is constant regardless
of the data availability. In this work, we investigate utilizing this semantic
relation together with the visual information and introduce explicit relation
reasoning into the learning of novel object detection. Specifically, we
represent each class concept by a semantic embedding learned from a large
corpus of text. The detector is trained to project the image representations of
objects into this embedding space. We also identify the problems of trivially
using the raw embeddings with a heuristic knowledge graph and propose to
augment the embeddings with a dynamic relation graph. As a result, our few-shot
detector, termed SRR-FSD, is robust and stable to the variation of shots of
novel objects. Experiments show that SRR-FSD can achieve competitive results at
higher shots, and more importantly, a significantly better performance given
both lower explicit and implicit shots. The benchmark protocol with implicit
shots removed from the pretrained classification dataset can serve as a more
realistic setting for future research.
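A minimal sketch of the general idea of projecting image representations into a semantic embedding space and scoring classes by similarity, assuming PyTorch and precomputed word embeddings; this is illustrative only and not the SRR-FSD implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SemanticProjection(nn.Module):
        """Project visual features into a fixed word-embedding space and
        classify by similarity to per-class semantic embeddings."""
        def __init__(self, visual_dim, embed_dim, class_embeddings):
            super().__init__()
            self.proj = nn.Linear(visual_dim, embed_dim)
            # class_embeddings: (num_classes, embed_dim), e.g. from a text corpus; frozen here.
            self.register_buffer("class_emb", F.normalize(class_embeddings, dim=-1))

        def forward(self, visual_feat):
            v = F.normalize(self.proj(visual_feat), dim=-1)
            return v @ self.class_emb.t()  # cosine-similarity logits per class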
|
The paper considers a distributed version of deep reinforcement learning
(DRL) for multi-agent decision-making process in the paradigm of federated
learning. Since the deep neural network models in federated learning are
trained locally and aggregated iteratively through a central server, frequent
information exchange incurs a large amount of communication overhead. Besides,
due to the heterogeneity of agents, Markov state transition trajectories from
different agents are usually unsynchronized within the same time interval,
which will further influence the convergence bound of the aggregated deep
neural network models. Therefore, it is of vital importance to reasonably
evaluate the effectiveness of different optimization methods. Accordingly, this
paper proposes a utility function to consider the balance between reducing
communication overheads and improving convergence performance. Meanwhile, this
paper develops two new optimization methods on top of variation-aware periodic
averaging methods: 1) the decay-based method which gradually decreases the
weight of the model's local gradients as local updating progresses,
and 2) the consensus-based method which introduces the consensus algorithm into
federated learning for the exchange of the model's local gradients. This paper
also provides novel convergence guarantees for both developed methods and
demonstrates their effectiveness and efficiency through theoretical analysis
and numerical simulation results.
|
Scaling the cyber hunt problem poses several key technical challenges.
Detecting and characterizing cyber threats at scale in large enterprise
networks is hard because of the vast quantity and complexity of the data that
must be analyzed as adversaries deploy varied and evolving tactics to
accomplish their goals. There is a great need to automate all aspects of, and
indeed the entire workflow of, cyber hunting. AI offers many ways to support this. We
have developed the WILEE system that automates cyber threat hunting by
translating high-level threat descriptions into many possible concrete
implementations. Both the (high-level) abstract and (low-level) concrete
implementations are represented using a custom domain specific language (DSL).
WILEE uses the implementations along with other logic, also written in the DSL,
to automatically generate queries to confirm (or refute) any hypotheses tied to
the potential adversarial workflows represented at various layers of
abstraction.
|
Few-shot learning aims to correctly recognize query samples from unseen
classes given a limited number of support samples, often by relying on global
embeddings of images. In this paper, we propose to equip the backbone network
with an attention agent, which is trained by reinforcement learning. The policy
gradient algorithm is employed to train the agent towards adaptively localizing
the representative regions on feature maps over time. We further design a
reward function based on the prediction of the held-out data, thus helping the
attention mechanism to generalize better across the unseen classes. The
extensive experiments show, with the help of the reinforced attention, that our
embedding network has the capability to progressively generate a more
discriminative representation in few-shot learning. Moreover, experiments on
the task of image classification also show the effectiveness of the proposed
design.
|
A few years ago, the first example of a closed manifold admitting an Anosov
diffeomorphism but no expanding map was given. Unfortunately, this example is
not explicit and is high-dimensional, although its exact dimension is unknown
due to the type of construction. In this paper, we present a family of concrete
12-dimensional nilmanifolds with an Anosov diffeomorphism but no expanding map,
where nilmanifolds are defined as the quotient of a 1-connected nilpotent Lie
group by a cocompact lattice. We show that this family has the smallest
possible dimension in the class of infra-nilmanifolds, which is conjectured to
be the only type of manifolds admitting Anosov diffeomorphisms up to
homeomorphism. The proof shows how to construct positive gradings from the
eigenvalues of the Anosov diffeomorphism under some additional assumptions
related to the rank, using the action of the Galois group on these algebraic
units.
|
3D snapshot microscopy enables fast volumetric imaging by capturing a 3D
volume in a single 2D camera image, and has found a variety of biological
applications such as whole brain imaging of fast neural activity in larval
zebrafish. The optimal microscope design for this optical 3D-to-2D encoding is
both sample- and task-dependent, with no general solution known. Highly
programmable optical elements create new possibilities for sample-specific
computational optimization of microscope parameters, e.g. tuning the collection
of light for a given sample structure. We perform such optimization with deep
learning, using a differentiable wave-optics simulation of light propagation
through a programmable microscope and a neural network to reconstruct volumes
from the microscope image. We introduce a class of global kernel Fourier
convolutional neural networks which can efficiently decode information from
multiple depths in the volume, globally encoded across a 3D snapshot image. We
show that our proposed networks succeed in large field of view volume
reconstruction and microscope parameter optimization where traditional networks
fail. We also show that our networks outperform the state-of-the-art learned
reconstruction algorithms for lensless computational photography.
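A minimal sketch of a global Fourier convolution layer in PyTorch: the input is transformed with an FFT, multiplied pointwise by a learnable complex spectrum (a kernel as large as the image, hence a global receptive field), and transformed back. This is an assumption-laden illustration of the general technique, not the authors' network.

    import torch
    import torch.nn as nn

    class GlobalFourierConv2d(nn.Module):
        """Convolution with a kernel as large as the input, applied as a
        pointwise product in Fourier space (global receptive field)."""
        def __init__(self, channels, height, width):
            super().__init__()
            # Learnable complex spectrum; rfft2 keeps width // 2 + 1 frequency bins.
            self.weight = nn.Parameter(
                torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat) * 0.02)

        def forward(self, x):  # x: (batch, channels, height, width)
            x_hat = torch.fft.rfft2(x, norm="ortho")
            y_hat = x_hat * self.weight  # global, per-frequency mixing
            return torch.fft.irfft2(y_hat, s=x.shape[-2:], norm="ortho")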
|
Chiral symmetry represents a fundamental concept lying at the core of
particle and nuclear physics. Its spontaneous breaking in vacuum can be
exploited to distinguish chiral hadronic partners, whose masses differ. In
fact, the features of this breaking serve as guiding principles for the
construction of effective approaches of QCD at low energies, e.g., the chiral
perturbation theory, the linear sigma model, the
(Polyakov)--Nambu--Jona-Lasinio model, etc. At high temperatures/densities
chiral symmetry can be restored, bringing the chiral partners to be nearly
degenerate in mass. At vanishing baryochemical potential, such restoration
follows a smooth transition, and the chiral companions reach this degeneracy
above the transition temperature. In this work I review how different
realizations of chiral partner degeneracy arise in different effective
theories/models of QCD. I distinguish the cases where the chiral states are
either fundamental degrees of freedom or (dynamically generated) composite
states. In particular, I discuss the intriguing case in which chiral symmetry
restoration involves more than two chiral partners, recently addressed in the
literature.
|
The interplay between time-reversal symmetry (TRS) and band topology plays a
crucial role in topological states of quantum matter. In
time-reversal-invariant (TRI) systems, the inversion of spin-degenerate bands
with opposite parity leads to nontrivial topological states, such as
topological insulators and Dirac semimetals. When the TRS is broken, the
exchange field induces spin splitting of the bands. The inversion of a pair of
spin-splitting subbands can generate more exotic topological states, such as
quantum anomalous Hall insulators and magnetic Weyl semimetals. So far, such
topological phase transitions driven by the TRS breaking have not been
visualized. In this work, using angle-resolved photoemission spectroscopy, we
have demonstrated that the TRS breaking induces a band inversion of a pair of
spin-splitting subbands at the TRI points of Brillouin zone in EuB$_6$, when a
long-range ferromagnetic order is developed. The dramatic changes in the
electronic structure result in a topological phase transition from a TRI
ordinary insulator state to a TRS-broken topological semimetal (TSM) state.
Remarkably, the magnetic TSM state has an ideal electronic structure, in which
the band crossings are located at the Fermi level without any interference from
other bands. Our findings not only reveal the topological phase transition
driven by the TRS breaking, but also provide an excellent platform to explore
novel physical behavior in the magnetic topological states of quantum matter.
|
The exploitation of syntactic graphs (SyGs) as a word's context has been
shown to be beneficial for distributional semantic models (DSMs), both at the
level of individual word representations and in deriving phrasal
representations via composition. However, notwithstanding the potential
performance benefit, the syntactically-aware DSMs proposed to date have huge
numbers of parameters (compared to conventional DSMs) and suffer from data
sparsity. Furthermore, the encoding of the SyG links (i.e., the syntactic
relations) has been largely limited to linear maps. The knowledge graphs'
literature, on the other hand, has proposed light-weight models employing
different geometric transformations (GTs) to encode edges in a knowledge graph
(KG). Our work explores the possibility of adopting this family of models to
encode SyGs. Furthermore, we investigate which GT better encodes syntactic
relations, so that these representations can be used to enhance phrase-level
composition via syntactic contextualisation.
|
We present a new insight into the propagation of ion magnetoacoustic and
neutral acoustic waves in a magnetic arcade in the lower solar atmosphere. By
means of numerical simulations, we aim to: (a) study two-fluid waves
propagating in a magnetic arcade embedded in the partially-ionized, lower solar
atmosphere; and (b) investigate the impact of the background magnetic field
configuration on the observed wave-periods. We consider a 2D approximation of
the gravitationally stratified and partially-ionized lower solar atmosphere
consisting of ion+electron and neutral fluids that are coupled by ion-neutral
collisions. In this model, the convection below the photosphere is responsible
for the excitation of ion magnetoacoustic-gravity and neutral acoustic-gravity
waves. We find that in the solar photosphere, where ions and neutrals are
strongly coupled by collisions, ion magnetoacoustic-gravity and neutral
acoustic-gravity waves have periods ranging from 250 s to 350 s. In the
chromosphere, where the collisional coupling is weak, the wave characteristics
strongly depend on the magnetic field configuration. Above the foot-points of
the considered arcade, the plasma is dominated by a vertical magnetic field
along which ion magnetoacoustic-gravity waves propagate. These waves exhibit a
broad range of periods with the most prominent periods of 180 s, 220 s, and 300
s. Above the main loop of the solar arcade, where mostly horizontal magnetic
field lines guide ion magnetoacoustic-gravity waves, the main spectral power
reduces to the period of about 180 s and longer wave-periods do not exist. Our
results are in agreement with the recent observational data reported by
Wi\'sniewska et al. (2016) and Kayshap et al. (2018).
|
Let $\Sigma$ be a compact convex hypersurface in ${\bf R}^{2n}$ which is
P-cyclic symmetric, i.e., $x\in \Sigma$ implies $Px\in\Sigma$ with P being a
$2n\times2n$ symplectic orthogonal matrix and satisfying $P^k=I_{2n}$,
$ker(P^l-I_{2n})=0$ for $1\leq l< k$, where $n, k\geq2$. In this paper, we
prove that there exist at least $n$ geometrically distinct closed
characteristics on $\Sigma$, which solves a longstanding conjecture about the
multiplicity of closed characteristics for a broad class of compact convex
hypersurfaces with symmetries (cf. page 235 of \cite{Eke1}). Based on the proof,
we further prove that if the number of geometrically distinct closed
characteristics on $\Sigma$ is finite, then at least $2[\frac{n}{2}]$ of them
are non-hyperbolic; and if the number of geometrically distinct closed
characteristics on $\Sigma$ is exactly $n$ and $k\geq3$, then all of them are
P-cyclic symmetric, where a closed characteristic $(\tau, y)$ on $\Sigma$ is
called P-cyclic symmetric if $y({\bf R})=Py({\bf R})$.
|
In this paper, we obtain the weighted boundedness for the local
multi(sub)linear Hardy-Littlewood maximal operators and local multilinear
fractional integral operators associated with the local Muckenhoupt weights on
Gaussian measure spaces. We deal with these problems by introducing new
pointwise equivalent "radial" definitions of these local operators. Moreover,
using a similar approach, we also get the weighted boundedness for the local
fractional maximal operators with rough kernel and local fractional integral
operators with rough kernel on Gaussian measure spaces.
|
Synchronization has been the subject of intense research during decades
mainly focused on determining the structural and dynamical conditions driving a
set of interacting units to a globally stable coherent state. However, little
attention has been paid to the description of the dynamical development of each
individual networked unit in the process towards the synchronization of the
whole ensemble. In this paper, we show how in a network of identical dynamical
systems, nodes belonging to the same degree class differentiate in the same
manner visiting a sequence of states of diverse complexity along the route to
synchronization independently of the global network structure. In particular,
we observe, just after interaction starts pulling orbits from the initially
uncoupled attractor, a general reduction of the complexity of the dynamics of
all units being more pronounced in those with higher connectivity. In the weak
coupling regime, when synchronization starts to build up, there is an increase
in the dynamical complexity whose maximum is achieved, in general, first in the
hubs due to their earlier synchronization with the mean field. For very strong
coupling, just before complete synchronization, we found a hierarchical
dynamical differentiation with lower degree nodes being the ones exhibiting the
largest complexity departure. We unveil how this differentiation route holds
for several models of nonlinear dynamics including toroidal chaos and how it
depends on the coupling function. This study provides new insights for better
understanding strategies for network identification and control, and for
devising effective methods for network inference.
|
This project is based on a mathematical model of erythropoiesis for anemia,
which consists of five hyperbolic population equations describing the
production of red blood cells under treatment with epoetin-alfa (EPO). Extended
dynamic mode decomposition (EDMD) is utilized to approximate the non-linear
dynamical systems by linear ones. This allows for efficient and reliable
strategies based on a combination of EDMD and model predictive control (MPC),
which produces results comparable with those obtained in past publications
for the original model.
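A minimal sketch of the EDMD step, assuming snapshot pairs and a hand-picked polynomial dictionary (both placeholders): lift the data through the dictionary and fit the linear (Koopman) operator by least squares.

    import numpy as np

    def edmd(X, Y, dictionary):
        """X, Y: (n_samples, n_states) snapshot pairs with Y[i] = F(X[i]).
        dictionary: maps (n_samples, n_states) -> (n_samples, n_features)."""
        Psi_X = dictionary(X)
        Psi_Y = dictionary(Y)
        # Least-squares fit of the finite-dimensional Koopman approximation K:
        # Psi_Y approximately equals Psi_X @ K
        K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)
        return K

    # Example dictionary: a constant, the states, and pairwise quadratic monomials.
    def poly_dictionary(X):
        quad = np.einsum("ni,nj->nij", X, X).reshape(X.shape[0], -1)
        return np.hstack([np.ones((X.shape[0], 1)), X, quad])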
|
We propose a simple and efficient real-space approach for the calculation of
the ground-state energies of Wigner crystals in 1, 2, and 3 dimensions. To be
precise, we calculate the first two terms in the asymptotic expansion of the
total energy per electron which correspond to the classical energy and the
harmonic correction due to the zero-point motion of the Wigner crystals,
respectively. Our approach employs Clifford periodic boundary conditions to
simulate the infinite electron gas and a renormalized distance to evaluate the
Coulomb potential. This allows us to calculate the energies unambiguously and
with a higher precision than those reported in the literature. Our results are
in agreement with the literature values, with the exception of the harmonic
correction of the 2-dimensional Wigner crystal, for which we find a significant
difference. Although we focus on the ground state, i.e., the triangular lattice
and the body-centered cubic lattice, in two and three dimensions, respectively,
we also report the classical energies of several other common lattice
structures.
|
Large matrices are often accessed as a row-order stream. We consider the
setting where rows are time-sensitive (i.e. they expire), which can be
described by the sliding-window row-order model, and provide the first
$(1+\epsilon)$-approximation of Schatten $p$-norms in this setting. Our main
technical contribution is a proof that Schatten $p$-norms in row-order streams
are smooth, and thus fit the smooth-histograms technique of Braverman and
Ostrovsky (FOCS 2007) for sliding-window streams.
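For reference, the Schatten p-norm is the l_p norm of a matrix's singular values; the offline computation below only fixes the notation, whereas the paper's contribution is a (1+epsilon)-approximation when rows arrive and expire in a sliding window.

    import numpy as np

    def schatten_norm(A, p):
        """Schatten p-norm: the l_p norm of the singular values of A."""
        s = np.linalg.svd(A, compute_uv=False)
        return (s ** p).sum() ** (1.0 / p)

    # The streaming algorithm approximates this quantity without storing the
    # full matrix; here we just evaluate it exactly on a random example.
    A = np.random.randn(100, 20)
    print(schatten_norm(A, p=2))  # equals the Frobenius norm for p = 2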
|
Smart devices, such as smartphones, wearables, robots, and others, can
collect vast amounts of data from their environment. This data is suitable for
training machine learning models, which can significantly improve their
behavior, and therefore, the user experience. Federated learning is a young and
popular framework that allows multiple distributed devices to train deep
learning models collaboratively while preserving data privacy. Nevertheless,
this approach may not be optimal for scenarios where data distribution is
non-identical among the participants or changes over time, causing what is
known as concept drift. Little research has yet been done in this field, but
this kind of situation is quite frequent in real life and poses new challenges
to both continual and federated learning. Therefore, in this work, we present a
new method, called Concept-Drift-Aware Federated Averaging (CDA-FedAvg). Our
proposal is an extension of the most popular federated algorithm, Federated
Averaging (FedAvg), enhancing it for continual adaptation under concept drift.
We empirically demonstrate the weaknesses of regular FedAvg and prove that
CDA-FedAvg outperforms it in this type of scenario.
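A minimal sketch of the FedAvg aggregation step that CDA-FedAvg builds on; the concept-drift detection and continual-adaptation logic of CDA-FedAvg itself is not shown here.

    import numpy as np

    def fedavg_aggregate(client_weights, client_sizes):
        """Weighted average of client model parameters, as in FedAvg.
        client_weights: list of dicts {param_name: np.ndarray}, one per device.
        client_sizes:   number of local training samples on each device."""
        total = float(sum(client_sizes))
        global_weights = {}
        for name in client_weights[0]:
            global_weights[name] = sum(
                (n / total) * w[name] for w, n in zip(client_weights, client_sizes))
        return global_weights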
|
Exotic tiling patterns of quasicrystals have motivated extensive studies of
quantum phenomena such as critical states and phasons. Nevertheless, the
systematic understanding of the Landau levels of quasicrystals in the presence
of the magnetic field has not been established yet. One of the main obstacles
is the complication of the quasiperiodic tilings without periodic length
scales, thus it has been thought that the system cannot possess any universal
features of the Landau levels. In this paper, contrary to these assertions, we
develop a generic theory of the Landau levels for quasicrystals. Focusing on
the two dimensional quasicrystals with rotational symmetries, we highlight that
quasiperiodic tilings induce anomalous Landau levels where electrons are
localized near the rotational symmetry centers. Interestingly, the localization
length of these Landau levels has a universal dependence on n for quasicrystals
with n-fold rotational symmetry. Furthermore, macroscopically degenerate zero
energy Landau levels are present due to the chiral symmetry of the rhombic
tilings. In this case, each Landau level forms an independent island where
electrons are trapped at given fields, but with field control, the interference
between the islands gives rise to an abrupt change in the local density of
states. Our work provides a general scheme to understand the electron
localization behavior of the Landau levels in quasicrystals.
|
The gas content of the complete compilation of Local Group dwarf galaxies
(119 within 2 Mpc) is presented using HI survey data. Within the virial radius
of the Milky Way (224 kpc here), 53 of 55 dwarf galaxies are devoid of gas to
limits of M$_{\rm HI}<10^4$ M$_\odot$. Within the virial radius of M31 (266
kpc), 27 of 30 dwarf galaxies are devoid of gas (with limits typically $<10^5$
M$_\odot$). Beyond the virial radii of the Milky Way and M31, the majority of
the dwarf galaxies have detected HI gas and have HI masses higher than the
limits. When the relationship between gas content and distance is investigated
using a Local Group virial radius, more of the non-detected dwarf galaxies are
within this radius (85$\pm1$ of the 93 non-detected dwarf galaxies) than within
the virial radii of the Milky Way and M31. Using the Gaia proper motion
measurements available for 38 dwarf galaxies, the minimum gas density required
to completely strip them of gas is calculated. Halo densities between $10^{-5}$
and $5 \times 10^{-4}$ cm$^{-3}$ are typically required for instantaneous
stripping at perigalacticon. When compared to halo density with radius
expectations from simulations and observations, 80% of the dwarf galaxies with
proper motions are consistent with being stripped by ram pressure at Milky Way
pericenter. The results suggest a diffuse gaseous galactic halo medium is
important in quenching dwarf galaxies, and that a Local Group medium also
potentially plays a role.
|
We introduce a visual motion segmentation method employing spherical geometry
for fisheye cameras and automated driving. Three commonly used geometric
constraints in pin-hole imagery (the positive height, positive depth and
epipolar constraints) are reformulated to spherical coordinates, making them
invariant to specific camera configurations as long as the camera calibration
is known. A fourth constraint, known as the anti-parallel constraint, is added
to resolve motion-parallax ambiguity, to support the detection of moving
objects undergoing parallel or near-parallel motion with respect to the host
vehicle. A final constraint, known as the spherical three-view constraint, is
described but not employed in our proposed
algorithm. Results are presented and analyzed that demonstrate that the
proposal is an effective motion segmentation approach for direct employment on
fisheye imagery.
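A minimal sketch of the epipolar constraint evaluated on unit viewing rays, the style of spherical reformulation described above; the ray directions, rotation, and translation are assumed to come from the calibrated fisheye model and an ego-motion estimate, and the thresholding logic of the full method is omitted.

    import numpy as np

    def epipolar_residual(ray1, ray2, R, t):
        """Epipolar constraint on unit rays: r2 . (E r1) with E = [t]_x R.
        ray1, ray2: 3D viewing directions of the same point in two frames
        (unit vectors on the sphere, from the calibrated fisheye projection).
        R, t: relative camera rotation and translation (ego-motion)."""
        r1 = ray1 / np.linalg.norm(ray1)
        r2 = ray2 / np.linalg.norm(ray2)
        t_cross = np.array([[0, -t[2], t[1]],
                            [t[2], 0, -t[0]],
                            [-t[1], t[0], 0]])
        E = t_cross @ R
        return float(r2 @ (E @ r1))  # ~0 for static points; large for moving objects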
|
Most of the functions performed by astrocytes in brain information processing
are related to calcium waves. Experimental studies involving calcium waves
present discrepant results, leading to gaps in the full understanding of the
functions of these cells. The use of mathematical models helps to understand the
experimental results, identifying chemical mechanisms involved in calcium waves
and the limits of experimental methods. The model presented here is diffusion-based and uses
receptors and channels as boundary conditions. The computer program developed
was prepared to allow the study of complex geometries, with several astrocytes,
each of them with several branches. The code structure allows easy adaptation
to various experimental situations in which the model can be compared. The code
was deposited in the ModelDB repository, and will be under number 266795 after
publication. A sensitivity analysis showed the relative significance of the
parameters and identified the ideal range of values for each one. We showed
that several sets of values can lead to the same calcium signaling dynamics.
This encourages the questioning of parameters to model calcium signaling in
astrocytes that are commonly used in the literature, and it suggests better
experimental planning. In the final part of the work, the effects produced by
the endoplasmic reticulum when located close to the extremities of the branches
were evaluated. We conclude that when they are located close to the region of
the glutamatergic stimulus, they favor local calcium dynamics. By contrast,
when they are located at points away from the stimulated region, they
accelerate the global spread of signaling.
|
Result relevance prediction is an essential task of e-commerce search engines
to boost the utility of search engines and ensure smooth user experience. The
last few years eyewitnessed a flurry of research on the use of
Transformer-style models and deep text-match models to improve relevance.
However, these two types of models ignored the inherent bipartite network
structures that are ubiquitous in e-commerce search logs, making these models
ineffective. We propose in this paper a novel Second-order Relevance, which is
fundamentally different from the previous First-order Relevance, to improve
result relevance prediction. We design, for the first time, an end-to-end
First-and-Second-order Relevance prediction model for e-commerce item
relevance. The model is augmented by the neighborhood structures of bipartite
networks that are built using the information of user behavioral feedback,
including clicks and purchases. To ensure that edges accurately encode
relevance information, we introduce external knowledge generated from BERT to
refine the network of user behaviors. This allows the new model to integrate
information from neighboring items and queries, which are highly relevant to
the focus query-item pair under consideration. Results of offline experiments
showed that the new model significantly improved the prediction accuracy in
terms of human relevance judgment. An ablation study showed that the
First-and-Second-order model gained a 4.3% average gain over the First-order
model. Results of an online A/B test revealed that the new model derived more
commercial benefits compared to the base model.
|
We apply the multi-particle fields model to calculate the differential
cross-section d{\sigma}/dt of elastic proton-proton scattering. This problem
includes the calculation of multidimensional integrals arising from the loop
Feynman diagrams. We demonstrated how these integrals can be reduced with
Laplace's method to one- and two-dimensional integrals which can be calculated
numerically. The obtained result qualitatively describes the minimum in the
differential cross-section dependence d{\sigma}/dt(t).
|
Let $p(z)$ be a nonconstant polynomial and $\beta(z)$ be a small entire
function of $e^{p(z)}$ in the sense of Nevanlinna. We first describe the growth
behavior of the entire function $H(z):=e^{p(z)}\int_0^{z}\beta(t)e^{-p(t)}dt$
on the complex plane $\mathbb{C}$. As an application, we solve entire solutions
of Tumura--Clunie type differential equation
$f(z)^n+P(z,f)=b_1(z)e^{p_1(z)}+b_2(z)e^{p_2(z)}$, where $b_1(z)$ and $b_2(z)$
are nonzero polynomials, $p_1(z)$ and $p_2(z)$ are two polynomials of the same
degree~$k\geq 1$ and $P(z,f)$ is a differential polynomial in $f$ of degree
$\leq n-1$ with meromorphic functions of order~$<k$ as coefficients. These
results allow us to determine all solutions with relatively few zeros of the
second-order differential equation
$f''-[b_1(z)e^{p_1(z)}+b_2(z)e^{p_2(z)}+b_3(z)]f=0$, where $b_3(z)$ is a
polynomial. We also prove a theorem on certain first-order linear differential
equation related to complex dynamics.
|
This work is focused on the system-level performance of a broadcast network.
Since all transmitters in a broadcast network transmit the identical signal,
received signals from multiple transmitters can be combined to improve system
performance. We develop a stochastic geometry based analytical framework to
derive the coverage of a typical receiver. We show that there may exist an
optimal connectivity radius that maximizes the rate coverage. Our analysis
includes the fact that users may have their individual content/advertisement
preferences. We assume that there are multiple classes of users, with each user
class preferring a particular type of content/advertisements, and that users will
pay the network only when they can see content aligned with their interest. The
operator may choose to transmit multiple contents simultaneously to cater to more
users' interests and thereby increase its revenue. We present revenue models to study
the impact of the number of contents on the operator revenue. We consider two
scenarios for users' distribution: one where users' interest depends on their
geographical location and the one where it doesn't. With the help of numerical
results and analysis, we show the impact of various parameters including
content granularity, connectivity radius, and rate threshold and present
important design insights.
|
Differential cross sections for the Drell-Yan process, including Z boson
production, using the dimuon decay channel are measured in proton-lead (pPb)
collisions at a nucleon-nucleon centre-of-mass energy of 8.16 TeV. A data
sample recorded with the CMS detector at the LHC is used, corresponding to an
integrated luminosity of 173 nb$^{-1}$. The differential cross section as a
function of the dimuon mass is measured in the range 15-600 GeV, for the first
time in proton-nucleus collisions. It is also reported as a function of dimuon
rapidity over the mass ranges 15-60 GeV and 60-120 GeV, and ratios for the
p-going over the Pb-going beam directions are built. In both mass ranges, the
differential cross sections as functions of the dimuon transverse momentum
$p_\mathrm{T}$ and of a geometric variable $\phi^*$ are measured, where
$\phi^*$ highly correlates with $p_\mathrm{T}$ but is determined with higher
precision. In the Z mass region, the rapidity dependence of the data indicates a
modification of the distribution of partons within a lead nucleus as compared
to the proton case. The data are more precise than predictions based upon
current models of parton distributions.
|
In this paper, we study {\bf twisted Milnor hypersurfaces} and compute their
$\hat A$-genus and Atiyah-Singer-Milnor $\alpha$-invariant. Our tool to compute
the $\alpha$-invariant is Zhang's analytic Rokhlin congruence formula. We also
give some applications about group actions and metrics of positive scalar
curvature on twisted Milnor hypersurfaces.
|
We propose magnetically arrested disks (MADs) in quiescent black-hole (BH)
binaries as the origin of the multiwavelength emission, and argue that this
class of sources can dominate the cosmic-ray spectrum around the knee. X-ray
luminosities of Galactic BH binaries in the quiescent state are far below the
Eddington luminosity, and thus, radiatively inefficient accretion flows (RIAFs)
are formed in the inner region. Strong thermal and turbulent pressures in RIAFs
produce outflows, which can create large-scale poloidal magnetic fields. These
fields are carried to the vicinity of the BH by the rapid inflow motion,
forming a MAD. Inside the MAD, non-thermal protons and electrons are naturally
accelerated by magnetic reconnections or stochastic acceleration by turbulence.
Both thermal and non-thermal electrons emit broadband photons via synchrotron
emission, which are broadly consistent with the optical and X-ray data of the
quiescent BH X-ray binaries. Moreover, protons are accelerated up to PeV
energies and diffusively escape from these MADs, which can account for the
cosmic-ray intensity around the knee energy.
|
In this paper, a hybrid model for single-crystal Shape Memory Alloy (SMA)
wire actuators is presented. The result is based on a mathematical
reformulation of the M\"uller-Achenbach-Seelecke (MAS) model, which provides an
accurate and interconnection-oriented description of the SMA hysteretic
response. The strong nonlinearity and high numerical stiffness of the MAS
model, however, hinder its practical use for simulation and control of complex
SMA-driven systems. The main idea behind the hybrid reformulation is based on
dividing the mechanical hysteresis of the SMA into five operating modes, each
one representing a different physical state of the material. By properly
deriving the switching conditions among those modes in a physically-consistent
way, the MAS model is effectively reformulated within a hybrid dynamical
setting. The main advantage of the hybrid reformulation is the possibility of
describing the material dynamics with a simplified set of state equations while
maintaining all benefits of the physics-based description offered by the MAS
model. After describing the novel approach, simulation studies are conducted on
a flexible robotic module actuated by protagonist-antagonist SMA wires. Through
comparative numerical analysis, it is shown how the hybrid model provides the
same accuracy as the MAS model while saving up to 80% of the simulation time.
Moreover, the new modeling framework opens up the possibility of addressing SMA
control from a hybrid systems perspective.
|
We determine the Weierstrass semigroup $H(P_\infty,P_1,\ldots,P_m)$ at
several rational points on the maximal curves which cannot be covered by the
Hermitian curve introduced by Tafazolian, Teher\'an-Herrera, and Torres.
Furthermore, we present some conditions to find pure gaps. We use this
semigroup to obtain AG codes with better relative parameters than comparable
one-point AG codes arising from these curves.
|
We present here a simple mathematical model that provides a successful
strategy, quantitatively, to ending the continued championship futility
experienced by Canadian Hockey Teams. Competitive Intransitivity is used here
as a simple predictive framework to capture how investing strategically, under
a uniform salary cap, in just 3 independently variable aspects of the sport
(such as Offence, Defence, and a Goaltender), by just 3 Hockey Teams applying
differing salary priorities (such as Montreal, Boston, and New York), can lead
to rich and perhaps surprisingly unexpected outcomes in play, similar to
rolling intransitive dice together in a series of head-to-head games. A
possibly fortunate conclusion of this analysis is the prediction that for any
Team's chosen strategy (such as New York's), a counter strategy within the same
salary cap can be adopted by a playoff opponent (such as Montreal) which will
prove victorious over a long playoff series, enabling a pathway to end
prolonged championship futility.
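A small simulation of the head-to-head mechanism the abstract appeals to, using a classic set of intransitive dice; the face values and team assignments are a textbook illustration, not the teams' actual salary allocations.

    import random

    # Classic intransitive dice: Montreal beats New York, New York beats Boston,
    # and Boston beats Montreal, each roughly 5/9 of the time.
    DICE = {"New York": [2, 2, 4, 4, 9, 9],
            "Boston":   [1, 1, 6, 6, 8, 8],
            "Montreal": [3, 3, 5, 5, 7, 7]}

    def win_rate(team_a, team_b, games=10000):
        wins_a = sum(random.choice(DICE[team_a]) > random.choice(DICE[team_b])
                     for _ in range(games))
        return wins_a / games

    for a, b in [("Montreal", "New York"), ("New York", "Boston"), ("Boston", "Montreal")]:
        print(f"{a} beats {b} in {win_rate(a, b):.2%} of games")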
|
We introduce Dynabench, an open-source platform for dynamic dataset creation
and model benchmarking. Dynabench runs in a web browser and supports
human-and-model-in-the-loop dataset creation: annotators seek to create
examples that a target model will misclassify, but that another person will
not. In this paper, we argue that Dynabench addresses a critical need in our
community: contemporary models quickly achieve outstanding performance on
benchmark tasks but nonetheless fail on simple challenge examples and falter in
real-world scenarios. With Dynabench, dataset creation, model development, and
model assessment can directly inform each other, leading to more robust and
informative benchmarks. We report on four initial NLP tasks, illustrating these
concepts and highlighting the promise of the platform, and address potential
objections to dynamic benchmarking as a new standard for the field.
|
Due to the wide adoption of social media platforms like Facebook, Twitter,
etc., there is an emerging need of detecting online posts that can go against
the community acceptance standards. The hostility detection task has been well
explored for resource-rich languages like English, but is unexplored for
resource-constrained languages like Hindi due to the unavailability of large
suitable data. We view this hostility detection as a multi-label multi-class
classification problem. We propose an effective neural network-based technique
for hostility detection in Hindi posts. We leverage pre-trained multilingual
Bidirectional Encoder Representations from Transformers (mBERT) to obtain the
contextual representations of Hindi posts. We have performed extensive
experiments including different pre-processing techniques, pre-trained models,
neural architectures, hybrid strategies, etc. Our best performing neural
classifier model includes a One-vs-the-Rest approach, where we obtained 92.60%,
81.14%, 69.59%, 75.29%, and 73.01% F1 scores for hostile, fake, hate, offensive,
and defamation labels respectively. The proposed model outperformed the
existing baseline models and emerged as the state-of-the-art model for
detecting hostility in the Hindi posts.
|
We study the Wishart-Sachdev-Ye-Kitaev (WSYK) model consisting of two
$\hat{q}$-body Sachdev-Ye-Kitaev (SYK) models with general complex couplings,
one the Hermitian conjugate of the other, living in off-diagonal blocks of a
larger WSYK Hamiltonian. The spectrum is positive with a hard edge at zero
energy. We employ diagrammatic and combinatorial techniques to compute
analytically the low-order moments of the Hamiltonian. In the limit of large
number $N$ of Majoranas, we have found striking similarities with the moments
of the weight function of the Al-Salam-Chihara $Q$-Laguerre polynomials. For
$\hat{q} = 3, 4$, the $Q$-Laguerre prediction, with $Q=Q(\hat{q},N)$ also
computed analytically, agrees well with exact diagonalization results for $30 <
N \leq 34$ while we observe some deviations for $\hat q = 2$. The most salient
feature of the spectral density is that, for odd $\hat{q}$, low-energy
excitations grow as a stretched exponential, with a functional form different
from that of the supersymmetric SYK model. For $\hat q = 4$, a detailed
analysis of level statistics reveals quantum chaotic dynamics even for time
scales substantially shorter than the Heisenberg time. More specifically, the
spacing ratios in the bulk of the spectrum and the microscopic spectral density
and the number variance close to the hard edge are very well approximated by
that of an ensemble of random matrices that, depending on $N$, belong to the
chiral or superconducting universality classes. In particular, we report the
first realization of level statistics belonging to the chGUE universality
class, which completes the tenfold-way classification in the SYK model.
|
Data from clinical real-world settings is characterized by variability in
quality, machine-type, setting, and source. One of the primary goals of medical
computer vision is to develop and validate artificial intelligence (AI) based
algorithms on real-world data enabling clinical translations. However, despite
the exponential growth in AI based applications in healthcare, specifically in
ophthalmology, translations to clinical settings remain challenging. Limited
access to adequate and diverse real-world data inhibits the development and
validation of translatable algorithms. In this paper, we present a new
multi-modal longitudinal ophthalmic imaging dataset, the Illinois Ophthalmic
Database Atlas (I-ODA), with the goal of advancing state-of-the-art computer
vision applications in ophthalmology, and improving upon the translatable
capacity of AI based applications across different clinical settings. We
present the infrastructure employed to collect, annotate, and anonymize images
from multiple sources, demonstrating the complexity of real-world retrospective
data and its limitations. I-ODA includes 12 imaging modalities with a total of
3,668,649 ophthalmic images of 33,876 individuals from the Department of
Ophthalmology and Visual Sciences at the Illinois Eye and Ear Infirmary of the
University of Illinois Chicago (UIC) over the course of 12 years.
|
We explore an anomaly-free ${\textrm U}(1)$ gauge extended beyond the
Standard model (BSM) framework, to account for the baryon asymmetry of the
Universe, along with arranging for tiny neutrino mass. Neutrino masses are
generated via higher-dimensional operators (HDOs) involving three right-handed
neutrinos (RHNs) with gauge charges ($4$, $4$ and $-5$ respectively) and two
BSM scalars. This is an attractive framework as it can accommodate a keV scale
dark matter, with the lightest RHN being the candidate. The remaining two RHNs
are quasi-degenerate at the TeV-scale, actively participating in the process of
resonant leptogenesis through their decay governed by the same set of HDOs. The
RHNs being at the TeV scale, make this framework relevant for studying flavored
resonant leptogenesis. This TeV-scale resonant leptogenesis, after satisfying
the neutrino oscillation data, leads to interesting predictions on the Yukawa
sector of the model HDOs. The thermal evolution of the baryon asymmetry has
followed the experimental results rather accurately in that corner of parameter
space. As a matter of fact, this TeV-scale framework which in principle relies
on the low scale resonant leptogenesis typically leads to predictions that
potentially can be tested at the colliders. In particular, we consider the
same-sign dilepton signature that arises from the RHN pair production through
the decay of heavy gauge boson of the extra ${\textrm U}(1)$.
|
We prove that random hypergraphs are asymptotically almost surely resiliently
Hamiltonian. Specifically, for any $\gamma>0$ and $k\ge3$, we show that
asymptotically almost surely, every subgraph of the binomial random $k$-uniform
hypergraph $G^{(k)}\big(n,n^{\gamma-1}\big)$ in which all $(k-1)$-sets are
contained in at least $\big(\tfrac12+2\gamma\big)pn$ edges has a tight Hamilton
cycle. This is a cyclic ordering of the $n$ vertices such that each consecutive
$k$ vertices forms an edge.
|
Language resources are necessary for language processing, but building them is
costly, involves many researchers from different areas, and needs constant
updating. In this paper, we describe the crosslingual framework used for
developing the Multilingual Central Repository (MCR), a multilingual knowledge
base that includes wordnets of Basque, Catalan, English, Galician, Portuguese,
Spanish and the following ontologies: Base Concepts, Top Ontology, WordNet
Domains and Suggested Upper Merged Ontology. We present the story of MCR, its
state in 2017 and the developed tools.
|
Contrastive learning has delivered impressive results for various tasks in
the self-supervised regime. However, existing approaches optimize for learning
representations specific to downstream scenarios, i.e., \textit{global}
representations suitable for tasks such as classification or \textit{local}
representations for tasks such as detection and localization. While they
produce satisfactory results in the intended downstream scenarios, they often
fail to generalize to tasks that they were not originally designed for. In this
work, we propose to learn video representations that generalize to both the
tasks which require global semantic information (e.g., classification) and the
tasks that require local fine-grained spatio-temporal information (e.g.,
localization). We achieve this by optimizing two contrastive objectives that
together encourage our model to learn global-local visual information given
audio signals. We show that the two objectives mutually improve the
generalizability of the learned global-local representations, significantly
outperforming their disjointly learned counterparts. We demonstrate our
approach on various tasks including action/sound classification, lip reading,
deepfake detection, event and sound localization
(https://github.com/yunyikristy/global\_local).
|
When applying imitation learning techniques to fit a policy from expert
demonstrations, one can take advantage of prior stability/robustness
assumptions on the expert's policy and incorporate such control-theoretic prior
knowledge explicitly into the learning process. In this paper, we formulate the
imitation learning of linear policies as a constrained optimization problem,
and present efficient methods which can be used to enforce stability and
robustness constraints during the learning processes. Specifically, we show
that one can guarantee the closed-loop stability and robustness by posing
linear matrix inequality (LMI) constraints on the fitted policy. Then both the
projected gradient descent method and the alternating direction method of
multipliers (ADMM) can be applied to solve the resulting constrained
policy fitting problem. Finally, we provide numerical results to demonstrate
the effectiveness of our methods in producing linear policies with various
stability and robustness guarantees.
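A minimal CVXPY sketch of the kind of LMI that certifies closed-loop stability of a linear policy u = Kx for dynamics x' = Ax + Bu, using the standard change of variables Y = KQ; it is illustrative of the constraint type only, since the paper couples such constraints with a policy-fitting objective solved via projected gradient descent or ADMM.

    import cvxpy as cp
    import numpy as np

    def stabilizing_policy(A, B, eps=1e-6):
        """Find K such that A + B K is Hurwitz via the LMI
        A Q + Q A^T + B Y + Y^T B^T < 0,  Q > 0,  and recover K = Y Q^{-1}.
        In the imitation-learning setting one would minimize a policy-fitting
        loss subject to such constraints; here we only check feasibility."""
        n, m = B.shape
        Q = cp.Variable((n, n), symmetric=True)
        Y = cp.Variable((m, n))
        constraints = [Q >> eps * np.eye(n),
                       A @ Q + Q @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n)]
        cp.Problem(cp.Minimize(0), constraints).solve()
        return Y.value @ np.linalg.inv(Q.value)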
|
We explore the presence of active galactic nuclei (AGN)/black hole (BH) in
Green Pea galaxies (GPs), motivated by the presence of high ionization emission
lines such as HeII and [NeIII] in their optical spectra. In order to identify
AGN candidates, we used mid-infrared (MIR) photometric observations from the
all-sky Wide-field Infrared Survey Explorer (WISE) mission for a sample of 516
GPs. We select 58 GPs as candidate AGN based on a stringent 3-band WISE color
diagnostic. Using multi-epoch photometry of W1 and W2 bands from the
WISE/NEOWISE-R observations, we find 38 GPs showing significant variability in
both the WISE bands. Four of these were selected as AGN by the WISE 3-band
color diagnostic as well. Interestingly, we find a high fraction of MIR
variable sources among GPs which demonstrates the uniqueness and importance of
studying these extreme objects. Through this work, we demonstrate that
photometric variability is a promising tool to select AGN that may be missed by
other selection techniques (including optical emission-line ratios and X-ray
emission) in star-formation dominated, low-mass, low-metallicity galaxies.
|
Second language (L2) English learners often find it difficult to improve
their pronunciations due to the lack of expressive and personalized corrective
feedback. In this paper, we present Pronunciation Teacher (PTeacher), a
Computer-Aided Pronunciation Training (CAPT) system that provides personalized
exaggerated audio-visual corrective feedback for mispronunciations. Though the
effectiveness of exaggerated feedback has been demonstrated, it is still
unclear how to define the appropriate degrees of exaggeration when interacting
with individual learners. To fill in this gap, we interview 100 L2 English
learners and 22 professional native teachers to understand their needs and
experiences. Three critical metrics are proposed for both learners and teachers
to identify the best exaggeration levels in both audio and visual modalities.
Additionally, we incorporate the personalized dynamic feedback mechanism given
the English proficiency of learners. Based on the obtained insights, a
comprehensive interactive pronunciation training course is designed to help L2
learners rectify mispronunciations in a more perceptible, understandable, and
discriminative manner. Extensive user studies demonstrate that our system
significantly promotes the learners' learning efficiency.
|
Applying the concept of S-convergence, based on averaging in the spirit of
Strong Law of Large Numbers, the vanishing viscosity solutions of the Euler
system are studied. We show how to efficiently compute a viscosity solution of
the Euler system as the S-limit of numerical solutions obtained by the
Viscosity Finite Volume method. Theoretical results are illustrated by
numerical simulations of the Kelvin--Helmholtz instability problem.
|