Methods for constructing synthetic multidimensional electron hole equilibria
without using particle simulation are investigated. Previous approaches have
various limitations and approximations that make them unsuitable within the
context of expected velocity diffusion near the trapped-passing boundary. An
adjustable model of the distribution function is introduced that avoids
unphysical singularities there, and yet is sufficiently tractable analytically
to enable prescription of the potential spatial profiles. It is shown why
simple models that treat the charge density as a function only of potential
cannot give solitary multidimensional electron holes, in contradiction of prior
suppositions. Fully self-consistent axisymmetric electron holes in the
drift-kinetic limit of electron motion (negligible gyro-radius) are constructed
and their properties relevant to observational interpretation and
finite-gyro-radius theory are discussed.
|
In this paper we introduce the notions of phi-contractive parent-child
infinite iterated function system (pcIIFS) and orbital phi-contractive infinite
iterated function system (oIIFS) and we prove that the corresponding fractal
operator is weakly Picard. The corresponding notions of shift space, canonical
projection and their properties are also treated.
|
Delays are ubiquitous in modern hybrid systems, which exhibit both continuous
and discrete dynamical behaviors. Induced by signal transmission, conversion,
the nature of plants, and so on, delays may appear either in the continuous
evolution of a hybrid system such that the evolution depends not only on the
present state but also on its execution history, or in the discrete switching
between its different control modes. In this paper we propose a new model
of hybrid systems, called \emph{delay hybrid automata}, to capture the dynamics
of systems with the aforementioned two kinds of delays. Furthermore, based upon
this model we study the robust switching controller synthesis problem such that
the controlled delay system is able to satisfy the specified safety properties
regardless of perturbations. To this end, a novel method is proposed to
synthesize switching controllers based on the computation of differential
invariants for continuous evolution and backward reachable sets of discrete
jumps with delays. Finally, we implement a prototypical tool of our approach
and demonstrate it on some case studies.
|
We study the molecular gas content of 24 star-forming galaxies at $z=3-4$,
with a median stellar mass of $10^{9.1}$ M$_{\odot}$, from the MUSE Hubble
Ultra Deep Field (HUDF) Survey. Selected by their Lyman-alpha emission and
H-band magnitude, the galaxies show an average EW $\approx 20$ angstrom, below
the typical selection threshold for Lyman Alpha Emitters (EW $> 25$ angstrom),
and a rest-frame UV spectrum similar to Lyman Break Galaxies. We use rest-frame
optical spectroscopy from KMOS and MOSFIRE, and the UV features observed with
MUSE, to determine the systemic redshifts, which are offset from Lyman alpha by
346 km s$^{-1}$, with a 100 to 600 km s$^{-1}$ range. Stacking CO(4-3) and
[CI](1-0) (and higher-$J$ CO lines) from the ALMA Spectroscopic Survey of the
HUDF (ASPECS), we determine $3\sigma$ upper limits on the line luminosities of
$4.0\times10^{8}$ K km s$^{-1}$pc$^{2}$ and $5.6\times10^{8}$ K km
s$^{-1}$pc$^{2}$, respectively (for a 300 km s$^{-1}$ linewidth). Stacking the
1.2 mm and 3 mm dust continuum flux densities, we find $3\sigma$ upper limits
of 9 $\mu$Jy and $1.2$ $\mu$Jy, respectively. The inferred gas fractions, under
the assumption of a 'Galactic' CO-to-H$_{2}$ conversion factor and gas-to-dust
ratio, are in tension with previously determined scaling relations. This
implies a substantially higher $\alpha_{\rm CO} \ge 10$ and $\delta_{\rm GDR}
\ge 1200$, consistent with the sub-solar metallicity estimated for these
galaxies ($12 + \log(O/H) \approx 7.8 \pm 0.2$). The low metallicity of $z \ge
3$ star-forming galaxies may thus make it very challenging to unveil their cold
gas through CO or dust emission, warranting further exploration of alternative
tracers, such as [CII].
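For reference, line luminosities in these units are conventionally obtained from the velocity-integrated flux via the standard relation (e.g., Solomon & Vanden Bout 2005); this is the usual convention rather than necessarily the exact equation used above:

$$ L'_{\rm line} = 3.25\times10^{7}\,\frac{S_{\rm line}\Delta v\,D_L^{2}}{\nu_{\rm obs}^{2}\,(1+z)^{3}}\ \ {\rm K\,km\,s^{-1}\,pc^{2}}, $$

where $S_{\rm line}\Delta v$ is in Jy km s$^{-1}$, $\nu_{\rm obs}$ is the observed frequency in GHz, and $D_L$ is the luminosity distance in Mpc.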
|
Multi-layer feedforward networks have been used to approximate a wide range
of nonlinear functions. An important and fundamental problem is to understand
the learnability of a network model through its statistical risk, or the
expected prediction error on future data. To the best of our knowledge, the
rate of convergence of neural networks shown by existing works is bounded by at
most the order of $n^{-1/4}$ for a sample size of $n$. In this paper, we show
that a class of variation-constrained neural networks, with arbitrary width,
can achieve near-parametric rate $n^{-1/2+\delta}$ for an arbitrarily small
positive constant $\delta$. It is equivalent to $n^{-1 +2\delta}$ under the
mean squared error. This rate is also observed by numerical experiments. The
result indicates that the neural function space needed for approximating smooth
functions may not be as large as what is often perceived. Our result also
provides insight into the phenomenon that deep neural networks do not easily
suffer from overfitting when the number of neurons and learning parameters
rapidly grow with $n$ or even surpass $n$. We also discuss the rate of
convergence regarding other network parameters, including the input dimension,
network layer, and coefficient norm.
|
This paper presents the components of a newly developed Malaysian SMEs
Software Process Improvement model (MSME-SPI) that can assess the capability of
the Malaysian SME software development industry in managing and improving its
software processes. The MSME-SPI was developed in response to practitioner
needs that were highlighted in an empirical study of the Malaysian SME software
development industry. After the model's development, independent feedback was
needed to show that the model meets its objectives. Consequently, a validation
phase was performed in which a group of software process improvement experts
examined the MSME-SPI model components. In addition, the effectiveness of the
MSME-SPI model was validated using an expert panel. Three criteria were used to
evaluate the effectiveness of the model, namely usefulness, verifiability, and
structure. The results show that the model is effective for use by SMEs with
minor modifications. The validation phase contributes towards a better
understanding and use of the MSME-SPI model by practitioners in the field.
|
Deep learning (DL) models for image-based malware detection have exhibited
their capability to produce high prediction accuracy, but limited model
interpretability poses challenges to their widespread application in security-
and safety-critical application domains. This paper aims to design an
Interpretable Ensemble learning approach for image-based Malware Detection
(IEMD). We first propose a Selective Deep Ensemble Learning-based (SDEL)
detector and then design an Ensemble Deep Taylor Decomposition (EDTD) approach,
which can give pixel-level explanations of SDEL detector outputs. Furthermore,
we develop formulas for calculating fidelity, robustness and expressiveness on
pixel-level heatmaps in order to assess the quality of EDTD explanations. With
EDTD explanations, we develop a novel Interpretable Dropout approach (IDrop),
which establishes IEMD by training the SDEL detector. Experimental results show
that EDTD provides better explanations than previous explanation methods for
image-based malware detection. Moreover, IEMD achieves a detection accuracy of
up to 99.87% while exhibiting interpretability with high-quality prediction
explanations, and its interpretability increases with detection accuracy during
the construction of IEMD. This consistency suggests that IDrop can mitigate the
tradeoff between model interpretability and detection accuracy.
|
For robots to navigate and interact more richly with the world around them,
they will likely require a deeper understanding of the world in which they
operate. In robotics and related research fields, the study of understanding is
often referred to as semantics, which concerns what the world "means" to a
robot, and is strongly tied to the question of how to represent that meaning.
With humans and robots increasingly operating in the same world, the prospects
of human-robot interaction also bring semantics and ontology of natural
language into the picture. Driven by need, as well as by enablers like
increasing availability of training data and computational resources, semantics
is a rapidly growing research area in robotics. The field has received
significant attention in the research literature to date, but most reviews and
surveys have focused on particular aspects of the topic: the technical research
issues regarding its use in specific robotic topics like mapping or
segmentation, or its relevance to one particular application domain like
autonomous driving. A new treatment is therefore required, and is also timely
because so much relevant research has occurred since many of the key surveys
were published. This survey therefore provides an overarching snapshot of where
semantics in robotics stands today. We establish a taxonomy for semantics
research in or relevant to robotics, split into four broad categories of
activity, in which semantics are extracted, used, or both. Within these broad
categories we survey dozens of major topics including fundamentals from the
computer vision field and key robotics research areas utilizing semantics,
including mapping, navigation and interaction with the world. The survey also
covers key practical considerations, including enablers like increased data
availability and improved computational hardware, and major application areas
where...
|
We introduce a differential extension of algebraic K-theory of an algebra
using Karoubi's Chern character. In doing so, we develop a necessary theory of
secondary transgression forms as well as a differential refinement of the
smooth Serre--Swan correspondence. Our construction subsumes the differential
K-theory of a smooth manifold when the algebra is complex-valued smooth
functions. Furthermore, our construction fits into a noncommutative
differential cohomology hexagon diagram.
|
While the LCDM framework has been incredibly successful for modern cosmology,
it requires the admission of two mysterious substances as a part of the
paradigm, dark energy and dark matter. Although this framework adequately
explains most of the large-scale properties of the Universe (i.e., existence
and structure of the CMB, the large-scale structure of galaxies, the abundances
of light elements and the accelerating expansion), it has failed to make
significant predictions on smaller scale features such as the kinematics of
galaxies and their formation. In particular, the rotation curves of disk
galaxies (the original observational discovery of dark matter) are better
represented by non-Newtonian models of gravity that challenge our understanding
of motion in the low acceleration realm (much as general relativity provided an
extension of gravity into the high acceleration realm, e.g., black holes). The
tension between current cold dark matter scenarios and proposed new
formulations of gravity in the low energy regime suggests an upcoming paradigm
shift in cosmology. And, if history is a guide, observations will lead the way.
|
As artificial intelligence (AI) systems are increasingly deployed, principles
for ethical AI are also proliferating. Certification offers a method to both
incentivize adoption of these principles and substantiate that they have been
implemented in practice. This paper draws from management literature on
certification and reviews current AI certification programs and proposals.
Successful programs rely on both emerging technical methods and specific design
considerations. In order to avoid two common failures of certification, program
designs should ensure that the symbol of the certification is substantially
implemented in practice and that the program achieves its stated goals. The
review indicates that the field currently focuses on self-certification and
third-party certification of systems, individuals, and organizations - to the
exclusion of process management certifications. Additionally, the paper
considers prospects for future AI certification programs. Ongoing changes in AI
technology suggest that AI certification regimes should be designed to
emphasize governance criteria of enduring value, such as ethics training for AI
developers, and to adjust technical criteria as the technology changes.
Overall, certification can play a valuable role in the portfolio of AI
governance tools.
|
We perform the analysis of the focusing nonlinear Schr\"odinger equation on
the half-line with time-dependent boundary conditions along the lines of the
nonlinear method of images with the help of B\"acklund transformations. The
difficulty arising from having such time-dependent boundary conditions at $x=0$
is overcome by changing the viewpoint of the method and fixing the B\"acklund
transformation at infinity as well as relating its value at $x=0$ to a
time-dependent reflection matrix. The interplay between the various aspects of
integrable boundary conditions is reviewed in detail to paint a picture of the
area. We find two possible classes of solutions. One is very similar to the
case of Robin boundary conditions whereby solitons are reflected at the
boundary, as a result of an effective interaction with their images on the
other half-line. The new regime of solutions supports the existence of one
soliton that is not reflected at the boundary but can be either absorbed or
emitted by it. We demonstrate that this is a unique feature of time-dependent
integrable boundary conditions.
|
In this paper, we study the problem of exact community recovery in the
symmetric stochastic block model, where a graph of $n$ vertices is randomly
generated by partitioning the vertices into $K \ge 2$ equal-sized communities
and then connecting each pair of vertices with probability that depends on
their community memberships. Although the maximum-likelihood formulation of
this problem is discrete and non-convex, we propose to tackle it directly using
projected power iterations with an initialization that satisfies a partial
recovery condition. Such an initialization can be obtained by a host of
existing methods. We show that in the logarithmic degree regime of the
considered problem, the proposed method can exactly recover the underlying
communities at the information-theoretic limit. Moreover, with a qualified
initialization, it runs in $\mathcal{O}(n\log^2n/\log\log n)$ time, which is
competitive with existing state-of-the-art methods. We also present numerical
results of the proposed method to support and complement our theoretical
development.
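As a concrete illustration, the following is a minimal sketch of a projected power iteration for the special case K = 2, assuming a ±1 label vector obtained from a partial-recovery initialization; the crude centering of the adjacency matrix and the iteration count are illustrative choices, not the paper's exact construction.

    import numpy as np

    def projected_power_iteration(A, x0, iters=50):
        # Minimal projected power iteration for K = 2 communities: multiply by the
        # (centered) adjacency matrix and project back onto {-1, +1}^n labels.
        B = A - A.mean()            # crude centering so the community signal dominates
        x = x0.astype(float)        # +/-1 labels from a partial-recovery initialization
        for _ in range(iters):
            x = np.sign(B @ x)
            x[x == 0] = 1.0         # break ties deterministically
        return x.astype(int)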
|
Large optimization problems with hard constraints arise in many settings, yet
classical solvers are often prohibitively slow, motivating the use of deep
networks as cheap "approximate solvers." Unfortunately, naive deep learning
approaches typically cannot enforce the hard constraints of such problems,
leading to infeasible solutions. In this work, we present Deep Constraint
Completion and Correction (DC3), an algorithm to address this challenge.
Specifically, this method enforces feasibility via a differentiable procedure,
which implicitly completes partial solutions to satisfy equality constraints
and unrolls gradient-based corrections to satisfy inequality constraints. We
demonstrate the effectiveness of DC3 in both synthetic optimization tasks and
the real-world setting of AC optimal power flow, where hard constraints encode
the physics of the electrical grid. In both cases, DC3 achieves near-optimal
objective values while preserving feasibility.
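To make the completion/correction idea concrete, here is a schematic sketch for linear constraints A x = b and G x <= h; the full DC3 method additionally backpropagates through both steps and keeps the correction compatible with the equality constraints, which this toy version omits.

    import numpy as np

    def complete(z_partial, A_part, A_rest, b):
        # Completion: solve the equality constraints A @ x = b exactly for the
        # remaining variables, with x = [z_partial, z_rest] and A = [A_part | A_rest]
        # (A_rest assumed square and invertible).
        z_rest = np.linalg.solve(A_rest, b - A_part @ z_partial)
        return np.concatenate([z_partial, z_rest])

    def correct(x, G, h, step=0.01, iters=100):
        # Correction: unrolled gradient steps on 0.5 * ||max(G @ x - h, 0)||^2,
        # pushing x toward the inequality-feasible set {x : G @ x <= h}.
        for _ in range(iters):
            viol = np.maximum(G @ x - h, 0.0)
            if not viol.any():
                break
            x = x - step * (G.T @ viol)
        return x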
|
I recount my personal experience interacting with Roman Jackiw in the 1980s,
when we both worked on Chern-Simons theories in three dimensions.
|
The Minimum Linear Arrangement problem (MLA) consists of finding a mapping
$\pi$ from vertices of a graph to distinct integers that minimizes
$\sum_{\{u,v\}\in E}|\pi(u) - \pi(v)|$. In that setting, vertices are often
assumed to lie on a horizontal line and edges are drawn as semicircles above
said line. For trees, various algorithms are available to solve the problem in
polynomial time in $n=|V|$. There exist variants of the MLA in which the
arrangements are constrained. Iordanskii, and later Hochberg and Stallmann
(HS), put forward $O(n)$-time algorithms that solve the problem when
arrangements are constrained to be planar (also known as one-page book
embeddings). We also consider linear arrangements of rooted trees that are
constrained to be projective (planar embeddings where the root is not covered
by any edge). Gildea and Temperley (GT) sketched an algorithm for projective
arrangements which they claimed runs in $O(n)$ but did not provide any
justification of its cost. In contrast, Park and Levy claimed that GT's
algorithm runs in $O(n \log d_{max})$ where $d_{max}$ is the maximum degree but
did not provide sufficient detail. Here we correct an error in HS's algorithm
for the planar case, show its relationship with the projective case, and derive
simple algorithms for the projective and planar cases that run without a doubt
in $O(n)$ time.
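For concreteness, the objective being minimized can be evaluated directly from the definition:

    def linear_arrangement_cost(edges, pi):
        # Cost of a linear arrangement: sum of |pi(u) - pi(v)| over all edges,
        # where pi maps each vertex to a distinct integer position on the line.
        return sum(abs(pi[u] - pi[v]) for u, v in edges)

    # A path 0-1-2-3 arranged in its natural order has cost 3.
    print(linear_arrangement_cost([(0, 1), (1, 2), (2, 3)], {v: v for v in range(4)}))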
|
It is well known that a quantum circuit on $N$ qubits composed of Clifford
gates with the addition of $k$ non-Clifford gates can be simulated on a
classical computer by an algorithm scaling as $\text{poly}(N)\exp(k)$ [1]. We
show that, for a quantum circuit to simulate quantum chaotic behavior, it is
both necessary and sufficient that $k=O(N)$. This result implies the
impossibility of simulating quantum chaos on a classical computer.
|
We present an algorithm for computing $\epsilon$-coresets for $(k,
\ell)$-median clustering of polygonal curves in $\mathbb{R}^d$ under the
Fr\'echet distance. This type of clustering is an adaptation of Euclidean
$k$-median clustering: we are given a set of $n$ polygonal curves in
$\mathbb{R}^d$, each of complexity (number of vertices) at most $m$, and want
to compute $k$ median curves such that the sum of distances from the given
curves to their closest median curve is minimal. Additionally, we restrict the
complexity of the median curves to be at most $\ell$ each, to suppress
overfitting, a problem specific to sequential data. Our algorithm has running
time linear in $n$, sub-quartic in $m$ and quadratic in $\epsilon^{-1}$. With
high probability it returns $\epsilon$-coresets of size quadratic in
$\epsilon^{-1}$ and logarithmic in $n$ and $m$. We achieve this result by
applying the improved $\epsilon$-coreset framework by Langberg and Feldman to a
generalized $k$-median problem over an arbitrary metric space. Later we combine
this result with the recent result by Driemel et al. on the VC dimension of
metric balls under the Fr\'echet distance. Furthermore, our framework yields
$\epsilon$-coresets for any generalized $k$-median problem where the range
space induced by the open metric balls of the underlying space has bounded VC
dimension, which is of independent interest. Finally, we show that our
$\epsilon$-coresets can be used to improve the running time of an existing
approximation algorithm for $(1,\ell)$-median clustering.
|
In this paper, we enable automated property verification of deliberative
components in robot control architectures. We focus on formalizing the
execution context of Behavior Trees (BTs) to provide a scalable, yet formally
grounded, methodology to enable runtime verification and prevent unexpected
robot behaviors. To this end, we consider a message-passing model that
accommodates both synchronous and asynchronous composition of parallel
components, in which BTs and other components execute and interact according to
the communication patterns commonly adopted in robotic software architectures.
We introduce a formal property specification language to encode requirements
and build runtime monitors. We performed a set of experiments, both in
simulation and on a real robot, demonstrating the feasibility of our
approach in a realistic application and its integration in a typical robot
software architecture. We also provide an OS-level virtualization environment
to reproduce the experiments in the simulated scenario.
|
Aided by recent advances in Deep Learning, Image Caption Generation has seen
tremendous progress over the last few years. Most methods use transfer learning
to extract visual information, in the form of image features, with the help of
pre-trained Convolutional Neural Network models followed by transformation of
the visual information using a Caption Generator module to generate the output
sentences. Different methods have used different Convolutional Neural Network
Architectures and, to the best of our knowledge, there is no systematic study
which compares the relative efficacy of different Convolutional Neural Network
architectures for extracting the visual information. In this work, we have
evaluated 17 different Convolutional Neural Networks on two popular Image
Caption Generation frameworks: the first based on Neural Image Caption (NIC)
generation model and the second based on Soft-Attention framework. We observe
that the model complexity of a Convolutional Neural Network, as measured by
its number of parameters, and its accuracy on the Object Recognition task do
not necessarily correlate with its efficacy at feature extraction for the
Image Caption Generation task.
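A minimal sketch of the encoder stage being compared, using one illustrative architecture (ResNet-50) from torchvision; the study evaluates 17 such networks, and the input here is a placeholder for a preprocessed image batch.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    # Pre-trained CNN as a frozen feature extractor: drop the classification head
    # and use the pooled activations as image features for the caption generator.
    cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    encoder = nn.Sequential(*list(cnn.children())[:-1])   # remove the final fc layer
    encoder.eval()

    image = torch.randn(1, 3, 224, 224)                   # a preprocessed image batch
    with torch.no_grad():
        features = encoder(image).flatten(1)              # shape (1, 2048)
    print(features.shape)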
|
We present a computational study of the behaviour of a lipid-coated SonoVue
microbubble with initial radius $1 \, \mu \text{m} \leq R_0 \leq 2 \, \mu
\text{m}$, excited at frequencies (200-1500 kHz) significantly below the linear
resonance frequency and pressure amplitudes of up to 1500 kPa, an excitation
regime used in many focused ultrasound applications. The bubble dynamics are
simulated using the Rayleigh-Plesset equation and the Gilmore equation, in
conjunction with the Marmottant model for the lipid monolayer coating. Also, a
new continuously differentiable variant of the Marmottant model is introduced.
Below the onset of inertial cavitation, a linear regime is identified in which
the maximum pressure at the bubble wall is linearly proportional to the
excitation pressure amplitude and, likewise, the mechanical index. This linear
regime is bounded by the Blake pressure and, in line with recent in vitro
experiments, the onset of inertial cavitation is found to occur approximately
at an excitation pressure amplitude of 130-190 kPa, dependent on the initial
bubble size. In the nonlinear regime the maximum pressure at the bubble wall is
found to be readily predicted by the maximum bubble radius and both the
Rayleigh-Plesset and Gilmore equations are shown to predict the onset of sub-
and ultraharmonic frequencies of the acoustic emissions, in agreement with in
vitro experiments. Neither the surface dilatational viscosity of the lipid monolayer
nor the compressibility of the liquid have a discernible influence on the
studied quantities, yet accounting for the lipid coating is critical for the
accurate prediction of the bubble behaviour. The Gilmore equation is shown to
be valid for the considered bubbles and excitation regime, and the
Rayleigh-Plesset equation also provides accurate qualitative predictions, even
though it is outside its range of validity for many of the considered cases.
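For illustration, a minimal integration of the uncoated Rayleigh-Plesset equation with sinusoidal driving is sketched below; the study itself additionally uses the Gilmore equation and the Marmottant coating model, and all parameter values here are illustrative assumptions rather than the paper's.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters for water and a 1-micron bubble.
    rho, mu, sigma = 998.0, 1.0e-3, 0.072     # density, viscosity, surface tension
    p0, kappa = 101325.0, 1.07                # ambient pressure, polytropic exponent
    R0 = 1.0e-6                               # initial bubble radius (m)
    pa, f = 150.0e3, 1.0e6                    # driving amplitude (Pa) and frequency (Hz)

    def rp_rhs(t, y):
        # Uncoated Rayleigh-Plesset: R*Rddot + 1.5*Rdot^2 = (p_wall - p_inf) / rho
        R, Rdot = y
        p_inf = p0 - pa * np.sin(2.0 * np.pi * f * t)
        p_gas = (p0 + 2.0 * sigma / R0) * (R0 / R) ** (3.0 * kappa)
        p_wall = p_gas - 2.0 * sigma / R - 4.0 * mu * Rdot / R
        return [Rdot, ((p_wall - p_inf) / rho - 1.5 * Rdot**2) / R]

    sol = solve_ivp(rp_rhs, (0.0, 5.0 / f), [R0, 0.0],
                    rtol=1e-8, atol=1e-12, max_step=1.0 / (200.0 * f))
    print("max R/R0:", sol.y[0].max() / R0)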
|
We consider the problem of private distributed matrix multiplication under
limited resources. Coded computation has been shown to be an effective solution
in distributed matrix multiplication, both providing privacy against the
workers and boosting the computation speed by efficiently mitigating
stragglers. In this work, we propose the use of recently-introduced bivariate
polynomial codes to further speed up private distributed matrix multiplication
by exploiting the partial work done by the stragglers rather than completely
ignoring them. We show that the proposed approach reduces the average
computation time of private distributed matrix multiplication compared to its
competitors in the literature while improving the upload communication cost and
the workers' storage efficiency.
|
In this paper, a novel three-dimensional (3D) non-stationary geometry-based
stochastic model (GBSM) for the fifth generation (5G) and beyond 5G (B5G)
systems is proposed. The proposed B5G channel model (B5GCM) is designed to
capture various channel characteristics in (B)5G systems such as
space-time-frequency (STF) non-stationarity, spherical wavefront (SWF), high
delay resolution, time-variant velocities and directions of motion of the
transmitter, receiver, and scatterers, spatial consistency, etc. By combining
different channel properties into a general channel model framework, the
proposed B5GCM can be applied to multiple frequency bands and multiple
scenarios, including massive multiple-input multiple-output (MIMO),
vehicle-to-vehicle (V2V), high-speed train (HST), and millimeter wave-terahertz
(mmWave-THz) communication scenarios. Key statistics of the proposed B5GCM are
obtained and compared with those of standard 5G channel models and
corresponding measurement data, showing the generalization and usefulness of
the proposed model.
|
Logit dynamics is a form of randomized game dynamics where players have a
bias towards strategic deviations that give a higher improvement in cost. It is
used extensively in practice. In congestion (or potential) games, the dynamics
converges to the so-called Gibbs distribution over the set of all strategy
profiles, when interpreted as a Markov chain. In general, logit dynamics might
converge slowly to the Gibbs distribution, but beyond that, not much is known
about their algorithmic aspects, nor those of the Gibbs distribution. In this
work, we are interested in the following two questions for congestion games: i)
Is there an efficient algorithm for sampling from the Gibbs distribution? ii)
If yes, do there also exist natural randomized dynamics that converge quickly
to the Gibbs distribution?
We first study these questions in extension parallel congestion games, a
well-studied special case of symmetric network congestion games. As our main
result, we show that there is a simple variation on the logit dynamics (in
which we in addition are allowed to randomly interchange the strategies of two
players) that converges quickly to the Gibbs distribution in such games. This
answers both questions above affirmatively. We also address the first question
for the class of so-called capacitated $k$-uniform congestion games.
To prove our results, we rely on the recent breakthrough work of Anari, Liu,
Oveis-Gharan and Vinzant (2019) concerning the approximate sampling of the
bases of a matroid according to a strongly log-concave probability distribution.
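For concreteness, a single revision step of logit dynamics can be sketched as follows: a uniformly random player switches to strategy s with probability proportional to exp(-beta * cost), so that in a potential game the induced Markov chain has the Gibbs distribution as its stationary distribution. The cost callback and the rationality parameter beta are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    def logit_step(profile, cost, beta, n_strategies):
        # One revision step of logit dynamics: a uniformly random player switches
        # to strategy s with probability proportional to exp(-beta * cost), so
        # lower-cost deviations are favored; beta -> infinity recovers best response.
        i = rng.integers(len(profile))
        c = np.array([cost(profile, i, s) for s in range(n_strategies)])
        p = np.exp(-beta * (c - c.min()))   # subtract min for numerical stability
        p /= p.sum()
        profile[i] = rng.choice(n_strategies, p=p)
        return profile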
|
The ability to recognize the position and order of the floor-level lines that
divide adjacent building floors can benefit many applications, for example,
urban augmented reality (AR). This work tackles the problem of locating
floor-level lines in street-view images, using a supervised deep learning
approach. Unfortunately, very little data is available for training such a
network $-$ current street-view datasets contain either semantic annotations
that lack geometric attributes, or rectified facades without perspective
priors. To address this issue, we first compile a new dataset and develop a new
data augmentation scheme to synthesize training samples by harnessing (i) the
rich semantics of existing rectified facades and (ii) perspective priors of
buildings in diverse street views. Next, we design FloorLevel-Net, a multi-task
learning network that associates explicit features of building facades and
implicit floor-level lines, along with a height-attention mechanism to help
enforce a vertical ordering of floor-level lines. The generated segmentations
are then passed to a second-stage geometry post-processing to exploit
self-constrained geometric priors for plausible and consistent reconstruction
of floor-level lines. Quantitative and qualitative evaluations conducted on
assorted facades in existing datasets and street views from Google demonstrate
the effectiveness of our approach. Also, we present context-aware image overlay
results and show the potential of our approach in enriching AR-related
applications.
|
Natural language processing (NLP) has been applied to various fields
including text classification and sentiment analysis. In the shared task of
sentiment analysis of code-mixed tweets, which is a part of the SemEval-2020
competition~\cite{patwa2020sentimix}, we preprocess the datasets by replacing
emoji, deleting uncommon characters, and so on, and then fine-tune the
Bidirectional Encoder Representations from Transformers (BERT) model to achieve
the best performance. After exhausting our top-3 submissions, our team
MeisterMorxrc achieves an average F1 score of 0.730 in this task, and our
CodaLab username is MeisterMorxrc.
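A minimal sketch of the kind of fine-tuning step described above, using the HuggingFace transformers API; the checkpoint name, label set, example tweet, and hyperparameters are illustrative assumptions rather than the team's exact configuration.

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased", num_labels=3)   # negative / neutral / positive

    texts = ["yeh movie bahut achhi thi :smile:"]       # a code-mixed tweet, emoji replaced
    labels = torch.tensor([2])

    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()                                       # one illustrative training step
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()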
|
In this paper, we illustrate how to fine-tune the entire Retrieval Augmented
Generation (RAG) architecture in an end-to-end manner. We highlight the main
engineering challenges that needed to be addressed to achieve this objective.
We also compare how end-to-end RAG architecture outperforms the original RAG
architecture for the task of question answering. We have open-sourced our
implementation in the HuggingFace Transformers library.
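For orientation, the HuggingFace RAG stack that this work builds on can be exercised as follows (inference only, with a lightweight dummy index); the paper's actual contribution, jointly fine-tuning the retriever and generator end-to-end, is not shown in this sketch.

    from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

    tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
    retriever = RagRetriever.from_pretrained(
        "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
    model = RagSequenceForGeneration.from_pretrained(
        "facebook/rag-sequence-nq", retriever=retriever)

    inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
    generated = model.generate(input_ids=inputs["input_ids"])
    print(tokenizer.batch_decode(generated, skip_special_tokens=True))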
|
We study solutions of the Bethe ansatz equations associated to the
orthosymplectic Lie superalgebras $\mathfrak{osp}_{2m+1|2n}$ and
$\mathfrak{osp}_{2m|2n}$. Given a solution, we define a reproduction procedure
and use it to construct a family of new solutions which we call a population.
To each population we associate a symmetric rational pseudo-differential
operator $\mathcal R$. Under some technical assumptions, we show that the
superkernel $W$ of $\mathcal R$ is a self-dual superspace of rational
functions, and the population is in a canonical bijection with the variety of
isotropic full superflags in $W$ and with the set of symmetric complete
factorizations of $\mathcal R$. In particular, our results apply to the case of
even Lie algebras of type D${}_m$ corresponding to
$\mathfrak{osp}_{2m|0}=\mathfrak{so}_{2m}$.
|
Astounding results from Transformer models on natural language tasks have
intrigued the vision community to study their application to computer vision
problems. Among their salient benefits, Transformers enable modeling long
dependencies between input sequence elements and support parallel processing
of sequences, in contrast to recurrent networks, e.g., Long Short-Term Memory (LSTM).
Different from convolutional networks, Transformers require minimal inductive
biases for their design and are naturally suited as set-functions. Furthermore,
the straightforward design of Transformers allows processing multiple
modalities (e.g., images, videos, text and speech) using similar processing
blocks and demonstrates excellent scalability to very large capacity networks
and huge datasets. These strengths have led to exciting progress on a number of
vision tasks using Transformer networks. This survey aims to provide a
comprehensive overview of the Transformer models in the computer vision
discipline. We start with an introduction to fundamental concepts behind the
success of Transformers i.e., self-attention, large-scale pre-training, and
bidirectional encoding. We then cover extensive applications of transformers in
vision including popular recognition tasks (e.g., image classification, object
detection, action recognition, and segmentation), generative modeling,
multi-modal tasks (e.g., visual-question answering, visual reasoning, and
visual grounding), video processing (e.g., activity recognition, video
forecasting), low-level vision (e.g., image super-resolution, image
enhancement, and colorization) and 3D analysis (e.g., point cloud
classification and segmentation). We compare the respective advantages and
limitations of popular techniques both in terms of architectural design and
their experimental value. Finally, we provide an analysis on open research
directions and possible future works.
|
Volcanic ash clouds often become multilayered and thin with distance from the
vent. We explore one mechanism for development of this layered structure. We
review data on the characteristics of turbulence layering in the free
atmosphere, as well as examples of observations of layered clouds both
near-vent and distally. We then explore and contrast the output of volcanic ash
transport and dispersal models with models that explicitly use the observed
layered structure of atmospheric turbulence. The results suggest that the
alternation of turbulent and quiescent atmospheric layers provides one
mechanism for development of multilayered ash clouds by modulating the manner
in which settling occurs.
|
Differences in gait patterns of children with Duchenne muscular dystrophy
(DMD) and typically developing (TD) peers are visible to the eye, but
quantification of those differences outside of the gait laboratory has been
elusive. We measured vertical, mediolateral, and anteroposterior acceleration
using a waist-worn iPhone accelerometer during ambulation across a typical
range of velocities. Six TD and six DMD children from 3-15 years of age
underwent seven walking/running tasks, including five 25m walk/run tests at a
slow walk to running speeds, a 6-minute walk test (6MWT), and a
100-meter-run/walk (100MRW). We extracted temporospatial clinical gait features
(CFs) and applied multiple Artificial Intelligence (AI) tools to differentiate
between DMD and TD control children using extracted features and raw data.
Extracted CFs showed reduced step length and a greater mediolateral component
of total power (TP), consistent with the shorter strides and Trendelenburg-like
gait commonly observed in DMD. AI methods using CFs and raw data varied in
effectiveness at differentiating between DMD and TD controls at different
speeds, with the accuracy of some methods exceeding 91%. We demonstrate that by
using AI tools with accelerometer data from a consumer-level smartphone, we can
identify DMD gait disturbance in toddlers to early teens.
|
Detecting changes in COVID-19 disease transmission over time is a key
indicator of epidemic growth. Near real-time monitoring of pandemic growth
is crucial for policy makers and public health officials who need to make
informed decisions about whether to enforce lockdowns or allow certain
activities. The effective reproduction number Rt is the standard index used in
many countries for this goal. However, it is known that due to the delays
between infection and case registration, its use for decision making is
somewhat limited. In this paper a near real-time COVINDEX is proposed for
monitoring the evolution of the pandemic. The index is computed from
predictions obtained from a GAM beta regression for modelling the test positive
rate as a function of time. The proposal is illustrated using data on COVID-19
pandemic in Italy and compared with Rt. A simple chart is also proposed for
monitoring local and national outbreaks by policy makers and public health
officials.
|
The dynamics of cross-diffusion models leads to a high computational
complexity for implicit difference schemes, rendering them unsuitable for tasks
that require results in real-time. We propose the use of two operator splitting
schemes for nonlinear cross-diffusion processes in order to lower the
computational load, and establish their stability properties using discrete
$L^2$ energy methods. Furthermore, by attaining a stable factorization of the
system matrix as a forward-backward pass, corresponding to the Thomas algorithm
for self-diffusion processes, we show that the use of implicit cross-diffusion
can be competitive in terms of execution time, widening the range of viable
cross-diffusion coefficients for \textit{on-the-fly} applications.
|
Commonly used classical supervised methods often suffer from the requirement
of an abundant number of training samples and are unable to generalize to unseen
datasets. As a result, the broader application of any trained model is very
limited in clinical settings. However, few-shot approaches can minimize the
need for enormous reliable ground truth labels that are both labor intensive
and expensive. To this end, we propose to exploit an optimization-based
implicit model-agnostic meta-learning (iMAML) algorithm in a few-shot setting
for medical image segmentation. Our approach can leverage the learned weights
from a diverse set of training samples and can be deployed on a new unseen
dataset. We show that unlike classical few-shot learning approaches, our method
has improved generalization capability. To our knowledge, this is the first
work that exploits iMAML for medical image segmentation. Our quantitative
results on publicly available skin and polyp datasets show that the proposed
method outperforms the naive supervised baseline model and two recent few-shot
segmentation approaches by large margins.
|
The non-Debye, \textit{i.e.,} non-exponential, behavior characterizes a
plethora of dielectric relaxation phenomena. Attempts to find their theoretical
explanation are dominated either by considerations rooted in the stochastic
processes methodology or by the so-called \textsl{fractional dynamics} based on
equations involving fractional derivatives which mimic the non-local time
evolution and as such may be interpreted as describing memory effects. Using
the recent results coming from the stochastic approach we link memory functions
with the Laplace (characteristic) exponents of infinitely divisible probability
distributions and show how to relate the latter with experimentally measurable
spectral functions characterizing relaxation in the frequency domain. This
enables us to incorporate phenomenological knowledge into the evolution laws.
To illustrate our approach we consider the standard Havriliak-Negami and
Jurlewicz-Weron-Stanislavsky models for which we derive well-defined evolution
equations. Merging the stochastic and fractional dynamics approaches also
sheds new light on the analysis of relaxation phenomena whose description
requires going beyond a single evolution pattern. We determine sufficient
conditions under which such a description is consistent with the general
requirements of our approach.
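For reference, the Havriliak-Negami spectral function mentioned above has the standard frequency-domain form (notation conventions vary across the literature):

$$ \frac{\varepsilon^{*}(\omega)-\varepsilon_{\infty}}{\varepsilon_{0}-\varepsilon_{\infty}} = \frac{1}{\left[1+(i\omega\tau)^{\alpha}\right]^{\gamma}}, \qquad 0<\alpha\leq 1,\ 0<\gamma\leq 1, $$

which reduces to the Debye case for $\alpha=\gamma=1$, to Cole-Cole for $\gamma=1$, and to Cole-Davidson for $\alpha=1$.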
|
Context: Individuals' personality traits have been shown to influence their
behavior during team work. In particular, positive group attitudes are said to
be essential for distributed and global software development efforts where
collaboration is critical to project success. Objective: Given this, we have
sought to study the influence of global software practitioners' personality
profiles from a psycholinguistic perspective. Method: Artifacts from ten teams
were selected from the IBM Rational Jazz repository and mined. We employed
social network analysis (SNA) techniques to identify and group practitioners
into two clusters based on the numbers of messages they communicated, Top
Members and Others, and used standard statistical techniques to assess
practitioners' engagement in task changes associated with work items. We then
performed psycholinguistic analysis on practitioners' messages using linguistic
dimensions of the LIWC tool that had been previously correlated with the Big
Five personality profiles. Results: For our sample of 146 practitioners, we
found that the Top Members demonstrated more openness to experience than the
Other practitioners. Additionally, practitioners involved in usability-related
tasks were found to be highly extroverted, and coders were most neurotic and
conscientious. Conclusion: High levels of organizational and inter-personal
skills may be useful for those operating in distributed settings, and
personality diversity is likely to boost team performance.
|
Electron dynamics in water are of fundamental importance for a broad range of
phenomena, but their real-time study faces numerous conceptual and
methodological challenges. Here, we introduce attosecond size-resolved cluster
spectroscopy and build up a molecular-level understanding of the attosecond
electron dynamics in water. We measure the effect that the addition of single
water molecules has on the photoionization time delays of water clusters. We
find a continuous increase of the delay for clusters containing up to 4-5
molecules and little change towards larger clusters. We show that these delays
are proportional to the spatial extension of the created electron hole, which
first increases with cluster size and then partially localizes through the
onset of structural disorder that is characteristic of large clusters and bulk
liquid water. These results establish a previously unknown sensitivity of
photoionization delays to electron-hole delocalization and reveal a direct link
between electronic structure and attosecond photoemission dynamics. Our results
offer novel perspectives for studying electron/hole delocalization and its
attosecond dynamics.
|
General circulation models are essential tools in weather and hydrodynamic
simulation. They solve discretized, complex physical equations in order to
compute evolutionary states of dynamical systems, such as the hydrodynamics of
a lake. However, high-resolution numerical solutions using such models are
extremely computationally expensive and time-consuming, often requiring a
high-performance computing architecture to be executed satisfactorily. Machine learning
(ML)-based low-dimensional surrogate models are a promising alternative to
speed up these simulations without undermining the quality of predictions. In
this work, we develop two examples of fast, reliable, low-dimensional surrogate
models to produce a 36 hour forecast of the depth-averaged hydrodynamics at
Lake George NY, USA. Our ML approach uses two widespread artificial neural
network (ANN) architectures: fully connected neural networks and long
short-term memory. These ANN architectures are first validated in the
deterministic and chaotic regimes of the Lorenz system and then combined with
proper orthogonal decomposition (to reduce the dimensionality of the incoming
input data) to emulate the depth-averaged hydrodynamics of a flow simulator
called SUNTANS. Results show the ANN-based reduced order models have promising
accuracy levels (within 6% of the prediction range) and advocate for further
investigation into hydrodynamic applications.
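A minimal sketch of the dimensionality-reduction step, computing a POD basis by singular value decomposition of a snapshot matrix; variable names and the truncation rule are illustrative.

    import numpy as np

    def pod_basis(snapshots, r):
        # Proper orthogonal decomposition via SVD. snapshots is an
        # (n_dof, n_time) matrix of states, one column per time instant.
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        modes = U[:, :r]                        # r leading spatial modes
        coeffs = modes.T @ (snapshots - mean)   # low-dimensional time series for the ANN
        return mean, modes, coeffs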
|
Ground surface detection in point cloud is widely used as a key module in
autonomous driving systems. Different from previous approaches which are mostly
developed for lidars with high beam resolution, e.g. Velodyne HDL-64, this
paper proposes ground detection techniques applicable to much sparser point
cloud captured by lidars with low beam resolution, e.g. Velodyne VLP-16. The
approach is based on the RANSAC scheme of plane fitting. Inlier verification
for plane hypotheses is enhanced by exploiting the point-wise tangent, which is
a local feature available to compute regardless of the density of lidar beams.
Ground surface which is not perfectly planar is fitted by multiple
(specifically 4 in our implementation) disjoint plane regions. By assuming
these plane regions to be rectangular and exploiting the integral image
technique, our approach approximately finds the optimal region partition and
plane hypotheses under the RANSAC scheme with real-time computational
complexity.
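A minimal sketch of the underlying RANSAC plane-fitting step; the tangent-based inlier verification and the four-region partition described above are omitted, and the threshold value is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    def ransac_plane(points, iters=200, tol=0.05):
        # Minimal RANSAC plane fit: points is an (N, 3) array of lidar returns,
        # tol is the inlier distance threshold in meters (illustrative value).
        best_inliers, best_model = 0, None
        for _ in range(iters):
            p = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p[1] - p[0], p[2] - p[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:                    # degenerate (collinear) sample
                continue
            n /= norm
            d = -n @ p[0]
            inliers = int((np.abs(points @ n + d) < tol).sum())
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (n, d)
        return best_model                      # plane n . x + d = 0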
|
A large eddy simulation (LES) study of the flow around a 1/4 scale squareback
Ahmed body at $Re_H=33,333$ is presented. The study consists of both
wall-resolved (WRLES) and wall-modelled (WMLES) simulations, and investigates
the bimodal switching of the wake between different horizontal positions.
Within a non-dimensional time-window of 1050 convective flow units, both WRLES
and WMLES simulations, for which only the near-wall region of the turbulent
boundary layer is treated in a Reynolds-averaged sense, are able to capture
horizontal (spanwise) shifts in the wake's cross-stream orientation.
Equilibrium wall-models in the form of Spalding's law and the log-law of the
wall are successfully used. Once these wall-models are, however, applied to a
very coarse near-wall WMLES mesh, in which a portion of the turbulent boundary
layer's outer region dynamics is treated in a Reynolds-averaged manner as well,
large-scale horizontal shifts in the wake's orientation are no longer detected.
This suggests larger-scale flow structures found within the turbulent boundary
layer's outer domain are responsible for generating the critical amount of flow
intermittency needed to trigger a bimodal switching event. By looking at mean
flow structures, instantaneous flow features and their associated turbulent
kinetic energy (TKE) production, it becomes clear that the front separation
bubbles just aft of the Ahmed body nose generate high levels of TKE through the
shedding of large hairpin vortices. Only in the reference WRLES and
(relatively) fine near-wall mesh WMLES simulations are these features present,
exemplifying their importance in triggering a bimodal event. This motivates
studies on the suppression of wake bimodality by acting upon the front
separation bubbles.
|
We show under some natural smoothness assumptions that pure in-plane drill
rotations as deformation mappings of a $C^2$-smooth regular shell surface to
another one parametrized over the same domain are impossible provided that the
rotations are fixed at a portion of the boundary. Put otherwise, if the tangent
vectors of the new surface are obtained locally by only rotating the given
tangent vectors, and if these rotations have a rotation axis which coincides
everywhere with the normal of the initial surface, then the two surfaces are
equal provided they coincide at a portion of the boundary. In the language of
differential geometry of surfaces we show that any isometry which leaves
normals invariant and which coincides with the given surface at a portion of
the boundary, is the identity mapping.
|
Digitalization opens up new opportunities in the collection, analysis, and
presentation of data which can contribute to the achievement of the 2030 Agenda
and its Sustainable Development Goals (SDGs). In particular, the access to and
control of environmental and geospatial data is fundamental to identify and
understand global issues and trends. Also immediate crises such as the COVID-19
pandemic demonstrate the importance of accurate health data such as infection
statistics and the relevance of digital tools like video conferencing
platforms. However, today much of the data is collected and processed by
private actors. Thus, governments and researchers depend on data platforms and
proprietary systems of big tech companies such as Google or Microsoft. The
market capitalization of the seven largest US and Chinese big tech companies
has grown to 8.7tn USD in recent years, about twice the size of Germany's gross
domestic product (GDP). Therefore, their market power is enormous, allowing
them to dictate many rules of the digital space and even interfere with
legislation. Based on a literature review and nine expert interviews, this
study presents a framework that identifies the risks and consequences along the
workflow of collecting, processing, storing, and using data. It also includes
solutions that governmental and multilateral actors can strive for to alleviate
the risks. Fundamental to this framework is the novel concept of "data
colonialism" which describes today's trend of private companies appropriating
the digital sphere. Historically, colonial nations used to grab indigenous land
and exploit the cheap labor of slave workers. In a similar way, today's big
tech corporations use cheap data of their users to produce valuable services
and thus create enormous market power.
|
To reveal the detail of the internal structure, the relationship between
chromospheric activity and the Rossby number, N_R (= rotational period P /
convective turnover time tau_c), has been extensively examined for
main-sequence stars. The goal of our work is to apply the same methods to
pre-main-sequence (PMS) stars and identify the appropriate model of tau_c for
them. Yamashita et al. (2020) investigated the relationship between N_R and
strengths of the Ca II infrared triplet (IRT; lambda 8498, 8542, 8662 A)
emission lines of 60 PMS stars. Their equivalent widths are converted into the
emission-line to stellar bolometric luminosity ratio (R'). Of these, 54 PMS
stars have N_R < 10^{-1.0} and show R' \sim 10^{-4.2}, as large as the maximum
R' of the zero-age main-sequence (ZAMS) stars. However, because all R' values
were saturated against N_R, it was not possible to estimate the appropriate
tau_c model for the PMS stars. We note that the Mg I emission line at 8808 A
is an optically thin chromospheric line, appropriate for determining the
adequate tau_c for PMS stars. Using the archive data of the Anglo-Australian Telescope
(AAT)/the University College London Echelle Spectrograph (UCLES), we
investigated the Mg I line of 52 ZAMS stars. After subtracting the
photospheric absorption component, the Mg I line is detected as an emission
line in 45 ZAMS stars, whose R' is between 10^{-5.9} and 10^{-4.1}. The Mg I
line is not yet saturated in "the saturated regime for the Ca II emission
lines", i.e. 10^{-1.6} < N_R < 10^{-0.8}. Therefore, the adequate tau_c for
PMS stars can be determined by measuring their R' values.
|
Time series imputation is a fundamental task for understanding time series
with missing data. Existing methods either do not directly handle
irregularly-sampled data or degrade severely with sparsely observed data. In
this work, we reformulate time series as permutation-equivariant sets and
propose a novel imputation model NRTSI that does not impose any recurrent
structures. Taking advantage of the permutation equivariant formulation, we
design a principled and efficient hierarchical imputation procedure. In
addition, NRTSI can directly handle irregularly-sampled time series, perform
multiple-mode stochastic imputation, and handle data with partially observed
dimensions. Empirically, we show that NRTSI achieves state-of-the-art
performance across a wide range of time series imputation benchmarks.
|
The point-splitting renormalization method offers a prescription to calculate
finite expectation values of quadratic operators constructed from quantum
fields in a general curved spacetime. It has been recently shown by Levi and
Ori that when the background metric possesses an isometry, like stationary or
spherically symmetric black holes, the method can be upgraded into a pragmatic
procedure of renormalization that produces efficient numerical calculations. In
this note we show that when the background enjoys three-dimensional spatial
symmetries, like homogeneous expanding universes, the above pragmatic
regularization technique reduces to the well established adiabatic
regularization method.
|
Merging beliefs depends on the relative reliability of their sources. When
unknown, assuming equal reliability is unwarranted. The solution proposed in
this article is that every reliability profile is possible, and only what holds
according to all is accepted. Alternatively, one source is completely reliable,
but which one is unknown. These two cases motivate two existing forms of
merging: maxcons-based merging and arbitration.
|
Pest and disease control plays a key role in agriculture, since the damage
caused by these agents is responsible for huge economic losses every year.
Motivated by this, we create an algorithm capable of detecting rust (Hemileia
vastatrix) and leaf miner (Leucoptera coffeella) in coffee leaves (Coffea
arabica) and quantifying disease severity, using a mobile application as a
high-level interface for the model inferences. We used different convolutional
neural network architectures to create the object detector, together with the
OpenCV library and k-means, and three treatments for severity quantification:
the RGB channels, the value channel, and the AFSoft software, which we compare
using analysis of variance. The results show an average precision of 81.5% in
the detection, and no statistically significant difference between the
treatments for quantifying the severity of coffee leaves, yielding a
computationally less costly method. The application, together with the trained
model, can detect the pest and disease over different image conditions and
infection stages and also estimate the disease infection stage.
|
Next generation mobile networks need to expand towards uncharted territories
in order to enable the digital transformation of society. In this context,
aerial devices such as unmanned aerial vehicles (UAVs) are expected to address
this gap in hard-to-reach locations. However, limited battery-life is an
obstacle for the successful spread of such solutions. Reconfigurable
intelligent surfaces (RISs) represent a promising solution addressing this
challenge since on-board passive and lightweight controllable devices can
efficiently reflect the signal propagation from the ground base stations towards specific
target areas. In this paper, we focus on air-to-ground networks where UAVs
equipped with RIS can fly over selected areas to provide connectivity. In
particular, we study how to optimally compensate flight effects and propose
RiFe as well as its practical implementation Fair-RiFe that automatically
configure RIS parameters accounting for undesired UAV oscillations due to
adverse atmospheric conditions. Our results show that both algorithms provide
robustness and reliability while outperforming state-of-the-art solutions in
the multiple conditions studied.
|
We experimentally studied a clusterization process in a system of polyamide
tracers that are used for visualizing the flow of liquids on their surface. It
was shown that, in the system of surface structures appearing on the water
surface, a Pareto distribution is formed for the normalized cluster density,
well described by a power-law function C x^{-n}. One can suggest that the
growth of a surface structure is driven by the action of surface tension
forces, and that the number of surface structures decreases exponentially
while maintaining their total surface area. We experimentally show the
significant role of background liquid flows on the surface in the
clusterization process.
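A power law of this form can be fitted to the measured cluster densities by ordinary least squares in log-log coordinates, as in this sketch:

    import numpy as np

    def fit_power_law(x, density):
        # Fit density ~ C * x**(-n) by least squares in log-log coordinates.
        slope, intercept = np.polyfit(np.log(x), np.log(density), 1)
        return np.exp(intercept), -slope       # (C, n)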
|
Fault Localization (FL) is an important first step in software debugging and
is mostly manual in current practice. Many methods have been proposed over the
years to automate the FL process, including information retrieval (IR)-based
techniques. These methods localize the fault based on the similarity of the
reported bug report and the source code. Newer variations of IR-based FL (IRFL)
techniques also look into the history of bug reports and leverage them during
the localization. However, all existing IRFL techniques limit themselves to the
current project's data (local data). In this study, we introduce Globug, which
is an IRFL framework consisting of methods that use models pre-trained on the
global data (extracted from open-source benchmark projects). In Globug, we
investigate two heuristics: a) the effect of global data on a state-of-the-art
IR-FL technique, namely BugLocator, and b) the application of a Word Embedding
technique (Doc2Vec) together with global data. Our large scale experiment on 51
software projects shows that using global data improves BugLocator by 6.6% and
4.8% on average in terms of MRR (Mean Reciprocal Rank) and MAP (Mean Average
Precision), respectively, with improvements of over 14% in a majority of cases
(64% and 54% in terms of MRR and MAP, respectively). This amount of improvement is significant compared
to the improvement rates that five other state-of-the-art IRFL tools provide
over BugLocator. In addition, training the models globally is a one-time
offline task with no overhead on BugLocator's run-time fault localization. Our
study, however, shows that a Word Embedding-based global solution did not
further improve the results.
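To illustrate the Doc2Vec ingredient, the following sketch trains document vectors on a toy corpus of tokenized source files and ranks them against a bug report; the corpus, tokenization, and hyperparameters are illustrative assumptions, not Globug's exact configuration.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Toy corpus: tokenized source files tagged by path (gensim >= 4 API).
    source_files = {"Foo.java": ["parse", "config", "load"],
                    "Bar.java": ["render", "button", "click"]}
    corpus = [TaggedDocument(words=toks, tags=[path])
              for path, toks in source_files.items()]
    model = Doc2Vec(corpus, vector_size=100, min_count=1, epochs=40)

    # Rank source files by similarity to a tokenized bug report.
    query = model.infer_vector(["crash", "when", "load", "config"])
    print(model.dv.most_similar([query], topn=2))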
|
We theoretically investigate high harmonic generation (HHG) from silicon thin
films with thicknesses from a few atomic layers to a few hundreds of
nanometers, to determine the most efficient thickness for producing intense HHG
in the reflected and transmitted pulses. For this purpose, we employ a few
theoretical and computational methods. The most sophisticated method is the ab
initio time-dependent density functional theory coupled with the Maxwell
equations on a common spatial grid. This enables us to explore effects such as
the surface electronic structure and light propagation, as well as electronic
motion in the energy bands, in a unified manner. We also utilize a multiscale
method that is applicable to thicker films. A two-dimensional approximation is
introduced to obtain an intuitive understanding of the
thickness dependence of HHG. From these ab initio calculations, we find that
the HHG signals are the strongest in films with thicknesses of 2-15 nm, which
is determined by the bulk conductivity of silicon. We also find that the HHG
signals in the reflected and transmitted pulses are identical in such thin
films. In films whose thicknesses are comparable to the wavelength in the
medium, the intensity of HHG signals in the reflected (transmitted) pulse is
found to correlate with the magnitude of the electric field at the front (back)
surface of the thin film.
|
Time series data play an important role in many applications and their
analysis reveals crucial information for understanding the underlying
processes. Among the many time series learning tasks of great importance, we
here focus on semi-supervised learning based on a graph representation of the
data. Two main aspects are involved in this task: a suitable distance measure
to evaluate the similarities between time series, and a learning method to make
predictions based on these distances. However, the relationship between the two
aspects has never been studied systematically in the context of graph-based
learning. We describe four different distance measures, including (Soft) DTW
and MPDist, a distance measure based on the Matrix Profile, as well as four
successful semi-supervised learning methods, including the graph Allen--Cahn
method and a Graph Convolutional Neural Network. We then compare the
performance of the algorithms on binary classification data sets. We compare
the chosen graph-based methods using all distance measures and observe that the
resulting accuracy varies strongly across combinations. As
predicted by the ``no free lunch'' theorem, no clear best combination to employ
in all cases is found. Our study provides a reproducible framework for future
work in the direction of semi-supervised learning for time series with a focus
on graph representations.
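
A minimal sketch of the pipeline's first aspect, computing a DTW distance
matrix and turning it into a sparse similarity graph for graph-based learning
(the toy series, Gaussian kernel, and k-nearest-neighbour sparsification are
assumptions, not the paper's exact setup):

    import numpy as np

    def dtw(a, b):
        # Classic O(len(a) * len(b)) dynamic-programming DTW with |.| cost.
        D = np.full((len(a) + 1, len(b) + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[-1, -1]

    series = [np.sin(np.linspace(0, 6, 50) + p) for p in (0.0, 0.1, 3.0)]
    dist = np.array([[dtw(s, t) for t in series] for s in series])

    # Gaussian-kernel weights, sparsified to k nearest neighbours, symmetrized.
    k, sigma = 1, dist[dist > 0].mean()
    W = np.exp(-(dist / sigma) ** 2)
    np.fill_diagonal(W, 0.0)
    A = np.zeros_like(W)
    for i, js in enumerate(np.argsort(-W, axis=1)[:, :k]):
        A[i, js] = W[i, js]
    A = np.maximum(A, A.T)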
|
It is widely perceived that leveraging the success of modern machine learning
techniques to mobile devices and wireless networks has the potential of
enabling important new services. This, however, poses significant challenges,
essentially due to that both data and processing power are highly distributed
in a wireless network. In this paper, we develop a learning algorithm and an
architecture that make use of multiple data streams and processing units, not
only during the training phase but also during the inference phase. In
particular, the analysis reveals how inference propagates and fuses across a
network. We study the design criterion of our proposed method and its bandwidth
requirements. Also, we discuss implementation aspects using neural networks in
typical wireless radio access; and provide experiments that illustrate benefits
over state-of-the-art techniques.
|
In this paper, we study the nonlinear inverse problem of estimating the
spectrum of a system matrix, that drives a finite-dimensional affine dynamical
system, from partial observations of a single trajectory. In the noiseless
case, we prove that an annihilating polynomial of the system matrix, whose
roots are a subset of the spectrum, can be uniquely determined from the data.
We then study
which eigenvalues of the system matrix can be recovered and derive various
sufficient and necessary conditions to characterize the relationship between
the recoverability of each eigenvalue and the observation locations. We propose
various reconstruction algorithms, with theoretical guarantees, generalizing
the classical Prony method, ESPRIT, and the matrix pencil method. We test the
algorithms over a variety of examples with applications to graph signal
processing, disease modeling, and a real human-motion dataset. The numerical
results validate our theoretical results and demonstrate the effectiveness of
the proposed algorithms, even when the data do not follow an exact linear
dynamical system.
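
For concreteness, a small numpy sketch of the classical matrix pencil method on
a noiseless scalar trajectory (the toy system and the pencil parameter L are
illustrative; the paper's algorithms generalize this to affine dynamics and
partial observations):

    import numpy as np

    def matrix_pencil(y, L):
        # Hankel matrices from the trajectory; the eigenvalues of
        # pinv(Y0) @ Y1 estimate the recoverable system poles.
        Y = np.array([y[i:i + L + 1] for i in range(len(y) - L)])
        Y0, Y1 = Y[:, :-1], Y[:, 1:]
        return np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)

    A = np.array([[0.9, 0.2],
                  [0.0, 0.5]])
    x, y = np.array([1.0, 1.0]), []
    for _ in range(40):
        y.append(x[0])
        x = A @ x
    print(np.sort_complex(matrix_pencil(np.array(y), L=4)))
    # ~ {0.5, 0.9} plus spurious eigenvalues near zero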
|
The interplay between strong electron correlation and band topology is at the
forefront of condensed matter research. As a direct consequence of correlation,
magnetism enriches topological phases and also has promising functional
applications. However, the influence of topology on magnetism remains unclear,
and the main research effort has been limited to ground state magnetic orders.
Here we report a novel order above the magnetic transition temperature in
magnetic Weyl semimetal (WSM) CeAlGe. Such order shows a number of anomalies in
electrical and thermal transport, and neutron scattering measurements. We
attribute this order to the coupling of Weyl fermions and magnetic fluctuations
originating from a three-dimensional Seiberg-Witten monopole, which
qualitatively agrees well with the observations. Our work reveals a prominent
role topology may play in tailoring electron correlation beyond ground state
ordering, and offers a new avenue to investigate emergent electronic properties
in magnetic topological materials.
|
Modifying Serre's arguments, Cojocaru showed that for any elliptic curve
$E/\mathbb{Q}$ and any integer $m$ co-prime to $30,$ the induced Galois
representation $$\rho_{E,m}: \text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})
\longrightarrow \text{GL}_{2}(\mathbb{Z}/m\mathbb{Z})$$ is surjective if and
only if $\rho_{E,\ell}$ is surjective for any prime $\ell|m.$ In this article,
we shall first study an analog of the same problem over arbitrary number
fields. Later, we shall extend this to the product of elliptic curves and
abelian varieties. At the end we shall discuss this local-global phenomenon for
a larger family of linear algebraic groups.
|
In a recent paper on a study of the Sylow 2-subgroups of the symmetric group
on 2^n elements, it has been shown that the growth of the first (n-2)
consecutive indices of a certain normalizer chain is linked to the sequence of
partitions of integers into distinct parts. Unrefinable partitions into
distinct parts are those in which no part x can be replaced by integers whose
sum is x to obtain a new partition into distinct parts. We prove here that the
(n-1)-th index of the previously mentioned chain is related to the number of
unrefinable partitions into distinct parts satisfying a condition on the
minimal excludant.
|
It is known that the ordered Bell numbers count all the ordered partitions of
the set $[n]=\{1,2,\dots,n\}$. In this paper, we introduce the deranged Bell
numbers that count the total number of deranged partitions of $[n]$. We first
study the classical properties of these numbers (generating function, explicit
formula, convolutions, etc.); we then present the asymptotic behavior of the
deranged Bell numbers. Finally, we give some brief results for their
$r$-versions.
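
The ordered Bell numbers mentioned here satisfy a simple recurrence, obtained
by choosing the first block of the ordered partition and recursing on the rest;
a short sketch (for the classical numbers only, not the paper's deranged
variant):

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def ordered_bell(n):
        # a(0) = 1 and a(n) = sum_{k=1}^{n} C(n, k) * a(n - k):
        # pick the k elements of the first block, then order the rest.
        if n == 0:
            return 1
        return sum(comb(n, k) * ordered_bell(n - k) for k in range(1, n + 1))

    print([ordered_bell(n) for n in range(7)])   # 1, 1, 3, 13, 75, 541, 4683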
|
We propose methods that augment existing numerical schemes for the simulation
of hyperbolic balance laws with Dirichlet boundary conditions to allow for the
simulation of a broad class of differential algebraic conditions. Our approach
is similar to that of Thompson (1987), where the boundary values were simulated
by combining characteristic equations with the time derivative of the algebraic
conditions, but differs in two important regards. Firstly, when the boundary is
a characteristic of one of the fields Thompson's method can fail to produce
reasonable values. We propose a method of combining the characteristic
equations with extrapolation which ensures convergence. Secondly, the
application of algebraic conditions can suffer from $O(1)$ drift-off error, and
we discuss projective time-stepping algorithms designed to converge for this
type of system. Test problems for the shallow water equations are presented to
demonstrate the result of simulating with and without the modifications
discussed, illustrating their necessity for certain problems.
|
This survey presents an overview of integrating prior knowledge into machine
learning systems in order to improve explainability. The complexity of machine
learning models has elicited research to make them more explainable. However,
most explainability methods cannot provide insight beyond the given data,
requiring additional information about the context. We propose to harness prior
knowledge to improve upon the explanation capabilities of machine learning
models. In this paper, we present a categorization of current research into
three main categories which either integrate knowledge into the machine
learning pipeline, into the explainability method or derive knowledge from
explanations. To classify the papers, we build upon the existing taxonomy of
informed machine learning and extend it from the perspective of explainability.
We conclude with open challenges and research directions.
|
Retrieving keywords (bidwords) with the same intent as query, referred to as
close variant keywords, is of prime importance for effective targeted search
advertising. For head and torso search queries, sponsored search engines use a
huge repository of same intent queries and keywords, mined ahead of time.
Online, this repository is used to rewrite the query and then look up the
rewrite in a repository of bid keywords, contributing significant revenue.
Recently generative retrieval models have been shown to be effective at the
task of generating such query rewrites. We observe two main limitations of such
generative models. First, rewrites generated by these models exhibit low
lexical diversity, and hence fail to retrieve relevant keywords that have
diverse linguistic variations. Second, there is a misalignment between the
training objective (the likelihood of the training data) and what we actually
desire (improved quality and coverage of rewrites). In this work, we introduce
CLOVER, a framework to generate both high-quality and diverse rewrites by
optimizing for human assessment of rewrite quality using our diversity-driven
reinforcement learning algorithm. We use an evaluation model, trained to
predict human judgments, as the reward function to finetune the generation
policy. We empirically show the effectiveness of our proposed approach through
offline experiments on search queries across geographies spanning three major
languages. We also perform online A/B experiments on Bing, a large commercial
search engine, which show (i) better user engagement, with an average increase
in clicks of 12.83% accompanied by an average defect reduction of 13.97%, and
(ii) improved revenue by 21.29%.
|
Supervised deep convolutional neural networks (DCNNs) are currently one of
the best computational models that can explain how the primate ventral visual
stream solves object recognition. However, embodied cognition has not been
considered in the existing visual processing models. From the ecological
standpoint, humans learn to recognize objects by interacting with them,
allowing better classification, specialization, and generalization. Here, we
ask whether computational models under the embodied learning framework can
explain mechanisms underlying object recognition in the primate visual system
better than the existing supervised models. To address this question, we use
reinforcement learning to train neural network models to play a 3D computer
game, and we find that these reinforcement learning models achieve neural
response prediction accuracy scores in the early visual areas (e.g., V1 and V2)
at levels comparable to those accomplished by the supervised neural network
models. In contrast, the supervised neural network models yield
better neural response predictions in the higher visual areas, compared to the
reinforcement learning models. Our preliminary results suggest the future
direction of visual neuroscience in which deep reinforcement learning should be
included to fill the missing embodiment concept.
|
Estimation of a precision matrix (i.e., inverse covariance matrix) is widely
used to exploit conditional independence among continuous variables. The
influence of abnormal observations is exacerbated in a high dimensional setting
as the dimensionality increases. In this work, we propose robust estimation of
the inverse covariance matrix based on an $l_1$ regularized objective function
with a weighted sample covariance matrix. The robustness of the proposed
objective function can be justified by a nonparametric technique of the
integrated squared error criterion. To address the non-convexity of the
objective function, we develop an efficient algorithm in the spirit of
majorization-minimization. Asymptotic consistency of the proposed estimator is
also established. The performance of the proposed method is compared with
several existing approaches via numerical simulations. We further demonstrate
the merits of the proposed method with application in genetic network
inference.
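
A rough sketch of the overall idea, plugging a weighted sample covariance into
an $l_1$-regularized precision estimator (the Gaussian-density weights and
scikit-learn's graphical_lasso below are illustrative stand-ins for the paper's
integrated-squared-error objective and majorization-minimization algorithm):

    import numpy as np
    from sklearn.covariance import graphical_lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    X[:5] += 10.0                       # a few abnormal observations

    # Downweight points far from a robust center before forming S.
    center = np.median(X, axis=0)
    w = np.exp(-0.5 * np.sum((X - center) ** 2, axis=1) / X.shape[1])
    w /= w.sum()
    mu = w @ X
    S = (X - mu).T @ ((X - mu) * w[:, None])    # weighted sample covariance

    covariance, precision = graphical_lasso(S, alpha=0.1)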
|
Motivated by the recently observed unconventional Hall effect in ultra-thin
films of ferromagnetic SrRuO$_3$ (SRO) we investigate the effect of
strain-induced oxygen octahedral distortion in the electronic structure and
anomalous Hall response of the SRO ultra-thin films by virtue of density
functional theory calculations. Our findings reveal that the ferromagnetic SRO
films grown on SrTiO$_3$ (in-plane strain of $-$0.47$\%$) have an orthorhombic
(both tilting and rotation) distorted structure and with an increasing amount
of substrate-induced compressive strain the octahedral tilting angle is found
to be suppressed gradually, with SRO films grown on NdGaO$_3$ (in-plane strain
of $-$1.7$\%$) stabilized in the tetragonal distorted structure (with zero
tilting). Our Berry curvature calculations predict a positive value of the
anomalous Hall conductivity of $+$76\,S/cm at $-$1.7$\%$ strain, whereas it is
found to be negative ($-$156\,S/cm) at $-$0.47$\%$ strain. We attribute the
observed behavior of the anomalous Hall effect to the nodal point dynamics in
the electronic structure arising in response to tailoring the oxygen octahedral
distortion driven by the substrate-induced strain. We also calculate the
strain-mediated anomalous Hall conductivity as a function of reduced
magnetization, obtained by scaling down the magnitude of the exchange field
inside the Ru atoms, finding good qualitative agreement with experimental
observations, which indicates a strong impact of longitudinal thermal
fluctuations of Ru spin moments on the anomalous Hall effect in this system.
|
Widely deployed deep neural network (DNN) models have been proven to be
vulnerable to adversarial perturbations in many applications (e.g., image,
audio and text classifications). To date, there are only a few adversarial
perturbations proposed to deviate the DNN models in video recognition systems
by simply injecting 2D perturbations into video frames. However, such attacks
may overly perturb the videos without learning the spatio-temporal features
(across temporal frames), which are commonly extracted by DNN models for video
recognition. To the best of our knowledge, we propose the first black-box
attack framework that generates universal 3-dimensional (U3D) perturbations to
subvert a variety of video recognition systems. U3D has many advantages: (1) as
a transfer-based attack, U3D can universally attack multiple DNN models for
video recognition without access to the target DNN model; (2) the high
transferability of U3D makes such universal black-box attack easy-to-launch,
which can be further enhanced by integrating queries over the target model when
necessary; (3) U3D ensures human-imperceptibility; (4) U3D can bypass the
existing state-of-the-art defense schemes; (5) U3D can be efficiently generated
with a few pre-learned parameters, and then immediately injected to attack
real-time DNN-based video recognition systems. We have conducted extensive
experiments to evaluate U3D on multiple DNN models and three large-scale video
datasets. The experimental results demonstrate its superiority and
practicality.
|
We propose optical longitudinal conductivity as a realistic observable to
detect light-induced Floquet band gaps in graphene. These gaps manifest as
resonant features in the conductivity, when resolved with respect to the
probing frequency and the driving field strength. We demonstrate these features
via a dissipative master equation approach which gives access to a frequency-
and momentum-resolved electron distribution. This distribution follows the
light-induced Floquet-Bloch bands, resulting in a natural interpretation as
occupations of these bands. Furthermore, we show that there are population
inversions of the Floquet-Bloch bands at the band gaps for sufficiently strong
driving field strengths. This strongly reduces the conductivity at the
corresponding frequencies. Therefore, our proposal not only puts forth an
unambiguous demonstration of light-induced Floquet-Bloch bands, which advances
the field of Floquet engineering in solids, but also points to the control of
transport properties via light that derives from the electron distribution on
these bands.
|
We propose to predict the future trajectories of observed agents (e.g.,
pedestrians or vehicles) by estimating and using their goals at multiple time
scales. We argue that the goal of a moving agent may change over time, and
modeling goals continuously provides more accurate and detailed information for
future trajectory estimation. In this paper, we present a novel recurrent
network for trajectory prediction, called Stepwise Goal-Driven Network (SGNet).
Unlike prior work that models only a single, long-term goal, SGNet estimates
and uses goals at multiple temporal scales. In particular, the framework
incorporates an encoder module that captures historical information, a stepwise
goal estimator that predicts successive goals into the future, and a decoder
module that predicts the future trajectory. We evaluate our model on three
first-person traffic datasets (HEV-I, JAAD, and PIE) as well as on two bird's
eye view datasets (ETH and UCY), and show that our model outperforms the
state-of-the-art methods in terms of both average and final displacement errors
on all datasets. Code has been made available at:
https://github.com/ChuhuaW/SGNet.pytorch.
|
We construct an equivalence between the 2-categories VMonCat of rigid
V-monoidal categories for a braided monoidal category V and VModTens of oplax
braided functors from V into the Drinfeld centers of ordinary rigid monoidal
categories. The 1-cells in each are the respective lax monoidal functors, and
the 2-cells are the respective monoidal natural transformations. Our proof also
gives an equivalence in the case that we consider only strong monoidal 1-cells
on both sides. The 2-categories VMonCat and VModTens have G-graded analogues.
We also get an equivalence of 2-categories between G-extensions of some fixed
V-monoidal category A, and G-extensions of some fixed V-module tensor category
(A, F_A^Z).
|
The Jiangmen Underground Neutrino Observatory (JUNO) is an experiment
designed to study neutrino oscillations. Determination of neutrino mass
ordering and precise measurement of neutrino oscillation parameters $\sin^2
2\theta_{12}$, $\Delta m^2_{21}$ and $\Delta m^2_{32}$ are the main goals of
the experiment. A rich physical program beyond the oscillation analysis is also
foreseen. The ability to accurately reconstruct particle interaction events in
JUNO is of great importance for the success of the experiment. In this work we
present a few machine learning approaches applied to the vertex and the energy
reconstruction. Multiple models and architectures were compared and studied,
including Boosted Decision Trees (BDT), Deep Neural Networks (DNN), several
kinds of Convolutional Neural Networks (CNN) based on ResNet and VGG, and a
Graph Neural Network based on DeepSphere. Based on a study carried out using a
dataset generated by the official JUNO software, we demonstrate that machine
learning approaches achieve the necessary level of accuracy for reaching the
physical goals of JUNO: $\sigma_E=3\%$ at $E_\text{vis}=1~\text{MeV}$ for the
energy and $\sigma_{x,y,z}=10~\text{cm}$ at $E_\text{vis}=1~\text{MeV}$ for the
position.
|
In temporal action localization methods, temporal downsampling operations are
widely used to extract proposal features, but they often lead to the aliasing
problem because they do not take the sampling rates into account. This paper
aims to verify the existence of aliasing in TAL methods and to investigate the
use of low-pass filters to solve this problem by inhibiting the high-frequency
band. However, the high-frequency band usually contains large amounts of
specific information, which is important for model inference. Therefore, it is
necessary to make a tradeoff between anti-aliasing and preserving
high-frequency information. To acquire optimal performance, this paper learns
different cutoff frequencies for different instances dynamically. This design
can be plugged into most existing temporal modeling pipelines, requiring only
one additional cutoff-frequency parameter. Integrating low-pass filters into
the downsampling
operations significantly improves the detection performance and achieves
comparable results on THUMOS'14, ActivityNet~1.3, and Charades datasets.
Experiments demonstrate that anti-aliasing with low-pass filters in TAL is
advantageous and efficient.
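
A hedged PyTorch sketch of the core idea, applying an instance-dependent
low-pass filter before temporal downsampling (the Gaussian kernel and the
cutoff-to-bandwidth mapping are assumptions, not the paper's exact design):

    import torch
    import torch.nn.functional as F

    def lowpass_downsample(x, cutoff, stride=2, ksize=9):
        # x: (batch, channels, time); cutoff: (batch,) in (0, 1]. A smaller
        # cutoff widens the kernel in time, suppressing more high frequencies.
        t = torch.arange(ksize, dtype=x.dtype, device=x.device) - ksize // 2
        sigma = 1.0 / (cutoff.clamp(1e-3, 1.0) * 3.14159)
        k = torch.exp(-0.5 * (t[None, :] / sigma[:, None]) ** 2)
        k = k / k.sum(dim=1, keepdim=True)
        b, c, n = x.shape
        # Grouped conv applies each instance's kernel depthwise to its channels.
        w = k[:, None, None, :].expand(b, c, 1, ksize).reshape(b * c, 1, ksize)
        y = F.conv1d(x.reshape(1, b * c, n), w, groups=b * c, padding=ksize // 2)
        return y.reshape(b, c, n)[..., ::stride]

    x = torch.randn(4, 16, 64)
    cutoff = torch.sigmoid(torch.randn(4))    # in practice predicted per instance
    print(lowpass_downsample(x, cutoff).shape)    # torch.Size([4, 16, 32])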
|
3D Lidar imaging can be a challenging modality when using multiple
wavelengths, or when imaging in high noise environments (e.g., imaging through
obscurants). This paper presents a hierarchical Bayesian algorithm for the
robust reconstruction of multispectral single-photon Lidar data in such
environments. The algorithm exploits multi-scale information to provide robust
depth and reflectivity estimates together with their uncertainties to help with
decision making. The proposed weight-based strategy allows the use of available
guide information that can be obtained using state-of-the-art learning-based
algorithms. The proposed Bayesian model and its estimation algorithm are
validated on both synthetic and real images showing competitive results
regarding the quality of the inferences and the computational complexity when
compared to the state-of-the-art algorithms.
|
Intrusion detection is an essential task in the cyber threat environment.
Machine learning and deep learning techniques have been applied for intrusion
detection. However, most of the existing research focuses on the model work but
ignores the fact that poor data quality has a direct impact on the performance
of a machine learning system. More attention should be paid to the data work
when building a machine learning-based intrusion detection system. This article
first summarizes existing machine learning-based intrusion detection systems
and the datasets used for building these systems. Then the data preparation
workflow and quality requirements for intrusion detection are discussed. To
figure out how data and models affect machine learning performance, we
conducted experiments on 11 HIDS datasets using seven machine learning models
and three deep learning models. The experimental results show that BERT and GPT
were the best algorithms for HIDS on all of the datasets. However, the
performance on different datasets varies, indicating the differences between
the data quality of these datasets. We then evaluate the data quality of the 11
datasets based on quality dimensions proposed in this paper to determine the
best characteristics that a HIDS dataset should possess in order to yield the
best possible result. This research initiates a data quality perspective for
researchers and practitioners to improve the performance of machine
learning-based intrusion detection.
|
Parallel tempering (PT) is a class of Markov chain Monte Carlo algorithms
that constructs a path of distributions annealing between a tractable reference
and an intractable target, and then interchanges states along the path to
improve mixing in the target. The performance of PT depends on how quickly a
sample from the reference distribution makes its way to the target, which in
turn depends on the particular path of annealing distributions. However, past
work on PT has used only simple paths constructed from convex combinations of
the reference and target log-densities. This paper begins by demonstrating that
this path performs poorly in the setting where the reference and target are
nearly mutually singular. To address this issue, we expand the framework of PT
to general families of paths, formulate the choice of path as an optimization
problem that admits tractable gradient estimates, and propose a flexible new
family of spline interpolation paths for use in practice. Theoretical and
empirical results both demonstrate that our proposed methodology breaks
previously-established upper performance limits for traditional paths.
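
For reference, the traditional linear path criticized here interpolates
log-densities; a tiny sketch of the path and the resulting swap acceptance (the
nearly mutually singular reference/target pair is an illustrative choice):

    import numpy as np

    def log_ref(x):                      # tractable reference: N(0, 1)
        return -0.5 * x ** 2

    def log_tgt(x):                      # target: N(10, 0.1^2), tiny overlap
        return -0.5 * (x - 10.0) ** 2 / 0.01

    def log_path(x, beta):
        # Linear path: log pi_beta = (1 - beta) * log pi_0 + beta * log pi_1.
        return (1.0 - beta) * log_ref(x) + beta * log_tgt(x)

    def swap_log_accept(x, y, b0, b1):
        # Metropolis log-acceptance for exchanging states between two chains;
        # it collapses when adjacent path distributions barely overlap.
        return min(0.0, log_path(y, b0) + log_path(x, b1)
                        - log_path(x, b0) - log_path(y, b1))

    print(np.exp(swap_log_accept(0.0, 10.0, 0.4, 0.5)))   # essentially zero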
|
How does one compute the Bondi mass on an arbitrary cut of null infinity
$\scri$ when it is not presented in a Bondi system? What then is the correct
definition of the mass aspect? How does one normalise an asymptotic translation
computed on a cut which is not equipped with the unit-sphere metric? These are
questions which need to be answered if one wants to calculate the Bondi-Sachs
energy-momentum for a space-time which has been determined numerically. Under
such conditions there is not much control over the presentation of $\scri$ so
that most of the available formulations of the Bondi energy-momentum simply do
not apply. The purpose of this article is to provide the necessary background
for a manifestly conformally invariant and gauge independent formulation of the
Bondi energy-momentum. To this end we introduce a conformally invariant version
of the GHP formalism to rephrase all the well-known formulae. This leads us to
natural definitions for the space of asymptotic translations with its
Lorentzian metric, for the Bondi news and the mass-aspect. A major role in
these developments is played by the "co-curvature", a naturally appearing
quantity closely related to the Gau{\ss} curvature on a cut of~$\scri$.
|
The surface topography of diamond coatings strongly affects surface
properties such as adhesion, friction, wear, and biocompatibility. However, the
understanding of multi-scale topography, and its effect on properties, has been
hindered by conventional measurement methods, which capture only a single
length scale. Here, four different polycrystalline diamond coatings are
characterized using transmission electron microscopy to assess the roughness
down to the sub-nanometer scale. Then these measurements are combined, using
the power spectral density (PSD), with conventional methods (stylus
profilometry and atomic force microscopy) to characterize all scales of
topography. The results demonstrate the critical importance of measuring
topography across all length scales, especially because their PSDs cross over
one another, such that a surface that is rougher at a larger scale may be
smoother at a smaller scale and vice versa. Furthermore, these measurements
reveal the connection between multi-scale topography and grain size, with
characteristic scaling behavior at and slightly below the mean grain size, and
self-affine fractal-like roughness at other length scales. At small (subgrain)
scales, unpolished surfaces exhibit a common form of residual roughness that is
self-affine in nature but difficult to detect with conventional methods. This
approach of capturing topography from the atomic- to the macro-scale is termed
comprehensive topography characterization, and all of the topography data from
these surfaces has been made available for further analysis by experimentalists
and theoreticians. Scientifically, this investigation has identified four
characteristic regions of topography scaling in polycrystalline diamond
materials.
|
In this article we analyse the oil-food price co-movement and its
determinants in both time and frequency domains, using the wavelet analysis
approach. Our results show that the significant local correlation between food
and oil is only apparent. This is mainly due to the activity of commodity index
investments and, to a lesser extent, to the increasing demand from emerging
economies. Furthermore, we employ the wavelet entropy to assess the
predictability of the time series under consideration. We find that some
variables share with both food and oil a similar predictability structure.
These variables are those that mostly co-move with both oil and food. Some
policy implications can be derived from our results, the most relevant being
that the activity of commodity index investments is able to increase
correlation between food and oil. This activity generates highly integrated
markets and an increasing risk of joint price movements which is potentially
dangerous in periods of economic downturn and financial stress. In our work we
suggest that governments should also provide subsidy packages based on the
commodity traded in commodity indices to protect producers and consumers from
adverse price movements due to financial activity rather than lack of supply or
demand.
|
Facebook and Twitter recently announced community-based review platforms to
address misinformation. We provide an overview of the potential affordances of
such community-based approaches to content moderation based on past research
and preliminary analysis of Twitter's Birdwatch data. While our analysis
generally supports a community-based approach to content moderation, it also
warns against potential pitfalls, particularly when the implementation of the
new infrastructure focuses on crowd-based "validation" rather than
"collaboration." We call for multidisciplinary research utilizing methods from
complex systems studies, behavioural sociology, and computational social
science to advance the research on crowd-based content moderation.
|
We report a $\approx 400$-hour Giant Metrewave Radio Telescope (GMRT) search
for HI 21 cm emission from star-forming galaxies at $z = 1.18-1.39$ in seven
fields of the DEEP2 Galaxy Survey. Including data from an earlier 60-hour GMRT
observing run, we co-added the HI 21 cm emission signals from 2,841 blue
star-forming galaxies that lie within the full-width at half-maximum of the
GMRT primary beam. This yielded a $5.0\sigma$ detection of the average HI 21 cm
signal from the 2,841 galaxies at an average redshift $\langle z \rangle
\approx 1.3$, only the second detection of HI 21 cm emission at $z\ge1$. We
obtain an average HI mass of $\langle {\rm M_{HI}} \rangle=(3.09 \pm 0.61)
\times 10^{10}\ {\rm M}_\odot$ and an HI-to-stellar mass ratio of $2.6\pm0.5$,
both significantly higher than values in galaxies with similar stellar masses
in the local Universe. We also stacked the 1.4 GHz continuum emission of the
galaxies to obtain a median star-formation rate (SFR) of $14.5\pm1.1\ {\rm
M}_\odot \textrm{yr}^{-1}$. This implies an average HI depletion timescale of
$\approx 2$ Gyr for blue star-forming galaxies at $z\approx 1.3$, a factor of
$\approx 3.5$ lower than that of similar local galaxies. Our results suggest
that the HI content of galaxies towards the end of the epoch of peak cosmic SFR
density is insufficient to sustain their high SFR for more than $\approx 2$
Gyr. Insufficient gas accretion to replenish the HI could then explain the
observed decline in the cosmic SFR density at $z< 1$.
|
Single-particle resonances in the continuum are crucial for studies of exotic
nuclei. In this study, the Green's function approach is employed to search for
single-particle resonances based on the relativistic-mean-field model. Taking
$^{120}$Sn as an example, we identify single-particle resonances and determine
the energies and widths directly by probing the extrema of the Green's
functions. We obtain the same resonances, with very similar energies and
widths, as those found by probing the extrema of the density of states, the
approach proposed in our recent study [Chin. Phys. C, 44:084105 (2020)], which
has proven to be very successful. By comparing the Green's functions computed
in coordinate spaces of different sizes, we also find that the results depend
only very slightly on the space size. These findings demonstrate that the
approach of probing the extrema of the Green's function is likewise reliable
and effective for identifying resonant states, regardless of whether they are
wide or narrow.
|
We study the problem of a Herbig-Haro jet with a uniformly accelerating
ejection velocity, travelling into a uniform environment. For the ejection
density we consider two cases: a time-independent density, and a
time-independent mass loss rate. For these two cases, we obtain analytic
solutions for the motion of the jet head using a ram-pressure balance and a
center of mass equation of motion. We also compute axisymmetric numerical
simulations of the same flow, and compare the time-dependent positions of the
leading working surface shocks with the predictions of the two analytic models.
We find that if the jet is over-dense and over-pressured (with respect to the
environment) during its evolution, a good agreement is obtained with the
analytic models, with the flow initially following the center of mass analytic
solution, and (for the constant ejection density case) at later times
approaching the ram-pressure balance solution.
|
We conduct an analysis of the quasi-normal modes for generic spin
perturbations of the Kerr black hole using the isomonodromic method. The
strategy consists of solving the Riemann-Hilbert map relating the accessory
parameters of the differential equations involved to monodromy properties of
the solutions, using the $\tau$-function for the Painlev\'e V transcendent. We
show good agreement of the method with the literature for generic rotation
parameter $a<M$. In the extremal limit, we determine the dependence of the
modes on the black hole temperature and establish that the extremal values of
the modes are obtainable from the Painlev\'e V and III transcendents.
|
We study finite temperature string scale $AdS_3$ backgrounds. One background
is $AdS_3 \times S^1 \times T^2$ in which the anti-de Sitter space-time and the
circle are at the radius $\sqrt{\alpha'}$. Using path integral techniques, we
show that the bulk spectrum includes a continuum of states as well as
Ramond-Ramond ground states that agree with those of the symmetric orbifold of
the two-torus after second quantization. We also examine the one-loop free
energy of the background $AdS_3 \times S^1$ at curvature radius $\sqrt{2
\alpha'/3}$. In the space-time NSNS sector, the string theory spontaneously
breaks conformal symmetry as well as R-charge conjugation symmetry. We prove
that the minimum in the boundary energy is reached for a singly wound string.
In the RR sector, we classify the infinite set of ground states with fractional
R-charges. Moreover, we remark on the behaviour of critical temperatures as the
curvature scale becomes smaller than the string scale. In an appendix, we
derive the Hawking-Page transition in string theory by integrating a world
sheet one-point function.
|
We develop a new phenomenological model that addresses current tensions
between observations of the early and late Universe. Our scenario features: (i)
a decaying dark energy fluid (DDE), which undergoes a transition at $z \sim
5,000$, to raise today's value of the Hubble parameter -- addressing the $H_0$
tension, and (ii) an ultra-light axion (ULA), which starts oscillating at
$z\gtrsim 10^4$, to suppress the matter power spectrum -- addressing the $S_8$
tension. Our Markov Chain Monte Carlo analyses show that such a Dark Sector
model fits a combination of Cosmic Microwave Background (CMB), Baryon Acoustic
Oscillations, and Large Scale Structure (LSS) data slightly better than the
$\Lambda$CDM model, while importantly reducing both the $H_0$ and $S_8$
tensions with late universe probes ($\lesssim 3\sigma$). Combined with
measurements from cosmic shear surveys, we find that the discrepancy on $S_8$
is reduced to the $1.4\sigma$ level, and the value of $H_0$ is further raised.
Adding local supernovae measurements, we find that the $H_0$ and $S_8$ tensions
are reduced to the $1.4\sigma$ and $1.2\sigma$ level respectively, with a
significant improvement $\Delta\chi^2\simeq -18$ compared to the $\Lambda$CDM
model. With this complete dataset, the DDE and ULA are detected at
$\simeq4\sigma$ and $\simeq2\sigma$, respectively. We discuss a possible
particle physics realization of this model, with a dark confining gauge sector
and its associated axion, although embedding the full details within
microphysics remains an urgent open question. Our scenario will be decisively
probed with future CMB and LSS surveys.
|
We consider a wide range of UV scenarios with the aim of informing searches
for CP violation at the TeV scale using effective field theory techniques. We
demonstrate that broad theoretical assumptions about the nature of UV dynamics
responsible for CP violation map out a small subset of relevant operators at
the TeV scale. Concretely, this will allow us to reduce the number of free
parameters that need to be considered in experimental investigations, thus
enhancing analyses' sensitivities. In parallel, reflecting the UV dynamics'
Wilson coefficient hierarchy will enable a streamlined theoretical
interpretation of such analyses in the future. We demonstrate a minimal
approach to analysing CP violation in this context using a Monte Carlo study of
a combination of weak boson fusion Higgs and electroweak diboson production,
which provide complementary information on the relevant EFT operators.
|
Neural decoders were introduced as a generalization of the classic Belief
Propagation (BP) decoding algorithms, where the Trellis graph in the BP
algorithm is viewed as a neural network, and the weights in the Trellis graph
are optimized by training the neural network. In this work, we propose a novel
neural decoder for cyclic codes by exploiting their cyclically invariant
property. More precisely, we impose a shift invariant structure on the weights
of our neural decoder so that any cyclic shift of inputs results in the same
cyclic shift of outputs. Extensive simulations with BCH codes and punctured
Reed-Muller (RM) codes show that our new decoder consistently outperforms
previous neural decoders when decoding cyclic codes. Finally, we propose a list
decoding procedure that can significantly reduce the decoding error probability
for BCH codes and punctured RM codes. For certain high-rate codes, the gap
between our list decoder and the Maximum Likelihood decoder is less than
$0.1$dB. Code available at
https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders
|
The INvestigating Stellar Population In RElics (INSPIRE) project is an on-going
effort targeting 52 ultra-compact massive galaxies at 0.1<z<0.5 with the
X-Shooter@VLT spectrograph (XSH). These objects are the perfect candidates to
be 'relics': massive red nuggets formed at high-z (z>2) through a short and
intense star formation burst, which then evolved passively and undisturbed
until the present day.
Relics provide a unique opportunity to study the mechanisms of star formation
at high-z. In this paper, we present the first INSPIRE Data Release, comprising
19 systems with observations completed in 2020. We use the methods already
presented in the INSPIRE Pilot, but revisiting the 1D spectral extraction. For
these 19 systems, we obtain an estimate of the stellar velocity dispersion,
fitting separately the two UVB and VIS XSH arms at their original resolution.
We estimate [Mg/Fe] abundances via line-index strength and mass-weighted
integrated stellar ages and metallicities with full spectral fitting on the
combined spectrum. Ages are generally old, in agreement with the photometric
ones, and metallicities are almost always super-solar, confirming the
mass-metallicity relation. The [Mg/Fe] ratio is also larger than solar for the
great majority of the galaxies, as expected. We find that 10 objects have
formed more than 75% of their stellar mass (M*) within 3 Gyr from the Big Bang
and classify them as relics. Among these, we identify 4 galaxies which had
already fully assembled their M* by that time. They are therefore `extreme
relics' of the ancient Universe. The INSPIRE DR1 catalogue of 10 known relics
augments the total number of confirmed relics to date by a factor of 3.3, also
enlarging the redshift window. It is therefore the largest publicly available
collection. Thanks to the larger number of systems, we can also better quantify
the existence of a 'degree of relicness', already hinted at in the Pilot Paper.
|
In recent years, 5G is widely used in parallel with IoT networks to enable
massive data connectivity and exchange with ultra-reliable and low latency
communication (URLLC) services. The internet requirements from the user's
perspective have shifted from simple human-to-human interactions to different
communication paradigms and information-centric networking (ICN). ICN
distributes the content among the users based on their trending requests. ICN
is responsible not only for the routing and caching but also for naming the
network's content. ICN considers several parameters such as cache-hit ratio,
content diversity, content redundancy, and stretch to route the content. ICN
enables name-based caching of the required content according to the user's
request based on the router's interest table. The stretch shows the path
covered while retrieving the content from producer to consumer. Reduction in
path length also leads to a reduction in end-to-end latency and better data
rate availability. ICN routers must have the minimum stretch to obtain a better
system efficiency. Reinforcement learning (RL) is widely used in network
environments to improve an agent's decision-making efficiency. In ICN, RL can
help increase caching and stretch efficiency. This paper investigates a stretch
reduction strategy for ICN routers by formulating the stretch reduction problem
as a Markov decision process. The accuracy of the proposed stretch reduction
strategy is evaluated by employing Q-Learning, an RL technique. The simulation
results indicate that, using the optimal parameters, the proposed strategy
effectively reduces the stretch.
|
Characterising the atmospheres of exoplanets is key to understanding their
nature and provides hints about their formation and evolution. High-resolution
measurements of the helium triplet, He(2$^{3}$S), absorption of highly
irradiated planets have been recently reported, which provide a new means to
study their atmospheric escape. In this work, we study the escape of the upper
atmospheres of HD 189733 b and GJ 3470 b by analysing high-resolution
He(2$^{3}$S) absorption measurements and using a 1D hydrodynamic model coupled
with a non-LTE model for the He(2$^{3}$S) state. We also use the H density
derived from Ly$\alpha$ observations to further constrain their temperatures,
T, mass-loss rates, $\dot M$, and H/He ratios. We have significantly improved
our knowledge of the upper atmospheres of these planets. While HD 189733 b has
a rather compressed atmosphere and small gas radial velocities, GJ 3470 b, with
a gravitational potential ten times smaller, exhibits a very extended
atmosphere and large radial outflow velocities. Hence, although GJ 3470 b is
much less irradiated in the XUV, and its upper atmosphere is much cooler, it
evaporates at a comparable rate. In particular, we find that the upper
atmosphere of HD 189733 b is compact and hot, with a maximum T of
12400$^{+400}_{-300}$ K, with very low mean molecular mass
(H/He=(99.2/0.8)$\pm0.1$), almost fully ionised above 1.1 R$_p$, and with $\dot
M$=(1.1$\pm0.1$)$\times$10$^{11}$ g/s. In contrast, the upper atmosphere of GJ
3470 b is highly extended and relatively cold, with a maximum T of 5100$\pm900$
K, also with very low mean molecular mass (H/He=(98.5/1.5)$^{+1.0}_{-1.5}$),
not strongly ionised and with $\dot M$=(1.9$\pm1.1$)$\times$10$^{11}$ g/s.
Furthermore, our results suggest that the upper atmospheres of giant planets
undergoing hydrodynamic escape tend to have very low mean molecular mass
(H/He$\gtrsim$97/3).
|
In this paper, a new lattice concept called the locally symmetric lattice is
proposed for storage ring light sources. In this new lattice, beta functions
are made locally symmetric about two mirror planes of the lattice cell, and the
phase advances between the two mirror planes satisfy the condition of nonlinear
dynamics cancellation. There are two kinds of locally symmetric lattices,
corresponding to two symmetric representations of lattice cell. In a locally
symmetric lattice, main nonlinear effects caused by sextupoles can be
effectively cancelled within one lattice cell, and generally there can also be
many knobs of sextupoles available for further optimizing the nonlinear
dynamics. Two kinds of locally symmetric lattices are designed for a 2.2 GeV
diffraction-limited storage ring to demonstrate the lattice concept.
|
We derive a new model for neutrino-plasma interactions in an expanding
universe that incorporates the collective effects of the neutrinos on the
plasma constituents. We start from the kinetic description of a multi-species
plasma in the flat Friedmann-Robertson-Walker metric, where the particles are
coupled to neutrinos through the charged- and neutral-current forms of the weak
interaction. We then derive the fluid equations and specialize our model to (a)
the lepton epoch, where we consider a pair electron-positron plasma interacting
with electron (anti-)neutrinos, and (b) after the electron-positron
annihilation, where we model an electron-proton plasma and take the limit of
slow ions and inertia-less electrons to obtain a set of neutrino-electron
magnetohydrodynamics (NEMHD) equations. In both models, the dynamics of the
plasma is affected by the neutrino motion through a ponderomotive force and, as
a result, new terms appear in the induction equation that can act as a source
for magnetic field generation in the early universe. A brief discussion on the
possible applications of our model is proposed.
|
We introduce a spacetime discretization of the Dirac equation that has the
form of a quantum automaton and that is invariant upon changing the
representation of the Clifford algebra, as the Dirac equation itself. Our
derivation follows Dirac's original one: we require that the square of the
discrete Dirac scheme be a discretization of the Klein-Gordon equation.
Contrary to standard lattice gauge theory in discrete time, in which unitarity
needs to be proven, we show that the quantum automaton delivers naturally
unitary Wilson fermions for any choice of Wilson's parameter.
|
In this paper, we study the dynamic regret of online linear quadratic
regulator (LQR) control with time-varying cost functions and disturbances. We
consider the case where a finite look-ahead window of cost functions and
disturbances is available at each stage. The online control algorithm studied
in this paper falls into the category of model predictive control (MPC) with a
particular choice of terminal costs to ensure the exponential stability of MPC.
It is proved that the regret of such an online algorithm decays exponentially
fast with the length of predictions. The impact of inaccurate prediction on
disturbances is also investigated in this paper.
|
We investigate the contributions of the hadronic structure of the neutron to
radiative $O(\alpha E_e/m_N)$ corrections (or the inner $O(\alpha E_e/m_N)$ RC)
to the neutron beta decay, where $\alpha$, $E_e$ and $m_N$ are the
fine-structure constant, the electron energy and the nucleon mass,
respectively. We perform the calculation within the effective quantum field
theory of strong low-energy pion-nucleon interactions described by the linear
$\sigma$-model with chiral $SU(2) \times SU(2)$ symmetry and electroweak
hadron-hadron, hadron-lepton and lepton-lepton interactions for the
electron-lepton family with $SU(2)_L \times U(1)_Y$ symmetry of the Standard
Electroweak Theory (Ivanov et al., Phys. Rev. D99, 093006 (2019)). We show that
after renormalization, carried out in accordance with Sirlin's prescription
(Sirlin, Phys. Rev. 164, 1767 (1967)), the inner $O(\alpha E_e/m_N)$ RC are of
the order of a few parts of $10^{-5} - 10^{-4}$. This agrees well with the
results obtained in (Ivanov et al., Phys. Rev. D99, 093006 (2019)).
|
This paper presents a novel distributed active set method for model
predictive control of linear systems. The method combines a primal active set
strategy with a decentralized conjugate gradient method to solve convex
quadratic programs. An advantage of the proposed method compared to existing
distributed model predictive algorithms is the primal feasibility of the
iterates. Numerical results show that the proposed method can compete with the
alternating direction method of multipliers in terms of communication
requirements for a chain of masses example.
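
A minimal sketch of the conjugate gradient building block, shown in centralized
form (in the paper's decentralized setting, the matrix-vector product would be
evaluated via neighbour-to-neighbour communication):

    import numpy as np

    def conjugate_gradient(matvec, b, tol=1e-10, maxiter=200):
        # Solve H x = b for symmetric positive definite H using only
        # matrix-vector products, e.g. for the equality-constrained QP
        # subproblem of a primal active set iteration.
        x = np.zeros_like(b)
        r = b.copy()
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            Hp = matvec(p)
            alpha = rs / (p @ Hp)
            x += alpha * p
            r -= alpha * Hp
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    H = np.array([[4.0, 1.0], [1.0, 3.0]])
    print(conjugate_gradient(lambda v: H @ v, np.array([1.0, 2.0])))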
|
Olivine and pyroxene are important mineral end-members for studying the surface
material compositions of mafic bodies. The profiles of visible and
near-infrared spectra of olivine-orthopyroxene mixtures vary systematically
with their composition ratios. In our experiments, we combine the RELAB
spectral database with new spectral data obtained from assembled
olivine-orthopyroxene mixtures. We found that the commonly-used band area ratio
(BAR, Cloutis et al. 1986) does not work well on our newly obtained spectral
data. To investigate this issue, an empirical procedure based on the fitted
results of the modified Gaussian model is proposed to analyze the spectral
curves. Following the new empirical procedure, the end-member abundances can be
estimated with 15% accuracy given some prior mineral absorption features. In
addition, the mixture samples prepared in our experiments are also irradiated
by pulsed lasers to simulate and investigate the space weathering effects.
Spectral deconvolution results confirm that low-content olivine on celestial
bodies is difficult to measure and estimate; therefore, the olivine abundance
of space-weathered materials may be underestimated from remote sensing data.
This study may be used to quantify the spectral relationship of
olivine-orthopyroxene mixtures and further reveal the correlation between the
spectra of ordinary chondrites and silicate asteroids.
|
Bilinear pairing is a fundamental operation that is widely used in
cryptographic algorithms (e.g., identity-based cryptographic algorithms) to
secure IoT applications. Nonetheless, the time complexity of bilinear pairing
is $O(n^3)$, making it a very time-consuming operation, especially for
resource-constrained IoT devices. Secure outsourcing of bilinear pairing has
been studied in recent years to enable computationally weak devices to securely
outsource the bilinear pairing to untrustworthy cloud servers. However, the
state-of-art algorithms often require to pre-compute and store some values,
which results in storage burden for devices. In the Internet of Things, devices
are generally with very limited storage capacity. Thus, the existing algorithms
do not fit the IoT well. In this paper, we propose a secure outsourcing
algorithm of bilinear pairings, which does not require pre-computations. In the
proposed algorithm, the outsourcer side's efficiency is significantly improved
compared with executing the original bilinear pairing operation. At the same
time, the privacy of the input and output is ensured. Also, we apply the
Ethereum blockchain in our outsourcing algorithm to enable fair payments, which
ensures that the cloud server gets paid only when it has correctly accomplished the
outsourced work. The theoretical analysis and experimental results show that
the proposed algorithm is efficient and secure.
|
Transformer networks are able to capture patterns in data coming from many
domains (text, images, videos, proteins, etc.) with little or no change to
architecture components. We perform a theoretical analysis of the core
component responsible for signal propagation between elements, i.e. the
self-attention matrix. In practice, this matrix typically exhibits two
properties: (1) it is sparse, meaning that each token only attends to a small
subset of other tokens; and (2) it changes dynamically depending on the input
to the module. With these considerations in mind, we ask the following
question: Can a fixed self-attention module approximate arbitrary sparse
patterns depending on the input? How small is the hidden size $d$ required for
such approximation? We make progress in answering this question and show that
the self-attention matrix can provably approximate sparse matrices, where
sparsity is in terms of a bounded number of nonzero elements in each row and
column. While the parameters of self-attention are fixed, various sparse
matrices can be approximated by only modifying the inputs. Our proof is based
on the random projection technique and uses the seminal Johnson-Lindenstrauss
lemma. Our proof is constructive, enabling us to propose an algorithm for
finding adaptive inputs and fixed self-attention parameters in order to
approximate a given matrix. In particular, we show that, in order to
approximate any sparse matrix up to a given precision defined in terms of
preserving matrix element ratios, $d$ grows only logarithmically with the
sequence length $L$ (i.e. $d = O(\log L)$).
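
For concreteness, a small numpy sketch of the object under study, the
row-stochastic self-attention matrix, which changes with the input while the
projection parameters stay fixed (shapes and initialization are illustrative):

    import numpy as np

    def self_attention_matrix(X, Wq, Wk):
        # A = softmax( (X Wq)(X Wk)^T / sqrt(d) ), applied row-wise.
        d = Wq.shape[1]
        S = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
        S -= S.max(axis=1, keepdims=True)        # numerical stability
        E = np.exp(S)
        return E / E.sum(axis=1, keepdims=True)

    L, d = 8, 4
    rng = np.random.default_rng(1)
    Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    A = self_attention_matrix(rng.normal(size=(L, d)), Wq, Wk)
    print(A.shape, A.sum(axis=1))    # (8, 8); every row sums to 1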
|
We study the optical appearance of a thin accretion disk around compact
objects within the Einstein-Gauss-Bonnet gravity. Considering static
spherically symmetric black holes and naked singularities we search for
characteristic signatures which can arise in the observable images due to the
modification of general relativity. While the images of the Gauss-Bonnet black
holes closely resemble the Schwarzschild black hole, naked singularities
possess a distinctive feature. A series of bright rings are formed in the
central part of the images with observable radiation $10^3$ times larger than
the rest of the flux, making them observationally significant. We elucidate the
physical mechanism which causes the appearance of the central rings, showing
that the image is determined by the light ring structure of the spacetime. In a
certain region of the parametric space the Gauss-Bonnet naked singularities
possess a stable and an unstable light ring. In addition the gravitational
field becomes repulsive in a certain neighbourhood of the singularity. This
combination of features leads to the formation of the central rings implying
that the effect is not specific for the Einstein-Gauss-Bonnet gravity but would
also appear for any other compact object with the same characteristics of the
photon dynamics.
|
The classical Mountain Pass Lemma of Ambrosetti-Rabinowitz has been studied,
extended and modified in several directions. Notable examples would certainly
include the generalization to locally Lipschitz functionals by K.C. Chang,
analyzing the structure of the critical set in the mountain pass theorem in the
works of Hofer, Pucci-Serrin and Tian, and the extension by Ghoussoub-Preiss to
closed subsets in a Banach space with recent variations. In this paper, we
utilize the generalized gradient of Clarke and Ekeland's variational principle
to generalize the Ghoussoub-Preiss theorem in the setting of locally
Lipschitz functionals. We give an application to periodic solutions of
Hamiltonian systems.
|
The stochastic gravitational wave background (SGWB) created by astrophysical
sources in the nearby universe is likely to be anisotropic. Upper limits on
SGWB anisotropy have been produced for all major data taking runs by the
ground-based laser interferometric detectors. However, due to the challenges
involved in numerically inverting the pixel-to-pixel noise covariance matrix,
which is necessary for setting upper limits, the searches accounted for angular
correlations in the map by using the spherical harmonic basis, where
regularization was relatively easier. This approach is, however, better suited
for extended sources. Moreover, the upper limit maps produced in the two
different bases are seemingly different. While the upper limits may be
consistent within statistical errors, it was important to check whether the
results would remain consistent if the full noise covariance matrix were used
in the pixel basis.
Here, we use the full pixel-to-pixel Fisher information matrix to create upper
limit maps of SGWB anisotropy. We first perform an unmodeled search for
persistent, directional gravitational wave sources using folded data from the
first (O1) and second (O2) observing runs of Advanced LIGO and show that the
results are consistent with the upper limits published by the LIGO-Virgo
Collaboration (LVC). We then explore various ways to account for the
pixel-to-pixel Fisher information matrix using singular value decomposition and
Bayesian regularization schemes. We also account for the bias arising from
regularization in the likelihood. We do not find evidence for any SGWB signal
in the data, consistent with the LVC results, though the upper limits
differ significantly. Through an injection study we show that they are all
valid $95\%$ upper limits, that is, the upper limit in a pixel is less than the
injected signal strength in less than $5\%$ of the pixels.
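
A schematic numpy sketch of one regularization scheme mentioned above,
inverting an ill-conditioned Fisher matrix with a truncated-SVD pseudo-inverse
(the toy matrix and cutoff are illustrative, not the search's actual settings):

    import numpy as np

    def regularized_solve(F, d, rel_cut=1e-3):
        # Clean-map estimate x = F^+ d; singular values below
        # rel_cut * s_max are discarded to tame ill-conditioned modes.
        U, s, Vt = np.linalg.svd(F, hermitian=True)
        keep = s > rel_cut * s[0]
        return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

    rng = np.random.default_rng(2)
    B = rng.normal(size=(50, 50))
    F = B @ B.T                         # toy ill-conditioned Fisher matrix
    x_true = rng.normal(size=50)
    x_hat = regularized_solve(F, F @ x_true, rel_cut=1e-12)
    print(np.allclose(x_hat, x_true, atol=1e-3))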
|