We study cosmic evolution based on the fixed points in the dynamical analysis
of the Degenerate Higher-Order Scalar-Tensor (DHOST) theories. We consider the
DHOST theories in which the propagation speed of gravitational waves is equal
to the speed of light, the tensor perturbations do not decay to dark energy
perturbations, and the scaling solutions exist. The scaling fixed point
associated with the late-time acceleration of the universe can be either stable or
saddle depending on the parameters of the theory. For some ranges of the
parameters, the accelerated scaling point and the field-dominated point can be
simultaneously stable. Cosmic evolution will reach the accelerated scaling
point if the time derivative of the scalar field in the theory is positive
during matter domination. If the time derivative of the scalar field is
negative during matter domination, the background universe evolves
towards the field-dominated point. The density parameter of the matter can be
larger than unity before reaching the scaling fixed point if the deviation from
the Einstein theory of gravity is too large and the initial conditions for the
dynamical variables during the matter domination are significantly different
from the accelerated scaling point. The stability of the $\phi$MDE fixed point
is similar to that in coupled dark energy models. In our scenario, the
universe can only evolve from the $\phi$MDE point to the field-dominated point.
|
We study a bottom-up holographic description of the QCD colour
superconducting phase in the presence of higher derivative corrections. We
extend this holographic model to the context of Gauss-Bonnet (GB) gravity. The
Cooper pair condensate is investigated in the deconfined phase for
different values of the GB coupling parameter $\lambda_{GB}$, and we observe a
change in the value of the critical chemical potential $\mu_c$ in comparison to
Einstein gravity. We find that $\mu_c$ grows as $\lambda_{GB}$ increases. We
add four fermion interactions and show that in the presence of these
corrections the main interesting features of the model are still present and
that the intrinsic attractive interaction cannot be switched off. This study
motivates computing GB corrections to the equation of state of holographic QCD matter.
|
This paper proposes a new concept in which a digital twin derived from a
digital product description will automatically perform assembly planning and
orchestrate the production resources in a manufacturing cell. Thus the
manufacturing cell has generic services with minimal assumptions about what
kind of product will be assembled, while the digital product description is
designed collaboratively between the designer at an OEM and automated services
at potential manufacturers. This has several advantages. Firstly, the resulting
versatile manufacturing facility can handle a broad variety of products with
minimal or no reconfiguration effort, so it can cost-effectively offer its
services to a large number of OEMs. Secondly, a solution is presented to the
problem of performing concurrent product design and assembly planning across
organizational boundaries. Thirdly, the product design at the OEM is not
constrained to the capabilities of specific manufacturing facilities. The
concept is presented in general terms in UML and an implementation is provided
in a 3D simulation environment using Automation Markup Language for digital
product descriptions. Finally, two case studies are presented and applications
in a real industrial context are discussed.
|
We present a practical analysis of the fermion sign problem in fermionic path
integral Monte Carlo (PIMC) simulations in the grand-canonical ensemble (GCE).
As a representative model system, we consider electrons in a $2D$ harmonic
trap. We find that the sign problem in the GCE is even more severe than in the
canonical ensemble at the same conditions, which, in general, makes the latter
the preferred option. Despite these difficulties, we show that fermionic PIMC
simulations in the GCE are still feasible in many cases, which potentially
gives access to important quantities like the compressibility or the Matsubara
Green's function. This has important implications for contemporary fields of
research such as warm dense matter, ultracold atoms, and electrons in quantum
dots.
|
We present a general approach to obtain effective field theories for
topological crystalline insulators whose low-energy theories are described by
massive Dirac fermions. We show that these phases are characterized by the
responses to spatially dependent mass parameters with interfaces. These mass
interfaces implement the dimensional reduction procedure such that the state of
interest is smoothly deformed into a topological crystal, which serves as a
representative state of a phase in the general classification. Effective field
theories are obtained by integrating out the massive Dirac fermions, and
various quantized topological terms are uncovered. Our approach can be
generalized to other crystalline symmetry protected topological phases and
provides a general strategy to derive effective field theories for such
crystalline topological phases.
|
We investigate the creation of scalar particles inside a region delimited by
a bubble which is expanding with non-zero acceleration. The bubble is modelled
as a thin shell and plays the role of a moving boundary, thus influencing the
fluctuations of the test scalar field inside it. Bubbles expanding in Minkowski
spacetime as well as those dividing two de Sitter spacetimes are explored in a
unified way. Our results for the Bogoliubov coefficient $\beta_k$ in the
adiabatic approximation show that in all cases the creation of scalar particles
decreases with the mass, and is much more significant in the case of nonzero
curvature. They also show that the dynamics of the bubble and its size are
relevant for particle creation, but in the dS-dS case the combination of both
effects leads to a behaviour different from that of Minkowski space-time, due
to the presence of a length scale (the Hubble radius of the internal geometry).
|
Deep neural networks have emerged as very successful tools for image
restoration and reconstruction tasks. These networks are often trained
end-to-end to directly reconstruct an image from a noisy or corrupted
measurement of that image. To achieve state-of-the-art performance, training on
large and diverse sets of images is considered critical. However, it is often
difficult and/or expensive to collect large amounts of training images.
Inspired by the success of Data Augmentation (DA) for classification problems,
in this paper, we propose a pipeline for data augmentation for accelerated MRI
reconstruction and study its effectiveness at reducing the required training
data in a variety of settings. Our DA pipeline, MRAugment, is specifically
designed to utilize the invariances present in medical imaging measurements,
since naive DA strategies that neglect the physics of the problem fail. Through
extensive studies on multiple datasets we demonstrate that in the low-data
regime DA prevents overfitting and can match or even surpass the state of the
art while using significantly fewer training data, whereas in the high-data
regime it has diminishing returns. Furthermore, our findings show that DA can
improve the robustness of the model against various shifts in the test
distribution.
|
Quantum circuits that are classically simulatable tell us when quantum
computation becomes less powerful than or equivalent to classical computation.
Such classically simulatable circuits are of importance because they illustrate
what makes universal quantum computation different from classical computers. In
this work, we propose a novel family of classically simulatable circuits by
making use of dual-unitary quantum circuits (DUQCs), which have been recently
investigated as exactly solvable models of non-equilibrium physics, and we
characterize their computational power. Specifically, we investigate the
computational complexity of the problem of calculating local expectation values
and the sampling problem of one-dimensional DUQCs, and we generalize them to
two spatial dimensions. We reveal that a local expectation value of a DUQC is
classically simulatable at early times, which are linear in the system length.
In contrast, at late times, they can perform universal quantum computation,
and the problem becomes BQP-complete. Moreover, classical simulation
of sampling from a DUQC turns out to be hard.
|
In this work, we consider a binary classification problem and cast it into a
binary hypothesis testing framework, where the observations can be perturbed by
an adversary. To improve the adversarial robustness of a classifier, we include
an abstain option, where the classifier abstains from making a decision when it
has low confidence about the prediction. We propose metrics to quantify the
nominal performance of a classifier with an abstain option and its robustness
against adversarial perturbations. We show that there exists a tradeoff between
the two metrics regardless of what method is used to choose the abstain region.
Our results imply that the robustness of a classifier with an abstain option
can only be improved at the expense of its nominal performance. Further, we
provide necessary conditions to design the abstain region for a 1-dimensional
binary classification problem. We validate our theoretical results on the MNIST
dataset, where we numerically show that the tradeoff between performance and
robustness also exists for general multi-class classification problems.
|
Off-policy evaluation learns a target policy's value with a historical
dataset generated by a different behavior policy. In addition to a point
estimate, many applications would benefit significantly from having a
confidence interval (CI) that quantifies the uncertainty of the point estimate.
In this paper, we propose a novel deeply-debiasing procedure to construct an
efficient, robust, and flexible CI on a target policy's value. Our method is
justified by theoretical results and numerical experiments. A Python
implementation of the proposed procedure is available at
https://github.com/RunzheStat/D2OPE.
|
The successful amalgamation of cryptocurrency and consumer Internet of Things
(IoT) devices can pave the way for novel applications in machine-to-machine
economy. However, the lack of scalability and heavy resource requirements of
initial blockchain designs hinder this integration, as they prioritized
decentralization and security. Numerous solutions have been proposed since the
emergence of Bitcoin to achieve this goal. However, none of them seem to
dominate and thus it is unclear how consumer devices will be adopting these
approaches. Therefore, in this paper, we critically review the existing
integration approaches and cryptocurrency designs that strive to enable
micro-payments among consumer devices. We identify and discuss solutions under
three main categories: direct integration, payment channel networks, and new
cryptocurrency designs. The first approach utilizes a full node to interact with
the payment system. Offline channel payment is suggested as a second layer
solution to solve the scalability issue and enable instant payment with low
fee. New designs converge to a semi-centralized scheme and focus on lightweight
consensus protocols that do not require high computation power, which might mean
loosening the initial design choices in favor of scalability. We evaluate the
pros and cons of each of these approaches and then point out future
research challenges. Our goal is to help researchers and practitioners to better
focus their efforts to facilitate micro-payment adoption.
|
We show that it is possible to have arbitrarily long sequences of Alices and
Bobs such that every (Alice, Bob) pair violates a Bell inequality. We propose an
experiment to observe this effect with two Alices and two Bobs.
|
One of the main goals of large-scale structure surveys is to test the
consistency of General Relativity at cosmological scales. In the $\Lambda$CDM
model of cosmology, the relations between the fields describing the geometry
and the content of our Universe are uniquely determined. In particular, the two
gravitational potentials -- that describe the spatial and temporal fluctuations
in the geometry -- are equal. Whereas large classes of dark energy models
preserve this equality, theories of modified gravity generally create a
difference between the potentials, known as anisotropic stress. Even though
measuring this anisotropic stress is one of the key goals of large-scale
structure surveys, there are currently no methods able to measure it directly.
Current methods all rely on measurements of galaxy peculiar velocities (through
redshift-space distortions), from which the time component of the metric is
inferred, assuming that dark matter follows geodesics. If this is not the case,
all the proposed tests fail to measure the anisotropic stress. In this letter,
we propose a novel test which directly measures anisotropic stress, without
relying on any assumption about the unknown dark matter. Our method uses
relativistic effects in the galaxy number counts to provide a direct
measurement of the time component of the metric. By comparing this with lensing
observations our test provides a direct measurement of the anisotropic stress.
|
Intensification and poleward expansion of upwelling favourable winds have
been predicted as a response to anthropogenic global climate change and have
recently been documented in most Eastern Boundary Upwelling Ecosystems of the
world. To identify how these processes are impacting nearshore oceanographic
habitats and, especially, long term trends of primary productivity in the
Humboldt Upwelling Ecosystem (HUE), we analysed time series of sea level
pressure, wind stress, sea surface and atmospheric surface temperatures, and
Chlorophyll-a, as a proxy for primary productivity, along 26{\deg} - 36{\deg}
S. We show that climate induced trends in primary productivity are highly
heterogeneous across the region. On the one hand, the well documented poleward
migration of the South Pacific Anticyclone (SPA) has led to decreased spring
upwelling winds in the region between ca. 30{\deg} and 34{\deg} S, and to their
intensification to the south. Decreased winds have produced slight increases in
sea surface temperature and a pronounced and meridionally extensive decrease in
surface Chlorophyll-a in this region of central Chile. To the north of 30{\deg}
S, significant increases in upwelling winds, decreased SST, and enhanced
Chlorophyll-a concentration are observed in the nearshore. We show that this
increase in upwelling-driven coastal productivity is probably produced by the
increased land-sea pressure gradients (Bakun's effect) that have occurred over
the past two decades north of 30{\deg} S. Thus, climate drivers along the HUE
are inducing contrasting trends in oceanographic conditions and primary
productivity, which can have far-reaching consequences for coastal pelagic and
benthic ecosystems and lead to geographic displacements of the major fisheries.
|
Extensions of a set partition obtained by imposing bounds on the size of the
parts and the coloring of some of the elements are examined. Combinatorial
properties and the generating functions of some counting sequences associated
with these partitions are established. Connections with Riordan arrays are
presented.
|
Finding the largest cardinality feasible subset of an infeasible set of
linear constraints is the Maximum Feasible Subsystem problem (MAX FS). Solving
this problem is crucial in a wide range of applications such as machine
learning and compressive sensing. Although MAX FS is NP-hard, useful heuristic
algorithms exist, but these can be slow for large problems. We extend the
existing heuristics for the case of dense constraint matrices to greatly
increase their speed while preserving or improving solution quality. We test
the extended algorithms on two applications that have dense constraint
matrices: binary classification, and sparse recovery in compressive sensing. In
both cases, speed is greatly increased with no loss of accuracy.
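
To make the problem being solved concrete, here is a minimal brute-force sketch of the MAX FS definition itself (not the paper's heuristics); the function names and the toy system are ours, and it assumes SciPy's linprog is available for feasibility checks.

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def subset_feasible(A, b, rows):
    # Feasibility of the subsystem A[rows] x <= b[rows], checked with a zero-objective LP.
    res = linprog(c=np.zeros(A.shape[1]),
                  A_ub=A[list(rows)], b_ub=b[list(rows)],
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 0  # status 0: a feasible (optimal) point was found

def max_fs_bruteforce(A, b):
    # Exponential-time illustration only: try subsets from largest to smallest.
    m = A.shape[0]
    for size in range(m, 0, -1):
        for rows in combinations(range(m), size):
            if subset_feasible(A, b, rows):
                return rows
    return ()

# Toy infeasible system: x <= 1, -x <= -2 (i.e. x >= 2), x <= 3; rows 0 and 1 conflict.
A = np.array([[1.0], [-1.0], [1.0]])
b = np.array([1.0, -2.0, 3.0])
print(max_fs_bruteforce(A, b))  # (0, 2): x <= 1 and x <= 3 admit a common solution

The heuristics studied in the paper avoid this exponential enumeration; the sketch only fixes the problem statement.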
|
Simulations of high energy density physics are expensive in terms of
computational resources. In particular, the computation of opacities of plasmas
in the non-local thermal equilibrium (NLTE) regime can consume as much as 90\%
of the total computational time of radiation hydrodynamics simulations for high
energy density physics applications. Previous work has demonstrated that a
combination of fully-connected autoencoders and a deep jointly-informed neural
network (DJINN) can successfully replace the standard NLTE calculations for the
opacity of krypton. This work expands this idea to combining multiple elements
into a single surrogate model with the focus here being on the autoencoder.
|
We consider the minimal seesaw model, the Standard Model extended by two
right-handed neutrinos, for explaining the neutrino masses and mixing angles
measured in oscillation experiments. When one of the right-handed neutrinos is
lighter than the electroweak scale, it can give a sizable contribution to
neutrinoless double beta ($0\nu \beta \beta$) decay. We show that the detection
of the $0 \nu \beta \beta$ decay by future experiments has significant
implications for the search for such a light right-handed neutrino.
|
In stochastic thermodynamics, heat is a random variable with an associated
probability distribution. Studies of the heat distribution have mostly been
confined to the overdamped regime. Here we solve for the heat distribution in the
underdamped regime for three different cases: the free particle, the linear
potential, and the harmonic potential. The results are exact and generalize
known results in the literature.
|
For safety of autonomous driving, vehicles need to be able to drive under
various lighting, weather, and visibility conditions in different environments.
These external and environmental factors, along with internal factors
associated with sensors, can pose significant challenges to perceptual data
processing, hence affecting the decision-making and control of the vehicle. In
this work, we address this critical issue by introducing a framework for
analyzing the robustness of the learning algorithm w.r.t. varying quality in the
image input for autonomous driving. Using the results of sensitivity analysis,
we further propose an algorithm to improve the overall performance of the task
of "learning to steer". The results show that our approach is able to enhance
the learning outcomes up to 48%. A comparative study drawn between our approach
and other related techniques, such as data augmentation and adversarial
training, confirms the effectiveness of our algorithm as a way to improve the
robustness and generalization of neural network training for autonomous
driving.
|
Semi-supervised domain adaptation (SSDA) aims to solve tasks in target domain
by utilizing transferable information learned from the available source domain
and a few labeled target data. However, source data is not always accessible in
practical scenarios, which restricts the application of SSDA in real world
circumstances. In this paper, we propose a novel task named Semi-supervised
Source Hypothesis Transfer (SSHT), which performs domain adaptation based on
a source-trained model, to generalize well in the target domain with a few
labeled target samples. In SSHT, we face two challenges: (1) The insufficient
labeled target data may result in target features near the decision boundary,
with an increased risk of misclassification; (2) The data are usually
imbalanced in source domain, so the model trained with these data is biased.
The biased model is prone to categorize samples of minority categories into
majority ones, resulting in low prediction diversity. To tackle the above
issues, we propose Consistency and Diversity Learning (CDL), a simple but
effective framework for SSHT by facilitating prediction consistency between two
randomly augmented unlabeled data and maintaining the prediction diversity when
adapting the model to the target domain. Encouraging consistency regularization
makes it harder to memorize the few labeled target data and thus enhances the
generalization ability of the learned model. We further integrate Batch
Nuclear-norm Maximization into our method to enhance the discriminability and
diversity. Experimental results show that our method outperforms existing SSDA
methods and unsupervised model adaptation methods on DomainNet, Office-Home and
Office-31 datasets. The code is available at
https://github.com/Wang-xd1899/SSHT.
|
Dual function radar communications (DFRC) systems are attractive technologies
for autonomous vehicles, which utilize electromagnetic waves to constantly
sense the environment while simultaneously communicating with neighbouring
devices. An emerging approach to implement DFRC systems is to embed information
in radar waveforms via index modulation (IM). Implementation of DFRC schemes in
vehicular systems gives rise to strict constraints in terms of cost, power
efficiency, and hardware complexity. In this paper, we extend IM-based DFRC
systems to utilize sparse arrays and frequency modulated continuous waveforms
(FMCWs), which are popular in automotive radar for their simplicity and low
hardware complexity. The proposed FMCW-based radar-communications system (FRaC)
operates at reduced cost and complexity by transmitting with a reduced number
of radio frequency modules, combined with narrowband FMCW signalling. This is
achieved via array sparsification in transmission, formulating a virtual
multiple-input multiple-output array by combining the signals in one coherent
processing interval, in which the narrowband waveforms are transmitted in a
randomized manner. Performance analysis and numerical results show that the
proposed radar scheme achieves similar resolution performance compared with a
wideband radar system operating with a large receive aperture, while requiring
less hardware overhead. For the communications subsystem, FRaC achieves higher
rates and improved error rates compared to dual-function signalling based on
conventional phase modulation.
|
We deploy and demonstrate the capabilities of the magnetic field model
developed by Ewertowski & Basu (2013) by fitting observed polarimetry data of
the prestellar core FeSt 1-457. The analytic hourglass magnetic field function
derived directly from Maxwell's equations yields a central-to-surface magnetic
field strength ratio in the equatorial plane, as well as magnetic field
directions with relative magnitudes throughout the core. This fit emerges from
a comparison of a single plane of the model with the polarization map that
results from the integrated properties of the magnetic field and dust
throughout the core. Importantly, our fit is independent of any assumed density
profile of the core. We check the robustness of the fit by using the POLARIS
code to create synthetic polarization maps that result from the integrated
scattering and emission properties of the dust grains and their radiative
transfer, employing an observationally-motivated density profile. We find that
the synthetic polarization maps obtained from the model also provide a good
fit to the observed polarimetry. Our model fits the striking feature of
significant curvature of magnetic field lines in the outer part of FeSt 1-457.
Combined with independent column density estimates, we infer that the core of
size $R_{\rm gas}$ has a mildly supercritical mass-to-flux ratio and may have
formed through dynamical motions starting from a significantly larger radius
$R$. A breakdown of flux-freezing through neutral-ion slip (ambipolar
diffusion) could be responsible for effecting such a transition from a
large-scale magnetic field structure to a more compact gas structure.
|
Uncertainty relations play a crucial role in quantum mechanics. Well-defined
methods exist for the derivation of such uncertainties for pairs of
observables. Specific methods also allow one to obtain time-energy uncertainty
relations. However, in these cases, different approaches are associated with
different meanings and interpretations. The one of interest here revolves
around the question of whether quantum mechanics inherently imposes a fundamental
minimum duration for energy measurements with a certain precision. In our
study, we investigate within the Page and Wootters timeless framework how
energy measurements modify the relative "flow of time" between internal and
external clocks. This provides a unified framework for discussing the subject,
allowing us to recover previous results and derive new ones. In particular, we
show that an energy measurement carried out by an external
system cannot be performed arbitrarily fast from the perspective of the
internal clock. Moreover, we show that during any energy measurement the
evolution given by the internal clock is non-unitary.
|
The recent success of deep neural networks (DNNs) hinges on the availability of
large-scale datasets; however, training on such datasets often poses privacy
risks for sensitive training information. In this paper, we aim to explore the
power of generative models and gradient sparsity, and propose a scalable
privacy-preserving generative model DATALENS. Compared with the standard PATE
privacy-preserving framework which allows teachers to vote on one-dimensional
predictions, voting on the high dimensional gradient vectors is challenging in
terms of privacy preservation. As dimension reduction techniques are required,
we need to navigate a delicate tradeoff space between (1) the improvement of
privacy preservation and (2) the slowdown of SGD convergence. To tackle this,
we take advantage of communication efficient learning and propose a novel noise
compression and aggregation approach TOPAGG by combining top-k compression for
dimension reduction with a corresponding noise injection mechanism. We
theoretically prove that the DATALENS framework guarantees differential privacy
for its generated data, and provide analysis on its convergence. To demonstrate
the practical usage of DATALENS, we conduct extensive experiments on diverse
datasets including MNIST, Fashion-MNIST, and high dimensional CelebA, and we
show that, DATALENS significantly outperforms other baseline DP generative
models. In addition, we adapt the proposed TOPAGG approach, which is one of the
key building blocks in DATALENS, to DP SGD training, and show that it is able
to achieve higher utility than the state-of-the-art DP SGD approach in most
cases. Our code is publicly available at https://github.com/AI-secure/DataLens.
|
We discuss a mechanism where charged lepton masses are derived from one-loop
diagrams mediated by particles in a dark sector including a dark matter
candidate. We focus on a scenario where the muon and electron masses are
generated at one loop with new ${\cal O}(1)$ Yukawa couplings. The measured
muon anomalous magnetic dipole moment, $(g-2)_\mu$, can be explained in this
framework. As an important prediction, the muon and electron Yukawa couplings
can largely deviate from their standard model predictions, and such deviations
can be tested at High-Luminosity LHC and future $e^+e^-$ colliders.
|
We introduce a new combinatorial principle which we call $\clubsuit_{AD}$.
This principle asserts the existence of a certain multi-ladder system with
guessing and almost-disjointness features, and is shown to be sufficient for
carrying out de Caux type constructions of topological spaces.
Our main result states that strong instances of $\clubsuit_{AD}$ follow from
the existence of a Souslin tree. It is also shown that the weakest instance of
$\clubsuit_{AD}$ does not follow from the existence of an almost Souslin tree.
As an application, we obtain a simple, de Caux type proof of Rudin's result
that if there is a Souslin tree, then there is an $S$-space which is Dowker.
|
In this paper, we study the pattern occurrence in $k$-ary words. We prove an
explicit upper bound on the number of $k$-ary words avoiding any given pattern
using a random walk argument. Additionally, we reproduce several already known
results and establish a simple connection between pattern occurrences in
permutations and $k$-ary words. A simple consequence of this connection is that
Wilf-equivalence of two patterns in words implies their Wilf-equivalence in
permutations.
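
As an illustration of the objects being counted, the following brute-force Python sketch (ours, assuming the classical notion of pattern containment in words, i.e. a subsequence order-isomorphic to the pattern, equalities included) enumerates the $k$-ary words of length $n$ that avoid a given pattern.

from itertools import combinations, product

def order_pattern(seq):
    # Order-isomorphism type of a sequence, e.g. (5, 2, 5) -> (2, 1, 2).
    ranks = {v: i + 1 for i, v in enumerate(sorted(set(seq)))}
    return tuple(ranks[v] for v in seq)

def contains(word, pattern):
    # Does `word` contain a subsequence order-isomorphic to `pattern`?
    target = order_pattern(pattern)
    return any(order_pattern([word[i] for i in idx]) == target
               for idx in combinations(range(len(word)), len(pattern)))

def count_avoiders(k, n, pattern):
    # Number of k-ary words of length n (letters 1..k) avoiding `pattern`.
    return sum(1 for w in product(range(1, k + 1), repeat=n)
               if not contains(w, pattern))

print(count_avoiders(k=3, n=4, pattern=(1, 2, 1)))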
|
We propose a framework to model an operational conversational negation by
applying worldly context (prior knowledge) to logical negation in compositional
distributional semantics. Given a word, our framework can create its negation
that is similar to how humans perceive negation. The framework corrects logical
negation to weight meanings closer in the entailment hierarchy more than
meanings further apart. The proposed framework is flexible to accommodate
different choices of logical negations, compositions, and worldly context
generation. In particular, we propose and motivate a new logical negation using
matrix inverse.
We validate the sensibility of our conversational negation framework by
performing experiments, leveraging density matrices to encode graded entailment
information. We conclude that the combination of subtraction negation and
phaser in the basis of the negated word yields the highest Pearson correlation
of 0.635 with human ratings.
|
In recent years, an intensive study of strong approximation of stochastic
differential equations (SDEs) with a drift coefficient that may have
discontinuities in space has begun. In many of these results it is assumed that
the drift coefficient satisfies piecewise regularity conditions and the
diffusion coefficient is Lipschitz continuous and non-degenerate at the
discontinuity points of the drift coefficient. For scalar SDEs of that type the
best $L_p$-error rate known so far for approximation of the solution at the
final time point is $3/4$ in terms of the number of evaluations of the driving
Brownian motion and it is achieved by the transformed equidistant
quasi-Milstein scheme, see [M\"uller-Gronbach, T., and Yaroslavtseva, L., A
strong order 3/4 method for SDEs with discontinuous drift coefficient, to
appear in IMA Journal of Numerical Analysis]. Recently in [M\"uller-Gronbach,
T., and Yaroslavtseva, L., Sharp lower error bounds for strong approximation of
SDEs with discontinuous drift coefficient by coupling of noise,
arXiv:2010.00915 (2020)] it has been shown that for such SDEs the $L_p$-error
rate $3/4$ cannot be improved in general by any numerical method based on
evaluations of the driving Brownian motion at fixed time points. In the present
article we construct for the first time in the literature a method based on
sequential evaluations of the driving Brownian motion, which achieves an
$L_p$-error rate of at least $1$ in terms of the average number of evaluations
of the driving Brownian motion for such SDEs.
|
Due to the strong coupling between magnetism and ferroelectricity,
$(\mathrm{ND}_4)_2\mathrm{FeCl}_5\cdot\mathrm{D}_2\mathrm{O}$ exhibits several
intriguing magnetic and electric phases. In this letter, we include high-order
onsite spin anisotropic interactions in a spin model that successfully captures
the ferroelectric phase transitions of
$(\mathrm{ND}_4)_2\mathrm{FeCl}_5\cdot\mathrm{D}_2\mathrm{O}$ under a magnetic
field and produces the large weights of high-order harmonic components in the
cycloid structure that are observed from neutron diffraction experiments.
Moreover, we predict a new ferroelectric phase sandwiched between the FE II and
FE III phases in a magnetic field. Our results emphasize the importance of the
high-order spin anisotropic interactions and provide a guideline to understand
multiferroic materials with rich phase diagrams.
|
Continued population growth and urbanization are shifting research to consider
the quality of urban green space over the quantity of these parks, woods, and
wetlands. The quality of urban green space has been hitherto measured by expert
assessments, including in-situ observations, surveys, and remote sensing
analyses. Location data platforms, such as TripAdvisor, can provide people's
opinions on many destinations and experiences, including urban green space (UGS). This paper
leverages Artificial Intelligence techniques for opinion mining and text
classification using such platforms' reviews as a novel approach to urban green
space quality assessments. Natural Language Processing is used to analyze
contextual information given supervised scores of words by implementing
computational analysis. Such an application can support local authorities and
stakeholders in their understanding of and justification for future investments
in urban green space.
|
As highly data-driven applications, recommender systems can be affected by
data bias, resulting in unfair results for different data groups, which in turn
can degrade the overall system performance. Therefore, it is important to
identify and solve the unfairness issues in recommendation scenarios. In this
paper, we address the unfairness problem in recommender systems from the user
perspective. We group users into advantaged and disadvantaged groups according
to their level of activity, and conduct experiments to show that current
recommender systems will behave unfairly between two groups of users.
Specifically, the advantaged users (active) who only account for a small
proportion in data enjoy much higher recommendation quality than those
disadvantaged users (inactive). Such bias can also affect the overall
performance since the disadvantaged users are the majority. To solve this
problem, we provide a re-ranking approach to mitigate this unfairness problem
by adding constraints over evaluation metrics. The experiments we conducted on
several real-world datasets with various recommendation algorithms show that
our approach can not only improve group fairness of users in recommender
systems, but also achieve better overall recommendation performance.
|
Humans race drones faster than algorithms, despite being limited to a fixed
camera angle, body rate control, and response latencies in the order of
hundreds of milliseconds. A better understanding of the ability of human pilots
to select appropriate motor commands from highly dynamic visual information
may provide key insights for solving current challenges in vision-based
autonomous navigation. This paper investigates the relationship between human
eye movements, control behavior, and flight performance in a drone racing task.
We collected a multimodal dataset from 21 experienced drone pilots using a
highly realistic drone racing simulator, also used to recruit professional
pilots. Our results show task-specific improvements in drone racing performance
over time. In particular, we found that eye gaze tracks future waypoints (i.e.,
gates), with first fixations occurring on average 1.5 seconds and 16 meters
before reaching the gate. Moreover, human pilots consistently looked at the
inside of the future flight path for lateral (i.e., left and right turns) and
vertical maneuvers (i.e., ascending and descending). Finally, we found a strong
correlation between pilots' eye movements and the commanded direction of
quadrotor flight, with an average visual-motor response latency of 220 ms.
These results highlight the importance of coordinated eye movements in
human-piloted drone racing. We make our dataset publicly available.
|
We investigate formation of Bose-Einstein condensates under non-equilibrium
conditions using numerical simulations of the three-dimensional
Gross-Pitaevskii equation. For this, we set initial random weakly nonlinear
excitations and the forcing at high wave numbers, and study propagation of the
turbulent spectrum toward the low wave numbers. Our primary goal is to compare
the results for the evolving spectrum with the previous results obtained for
the kinetic equation of weak wave turbulence. We demonstrate the existence of a
regime for which good agreement with the wave turbulence results is found in
terms of the main features of the previously discussed self-similar solution.
In particular, we find a reasonable agreement with the low-frequency and the
high-frequency power-law asymptotics of the evolving solution, including the
anomalous power-law exponent $x^* \approx 1.24$ for the three-dimensional
waveaction spectrum. We also study the regimes of very weak turbulence, when
the evolution is affected by the discreteness of the Fourier space, and the
strong turbulence regime, when the emerging condensate modifies the wave dynamics
and leads to formation of strongly nonlinear filamentary vortices.
|
Development of memory devices with ultimate performance has played a key role
in the innovation of modern electronics. As a mainstream technology, nonvolatile
memory devices have manifested high capacity and mechanical reliability;
however, current major bottlenecks include a low extinction ratio and slow
operational speed. Although substantial effort has been devoted to improving
their performance, typical write times of hundreds of micro- or even milliseconds
remain a few orders of magnitude longer than those of their volatile counterparts.
We have demonstrated nonvolatile, floating-gate memory devices based on van der
Waals heterostructures with atomically sharp interfaces between different
functional elements, and achieved ultrahigh-speed programming/erasing
operations verging on an ultimate theoretical limit of nanoseconds with
extinction ratio up to 10^10. This extraordinary performance has allowed new
device capabilities such as multi-bit storage, thus opening up unforeseen
applications in the realm of modern nanoelectronics and offering future
fabrication guidelines for device scale-up.
|
Optical bistability has been studied theoretically in a multi-mode
optomechanical system with two mechanical oscillators independently coupled to
two cavities in addition to direct tunnel coupling between cavities. It is
proved that the bistable behavior of mean intracavity photon number in the
right cavity can be tuned by adjusting the strength of the pump laser beam
driving the left cavity. In addition, the mean intracavity photon number is
larger in the red-sideband regime than in the blue-sideband regime.
Moreover, we have shown that double optical bistability of the intracavity
photon number in the right cavity and of the two steady-state positions of the mechanical
resonators can be observed when the control field power is increased to a
critical value. Besides, the critical values for observing bistability and
double bistability can be tuned by adjusting the coupling coefficient between
two cavities and the coupling rates between the cavity modes and the mechanical modes.
|
Let $1<g_1<\ldots<g_{\varphi(p-1)}<p-1$ be the ordered primitive roots
modulo~$p$. We study the pseudorandomness of the binary sequence $(s_n)$
defined by $s_n\equiv g_{n+1}+g_{n+2}\bmod 2$, $n=0,1,\ldots$. In particular,
we study the balance, linear complexity and $2$-adic complexity of $(s_n)$. We
show that for a typical $p$ the sequence $(s_n)$ is quite unbalanced. However,
there are still infinitely many $p$ such that $(s_n)$ is very balanced. We also
prove similar results for the distribution of longer patterns. Moreover, we
give general lower bounds on the linear complexity and $2$-adic complexity
of~$(s_n)$ and state sufficient conditions for attaining their maximums. Hence,
for carefully chosen $p$, these sequences are attractive candidates for
cryptographic applications.
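
For concreteness, a minimal Python sketch (helper names are ours) that lists the ordered primitive roots modulo a prime $p$ and forms the binary sequence $(s_n)$ defined above:

def prime_factors(n):
    # Distinct prime factors of n.
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def primitive_roots(p):
    # Ordered primitive roots 1 < g < p-1 modulo the odd prime p.
    phi = p - 1
    qs = prime_factors(phi)
    return [g for g in range(2, p - 1)
            if all(pow(g, phi // q, p) != 1 for q in qs)]

def s_sequence(p):
    # s_n = (g_{n+1} + g_{n+2}) mod 2, n = 0, 1, ...
    g = primitive_roots(p)
    return [(g[n] + g[n + 1]) % 2 for n in range(len(g) - 1)]

print(s_sequence(13))  # primitive roots mod 13 are 2, 6, 7, 11, so the output is [0, 1, 0]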
|
We study a frequency-modulated quantum harmonic oscillator as a thermodynamic
system. For this purpose, we introduce an `invariant' thermal state by using
Ermakov-Lewis-Riesenfeld invariant in place of an initial state. This
prescription enables us to analyze the thermodynamics of the oscillator system
regardless of whether the process is slowly varying (adiabatic) or not
(nonadiabatic). We introduce a quantity $\mathscr{S}$ that describes the
`nonadiabaticity' contribution satisfactorily. We write down the thermodynamics
of the oscillator system by using this quantity in addition to the ordinary
thermodynamical ones. As a result, we extend the first law of thermodynamics to
nonadiabatic processes. We discuss the universality of the method and some
possible applications. In short, we suggest a schematic procedure for obtaining
a measure of the `degree of nonadiabaticity' and present an application to the
thermodynamics of the squeezed quantum oscillators.
|
Representation learning has been widely studied in the context of
meta-learning, enabling rapid learning of new tasks through shared
representations. Recent works such as MAML have explored using
fine-tuning-based metrics, which measure the ease with which fine-tuning can
achieve good performance, as proxies for obtaining representations. We present
a theoretical framework for analyzing representations derived from a MAML-like
algorithm, assuming the available tasks use approximately the same underlying
representation. We then provide risk bounds on the best predictor found by
fine-tuning via gradient descent, demonstrating that the algorithm can provably
leverage the shared structure. The upper bound applies to general function
classes, which we demonstrate by instantiating the guarantees of our framework
in the logistic regression and neural network settings. In contrast, we
establish the existence of settings where any algorithm, using a representation
trained with no consideration for task-specific fine-tuning, performs as well
as a learner with no access to source tasks in the worst case. This separation
result underscores the benefit of fine-tuning-based methods, such as MAML, over
methods with "frozen representation" objectives in few-shot learning.
|
Network pruning is an effective method to reduce the computational expense of
over-parameterized neural networks for deployment on low-resource systems.
Recent state-of-the-art techniques for retraining pruned networks such as
weight rewinding and learning rate rewinding have been shown to outperform the
traditional fine-tuning technique in recovering the lost accuracy (Renda et
al., 2020), but so far it is unclear what accounts for such performance. In
this work, we conduct extensive experiments to verify and analyze the uncanny
effectiveness of learning rate rewinding. We find that the reason behind the
success of learning rate rewinding is the usage of a large learning rate.
A similar phenomenon can be observed in other learning rate schedules that
involve large learning rates, e.g., the 1-cycle learning rate schedule (Smith
et al., 2019). By leveraging the right learning rate schedule in retraining, we
demonstrate a counter-intuitive phenomenon in that randomly pruned networks
could even achieve better performance than methodically pruned networks
(fine-tuned with the conventional approach). Our results emphasize the
crucial role of the learning rate schedule in pruned network retraining - a
detail often overlooked by practitioners during the implementation of network
pruning. One-sentence Summary: We study the effectiveness of different retraining
mechanisms used in network pruning.
|
For broad nanoscale applications, it is crucial to implement more functional
properties, especially those ferroic orders, into two-dimensional materials.
Here GdI$_3$ is theoretically identified as a honeycomb antiferromagnet with
large $4f$ magnetic moment. The intercalation of metal atoms can dope electrons
into Gd's $5d$-orbitals, which alters its magnetic state and leads to a Peierls
transition. Due to the strong electron-phonon coupling, the Peierls transition
induces prominent ferroelasticity, making it a multiferroic system. The strain
from unidirectional stretching can be self-relaxed via resizing of the triple
ferroelastic domains, which can protect the magnet against mechanical breaking
in flexible applications.
|
The temperature scales of screening of local magnetic and orbital moments are
important characteristics of strongly correlated substances. In a recent paper
X. Deng et al., using dynamical mean-field theory (DMFT), have identified
temperature scales of the onset of screening in orbital and spin channels in
some correlated metals from the deviation of temperature dependence of local
susceptibility from the Curie law. We argue that the scales obtained this way
are in fact much larger than the corresponding Kondo temperatures and,
therefore, do not characterize the screening process. By reanalyzing the
results of this paper we find the characteristic (Kondo) temperatures for
screening in the spin channel $T_K\approx 100$ K for V$_2$O$_3$ and $T_K\approx
350$ K for Sr$_2$RuO$_4$, which are almost an order of magnitude smaller than
those for the onset of the screening estimated in the paper ($1000$ K and
$2300$ K, respectively); for V$_2$O$_3$ the obtained temperature scale $T_K$ is
therefore comparable to the temperature of completion of the screening, $T^{\rm
comp}\sim 25$ K, which shows that the screening in this material can be
described in terms of a single temperature scale.
|
We present a multi-line survey of the interstellar medium (ISM) in two $z>6$
quasar (QSO) host galaxies, PJ231-20 ($z=6.59$) and PJ308-21 ($z=6.23$), and
their two companion galaxies. Observations were carried out using the Atacama
Large (sub-)Millimeter Array (ALMA). We targeted eleven transitions including
atomic fine structure lines (FSLs) and molecular lines: [NII]$_{\rm 205\mu m}$,
[CI]$_{\rm 369\mu m}$, CO ($J_{\rm up} = 7, 10, 15, 16$), H$_2$O
$3_{12}-2_{21}$, $3_{21}-3_{12}$, $3_{03}-2_{12}$, and the OH$_{\rm 163\mu m}$
doublet. The underlying far-infrared (FIR) continuum samples the Rayleigh-Jeans
tail of the respective dust emission. By combining this information with our
earlier ALMA [CII]$_{\rm 158\mu m}$ observations, we explore the effects of
star formation and black hole feedback on the galaxies' ISM using the CLOUDY
radiative transfer models. We estimate dust masses, spectral indexes, IR
luminosities, and star-formation rates from the FIR continuum. The analysis of
the FSLs indicates that the [CII]$_{\rm 158\mu m}$ and [CI]$_{\rm 369\mu m}$
emission arises predominantly from the neutral medium in photodissociation
regions (PDRs). We find that line deficits are in agreement with those of local
luminous infrared galaxies. The CO spectral line energy distributions (SLEDs)
reveal significant high-$J$ CO excitation in both quasar hosts. Our CO SLED
modeling of the quasar PJ231-20 shows that PDRs dominate the molecular mass and
CO luminosities for $J_{\rm up}\le 7$, while the $J_{\rm up}\ge10$ CO emission
is likely driven by X-ray dissociation regions produced by the active galactic
nucleus (AGN) at the very center of the quasar host [abridged].
|
The central idea of this review is to consider quantum field theory models
relevant for particle physics and replace the fermionic matter in these models
by a bosonic one. This is mostly motivated by the fact that bosons are more
``accessible'' and easier to manipulate for experimentalists, but this
``substitution'' also leads to new physics and novel phenomena. It allows us to
gain new information about among other things confinement and the dynamics of
the deconfinement transition. We will thus consider bosons in dynamical
lattices corresponding to the bosonic Schwinger or Z$_2$ Bose-Hubbard models.
Another central idea of this review concerns atomic simulators of paradigmatic
models of particle physics theory such as the Creutz-Hubbard ladder, or
Gross-Neveu-Wilson and Wilson-Hubbard models. Finally, we will briefly describe
our efforts to design experimentally friendly simulators of these and other
models relevant for particle physics.
|
How can neural networks trained by contrastive learning extract features from
the unlabeled data? Why does contrastive learning usually need much stronger
data augmentations than supervised learning to ensure good representations?
These questions involve both the optimization and statistical aspects of deep
learning, but can hardly be answered by analyzing supervised learning, where
the target functions are the ultimate goal. Indeed, in self-supervised
learning, it is inevitable to relate the optimization/generalization of
neural networks to how they can encode the latent structures in the data, which
we refer to as the feature learning process.
In this work, we formally study how contrastive learning learns the feature
representations for neural networks by analyzing its feature learning process.
We consider the case where our data are composed of two types of features: the
more semantically aligned sparse features which we want to learn from, and the
other dense features we want to avoid. Theoretically, we prove that contrastive
learning using $\mathbf{ReLU}$ networks provably learns the desired sparse
features if proper augmentations are adopted. We present an underlying
principle called $\textbf{feature decoupling}$ to explain the effects of
augmentations, where we theoretically characterize how augmentations can reduce
the correlations of dense features between positive samples while keeping the
correlations of sparse features intact, thereby forcing the neural networks to
learn from the self-supervision of sparse features. Empirically, we verified
that the feature decoupling principle matches the underlying mechanism of
contrastive learning in practice.
|
In this paper, we propose a transformer based approach for visual grounding.
Unlike previous proposal-and-rank frameworks that rely heavily on pretrained
object detectors or proposal-free frameworks that upgrade an off-the-shelf
one-stage detector by fusing textual embeddings, our approach is built on top
of a transformer encoder-decoder and is independent of any pretrained detectors
or word embedding models. Termed VGTR -- Visual Grounding with TRansformers,
our approach is designed to learn semantic-discriminative visual features under
the guidance of the textual description without harming their localization ability.
This information flow enables our VGTR to have a strong capability in capturing
context-level semantics of both vision and language modalities, enabling us to
aggregate accurate visual cues implied by the description to locate the
object instance of interest. Experiments show that our method outperforms
state-of-the-art proposal-free approaches by a considerable margin on five
benchmarks while maintaining fast inference speed.
|
Microservices have become popular in the past few years, attracting the
interest of both academia and industry. Despite its benefits, this new
architectural style still poses important challenges, such as resilience,
performance and evolution. Self-adaptation techniques have been applied
recently as an alternative to solve or mitigate those problems. However, due to
the range of quality attributes that affect microservice architectures, many
different self-adaptation strategies can be used. Thus, to understand the
state-of-the-art of the use of self-adaptation techniques and mechanisms in
microservice-based systems, this work conducted a systematic mapping, in which
21 primary studies were analyzed considering qualitative and quantitative
research questions. The results show that most studies focus on the Monitor
phase (28.57%) of the adaptation control loop, address the self-healing
property (23.81%), apply a reactive adaptation strategy (80.95%) in the system
infrastructure level (47.62%) and use a centralized approach (38.10%). From
those, it was possible to propose some research directions to fill existing
gaps.
|
We recently found the globular cluster (GC) EXT8 in M31 to have an extremely
low metallicity of [Fe/H]=-2.91+/-0.04 using high-resolution spectroscopy. Here
we present a colour-magnitude diagram (CMD) for EXT8, obtained with the Wide
Field Camera 3 on board the Hubble Space Telescope. Compared with the CMDs of
metal-poor Galactic GCs, we find that the upper red giant branch (RGB) of EXT8
is about 0.03 mag bluer in F606W-F814W and slightly steeper, as expected from
the low spectroscopic metallicity. The observed colour spread on the upper RGB
is consistent with being caused entirely by the measurement uncertainties, and
we place an upper limit of sigma(F606W-F814W)=0.015 mag on any intrinsic colour
spread. The corresponding metallicity spread can be up to sigma([Fe/H])=0.2 dex
or >0.7 dex, depending on the isochrone library adopted. The horizontal branch
(HB) is located mostly on the blue side of the instability strip and has a tail
extending to at least M(F606W)=+3, as in the Galactic GC M15. We identify two
candidate RR Lyrae variables and several UV-luminous post-HB/post AGB star
candidates, including one very bright (M(F300X)=-3.2) source near the centre of
EXT8. The surface brightness of EXT8 out to a radius of 25 arcsec is well
fitted by a Wilson-type profile with an ellipticity of epsilon=0.20, a
semi-major axis core radius of 0.25", and a central surface brightness of 15.2
mag per square arcsec in the F606W band, with no evidence of extra-tidal
structure. Overall, EXT8 has properties consistent with it being a "normal",
but very metal-poor GC, and its combination of relatively high mass and very
low metallicity thus remains challenging to explain in the context of GC
formation theories operating within the hierarchical galaxy assembly paradigm.
|
The next-generation non-volatile memory (NVM) is striding into computer
systems as a new tier as it incorporates both DRAM's byte-addressability and
disk's persistency. Researchers and practitioners have considered building
persistent memory by placing NVM on the memory bus for CPU to directly load and
store data. As a result, cache-friendly data structures have been developed for
NVM. One of them is the prevalent B+-tree. State-of-the-art in-NVM B+-trees
mainly focus on the optimization of write operations (insertion and deletion).
However, search is of vital importance for B+-tree. Not only search-intensive
workloads benefit from an optimized search, but insertion and deletion also
rely on a preceding search operation to proceed. In this paper, we carefully
study a sorted B+-tree node that spans contiguous cache lines. Such cache
lines exhibit a monotonically increasing key trend, and searching for a target key
across them can be accelerated by estimating a range the key falls into. To do
so, we construct a probing Sentinel Array in which a sentinel stands for each
cache line of B+-tree node. Checking the Sentinel Array avoids scanning
unnecessary cache lines and hence significantly reduces cache misses for a
search. A quantitative evaluation shows that using Sentinel Arrays boosts the
search performance of state-of-the-art in-NVM B+-trees by up to 48.4%, while the
cost of maintaining the Sentinel Array is low.
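
To illustrate the idea, here is a minimal Python sketch of a sentinel-probed search (our own simplification, not the paper's in-NVM implementation; the cache-line capacity is an illustrative assumption): the sentinel array stores the first key of each cache-line-sized chunk of a sorted node, so a lookup first probes the sentinels to pick a single chunk and then scans only that chunk.

from bisect import bisect_right

LINE_CAPACITY = 8  # illustrative keys-per-cache-line; an assumption of this sketch

def build_sentinels(sorted_keys):
    # One sentinel per cache line: the smallest key stored in that line.
    return [sorted_keys[i] for i in range(0, len(sorted_keys), LINE_CAPACITY)]

def search(sorted_keys, sentinels, target):
    # Probe the sentinel array to pick one cache line, then scan only that line.
    line = bisect_right(sentinels, target) - 1
    if line < 0:
        return -1  # target is smaller than every key in the node
    start = line * LINE_CAPACITY
    chunk = sorted_keys[start:start + LINE_CAPACITY]
    return start + chunk.index(target) if target in chunk else -1

keys = list(range(0, 200, 3))  # a sorted "node" spanning several cache lines
sentinels = build_sentinels(keys)
print(search(keys, sentinels, 63), search(keys, sentinels, 64))  # 21 -1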
|
Kontsevich introduced certain ribbon graphs as cell decompositions for
combinatorial models of moduli spaces of complex curves with boundaries in his
proof of Witten's conjecture. In this work, we define four types of generalised
Kontsevich graphs and find combinatorial relations among them. We call the main
type ciliated maps and use the auxiliary ones to show they satisfy a Tutte
recursion that we turn into a combinatorial interpretation of the loop
equations of topological recursion for a large class of spectral curves. It
follows that ciliated maps, which are Feynman graphs for the Generalised
Kontsevich matrix Model (GKM), are computed by topological recursion. Our
particular instance of the GKM relates to the $r$-KdV integrable hierarchy and
since the string solution of the latter encodes intersection numbers with
Witten's $r$-spin class, we find an identity between ciliated maps and $r$-spin
intersection numbers, implying that they are also governed by topological
recursion. In turn, this paves the way towards a combinatorial understanding of
Witten's class. This new topological recursion perspective on the GKM provides
concrete tools to explore the conjectural symplectic invariance property of
topological recursion for large classes of spectral curves.
|
In recent years, the Internet of Things (IoT) technology has led to the
emergence of multiple smart applications in different vital sectors including
healthcare, education, agriculture, energy management, etc. IoT aims to
interconnect several intelligent devices over the Internet such as sensors,
monitoring systems, and smart appliances to control, store, exchange, and
analyze collected data. The main issue in IoT environments is that they can
present vulnerabilities that allow illegal access by malicious users,
which threatens the safety and privacy of the gathered data. To face this problem,
several recent works have been conducted using microservices-based architecture
to minimize the security threats and attacks related to IoT data. By employing
microservices, these works offer extensible, reusable, and reconfigurable
security features. In this paper, we aim to provide a survey about
microservices-based approaches for securing IoT applications. This survey will
help practitioners understand ongoing challenges and explore new and promising
research opportunities in the IoT security field. To the best of our knowledge,
this paper constitutes the first survey that investigates the use of
microservices technology for securing IoT applications.
|
The past decades have witnessed the prosperity of graph mining, with a
multitude of sophisticated models and algorithms designed for various mining
tasks, such as ranking, classification, clustering and anomaly detection.
Generally speaking, the vast majority of the existing works aim to answer the
following question, that is, given a graph, what is the best way to mine it? In
this paper, we introduce the graph sanitation problem, to answer an orthogonal
question. That is, given a mining task and an initial graph, what is the best
way to improve the initially provided graph? By learning a better graph as part
of the input of the mining model, it is expected to benefit graph mining in a
variety of settings, ranging from denoising and imputation to defense. We
formulate the graph sanitation problem as a bilevel optimization problem, and
further instantiate it by semi-supervised node classification, together with an
effective solver named GaSoliNe. Extensive experimental results demonstrate
that the proposed method is (1) broadly applicable with respect to different
graph neural network models and flexible graph modification strategies, (2)
effective in improving the node classification accuracy on both the original
and contaminated graphs in various perturbation scenarios. In particular, it
brings up to 25% performance improvement over the existing robust graph neural
network methods.
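For concreteness, a generic bilevel form consistent with the description above can be written as follows; the exact objectives, constraint set and solver details used by GaSoliNe may differ.

```latex
% Upper level: learn a better graph \tilde{G} within an allowed modification set C(G);
% lower level: the mining model is trained as usual on the modified graph.
\min_{\tilde{G}\in\mathcal{C}(G)} \ \mathcal{L}_{\mathrm{val}}\bigl(f_{\theta^{*}(\tilde{G})},\tilde{G}\bigr)
\quad\text{s.t.}\quad
\theta^{*}(\tilde{G})\in\arg\min_{\theta}\ \mathcal{L}_{\mathrm{train}}\bigl(f_{\theta},\tilde{G}\bigr).
```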
|
In the context of DP-SGD, each round communicates a local SGD update that
leaks some new information about the underlying local data set to the outside
world. In order to provide privacy, Gaussian noise is added to local SGD
updates. However, privacy leakage still aggregates over multiple training
rounds. Therefore, in order to control privacy leakage over an increasing
number of training rounds, we need to increase the added Gaussian noise per
local SGD update. This dependence of the amount of Gaussian noise $\sigma$ on
the number of training rounds $T$ may impose an impractical upper bound on $T$
(because $\sigma$ cannot be too large) leading to a low accuracy global model
(because the global model receives too few local SGD updates). This makes
DP-SGD much less competitive compared to other existing privacy techniques.
We show for the first time that for $(\epsilon,\delta)$-differential privacy
$\sigma$ can be chosen equal to $\sqrt{2(\epsilon +\ln(1/\delta))/\epsilon}$
for $\epsilon=\Omega(T/N^2)$. In many existing machine learning problems, $N$
is large and $T=O(N)$. Hence, $\sigma$ becomes ``independent'' of any
$T=O(N)$ choice with $\epsilon=\Omega(1/N)$ (aggregation of privacy leakage
increases to a limit). This means that our $\sigma$ only depends on $N$ rather
than $T$. This important discovery brings DP-SGD to practice -- as also
demonstrated by experiments -- because $\sigma$ can remain small to make the
trained model have high accuracy even for large $T$ as usually happens in
practice.
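The sketch below simply evaluates the quoted noise multiplier $\sigma=\sqrt{2(\epsilon+\ln(1/\delta))/\epsilon}$ for a few illustrative $(\epsilon,\delta)$ values; the dataset size and privacy parameters are placeholders, not values from the paper.

```python
# Evaluating the noise multiplier from the formula above,
# sigma = sqrt(2*(eps + ln(1/delta))/eps), stated to hold for eps = Omega(T/N^2).
# The numbers below are illustrative only.
import math

def dp_sgd_sigma(eps, delta):
    return math.sqrt(2.0 * (eps + math.log(1.0 / delta)) / eps)

N = 10**6            # assumed dataset size
delta = 1.0 / N      # a common illustrative choice
for eps in (0.5, 1.0, 2.0):
    print(f"eps={eps}: sigma={dp_sgd_sigma(eps, delta):.2f}")
```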
|
We discuss functoriality properties of the Ozsvath-Szabo contact invariant,
and expose a number of results which seemed destined for folklore. We clarify
the (in)dependence of the invariant on the basepoint, prove that it is
functorial with respect to contactomorphisms, and show that it is strongly
functorial under Stein cobordisms.
|
The Standing Wave (SW) TESLA niobium-based superconducting radio frequency
structure is limited to an accelerating gradient of about 50 MV/m by the
critical RF magnetic field. To break through this barrier, we explore the
option of niobium-based traveling wave (TW) structures. Optimization of TW
structures was done considering experimentally known limiting electric and
magnetic fields. It is shown that a TW structure can have an accelerating
gradient above 70 MV/m, about 1.5 times higher than contemporary
standing wave structures with the same critical magnetic field. Another
benefit of TW structures is an R/Q about 2 times higher than that of the TESLA
structure, which reduces the dynamic heat load by a factor of 2. A method is proposed
to make TW structures multipactor-free. Some design proposals are offered to
facilitate fabrication. Further increase of the real-estate gradient
(equivalent to 80 MV/m active gradient) is also possible by increasing the
length of the accelerating structure because of higher group velocity and
cell-to-cell coupling. Realization of this work opens paths to ILC energy
upgrades beyond 1 TeV to 3 TeV in competition with CLIC. The paper will discuss
corresponding opportunities and challenges.
|
We present here a detailed calculation of opacities for Fe~XVII at the
physical conditions corresponding to the base of the Solar convection zone.
Many ingredients are involved in the calculation of opacities. We review the
impact of each ingredient on the final monochromatic and mean opacities
(Rosseland and Planck). The necessary atomic data were calculated with the
$R$-matrix and the distorted-wave (DW) methods. We study the effect of
broadening, of resolution, of the extent of configuration sets and of
configuration interaction to understand the differences between several
theoretical predictions as well as the existing large disagreement with
measurements. New Dirac $R$-matrix calculations including all configurations up
to the $n=$ 4, 5 and $6$ complexes have been performed as well as corresponding
Breit--Pauli DW calculations. The DW calculations have been extended to include
autoionizing initial levels. A quantitative contrast is made between comparable
DW and $R$-matrix models. We have reached self-convergence with $n=6$
$R$-matrix and DW calculations. Populations in autoionizing initial levels
contribute significantly to the opacities and should not be neglected. The
$R$-matrix and DW results are consistent under the similar treatment of
resonance broadening. The comparison with the experiment shows a persistent
difference in the continuum while the filling of the windows shows some
improvement. The present study defines our path to the next generation of
opacities and opacity tables for stellar modeling.
|
The aim of this article is to provide characterizations for
subadditivity-like growth conditions for the so-called associated weight
functions in terms of the defining weight sequence. Such growth requirements
arise frequently in the literature and are standard when dealing with
ultradifferentiable function classes defined by Braun-Meise-Taylor weight
functions since they imply or even characterize important and desired
consequences for the underlying function spaces, e.g. closedness under
composition.
|
In a previous paper, Weidmann showed that there is a bound on the number of orbits
of edges in a tree on which a finitely generated group acts
$(k,C)$-acylindrically. In this paper we extend this result to actions which
are $k$-acylindrical except on a family of groups with "finite height". We also
give an example which answers a conjecture of Weidmann from the same paper in
the negative, and produce a sharp bound for groups acting $k$-acylindrically.
|
The growing demand for connected devices and the increase in investments in
the Internet of Things (IoT) sector induce the growth of the market for this
technology. IoT permeates all areas of life of an individual, from smartwatches
to entire home assistants and solutions in different areas. The adoption of IoT is
gradually increasing all over the globe. IoT projects call for software
engineering studies that prepare the development and operation of
software systems materialized in physical objects and structures, interconnected
with embedded software and hosted in clouds. IoT projects have boundaries
between the development and operation stages. This study searches for evidence in
the scientific literature to support these boundaries through Development and
Operations (DevOps) principles. We rely on a Systematic Literature Review to
investigate the relations of DevOps in IoT software systems. As a result, we
identify concepts, characterize the benefits and challenges in the context of
knowledge previously reported in primary studies in the literature. The main
contributions of this paper are: (i) discussion of benefits and challenges for
DevOps in IoT software systems, (ii) identification of tools, concepts, and
programming languages used, and (iii) a perceived pipeline for this kind of
software development.
|
A music mashup combines audio elements from two or more songs to create a new
work. To reduce the time and effort required to make them, researchers have
developed algorithms that predict the compatibility of audio elements. Prior
work has focused on mixing unaltered excerpts, but advances in source
separation enable the creation of mashups from isolated stems (e.g., vocals,
drums, bass, etc.). In this work, we take advantage of separated stems not just
for creating mashups, but for training a model that predicts the mutual
compatibility of groups of excerpts, using self-supervised and semi-supervised
methods. Specifically, we first produce a random mashup creation pipeline that
combines stem tracks obtained via source separation, with key and tempo
automatically adjusted to match, since these are prerequisites for high-quality
mashups. To train a model to predict compatibility, we use stem tracks obtained
from the same song as positive examples, and random combinations of stems with
key and/or tempo unadjusted as negative examples. To improve the model and use
more data, we also train on "average" examples: random combinations with
matching key and tempo, where we treat them as unlabeled data as their true
compatibility is unknown. To determine whether the combined signal or the set
of stem signals is more indicative of the quality of the result, we experiment
on two model architectures and train them using a semi-supervised learning
technique. Finally, we conduct objective and subjective evaluations of the
system, comparing them to a standard rule-based system.
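A minimal sketch of the example-construction logic described above is given below: stems from the same song are positives, cross-song combinations with key/tempo left unadjusted are negatives, and matched cross-song combinations are kept as unlabeled "average" examples. The helper functions, stem ordering and sampling ratios are hypothetical.

```python
# Hypothetical sketch of the positive / negative / unlabeled example construction.
import random

def make_training_example(songs, separate_stems, match_key_and_tempo):
    """Return (stems, label): 1.0 positive, 0.0 negative, None for unlabeled."""
    song = random.choice(songs)
    if random.random() < 1 / 3:                      # positive: stems from one song
        return separate_stems(song), 1.0
    # Cross-song mix: one stem from each of three songs
    # (assumes separate_stems returns stems in a fixed [vocals, drums, bass] order).
    others = [separate_stems(random.choice(songs)) for _ in range(3)]
    mix = [stems[i] for i, stems in enumerate(others)]
    if random.random() < 0.5:                        # negative: key/tempo unadjusted
        return mix, 0.0
    return match_key_and_tempo(mix), None            # "average": compatibility unknown
```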
|
The success of Convolutional Neural Networks (CNNs) in computer vision is
mainly driven by their strong inductive bias, which is strong enough to allow
CNNs to solve vision-related tasks with random weights, meaning without
learning. Similarly, Long Short-Term Memory (LSTM) has a strong inductive bias
towards storing information over time. However, many real-world systems are
governed by conservation laws, which lead to the redistribution of particular
quantities -- e.g. in physical and economic systems. Our novel
Mass-Conserving LSTM (MC-LSTM) adheres to these conservation laws by extending
the inductive bias of LSTM to model the redistribution of those stored
quantities. MC-LSTMs set a new state-of-the-art for neural arithmetic units at
learning arithmetic operations, such as addition tasks, which have a strong
conservation law, as the sum is constant over time. Further, MC-LSTM is applied
to traffic forecasting, modelling a pendulum, and a large benchmark dataset in
hydrology, where it sets a new state-of-the-art for predicting peak flows. In
the hydrology example, we show that MC-LSTM states correlate with real-world
processes and are therefore interpretable.
|
In this work, we address the problem of formal safety verification for
stochastic cyber-physical systems (CPS) equipped with ReLU neural network (NN)
controllers. Our goal is to find the set of initial states from where, with a
predetermined confidence, the system will not reach an unsafe configuration
within a specified time horizon. Specifically, we consider discrete-time LTI
systems with Gaussian noise, which we abstract by a suitable graph. Then, we
formulate a Satisfiability Modulo Convex (SMC) problem to estimate upper bounds
on the transition probabilities between nodes in the graph. Using this
abstraction, we propose a method to compute tight bounds on the safety
probabilities of nodes in this graph, despite possible over-approximations of
the transition probabilities between these nodes. Additionally, using the
proposed SMC formula, we devise a heuristic method to refine the abstraction of
the system in order to further improve the estimated safety bounds. Finally, we
corroborate the efficacy of the proposed method with simulation results
considering a robot navigation example and comparison against a
state-of-the-art verification scheme.
|
Galactic charged cosmic rays (notably electrons, positrons, antiprotons and
light antinuclei) are powerful probes of dark matter annihilation or decay, in
particular for candidates heavier than a few MeV or tiny evaporating primordial
black holes. Recent measurements by PAMELA, AMS-02, or VOYAGER on positrons and
antiprotons already translate into constraints on several models over a large
mass range. However, these constraints depend on Galactic transport models, in
particular the diffusive halo size, subject to theoretical and statistical
uncertainties. We update the so-called MIN-MED-MAX benchmark transport
parameters that yield generic minimal, median and maximal dark-matter induced
fluxes; this reduces the uncertainties on fluxes by a factor of about 2 for
positrons and 6 for antiprotons, with respect to their former version. We also
provide handy fitting formulae for the associated predicted secondary
antiproton and positron background fluxes. Finally, for more refined analyses,
we provide the full details of the model parameters and covariance matrices of
uncertainties.
|
It has been shown beyond reasonable doubt that the majority (about 95%) of
the total energy budget of the universe is given by the dark components, namely
Dark Matter and Dark Energy. However, what constitutes these components remains to
be satisfactorily understood, despite a number of promising candidates. An
associated conundrum is that of the coincidence, i.e. the question as to why
the Dark Matter and Dark Energy densities are of the same order of magnitude at
the present epoch, after evolving over the entire expansion history of the
universe. In an attempt to address these, we consider a quantum potential
resulting from a quantum corrected Raychaudhuri/Friedmann equation in the presence
of a cosmic fluid, which is presumed to be a Bose-Einstein condensate (BEC) of
ultralight bosons. For a suitable and physically motivated macroscopic ground
state wavefunction of the BEC, we show that a unified picture of the cosmic
dark sector can indeed emerge, thus resolving the issue of the coincidence. The
effective Dark energy component turns out to be a cosmological constant, by
virtue of a residual homogeneous term in the quantum potential. Furthermore,
comparison with the observational data gives an estimate of the mass of the
constituent bosons in the BEC, which is well within the bounds predicted from
other considerations.
|
We present a new form of intermittency, L\'evy on-off intermittency, which
arises from multiplicative $\alpha$-stable white noise close to an instability
threshold. We study this problem in the linear and nonlinear regimes, both
theoretically and numerically, for the case of a pitchfork bifurcation with
fluctuating growth rate. We compute the stationary distribution analytically
and numerically from the associated fractional Fokker-Planck equation in the
Stratonovich interpretation. We characterize the system in the parameter space
$(\alpha,\beta)$ of the noise, with stability parameter $\alpha\in (0,2)$ and
skewness parameter $\beta\in[-1,1]$. Five regimes are identified in this
parameter space, in addition to the well-studied Gaussian case $\alpha=2$.
Three regimes are located at $1<\alpha<2$, where the noise has finite mean but
infinite variance. They are differentiated by $\beta$ and all display a
critical transition at the deterministic instability threshold, with on-off
intermittency close to onset. Critical exponents are computed from the
stationary distribution. Each regime is characterized by a specific form of the
density and specific critical exponents, which differ starkly from the Gaussian
case. A finite or infinite number of integer-order moments may converge,
depending on parameters. Two more regimes are found at $0<\alpha\leq 1$. There,
the mean of the noise diverges, and no critical transition occurs. In one case
the origin is always unstable, independently of the distance $\mu$ from the
deterministic threshold. In the other case, the origin is conversely always
stable, independently of $\mu$. We thus demonstrate that an instability subject
to non-equilibrium, power-law-distributed fluctuations can display
substantially different properties than for Gaussian thermal fluctuations, in
terms of statistics and critical behavior.
|
Object pose estimation from a single RGB image is a challenging problem due
to variable lighting conditions and viewpoint changes. The most accurate pose
estimation networks implement pose refinement via reprojection of a known,
textured 3D model; however, such methods cannot be applied without high-quality
3D models of the observed objects. In this work we propose an approach, namely
an Innovation CNN, to object pose estimation refinement that overcomes the
requirement for reprojecting a textured 3D model. Our approach improves initial
pose estimation progressively by applying the Innovation CNN iteratively in a
stochastic gradient descent (SGD) framework. We evaluate our method on the
popular LINEMOD and Occlusion LINEMOD datasets and obtain state-of-the-art
performance on both datasets.
|
In order to connect galaxy clusters to their progenitor protoclusters, we
must constrain the star formation histories within their member galaxies and
the timescale of virial collapse. In this paper we characterize the complex
star-forming properties of a $z=2.5$ protocluster in the COSMOS field using
ALMA dust continuum and new VLA CO(1-0) observations of two filaments
associated with the structure, sometimes referred to as the "Hyperion"
protocluster. We focus in particular on the protocluster "core" which has
previously been suggested as the highest redshift bona fide galaxy cluster
traced by extended X-ray emission in a stacked Chandra/XMM image. We re-analyze
this data and refute these claims, finding that at least 40 $\pm$ 17% of
extended X-ray sources of similar luminosity and size at this redshift arise
instead from Inverse Compton scattering off recently extinguished radio
galaxies rather than intracluster medium. Using ancillary COSMOS data, we also
constrain the SEDs of the two filaments' eight constituent galaxies from the
rest-frame UV to radio. We do not find evidence for enhanced star formation
efficiency in the core and conclude that the constituent galaxies are already
massive (M$_{\star} \approx 10^{11} M_{\odot}$), with molecular gas reservoirs
$>10^{10} M_{\odot}$ that will be depleted within 200-400 Myr. Finally, we
calculate the halo mass of the nested core at $z=2.5$ and conclude that it will
collapse into a cluster of 2-9 $\times 10^{14} M_{\odot}$, comparable to the
size of the Coma cluster at $z=0$ and accounting for at least 50% of the total
estimated halo mass of the extended "Hyperion" structure.
|
Quantum-mechanical correlations of interacting fermions result in the
emergence of exotic phases. Magnetic phases naturally arise in the
Mott-insulator regime of the Fermi-Hubbard model, where charges are localized
and the spin degree of freedom remains. In this regime, the occurrence of
phenomena such as resonating valence bonds, frustrated magnetism, and spin
liquids is predicted. Quantum systems with engineered Hamiltonians can be used
as simulators of such spin physics to provide insights beyond the capabilities
of analytical methods and classical computers. To be useful, methods for the
preparation of intricate many-body spin states and access to relevant
observables are required. Here, we show the quantum simulation of magnetism in
the Mott-insulator regime with a linear quantum-dot array. We characterize the
energy spectrum for a Heisenberg spin chain, from which we can identify when
the conditions for homogeneous exchange couplings are met. Next, we study the
multispin coherence with global exchange oscillations in both the singlet and
triplet subspace of the Heisenberg Hamiltonian. Last, we adiabatically prepare
the low-energy global singlet of the homogeneous spin chain and probe it with
two-spin singlet-triplet measurements on each nearest-neighbor pair and the
correlations therein. The methods and control presented here open new
opportunities for the simulation of quantum magnetism benefiting from the
flexibility in tuning and layout of gate-defined quantum-dot arrays.
|
The exotic range of known planetary systems has provoked an equally exotic
range of physical explanations for their diverse architectures. However,
constraining formation processes requires mapping the observed exoplanet
population to that which initially formed in the protoplanetary disc. Numerous
results suggest that (internal or external) dynamical perturbation alters the
architectures of some exoplanetary systems. Isolating planets that have evolved
without any perturbation can help constrain formation processes. We consider
the Kepler multiples, which have low mutual inclinations and are unlikely to
have been dynamically perturbed. We apply a modelling approach similar to that
of Mulders et al. (2018), additionally accounting for the two-dimensionality of
the radius ($R =0.3-20\,R_\oplus$) and period ($P= 0.5-730$ days) distribution.
We find that an upper limit in planet mass of the form $M_{\rm{lim}} \propto
a^\beta \exp(-a_{\rm{in}}/a)$, for semi-major axis $a$ and a broad range of
$a_{\rm{in}}$ and $\beta$, can reproduce a distribution of $P$, $R$ that is
indistinguishable from the observed distribution by our comparison metric. The
index is consistent with $\beta= 1.5$, expected if growth is limited by
accretion within the Hill radius. This model is favoured over models assuming a
separable PDF in $P$, $R$. The limit, extrapolated to longer periods, is
coincident with the orbits of RV-discovered planets ($a>0.2$ au,
$M>1\,M_{\rm{J}}$) around recently identified low density host stars, hinting
at isolation mass limited growth. We discuss the necessary circumstances for a
coincidental age-related bias as the origin of this result, concluding that
such a bias is possible but unlikely. We conclude that, in light of the
evidence that some planetary systems have been dynamically perturbed, simple
models for planet growth during the formation stage are worth revisiting.
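For illustration only, the snippet below evaluates the shape of the quoted upper mass limit $M_{\rm{lim}} \propto a^\beta \exp(-a_{\rm{in}}/a)$ with $\beta=1.5$; the normalisation and the value of $a_{\rm{in}}$ are placeholders rather than fitted values from the paper.

```python
# Illustrative evaluation of the upper-mass-limit profile quoted above.
import math

def mass_limit(a_au, beta=1.5, a_in=1.0, norm=1.0):
    """Relative M_lim(a) = norm * a^beta * exp(-a_in / a); all values placeholders."""
    return norm * a_au**beta * math.exp(-a_in / a_au)

for a in (0.05, 0.2, 1.0, 5.0):
    print(f"a = {a:>4} au  ->  relative M_lim = {mass_limit(a):.3g}")
```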
|
Data is the key factor to drive the development of machine learning (ML)
during the past decade. However, high-quality data, in particular labeled data,
is often hard and expensive to collect. To leverage large-scale unlabeled data,
self-supervised learning, represented by contrastive learning, is introduced.
The objective of contrastive learning is to map different views derived from the
same training sample (e.g., through data augmentation) closer in the
representation space, while mapping views derived from different samples farther
apart. In this way, a contrastive model learns to generate informative
representations for data samples, which are then used to perform downstream ML
tasks. Recent research has shown that machine learning models are vulnerable to
various privacy attacks. However, most of the current efforts concentrate on
models trained with supervised learning. Meanwhile, data samples' informative
representations learned with contrastive learning may cause severe privacy
risks as well.
In this paper, we perform the first privacy analysis of contrastive learning
through the lens of membership inference and attribute inference. Our
experimental results show that contrastive models trained on image datasets are
less vulnerable to membership inference attacks but more vulnerable to
attribute inference attacks compared to supervised models. The former is due to
the fact that contrastive models are less prone to overfitting, while the
latter is caused by contrastive models' capability of representing data samples
expressively. To remedy this situation, we propose the first privacy-preserving
contrastive learning mechanism, Talos, relying on adversarial training.
Empirical results show that Talos can successfully mitigate attribute inference
risks for contrastive models while maintaining their membership privacy and
model utility.
|
Many ecological and spatial processes are complex in nature and are not
accurately modeled by linear models. Regression trees promise to handle the
high-order interactions that are present in ecological and spatial datasets,
but fail to produce physically realistic characterizations of the underlying
landscape. The "autocart" (autocorrelated regression trees) R package extends
the functionality of previously proposed spatial regression tree methods
through a spatially aware splitting function and novel adaptive inverse
distance weighting method in each terminal node. The efficacy of these autocart
models, including an autocart extension of random forest, is demonstrated on
multiple datasets. This highlights the ability of autocart to model complex
interactions between spatial variables while still providing physically
realistic representations of the landscape.
|
The lightest neutralino, assumed to be the lightest supersymmetric particle,
is proposed to be a dark matter (DM) candidate for the mass $\cal{O}$(100) GeV.
Constraints from various direct dark matter detection experiments and Planck
measurements exclude a substantial region of parameter space of the minimal
supersymmetric standard model (MSSM). However, a "mild-tempered" neutralino
with dominant bino composition and a little admixture of Higgsino is found to
be a viable candidate for DM. Within the MSSM framework, we revisit the allowed
region of parameter space that is consistent with all existing constraints.
Regions of parameters that are not sensitive to direct detection experiments,
known as "blind spots," are also revisited. Complimentary to the direct
detection of DM particles, a mild-tempered neutralino scenario is explored at
the LHC with the center of mass energy $\rm \sqrt{s}$=13 TeV through the
top-squark pair production, and its subsequent decays with the
standard-model-like Higgs boson in the final state. Our considered channel is
found to be very sensitive also to the blind spot scenario. Detectable signal
sensitivities are achieved using the cut-based method for the high luminosity
options $\rm 300$ and $\rm 3000 ~fb^{-1}$, which are further improved by
applying the multi-variate analysis technique.
|
Various autonomous or assisted driving strategies have been facilitated
through the accurate and reliable perception of the environment around a
vehicle. Among the commonly used sensors, radar has usually been considered as
a robust and cost-effective solution even in adverse driving scenarios, e.g.,
weak/strong lighting or bad weather. Instead of fusing potentially unreliable
information from all available sensors, perception from pure radar
data becomes a valuable alternative that is worth exploring. In this paper, we
propose a deep radar object detection network, named RODNet, which is
cross-supervised by a camera-radar fused algorithm without laborious annotation
efforts, to effectively detect objects from the radio frequency (RF) images in
real-time. First, the raw signals captured by millimeter-wave radars are
transformed to RF images in range-azimuth coordinates. Second, our proposed
RODNet takes a sequence of RF images as the input to predict the likelihood of
objects in the radar field of view (FoV). Two customized modules are also added
to handle multi-chirp information and object relative motion. Instead of using
human-labeled ground truth for training, the proposed RODNet is
cross-supervised by a novel 3D localization of detected objects using a
camera-radar fusion (CRF) strategy in the training stage. Finally, we propose a
method to evaluate the object detection performance of the RODNet. Since no
public dataset is available for our task, we create a new dataset, named
CRUW, which contains synchronized RGB and RF image sequences in various driving
scenarios. With intensive experiments, our proposed cross-supervised RODNet
achieves 86% average precision and 88% average recall of object detection
performance, which shows the robustness to noisy scenarios in various driving
conditions.
|
Stochastic gradient algorithms are often unstable when applied to functions
that do not have Lipschitz-continuous and/or bounded gradients. Gradient
clipping is a simple and effective technique to stabilize the training process
for problems that are prone to the exploding gradient problem. Despite its
widespread popularity, the convergence properties of the gradient clipping
heuristic are poorly understood, especially for stochastic problems. This paper
establishes both qualitative and quantitative convergence results of the
clipped stochastic (sub)gradient method (SGD) for non-smooth convex functions
with rapidly growing subgradients. Our analyses show that clipping enhances the
stability of SGD and that the clipped SGD algorithm enjoys finite convergence
rates in many cases. We also study the convergence of a clipped method with
momentum, which includes clipped SGD as a special case, for weakly convex
problems under standard assumptions. With a novel Lyapunov analysis, we show
that the proposed method achieves the best-known rate for the considered class
of problems, demonstrating the effectiveness of clipped methods also in this
regime. Numerical results confirm our theoretical developments.
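As a rough illustration of the method analysed here, the sketch below applies a clipped (sub)gradient step to a non-smooth-looking convex function with rapidly growing gradients; the step size, clipping threshold and test function are arbitrary choices, not the paper's settings.

```python
# A minimal sketch of a clipped stochastic (sub)gradient step (illustrative only).
import numpy as np

def clipped_sgd_step(x, subgradient, step_size=1e-2, clip_threshold=1.0):
    g = subgradient(x)
    norm = np.linalg.norm(g)
    if norm > clip_threshold:                 # rescale rapidly growing (sub)gradients
        g = g * (clip_threshold / norm)
    return x - step_size * g

# Example: f(x) = |x|^3, whose gradient 3|x|x grows rapidly away from the origin.
f_grad = lambda x: 3 * np.abs(x) * x
x = np.array([10.0])
for _ in range(2000):
    x = clipped_sgd_step(x, f_grad)
print(x)   # approaches the minimiser at 0
```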
|
We present the first results from the Quasar Feedback Survey, a sample of 42
z<0.2, [O III] luminous AGN (L[O III]>10^42.1 ergs/s) with moderate radio
luminosities (i.e. L(1.4GHz)>10^23.4 W/Hz; median L(1.4GHz)=5.9x10^23 W/Hz).
Using high spatial resolution (~0.3-1 arcsec), 1.5-6 GHz radio images from the
Very Large Array, we find that 67 percent of the sample have spatially extended
radio features, on ~1-60 kpc scales. The radio sizes and morphologies suggest
that these may be lower radio luminosity versions of compact, radio-loud AGN.
By combining the radio-to-infrared excess parameter, spectral index, radio
morphology and brightness temperature, we find radio emission in at least 57
percent of the sample that is associated with AGN-related processes (e.g. jets,
quasar-driven winds or coronal emission). This is despite only 9.5-21 percent
being classified as radio-loud using traditional criteria. The origin of the
radio emission in the remainder of the sample is unclear. We find that both the
established anti-correlation between radio size and the width of the [O III]
line, and the known trend for the most [O III] luminous AGN to be associated
with spatially-extended radio emission, also hold for our sample of moderate
radio luminosity quasars. These observations add to the growing evidence of a
connection between the radio emission and ionised gas in quasar host galaxies.
This work lays the foundation for deeper investigations into the drivers and
impact of feedback in this unique sample.
|
The European Spallation Source (ESS), currently finishing its construction,
will soon provide the most intense neutron beams for multi-disciplinary
science. At the same time, it will also produce a high-intensity neutrino flux
with an energy suitable for precision measurements of Coherent Elastic
Neutrino-Nucleus Scattering. We describe some physics prospects, within and
beyond the Standard Model, of employing innovative detector technologies to
make the most of this large flux. We show that, compared to current
measurements, the ESS will provide a much more precise understanding of
neutrino and nuclear properties.
|
With approximately 50 binary black hole events detected by LIGO/Virgo to date
and many more expected in the next few years, gravitational-wave astronomy is
shifting from individual-event analyses to population studies. We perform a
hierarchical Bayesian analysis on the GWTC-2 catalog by combining several
astrophysical formation models with a population of primordial black holes. We
compute the Bayesian evidence for a primordial population compared to the null
hypothesis, and the inferred fraction of primordial black holes in the data. We
find that these quantities depend on the set of assumed astrophysical models:
the evidence for primordial black holes against an astrophysical-only
multichannel model is decisively favored in some scenarios, but it is
significantly reduced in the presence of a dominant stable-mass-transfer
isolated formation channel. The primordial channel can explain mergers in the
upper mass gap such as GW190521, but (depending on the astrophysical channels
we consider) a significant fraction of the events could be of primordial origin
even if we neglected GW190521. The tantalizing possibility that LIGO/Virgo may
have already detected black holes formed after inflation should be verified by
reducing uncertainties in astrophysical and primordial formation models, and it
may ultimately be confirmed by third-generation interferometers.
|
The paper is devoted to the participation of the TUDublin team in
Constraint@AAAI2021 - COVID19 Fake News Detection Challenge. Today, the problem
of fake news detection is more acute than ever in connection with the pandemic.
The amount of fake news is increasing rapidly, and it is urgently necessary to create
AI tools that allow us to identify and prevent the spread of false information
about COVID-19. The main goal of the work was to create a model that
would carry out a binary classification of messages from social media as real
or fake news in the context of COVID-19. Our team constructed the ensemble
consisting of Bidirectional Long Short Term Memory, Support Vector Machine,
Logistic Regression, Naive Bayes and a combination of Logistic Regression and
Naive Bayes. The model allowed us to achieve 0.94 F1-score, which is within 5\%
of the best result.
|
We consider the moduli space $\mathcal{M}_{\nu}$ of torsion-free,
asymptotically conical (AC) Spin(7)-structures which are defined on the same
manifold and asymptotic to the same Spin(7)-cone with decay rate $\nu<0$. We
show that $\mathcal{M}_{\nu}$ is an orbifold if $\nu$ is a generic rate in the
non-$L^2$ regime $(-4,0)$. Infinitesimal deformations are given by topological
data and solutions to a non-elliptic first-order PDE system on the compact link
of the asymptotic cone. As an application, we show that the classical
Bryant-Salamon metric on the bundle of positive spinors on $S^4$ has no
continuous deformations as an AC Spin(7)-metric.
|
The advent of large pre-trained language models has given rise to rapid
progress in the field of Natural Language Processing (NLP). While the
performance of these models on standard benchmarks has scaled with size,
compression techniques such as knowledge distillation have been key in making
them practical. We present MATE-KD, a novel text-based adversarial training
algorithm which improves the performance of knowledge distillation. MATE-KD
first trains a masked language model based generator to perturb text by
maximizing the divergence between teacher and student logits. Then, using
knowledge distillation, a student is trained on both the original and the
perturbed training samples. We evaluate our algorithm, using BERT-based models,
on the GLUE benchmark and demonstrate that MATE-KD outperforms competitive
adversarial learning and data augmentation baselines. On the GLUE test set our
6 layer RoBERTa based model outperforms BERT-Large.
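A hedged sketch of the adversarial distillation objective is given below, using tiny linear stand-ins for the teacher, student and masked-LM generator and an additive embedding-space perturbation in place of true text perturbation; all module names, losses and hyperparameters are illustrative assumptions, not the paper's code.

```python
# Illustrative stand-in for the two training objectives described above.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened logits."""
    t = F.softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

teacher, student = torch.nn.Linear(16, 4), torch.nn.Linear(16, 4)
generator = torch.nn.Linear(16, 16)
x = torch.randn(8, 16)                                  # a batch of sample embeddings

# Generator step: perturb inputs to MAXIMISE the teacher-student divergence
# (only the generator's parameters would be updated on this loss).
x_adv = x + 0.1 * torch.tanh(generator(x))
generator_loss = -kd_loss(student(x_adv), teacher(x_adv))

# Student step: distil on both the original and the perturbed samples
# (teacher outputs are detached because the teacher stays frozen).
student_loss = kd_loss(student(x), teacher(x).detach()) \
             + kd_loss(student(x_adv.detach()), teacher(x_adv.detach()).detach())
```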
|
Simulated images of a black hole surrounded by optically thin emission
typically display two main features: a central brightness depression and a
narrow, bright "photon ring" consisting of strongly lensed images superposed on
top of the direct emission. The photon ring closely tracks a theoretical curve
on the image plane corresponding to light rays that asymptote to unstably bound
photon orbits around the black hole. This critical curve has a size and shape
that are purely governed by the Kerr geometry; in contrast, the size, shape,
and depth of the observed brightness depression all depend on the details of
the emission region. For instance, images of spherical accretion models display
a distinctive dark region -- the "black hole shadow" -- that completely fills
the photon ring. By contrast, in models of equatorial disks extending to the
black hole's event horizon, the darkest region in the image is restricted to a
much smaller area -- an inner shadow -- whose edge lies near the direct lensed
image of the equatorial horizon. Using both semi-analytic models and general
relativistic magnetohydrodynamic (GRMHD) simulations, we demonstrate that the
photon ring and inner shadow may be simultaneously visible in submillimeter
images of M87*, where magnetically arrested disk (MAD) simulations predict that
the emission arises in a thin region near the equatorial plane. We show that
the relative size, shape, and centroid of the photon ring and inner shadow can
be used to estimate the black hole mass and spin, breaking degeneracies in
measurements of these quantities that rely on the photon ring alone. Both
features may be accessible to direct observation via high-dynamic-range images
with a next-generation Event Horizon Telescope.
|
In this paper we investigate Erd\H{o}s-Ko-Rado theorems in ovoidal circle
geometries. We prove that in M\"obius planes of even order greater than 2, and
ovoidal Laguerre planes of odd order, the largest families of circles which
pairwise intersect in at least one point, consist of all circles through a
fixed point. In ovoidal Laguerre planes of even order, a similar result holds,
but there is one other type of largest family of pairwise intersecting circles.
As a corollary, we prove that the largest families of polynomials over $\mathbb
F_q$ of degree at most $k$, with $2 \leq k < q$, which pairwise take the same
value on at least one point, consist of all polynomials $f$ of degree at most
$k$ such that $f(x) = y$ for some fixed $x$ and $y$ in $\mathbb F_q$. We also
discuss this problem for ovoidal Minkowski planes, and we investigate the
largest families of circles pairwise intersecting in two points in circle
geometries.
|
We investigated laser-induced periodic surface structures (LIPSS) generated
on indium-tin-oxide (ITO) thin films with femtosecond laser pulses in the
infrared region. Using pulses between 1.6 and 2.4 ${\mu}$m central wavelength,
we observed robust LIPSS morphologies with a periodicity close to
${\lambda}$/10. Supporting finite-difference time-domain calculations suggest
that the surface forms are rooted in the field localization in the surface pits
leading to a periodically increased absorption of the laser pulse energy that
creates the observed periodic structures.
|
We study the 2+1 dimensional continuum model for the evolution of stepped
epitaxial surface under long-range elastic interaction proposed by Xu and Xiang
(SIAM J. Appl. Math. 69, 1393-1414, 2009). The long-range interaction term and
the two length scales in this model make PDE analysis challenging. Moreover,
unlike in the 1+1 dimensional case, there is a nonconvexity contribution (of
the gradient norm of the surface height) in the total energy in the 2+1
dimensional case, and it is not easy to prove that the solution is always in
the well-posed regime during the evolution. In this paper, we propose a
modified 2+1 dimensional continuum model and prove the existence and uniqueness
of both the static and dynamic solutions and derive a minimum energy scaling
law for it. We show that the minimum energy surface profile is mainly attained
by surfaces with step meandering instability. This is essentially different
from the energy scaling law for the 1+1 dimensional epitaxial surfaces under
elastic effects attained by step bunching surface profiles. We also discuss the
transition from the step bunching instability to the step meandering
instability in 2+1 dimensions.
|
Full connectivity of qubits is necessary for most quantum algorithms, which
is difficult to directly implement on Noisy Intermediate-Scale Quantum
processors. However, inserting swap gates to enable the two-qubit gates between
uncoupled qubits significantly decreases the computation result fidelity. To
this end, we propose a Special-Purpose Quantum Processor Design method that can
design suitable structures for different quantum algorithms. Our method extends
the processor structure from two-dimensional lattice graph to general planar
graph and arranges the physical couplers according to the two-qubit gate
distribution between the logical qubits of the quantum algorithm and the
physical constraints. Experimental results show that our design methodology,
compared with other methods, could reduce the number of extra swap gates per
two-qubit gate by at least 104.2% on average. Also, our method's advantage over
other methods becomes more obvious as the depth and qubit number increase. The
result reveals that our method is competitive in improving computation result
fidelity and it has the potential to demonstrate quantum advantage under the
technical conditions.
|
In this paper, we consider the Potts-SOS model where the spin takes values in
the set $\{0, 1, 2\}$ on the Cayley tree of order two. We describe all the
translation-invariant splitting Gibbs measures for this model under certain
conditions. Moreover, we investigate whether these Gibbs measures are extremal
or non-extremal in the set of all Gibbs measures.
|
Machine learning models often use spurious patterns such as "relying on the
presence of a person to detect a tennis racket," which do not generalize. In
this work, we present an end-to-end pipeline for identifying and mitigating
spurious patterns for image classifiers. We start by finding patterns such as
"the model's prediction for tennis racket changes 63% of the time if we hide
the people." Then, if a pattern is spurious, we mitigate it via a novel form of
data augmentation. We demonstrate that this approach identifies a diverse set
of spurious patterns and that it mitigates them by producing a model that is
both more accurate on a distribution where the spurious pattern is not helpful
and more robust to distribution shift.
|
Tensegrity structures are lightweight, can undergo large deformations, and
have outstanding robustness capabilities. These unique properties inspired
roboticists to investigate their use. However, the morphological design,
control, assembly, and actuation of tensegrity robots are still difficult
tasks. Moreover, the stiffness of tensegrity robots is still an underestimated
design parameter. In this article, we propose to use easy-to-assemble, actuated
tensegrity modules and body-brain co-evolution to design soft tensegrity
modular robots. Moreover, we prove the importance of tensegrity robots'
stiffness, showing how the evolution suggests a different morphology, control,
and locomotion strategy according to the modules' stiffness.
|
We derive the conjugate prior of the Dirichlet and beta distributions and
explore it with numerical examples to gain an intuitive understanding of the
distribution itself, its hyperparameters, and conditions concerning its
convergence. Due to the prior's intractability, we proceed to define and
analyze a closed-form approximation. Finally, we provide an algorithm
implementing this approximation that enables fully tractable Bayesian conjugate
treatment of Dirichlet and beta likelihoods without the need for Monte Carlo
simulations.
|
We consider non-convex stochastic optimization using first-order algorithms
for which the gradient estimates may have heavy tails. We show that a
combination of gradient clipping, momentum, and normalized gradient descent
yields convergence to critical points in high-probability with best-known rates
for smooth losses when the gradients only have bounded $\mathfrak{p}$th moments
for some $\mathfrak{p}\in(1,2]$. We then consider the case of second-order
smooth losses, which to our knowledge have not been studied in this setting,
and again obtain high-probability bounds for any $\mathfrak{p}$. Moreover, our
results hold for arbitrary smooth norms, in contrast to the typical SGD
analysis which requires a Hilbert space norm. Further, we show that after a
suitable "burn-in" period, the objective value will monotonically decrease for
every iteration until a critical point is identified, which provides intuition
behind the popular practice of learning rate "warm-up" and also yields a
last-iterate guarantee.
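The sketch below shows one plausible way the three ingredients, clipping, momentum and a normalised step, can be combined in a single update; the ordering and hyperparameters are assumptions rather than the exact algorithm analysed in the paper.

```python
# Illustrative single update with clipping, momentum and a normalised direction.
import numpy as np

def clipped_normalized_momentum_step(x, m, stochastic_grad,
                                     lr=1e-2, beta=0.9, clip=10.0, eps=1e-12):
    g = stochastic_grad(x)
    g_norm = np.linalg.norm(g)
    if g_norm > clip:                           # clip heavy-tailed gradient estimates
        g = g * (clip / g_norm)
    m = beta * m + (1.0 - beta) * g             # momentum buffer
    x = x - lr * m / (np.linalg.norm(m) + eps)  # unit-norm (normalised) step
    return x, m

# Toy usage: f(x) = ||x||^2 with additive heavy-tailed (Student-t) gradient noise.
grad = lambda x: 2 * x + np.random.standard_t(1.5, size=x.shape)
x, m = np.ones(5) * 5.0, np.zeros(5)
for _ in range(3000):
    x, m = clipped_normalized_momentum_step(x, m, grad)
print(np.linalg.norm(x))   # should end up close to the minimiser at the origin
```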
|
We give a local characterization for the Cuntz semigroup of AI-algebras
building upon Shen's characterization of dimension groups. Using this result,
we provide an abstract characterization for the Cuntz semigroup of AI-algebras.
|
Much recent interest has focused on the design of optimization algorithms
from the discretization of an associated optimization flow, i.e., a system of
differential equations (ODEs) whose trajectories solve an associated
optimization problem. Such a design approach poses an important problem: how to
find a principled methodology to design and discretize appropriate ODEs. This
paper aims to provide a solution to this problem through the use of contraction
theory. We first introduce general mathematical results that explain how
contraction theory guarantees the stability of the implicit and explicit Euler
integration methods. Then, we propose a novel system of ODEs, namely the
Accelerated-Contracting-Nesterov flow, and use contraction theory to establish
that it is an optimization flow with an exponential convergence rate, from which the
linear convergence rate of its associated optimization algorithm is immediately
established. Remarkably, a simple explicit Euler discretization of this flow
corresponds to the Nesterov acceleration method. Finally, we present how our
approach leads to performance guarantees in the design of optimization
algorithms for time-varying optimization problems.
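To make the discretisation step concrete, the sketch below applies a plain explicit Euler scheme to a damped second-order optimisation flow on a simple quadratic; this stand-in flow is only illustrative and is not the Accelerated-Contracting-Nesterov flow itself.

```python
# Generic explicit Euler discretisation of an optimisation flow dz/dt = F(z).
import numpy as np

def explicit_euler(F, z0, step, n_steps):
    z = np.array(z0, dtype=float)
    trajectory = [z.copy()]
    for _ in range(n_steps):
        z = z + step * F(z)               # one explicit Euler step
        trajectory.append(z.copy())
    return np.array(trajectory)

A = np.diag([1.0, 10.0])                  # simple strongly convex quadratic f(x) = x^T A x / 2
grad_f = lambda x: A @ x

def flow(z, damping=3.0):                 # z = (x, v): a damped accelerated dynamic
    x, v = z[:2], z[2:]
    return np.concatenate([v, -damping * v - grad_f(x)])

traj = explicit_euler(flow, z0=[5.0, -3.0, 0.0, 0.0], step=0.05, n_steps=400)
print(traj[-1][:2])                       # x approaches the minimiser at the origin
```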
|
Depth maps captured with commodity sensors are often of low quality and
resolution; these maps need to be enhanced to be used in many applications.
State-of-the-art data-driven methods of depth map super-resolution rely on
registered pairs of low- and high-resolution depth maps of the same scenes.
Acquisition of real-world paired data requires specialized setups. Another
alternative, generating low-resolution maps from high-resolution maps by
subsampling, adding noise and other artificial degradation methods, does not
fully capture the characteristics of real-world low-resolution images. As a
consequence, supervised learning methods trained on such artificial paired data
may not perform well on real-world low-resolution inputs. We consider an
approach to depth super-resolution based on learning from unpaired data. While
many techniques for unpaired image-to-image translation have been proposed,
most fail to deliver effective hole-filling or reconstruct accurate surfaces
using depth maps. We propose an unpaired learning method for depth
super-resolution, which is based on a learnable degradation model, enhancement
component and surface normal estimates as features to produce more accurate
depth maps. We propose a benchmark for unpaired depth SR and demonstrate that
our method outperforms existing unpaired methods and performs on par with
paired ones.
|
We consider the Schr\"odinger equation with nonlinear derivative term on
$[0,+\infty)$ under Robin boundary condition at $0$. Using a virial argument,
we obtain the existence of blowing up solutions and using variational
techniques, we obtain stability and instability by blow up results for standing
waves.
|
A key question concerning collective decisions is whether a social system can
settle on the best available option when some members learn from others instead
of evaluating the options on their own. This question is challenging to study,
and previous research has reached mixed conclusions, because collective
decision outcomes depend on the insufficiently understood complex system of
cognitive strategies, task properties, and social influence processes. This
study integrates these complex interactions together in one general yet
partially analytically tractable mathematical framework using a dynamical
system model. In particular, it investigates how the interplay of the
proportion of social learners, the relative merit of options, and the type of
conformity response affect collective decision outcomes in a binary choice. The
model predicts that when the proportion of social learners exceeds a critical
threshold, a bi-stable state appears in which the majority can end up favoring
either the higher- or lower-merit option, depending on fluctuations and initial
conditions. Below this threshold, the high-merit option is chosen by the
majority. The critical threshold is determined by the conformity response
function and the relative merits of the two options. The study helps reconcile
disagreements about the effect of social learners on collective performance and
proposes a mathematical framework that can be readily adapted to extensions
investigating a wider variety of dynamics.
|
Dirbusting is a technique used to brute force directories and file names on
web servers while monitoring HTTP responses, in order to enumerate server
contents. Such a technique uses lists of common words to discover the hidden
structure of the target website. Dirbusting typically relies on response codes
as discovery conditions to find new pages. It is widely used in web application
penetration testing, an activity that allows companies to detect websites
vulnerabilities. Dirbusting techniques are both time- and resource-consuming, and
innovative approaches have never been explored in this field. We hence propose
an advanced technique to optimize the dirbusting process by leveraging
Artificial Intelligence. More specifically, we use semantic clustering
techniques in order to organize wordlist items in different groups according to
their semantic meaning. The created clusters are used in an ad-hoc implemented
next-word intelligent strategy. This paper demonstrates that the usage of
clustering techniques outperforms the commonly used brute force methods.
Performance is evaluated by testing eight different web applications. Results
show a performance increase that is up to 50% for each of the conducted
experiments.
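A rough sketch of the clustering-guided idea is given below: wordlist entries are grouped by semantic similarity and, whenever a probe from a cluster is found, the rest of that cluster is tried first. The embedding function, endpoint layout and the "status != 404" discovery condition are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of semantic-clustering-guided dirbusting.
import requests
from sklearn.cluster import KMeans

def cluster_wordlist(words, embed, n_clusters=10):
    """Group wordlist entries by the similarity of their embeddings."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict([embed(w) for w in words])
    clusters = {}
    for word, label in zip(words, labels):
        clusters.setdefault(label, []).append(word)
    return list(clusters.values())

def dirbust(base_url, clusters):
    found = []
    # Probe one candidate per cluster, then exhaust the clusters that produced hits.
    for cluster in sorted(clusters, key=len, reverse=True):
        probe = cluster[0]
        if requests.get(f"{base_url}/{probe}", timeout=5).status_code != 404:
            found.append(probe)
            found += [w for w in cluster[1:]
                      if requests.get(f"{base_url}/{w}", timeout=5).status_code != 404]
    return found
```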
|
Decision-making policies for agents are often synthesized with the constraint
that a formal specification of behaviour is satisfied. Here we focus on
infinite-horizon properties. On the one hand, Linear Temporal Logic (LTL) is a
popular example of a formalism for qualitative specifications. On the other
hand, Steady-State Policy Synthesis (SSPS) has recently received considerable
attention as it provides a more quantitative and more behavioural perspective
on specifications, in terms of the frequency with which states are visited.
Finally, rewards provide a classic framework for quantitative properties. In
this paper, we study Markov decision processes (MDP) with the specification
combining all these three types. The derived policy maximizes the reward among
all policies ensuring the LTL specification with the given probability and
adhering to the steady-state constraints. To this end, we provide a unified
solution reducing the multi-type specification to a multi-dimensional long-run
average reward. This is enabled by Limit-Deterministic B\"uchi Automata (LDBA),
recently studied in the context of LTL model checking on MDP, and allows for an
elegant solution through a simple linear programme. The algorithm also extends
to the general $\omega$-regular properties and runs in time polynomial in the
sizes of the MDP as well as the LDBA.
|
We study the design of revenue-maximizing bilateral trade mechanisms in the
correlated private value environment. We assume the designer only knows the
expectations of the agents' values, but knows neither the marginal distribution
nor the correlation structure. The performance of a mechanism is evaluated in
the worst-case over the uncertainty of joint distributions that are consistent
with the known expectations. Among all dominant-strategy incentive compatible
and ex-post individually rational mechanisms, we provide a complete
characterization of the maxmin trade mechanisms and the worst-case joint
distributions.
|
The design of moderators and cold sources of neutrons is a key point in
research-reactor physics, requiring extensive knowledge of the scattering
properties of very important light molecular liquids such as methane, hydrogen
and their deuterated counterparts. Inelastic scattering measurements constitute
the basic source of such information but are difficult to perform, the more so
when high accuracy is required, and additional experimental information is
scarce. The need of data covering as large as possible portions of the
kinematic Q-E plane thus pushes towards the use of computable models, validated
by testing them, mainly, against integral quantities (either known from theory
or measured) such as spectral moments and total cross section data. A few
recent experiments demonstrated that, at least for the self contribution, which
dominates in the incoherent scattering case of hydrogen, accurate calculations
can be performed by means of quantum simulations of the velocity
autocorrelation function. This method is shown here to be by far superior to
the use of standard analytical models devised, although rather cleverly, for
generic classical samples. The neutron dynamic structure factor (and
consequently the well-known $S(\alpha,\beta)$) of parahydrogen and deuterium,
suitable for use in packages like NJOY, are given and shown to agree very well
with total cross section measurements and expected quantum properties.
|