Numerical solutions of the Enskog-Vlasov (EV) equation are used to determine
the velocity distribution function of atoms spontaneously evaporating into
near-vacuum conditions. It is found that an accurate approximation is provided
by a half-Maxwellian including a drift velocity combined with different
characteristic temperatures for the velocity components normal and parallel to
the liquid-vapor interface. The drift velocity and the temperature anisotropy
diminish as the liquid bulk temperature decreases but persist at relatively low
temperatures, corresponding to a vapor behaviour which is only slightly
non-ideal. Deviations from the undrifted isotropic half-Maxwellian are shown to
be consequences of collisions in the liquid-vapor interface which
preferentially backscatter atoms with lower normal-velocity component.
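As a concrete illustration, the following sketch draws velocities from such a drifted anisotropic half-Maxwellian; the drift and the two temperatures are placeholder values, not the fitted Enskog-Vlasov results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical parameters in reduced units (m = kB = 1); the actual fitted
# values come from the Enskog-Vlasov solutions, not from this sketch.
u, T_normal, T_parallel = 0.3, 0.8, 1.0

def sample_velocities(n):
    """Draw n velocities from a drifted anisotropic half-Maxwellian:
    vz Gaussian(u, T_normal) truncated to vz > 0; vx, vy Gaussian(0, T_parallel)."""
    vz = np.empty(0)
    while vz.size < n:                     # rejection: keep only vz > 0
        draw = rng.normal(u, np.sqrt(T_normal), 2 * n)
        vz = np.concatenate([vz, draw[draw > 0]])
    vz = vz[:n]
    vx = rng.normal(0.0, np.sqrt(T_parallel), n)
    vy = rng.normal(0.0, np.sqrt(T_parallel), n)
    return np.column_stack([vx, vy, vz])

v = sample_velocities(100_000)
print("mean normal velocity:", v[:, 2].mean())   # exceeds u due to truncation
```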
|
Brain tissue is a heterogeneous material, consisting of a soft matrix filled
with cerebrospinal fluid. The interactions between these components, and the
complexity of each of them, are responsible for the non-linear rate-dependent
behaviour that characterizes one of the most complex tissues in nature. Here, we
investigate the influence of the cutting rate on the fracture properties of
brain, through wire cutting experiments. We also present a model for the
rate-dependent behaviour of fracture propagation in soft materials, which
comprises the effects of fluid interaction through a poro-hyperelastic
formulation. The method is developed in the framework of finite strain
continuum mechanics, implemented in a commercial finite element code, and
applied to the case of an edge-crack remotely loaded by a controlled
displacement. Experimental and numerical results both show a toughening effect
with increasing rates, which is linked to the energy dissipated by the
fluid-solid interactions in the process zone ahead of the crack.
|
In this paper, we design decentralized control strategies for the
two-dimensional movement of autonomous vehicles on lane-free roads. The bicycle
kinematic model is used to model the dynamics of the vehicles, and each vehicle
determines its control input based only on its own speed and on the distance
from other (adjacent) vehicles and the boundary of the road. Potential
functions and Barbalat's lemma are employed to prove the following properties,
which are ensured by the proposed controller: (i) the vehicles do not collide
with each other or with the boundary of the road; (ii) the speeds of all
vehicles are always positive, i.e., no vehicle moves backwards at any time;
(iii) the speeds of all vehicles remain below a given speed limit; (iv) all
vehicle speeds converge to a given longitudinal speed set-point; and (v) the
accelerations, lateral speeds, and orientations of all vehicles tend to zero.
The efficiency of the proposed 2-D cruise controllers is illustrated by means
of numerical examples.
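As a rough illustration of the potential-function idea (not the paper's controllers, which use the bicycle model and Barbalat's lemma), here is a minimal sketch with a hypothetical barrier potential and single-integrator dynamics:

```python
import numpy as np

# Hypothetical barrier potential V(d) = k/(d - L): blows up as the gap d
# between two vehicles (or a vehicle and the road boundary) approaches the
# safety margin L. Gains and margins are illustrative, not the paper's.
L, k = 2.0, 1.0

def repulsive_force(p_i, p_j):
    """Negative gradient of V with respect to p_i: pushes vehicle i away from j."""
    diff = p_i - p_j
    d = np.linalg.norm(diff)
    return (k / (d - L) ** 2) * (diff / d)

# Toy single-integrator rollout (the paper uses the bicycle kinematic model
# and proves safety and convergence with Barbalat's lemma):
p = np.array([[0.0, 1.0], [3.0, 1.2]])
v_set = np.array([1.0, 0.0])               # longitudinal speed set-point
dt = 0.01
for _ in range(200):
    f = repulsive_force(p[0], p[1])
    p[0] += dt * (v_set + f)
    p[1] += dt * (v_set - f)
print("final gap:", np.linalg.norm(p[0] - p[1]))
```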
|
A multi-modal framework to generate user intention distributions when
operating a mobile vehicle is proposed in this work. The model learns from past
observed trajectories and leverages traversability information derived from the
visual surroundings to produce a set of future trajectories, suitable to be
directly embedded into a perception-action shared control strategy on a mobile
agent, or as a safety layer to supervise the prudent operation of the vehicle.
We base our solution on a conditional Generative Adversarial Network with
Long-Short Term Memory cells to capture trajectory distributions conditioned on
past trajectories, further fused with traversability probabilities derived from
visual segmentation with a Convolutional Neural Network. The proposed
data-driven framework results in a significant reduction in error of the
predicted trajectories (versus the ground truth) from comparable strategies in
the literature (e.g. Social-GAN) that fail to account for information other
than the agent's past history. Experiments were conducted on a dataset
collected with a custom wheelchair model built onto the open-source urban
driving simulator CARLA, also demonstrating that the proposed framework can be used
with a small, un-annotated dataset.
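A minimal sketch of the fusion step, with hypothetical stand-ins for the GAN sampler and the segmentation-derived traversability map: candidate trajectories are scored by the traversability of the cells they cross.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: in the paper the candidates come from a conditional
# GAN with LSTM cells and the map from a CNN segmentation of the surroundings.
def sample_candidate_trajectories(n, horizon=8):
    steps = rng.normal([0.4, 0.0], 0.2, size=(n, horizon, 2))
    return np.cumsum(steps, axis=1)          # (n, horizon, 2) future positions

traversability = rng.uniform(0.2, 1.0, size=(50, 50))   # P(cell is drivable)

def fuse_with_traversability(trajs, grid, res=0.25):
    """Score each trajectory by the mean traversability of visited cells."""
    idx = np.clip((trajs / res).astype(int) + grid.shape[0] // 2,
                  0, grid.shape[0] - 1)
    return grid[idx[..., 0], idx[..., 1]].mean(axis=1)

trajs = sample_candidate_trajectories(64)
scores = fuse_with_traversability(trajs, traversability)
best = trajs[np.argmax(scores)]      # candidate handed to the shared controller
print(best[-1])
```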
|
Several physical mechanisms are involved in excavating granular materials
beneath a vertical jet of gas. These occur, for example, beneath the exhaust
plume of a rocket landing on the soil of the Moon or Mars. A series of
experiments and simulations have been performed to provide a detailed view of
the complex gas/soil interactions. Measurements have also been taken from the
Apollo lunar landing videos and from photographs of the resulting terrain, and
these help to demonstrate how the interactions extrapolate into the lunar
environment. It is important to understand these processes at a fundamental
level to support the on-going design of higher-fidelity numerical simulations
and larger-scale experiments. These are needed to enable future lunar
exploration wherein multiple hardware assets will be placed on the Moon within
short distances of one another. The high-velocity spray of soil from landing
spacecraft must be accurately predicted and controlled lest it erosively damage
the surrounding hardware.
|
Terahertz (THz) emission spectroscopy is a powerful method that allows one to
measure the ultrafast dynamics of polarization, current, or magnetization in a
material based on THz emission from the material. However, the practical
implementation of this method can be challenging, and can result in significant
errors in the reconstruction of the quantity of interest. Here, we
experimentally and theoretically demonstrate a rigorous method of signal
reconstruction in THz emission spectroscopy, and describe the main experimental
and theoretical sources of reconstruction error. We identify the linear
line-of-sight geometry of the THz emission spectrometer as the optimal
configuration for accurate, fully calibrated THz signal reconstruction. As an
example, we apply our reconstruction method to an ultrafast THz magnetometry
experiment, where we recover the ultrafast magnetization dynamics in a
photoexcited iron film, including both its temporal shape and absolute
magnitude.
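The inversion at the heart of such a reconstruction can be sketched as a regularized spectral deconvolution; the transfer function and signals below are synthetic placeholders, not the spectrometer's calibration.

```python
import numpy as np

t = np.linspace(-2e-12, 6e-12, 2048)         # time axis, seconds
dt = t[1] - t[0]

# Synthetic "true" source dynamics and a made-up detector transfer function.
source = np.exp(-((t - 1e-12) / 4e-13) ** 2)
freq = np.fft.rfftfreq(t.size, dt)
H = 1.0 / (1.0 + 1j * freq / 2e12)           # hypothetical low-pass response

measured = np.fft.irfft(np.fft.rfft(source) * H, n=t.size)
measured += 0.01 * np.random.default_rng(2).normal(size=t.size)

# Tikhonov-regularized deconvolution: S = M * conj(H) / (|H|^2 + eps)
eps = 1e-3
S = np.fft.rfft(measured) * np.conj(H) / (np.abs(H) ** 2 + eps)
reconstructed = np.fft.irfft(S, n=t.size)
print("peak error:", np.abs(reconstructed - source).max())
```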
|
We demonstrate an external cavity laser formed by combining a silicon nitride
photonic integrated circuit with a reflective semiconductor optical amplifier.
The laser uses an alignment tolerant edge coupler formed by a multi-mode
waveguide splitter right at the edge of the silicon nitride chip that relaxes
the required alignment to the III-V gain chip and equally splits the power
among its two output waveguides. Both the ground and the first-order mode are
excited in the coupler and reach the quadrature condition at the waveguide
junction, ensuring that equal power is coupled to both. Two high-quality-factor
ring resonators arranged in Vernier configuration close a Sagnac loop between
the two waveguides. In addition to wideband frequency tuning, they result in a
longer effective cavity length. The alignment tolerant coupler increases the
alignment tolerance in the two directions parallel to the chip surface by a
factor of 3 relative to conventional edge couplers, making it ideal for gain chip
integration via pick-and-place technology. Lasing is maintained in a
misalignment range of $\pm$6 $\mu$m in the direction along the edge of the
chip. A Lorentzian laser linewidth of 42 kHz is achieved.
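For orientation, a back-of-the-envelope Vernier estimate (the FSR values are illustrative, not the device parameters): two rings with slightly different free spectral ranges extend the effective range to FSR1*FSR2/|FSR1-FSR2|.

```python
# Illustrative Vernier-effect estimate (values are not the device parameters).
fsr1, fsr2 = 100e9, 105e9                    # ring free spectral ranges, Hz
vernier_fsr = fsr1 * fsr2 / abs(fsr1 - fsr2)
print(f"effective Vernier FSR: {vernier_fsr / 1e12:.1f} THz")
```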
|
Engineering Thermodynamics has been the core course of many science and
engineering majors around the world, including energy and power, mechanical
engineering, civil engineering, aerospace, cryogenic refrigeration, food
engineering, chemical engineering, and environmental engineering, among which
the gas power cycle is one of the important topics. However, many Engineering
Thermodynamics textbooks focus only on evaluating the thermal efficiency of gas
power cycles, while the important concept of specific cycle work is ignored.
Based on the generalized temperature-entropy diagram for gas power cycles
proposed by the authors, an ideal Otto cycle and an ideal Miller-Diesel cycle
are taken as examples for the thermodynamic analysis of gas power cycles. The
optimum compression ratio (or pressure ratio) for the maximum specific cycle
work or the maximum mean effective pressure is analyzed and determined. We
conclude that for the ideal Otto and Miller-Diesel cycles, and also for other
gas power cycles in mobile applications, operation at the maximum specific
cycle work or the maximum mean effective pressure, rather than at the highest
efficiency, is more economical and more reasonable. This very important
concept, i.e., the optimum compression (or pressure) ratio for gas power
cycles, should be emphasized in the Engineering Thermodynamics teaching
process and in future revised or newly edited textbooks, in order to better
guide engineering applications.
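The optimum can be made concrete with an air-standard Otto cycle sweep: for fixed minimum and maximum temperatures, the specific work w = cv[(T3 - T2) - (T4 - T1)] peaks at r^(gamma-1) = sqrt(T3/T1), while the efficiency keeps rising with r. The temperatures and property values below are illustrative assumptions.

```python
import numpy as np

gamma, cv = 1.4, 0.718          # air-standard assumptions, cv in kJ/(kg K)
T1, T3 = 300.0, 1800.0          # illustrative minimum/maximum temperatures, K

r = np.linspace(2, 30, 500)     # compression ratio sweep
T2 = T1 * r ** (gamma - 1)
T4 = T3 / r ** (gamma - 1)
w = cv * ((T3 - T2) - (T4 - T1))             # specific cycle work, kJ/kg
eta = 1 - r ** (1 - gamma)                   # thermal efficiency

r_opt = r[np.argmax(w)]
print(f"numerical optimum r = {r_opt:.2f}")
print(f"analytic optimum  r = {(T3 / T1) ** (1 / (2 * (gamma - 1))):.2f}")
# Efficiency rises monotonically with r, but w peaks: operating at the peak
# of w (or of mean effective pressure) is the economic choice argued above.
```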
|
We investigate the potential of a recently proposed model for 3D compressible
MHD turbulence (Chevillard et al. 2010; Durrive et al. 2021) to be used as a
tool to statistically characterize 2D and 3D turbulent data. This model is
parametrized by a dozen free (intuitive, physically motivated) parameters,
which control the statistics of the fields (density, velocity and magnetic
fields). The present study is a proof of concept: (i) we restrict
ourselves to the incompressible hydrodynamical part of the model, (ii) we
take centroid velocity maps as our data, and (iii) we let only three of the
free parameters vary (namely the correlation length, the Hurst parameter and
the intermittency parameter). Within this framework, we demonstrate that, given
a centroid velocity map, we can find in an automated manner (i.e. by a Markov
Chain Monte Carlo analysis) values of the parameters such that the model
resembles the given map, i.e. which reproduces its statistics fairly well.
Hence, thanks to this procedure, one may characterize statistically, and thus
compare, various turbulent data. In other words, we show how this model may be
used as a metric to compare observational or simulated data sets. In addition,
because this model is numerically particularly fast (nearly 500 times faster
than the numerical simulation we use to generate our reference data) it may be
used as a surrogate model. Finally, by this process we also initiate the first
systematic exploration of the parameter space of this model. Doing so, we show
how the parameters impact the visual and the statistical properties of centroid
velocity maps, and exhibit the correlations between the various parameters,
providing new insight into the model.
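A toy Metropolis sketch of the automated fitting step; the forward model, summary statistics, and likelihood below are simplified placeholders for the actual turbulence model and MCMC setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(theta, shape=(64, 64)):
    """Placeholder for the turbulence model: a correlated Gaussian field whose
    spectral slope mimics a Hurst-like parameter H at correlation length L."""
    L, H = theta
    k = np.fft.fftfreq(shape[0])[:, None] ** 2 + np.fft.fftfreq(shape[1]) ** 2
    amp = (k + (1.0 / L) ** 2) ** (-(H + 1) / 2)
    phase = rng.uniform(0, 2 * np.pi, shape)
    return np.fft.ifft2(np.sqrt(amp) * np.exp(1j * phase)).real

def summary(field):
    return np.array([field.std(), np.abs(np.diff(field, axis=0)).mean()])

target = summary(forward_model([20.0, 0.7]))   # "observed" map statistics

def log_like(theta):
    if not (1 < theta[0] < 60 and 0 < theta[1] < 1):
        return -np.inf
    return -np.sum((summary(forward_model(theta)) - target) ** 2) / 1e-2

theta = np.array([10.0, 0.5])
for _ in range(2000):                          # Metropolis random walk
    prop = theta + rng.normal(0, [1.0, 0.02])
    if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
        theta = prop
print("recovered parameters:", theta)
```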
|
An essential problem in swarm robotics is how members of the swarm know the
positions of other robots. The main aim of this research is to develop a
cost-effective and simple vision-based system to detect the range, bearing, and
heading of the robots inside a swarm using a multi-purpose passive landmark. A
small Zumo robot equipped with a Raspberry Pi and a PiCamera is utilized for the
implementation of the algorithm, and different kinds of multi-purpose passive
landmarks with nonsymmetrical patterns, which give reliable information about
the range, bearing and heading in a single unit, are designed. By comparing the
recorded features obtained from image analysis of the landmark through
systematical experimentation and the actual measurements, correlations are
obtained, and algorithms converting those features into range, bearing and
heading are designed. The reliability and accuracy of algorithms are tested and
errors are found within an acceptable range.
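The feature-to-measurement conversion typically follows pinhole-camera relations such as those below; the focal length, landmark size, and fitted slope are hypothetical stand-ins for the correlations obtained experimentally.

```python
import math

# Hypothetical calibration constants (the paper fits these from experiments).
FOCAL_PX = 620.0          # camera focal length in pixels
LANDMARK_W = 0.10         # physical landmark width, metres
IMG_CX = 320.0            # principal point x-coordinate, pixels

def range_from_width(pixel_width):
    """Pinhole model: apparent width shrinks as 1/range."""
    return FOCAL_PX * LANDMARK_W / pixel_width

def bearing_from_offset(pixel_x):
    """Bearing from the horizontal offset of the landmark centre."""
    return math.atan2(pixel_x - IMG_CX, FOCAL_PX)

def heading_from_asymmetry(left_w, right_w):
    """Nonsymmetric pattern: the ratio of the two half-widths encodes the
    landmark's heading (a linear fit is assumed here for illustration)."""
    return 1.5 * (left_w - right_w) / (left_w + right_w)  # radians, fitted slope

print(range_from_width(80.0), bearing_from_offset(400.0),
      heading_from_asymmetry(45.0, 35.0))
```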
|
We investigate a well defined heterostructure constituted by magnetic Fe
layers sandwiched between graphene (Gr) and Ir(111). The challenging task of
avoiding Fe-C solubility and Fe-Ir intermixing has been achieved with atomically
controlled Fe intercalation at moderate temperatures below 500 K. Upon
intercalation of a single ordered Fe layer in registry with the Ir substrate,
an intermixing of the Gr bands and Fe d states breaks the symmetry of the Dirac
cone, with a downshift in energy of the apex by about 3 eV, and well-localized
Fe intermixed states induced in the energy region just below the Fermi level.
First principles electronic structure calculations show a large spin splitting
of the Fe states, resulting in a majority spin channel almost fully occupied
and strongly hybridized with Gr {\pi} states. X-ray magnetic circular dichroism
on the Gr/Fe/Ir heterostructure reveals an ordered spin configuration with a
ferromagnetic response of Fe layer(s), with enhanced spin and orbital
configurations with respect to the bcc-Fe bulk values. The magnetization
switches from a perpendicular easy magnetization axis when the Fe single layer
is lattice matched with the Ir(111) surface to a parallel one when the Fe thin
film is almost commensurate with graphene.
|
The dynamics of the coupled electron-nuclear spin system is studied in an
ensemble of singly-charged (In,Ga)As/GaAs quantum dots (QDs) using periodic
optical excitation at 1 GHz repetition rate. In combination with the
electron-nuclei interaction, the highly repetitive excitation allows us to lock
the electron spins into magnetic resonance in a transverse external magnetic
field. Sweeping the field to higher values, the locking leads to an effective
"diamagnetic" response of significant strength due to dynamic nuclear
polarization, which shields the QD electrons at least partly from the external
field and can even keep the internal magnetic field constant up to 1.3 T field
variation. We model the effect through a magnetic field-dependent polarization
rate of the nuclei, from which we suggest a strategy for adjusting the nuclear
polarization through the detuning between optical excitation and electronic
transition, in addition to tuning the magnetic field.
|
We develop a novel approach to non-relativistic closed bosonic string theory
that is based on a string $1/c^2$ expansion of the relativistic string, where
$c$ is the speed of light. This approach has the benefit that one does not need
to take a limit of a string in a near-critical Kalb-Ramond background. The
$1/c^2$-expanded Polyakov action at next-to-leading order reproduces the known
action of non-relativistic string theory provided that the target space obeys
an appropriate foliation constraint. We compute the spectrum in a flat target
space, with one circle direction that is wound by the string, up to
next-to-leading order and show that it reproduces the spectrum of the
Gomis-Ooguri string.
|
In this note we continue giving the characterisation of weights for
two-weight Hardy inequalities to hold on general metric measure spaces
possessing polar decompositions. Since there may be no differentiable structure
on such spaces, the inequalities are given in the integral form in the spirit
of Hardy's original inequality. This is a continuation of our paper [M.
Ruzhansky and D. Verma. Hardy inequalities on metric measure spaces, Proc. R.
Soc. A., 475(2223):20180310, 2018] where we treated the case $p\leq q$. Here
the remaining range $p>q$ is considered, namely, $0<q<p$, $1<p<\infty.$ We give
examples obtaining new weighted Hardy inequalities on $\mathbb R^n$, on
homogeneous groups, on hyperbolic spaces, and on Cartan-Hadamard manifolds. We
note that doubling conditions are not required for our analysis.
|
In continual learning, a system learns from non-stationary data streams or
batches without catastrophic forgetting. While this problem has been heavily
studied in supervised image classification and reinforcement learning,
continual learning in neural networks designed for abstract reasoning has not
yet been studied. Here, we study continual learning of analogical reasoning.
Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are
commonly used to measure non-verbal abstract reasoning in humans, and recently
offline neural networks for the RPM problem have been proposed. In this paper,
we establish experimental baselines, protocols, and forward and backward
transfer metrics to evaluate continual learners on RPMs. We employ experience
replay to mitigate catastrophic forgetting. Prior work using replay for image
classification tasks has found that selectively choosing the samples to replay
offers little, if any, benefit over random selection. In contrast, we find that
selective replay can significantly outperform random selection for the RPM
task.
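A minimal sketch contrasting random and selective replay; the selection rule shown (prefer high-loss samples) is one plausible criterion, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

class ReplayBuffer:
    """Tiny replay buffer contrasting random and selective sampling.
    The 'selective' rule here (prefer high-loss samples) is one plausible
    criterion; the paper compares several selection strategies on RPMs."""

    def __init__(self):
        self.x, self.y, self.loss = [], [], []

    def add(self, x, y, loss):
        self.x.append(x); self.y.append(y); self.loss.append(loss)

    def sample(self, k, selective=False):
        if selective:
            idx = np.argsort(self.loss)[-k:]          # hardest examples
        else:
            idx = rng.choice(len(self.x), size=k, replace=False)
        return [self.x[i] for i in idx], [self.y[i] for i in idx]

buf = ReplayBuffer()
for i in range(100):
    buf.add(np.zeros(3), i % 8, rng.exponential())
xs, ys = buf.sample(8, selective=True)
print(ys)
```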
|
A set $D$ of vertices in an isolate-free graph $G$ is a semitotal dominating
set of $G$ if $D$ is a dominating set of $G$ and every vertex in $D$ is within
distance $2$ from another vertex of $D$. The semitotal domination number of $G$
is the minimum cardinality of a semitotal dominating set of $G$ and is denoted
by $\gamma_{t2}(G)$. In this paper after computation of semitotal domination
number of specific graphs, we count the number of this kind of dominating sets
of arbitrary size in some graphs.
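For small graphs the definition can be verified directly; a brute-force sketch (using networkx) that computes the semitotal domination number by enumerating vertex subsets of increasing size:

```python
from itertools import combinations
import networkx as nx

def is_semitotal_dominating(G, D, dist):
    Dset = set(D)
    # dominating: every vertex is in D or adjacent to a vertex of D
    if any(v not in Dset and not Dset & set(G[v]) for v in G):
        return False
    # semitotal: every vertex of D is within distance 2 of another D-vertex
    return all(any(u != v and dist[v].get(u, 3) <= 2 for u in Dset)
               for v in Dset)

def semitotal_domination_number(G):
    """Exponential-time check, suitable only for small graphs."""
    dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=2))
    for k in range(2, G.number_of_nodes() + 1):   # at least 2 vertices needed
        for D in combinations(G.nodes, k):
            if is_semitotal_dominating(G, D, dist):
                return k
    return None

print(semitotal_domination_number(nx.cycle_graph(6)))   # -> 3
```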
|
Many techniques have been proposed for image reconstruction in medical
imaging that aim to recover high-quality images especially from limited or
corrupted measurements. Model-based reconstruction methods have been
particularly popular (e.g., in magnetic resonance imaging and tomographic
modalities) and exploit models of the imaging system's physics together with
statistical models of measurements, noise and often relatively simple object
priors or regularizers. For example, sparsity or low-rankness based
regularizers have been widely used for image reconstruction from limited data
such as in compressed sensing. Learning-based approaches for image
reconstruction have garnered much attention in recent years and have shown
promise across biomedical imaging applications. These methods include synthesis
dictionary learning, sparsifying transform learning, and different forms of
deep learning involving complex neural networks. We briefly discuss classical
model-based reconstruction methods and then review reconstruction methods at
the intersection of model-based and learning-based paradigms in detail. This
review includes many recent methods based on unsupervised and supervised
learning, as well as a framework to combine multiple types of learned models
together.
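As a concrete instance of a sparsity-regularized model-based method, here is a short ISTA sketch for min_x ||Ax - y||^2 + lambda*||x||_1 on a toy compressed-sensing problem (sizes and matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 200, 80, 8                    # signal size, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
x = np.zeros(n)
for _ in range(500):                    # ISTA: gradient step + soft threshold
    z = x - step * A.T @ (A @ x - y)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```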
|
Common statistical measures of uncertainty such as $p$-values and confidence
intervals quantify the uncertainty due to sampling, that is, the uncertainty
due to not observing the full population. However, sampling is not the only
source of uncertainty. In practice, distributions change between locations and
across time. This makes it difficult to gather knowledge that transfers across
data sets. We propose a measure of uncertainty or instability that quantifies
the distributional instability of a statistical parameter with respect to
Kullback-Leibler divergence, that is, the sensitivity of the parameter under
general distributional perturbations within a Kullback-Leibler divergence ball.
In addition, we propose measures to elucidate the instability of parameters
with respect to directional or variable-specific shifts. Measuring instability
with respect to directional shifts can be used to detect the type of shifts a
parameter is sensitive to. We discuss how such knowledge can inform data
collection for improved estimation of statistical parameters under shifted
distributions. We evaluate the performance of the proposed measure on real data
and show that it can elucidate the distributional (in-)stability of a parameter
with respect to certain shifts and can be used to improve the accuracy of
estimation under shifted distributions.
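For the mean of a distribution, the worst-case value within a KL ball can be found by exponential tilting; the empirical sketch below illustrates the general idea and is not the paper's estimator.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
x = rng.normal(1.0, 2.0, size=10_000)   # sample from the reference P
rho = 0.1                               # KL-divergence budget

def tilted(t):
    """Exponentially tilted reweighting dQ/dP ~ exp(t*x) of the sample."""
    w = np.exp(t * (x - x.mean()))      # centred for numerical stability
    w /= w.sum()
    mean = np.sum(w * x)
    kl = np.sum(w * np.log(w * x.size)) # KL(Q_t || empirical P)
    return mean, kl

# Pick t so the tilted distribution sits on the boundary of the KL ball.
t_star = brentq(lambda t: tilted(t)[1] - rho, 1e-6, 2.0)
print("mean under P:", x.mean(),
      "worst-case mean in KL ball:", tilted(t_star)[0])
```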
|
We propose the gentle measurement principle (GMP) as one of the principles at
the foundation of quantum mechanics. It asserts that if a set of states can be
distinguished with high probability, they can be distinguished by a measurement
that leaves the states almost invariant, including correlation with a reference
system. While GMP is satisfied in both classical and quantum theories, we show,
within the framework of general probabilistic theories, that it imposes strong
restrictions on the laws of physics. First, the measurement uncertainty of a
pair of observables cannot be significantly larger than the preparation
uncertainty. Consequently, the strength of the CHSH nonlocality cannot be
maximal. The parameter in the stretched quantum theory, a family of general
probabilistic theories that includes the quantum theory, is also limited.
Second, the conditional entropy defined in terms of a data compression theorem
satisfies the chain inequality. Not only does it imply information causality
and Tsirelson's bound, but it singles out the quantum theory from the stretched
one. All these results show that GMP would be one of the principles at the
heart of quantum mechanics.
|
We give some formulas of poly-Cauchy numbers by the $r$-Stirling transform.
In the case of the classical or poly-Bernoulli numbers, the formulas are with
Stirling numbers of the first kind. In our case of the classical or poly-Cauchy
numbers, the formulas are with Stirling numbers of the second kind. We also
discuss annihilation formulas for poly-Cauchy numbers with negative indices.
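For reference, the classical Cauchy numbers already display the Stirling-number connection: c_n = \int_0^1 x(x-1)...(x-n+1) dx = \sum_k s(n,k)/(k+1) with signed Stirling numbers of the first kind, which the following sympy check confirms (this is the classical case, not the poly-Cauchy formulas of the paper).

```python
from sympy import integrate, symbols, Rational, ff
from sympy.functions.combinatorial.numbers import stirling

x = symbols('x')
for n in range(1, 7):
    # c_n as the integral of the falling factorial x(x-1)...(x-n+1)
    via_integral = integrate(ff(x, n), (x, 0, 1))
    # c_n via signed Stirling numbers of the first kind
    via_stirling = sum(Rational(stirling(n, k, kind=1, signed=True), k + 1)
                       for k in range(n + 1))
    assert via_integral == via_stirling
    print(n, via_integral)
```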
|
In this paper, we introduce a complete system for autonomous flight of
quadrotors in dynamic environments with onboard sensing. Extended from existing
work, we develop an occlusion-aware dynamic perception method based on depth
images, which classifies obstacles as dynamic or static. To represent generic
dynamic environments, we model dynamic objects with moving ellipsoids
and fuse static ones into an occupancy grid map. To achieve dynamic avoidance,
we design a planning method composed of modified kinodynamic path searching and
gradient-based optimization. The method leverages manually constructed
gradients without maintaining a signed distance field (SDF), allowing the
planning procedure to finish in milliseconds. We integrate the above methods
into a customized quadrotor system and thoroughly test it in real-world
experiments, verifying its effective collision avoidance in dynamic
environments.
|
The Banach-Picard iteration is widely used to find fixed points of locally
contractive (LC) maps. This paper extends the Banach-Picard iteration to
distributed settings; specifically, we assume the map of which the fixed point
is sought to be the average of individual (not necessarily LC) maps held by a
set of agents linked by a communication network. An additional difficulty is
that the LC map is not assumed to come from an underlying optimization problem,
which prevents exploiting strong global properties such as convexity or
Lipschitzianity. Yet, we propose a distributed algorithm and prove its
convergence, in fact showing that it maintains the linear rate of the standard
Banach-Picard iteration for the average LC map. As another contribution, our
proof imports tools from perturbation theory of linear operators, which, to the
best of our knowledge, had not been used before in the theory of distributed
computation.
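A gradient-tracking-style sketch of such a distributed iteration, in which each agent applies only its own map while a consensus variable tracks the average residual; this is an illustrative variant under a doubly stochastic mixing matrix, not the exact algorithm of the paper.

```python
import numpy as np

# Three agents on a ring with a doubly stochastic mixing matrix W.
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

# Individual scalar affine maps f_i(x) = a_i x + b_i; only their AVERAGE
# (slope 0.2) is contractive, mirroring the setting of the paper.
a = np.array([0.2, 0.9, -0.5])
b = np.array([1.0, -0.3, 0.5])
g = lambda x: x - (a * x + b)        # residual maps g_i(x) = x - f_i(x)

alpha = 0.5
x = np.zeros(3)
y = g(x)                             # tracking variable: estimates avg residual
for _ in range(300):
    x_new = W @ x - alpha * y        # consensus step plus correction
    y = W @ y + g(x_new) - g(x)      # dynamic average consensus on residuals
    x = x_new

print("agents' estimates:", x)
print("fixed point of average map:", b.mean() / (1 - a.mean()))
```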
|
Cosmos has always sparked human curiosity to unveil and speculate its
fascinating secrets. This curiosity has ultimately opened a window to other
worlds. After years of observation, computation, and data analysis, scientists
have revealed the diversity in exoplanets that have been helpful in the
characterization and further investigation of biosignatures and
technosignatures. This article presents some of the scientific advances made in
extraterrestrial planetary science that promise to see through the thick
clouds of these mysterious planets in the near future.
|
Let $G$ be a group with identity $e$. Let $R$ be a $G$-graded commutative
ring with identity and $M$ a graded $R$-module. In this paper, we introduce the
concept of graded $I_{e}$-prime submodule as a generalization of a graded prime
submodule for $I=\oplus_{g\in G}I_{g}$ a fixed graded ideal of $R$. We give a
number of results concerning these classes of graded submodules and their
homogeneous components. A proper graded submodule $N$ of $M$ is said to be a
graded $I_{e}$-prime submodule of $M$ if whenever $r_{g}\in h(R)$ and
$m_{h}\in h(M)$ with $r_{g}m_{h}\in N-I_{e}N,$ then either $r_{g}\in (N:_{R}M)$
or $m_{h}\in N.$
|
Acquisition of training data for standard semantic segmentation is
expensive when it requires that each pixel be labeled. Yet, current methods
significantly deteriorate in weakly supervised settings, e.g. where a fraction
of pixels is labeled or when only image-level tags are available. It has been
shown that regularized losses - originally developed for unsupervised low-level
segmentation and representing geometric priors on pixel labels - can
considerably improve the quality of weakly supervised training. However, many
common priors require optimization stronger than gradient descent. Thus, such
regularizers have limited applicability in deep learning. We propose a new
robust trust region approach for regularized losses improving the
state-of-the-art results. Our approach can be seen as a higher-order
generalization of the classic chain rule. It allows neural network optimization
to use strong low-level solvers for the corresponding regularizers, including
discrete ones.
|
We present the confirmation of a new sub-Neptune close to the transition
between super-Earths and sub-Neptunes transiting the M2 dwarf TOI-269 (TIC
220479565, V = 14.4 mag, J = 10.9 mag, Rstar = 0.40 Rsun, Mstar = 0.39 Msun, d
= 57 pc). The exoplanet candidate has been identified in multiple TESS sectors,
and validated with high-precision spectroscopy from HARPS and ground-based
photometric follow-up from ExTrA and LCO-CTIO. We determined mass, radius, and
bulk density of the exoplanet by jointly modeling both photometry and radial
velocities with juliet. The transiting exoplanet has an orbital period of P =
3.6977104 +- 0.0000037 days, a radius of 2.77 +- 0.12 Rearth, and a mass of 8.8
+- 1.4 Mearth. Since TOI-269 b lies among the best targets of its category for
atmospheric characterization, it would be interesting to probe the atmosphere
of this exoplanet with transmission spectroscopy in order to compare it to
other sub-Neptunes. With an eccentricity $e = 0.425^{+0.082}_{-0.086}$, TOI-269 b has
one of the highest eccentricities of the exoplanets with periods less than 10
days. The star being likely a few Gyr old, this system does not appear to be
dynamically young. We surmise TOI-269 b may have acquired its high eccentricity
as it migrated inward through planet-planet interactions.
|
As Artificial Intelligence as a Service gains popularity, protecting
well-trained models as intellectual property is becoming increasingly
important. Generally speaking, there are two common protection methods:
ownership verification and usage authorization. In this paper, we propose
Non-Transferable Learning (NTL), a novel approach that captures the exclusive
data representation in the learned model and restricts the model generalization
ability to certain domains. This approach provides effective solutions to both
model verification and authorization. For ownership verification, watermarking
techniques are commonly used but are often vulnerable to sophisticated
watermark removal methods. Our NTL-based model verification approach instead
provides robust resistance to state-of-the-art watermark removal methods, as
shown in extensive experiments for four such methods over the digits,
CIFAR10 & STL10, and VisDA datasets. For usage authorization, prior solutions
focus on authorizing specific users to use the model, but authorized users can
still apply the model to any data without restriction. Our NTL-based
authorization approach instead provides data-centric usage protection by
significantly degrading the performance of usage on unauthorized data. Its
effectiveness is also shown through experiments on a variety of datasets.
|
Neural networks have emerged as a powerful way to approach many practical
problems in quantum physics. In this work, we illustrate the power of deep
learning to predict the dynamics of a quantum many-body system, where the
training is \textit{based purely on monitoring expectation values of
observables under random driving}. The trained recurrent network is able to
produce accurate predictions for driving trajectories entirely different than
those observed during training. As a proof of principle, here we train the
network on numerical data generated from spin models, showing that it can learn
the dynamics of observables of interest without needing information about the
full quantum state. This allows our approach to be applied eventually to actual
experimental data generated from a quantum many-body system that might be open,
noisy, or disordered, without any need for a detailed understanding of the
system. This scheme provides considerable speedup for rapid explorations and
pulse optimization. Remarkably, we show the network is able to extrapolate the
dynamics to times longer than those it has been trained on, as well as to the
infinite-system-size limit.
|
We study the dynamics of the quasi-one-dimensional Ising-Heisenberg
antiferromagnet BaCo2V2O8 under a transverse magnetic field. Combining
inelastic neutron scattering experiments and theoretical analyses by field
theories and numerical simulations, we mainly elucidate the structure of the
spin excitation spectrum in the high field phase, appearing above the quantum
phase transition point mu0Hc ~ 10 T. We find that it is characterized by
collective solitonic excitations superimposed on a continuum. These solitons
are strongly bound in pairs due to the effective staggered field induced by the
nondiagonal g tensor of the compound, and are topologically different from the
fractionalized spinons in the weak field region. The dynamical susceptibility
numerically calculated with the infinite time-evolving block decimation method
shows an excellent agreement with the measured spectra, which enables us to
identify the dispersion branches with elementary excitations. The lowest energy
dispersion has an incommensurate nature and has a local minimum at an
irrational wave number due to the applied transverse field.
|
In this paper, we explore generalizable, perception-to-action robotic
manipulation for precise, contact-rich tasks. In particular, we contribute a
framework for closed-loop robotic manipulation that automatically handles a
category of objects, despite potentially unseen object instances and
significant intra-category variations in shape, size and appearance. Previous
approaches typically build a feedback loop on top of a real-time 6-DOF pose
estimator. However, representing an object with a parameterized transformation
from a fixed geometric template does not capture large intra-category shape
variation. Hence we adopt the keypoint-based object representation proposed in
kPAM for category-level pick-and-place, and extend it to closed-loop
manipulation policies with contact-rich tasks. We first augment keypoints with
local orientation information. Using the oriented keypoints, we propose a novel
object-centric action representation in terms of regulating the linear/angular
velocity or force/torque of these oriented keypoints. This formulation is
surprisingly versatile -- we demonstrate that it can accomplish contact-rich
manipulation tasks that require precision and dexterity for a category of
objects with different shapes, sizes and appearances, such as peg-hole
insertion for pegs and holes with significant shape variation and tight
clearance. With the proposed object and action representation, our framework is
also agnostic to the robot grasp pose and initial object configuration, making
it flexible for integration and deployment.
|
Utilizing intelligent reflecting surface (IRS) was proven to be efficient in
improving the energy efficiency for wireless networks. In this paper, we
investigate the passive beamforming and channel estimation for IRS assisted
wireless communications with low-resolution analog-to-digital converters (ADCs)
at the receiver. We derive the approximate achievable rate by using the
Bussgang theorem. Based on the derived analytical achievable rate expression,
we maximize the achievable rate by using semidefinite programming (SDP),
branch-and-bound (BB), and gradient-based approaches. A maximum likelihood (ML)
estimator is then proposed for channel estimation by considering the
$\mathrm{1}$-bit quantization ADC. Numerical results show that the proposed
beamforming design and channel estimation method significantly outperform the
existing methods.
|
The impact of ram pressure stripping on galaxy evolution is well known.
Recent multi-wavelength data have revealed many examples of galaxies undergoing
stripping, often accompanied with multi-phase tails. As energy transfer in the
multi-phase medium is an outstanding question in astrophysics, galaxies in
stripping are great objects to study. Despite the recent burst of observational
evidence, the relationship between gas in different phases in the tails is
poorly known. Here we report a strong linear correlation between the X-ray
surface brightness and the H$\alpha$ surface brightness of the diffuse gas in
the stripped tails at $\sim$ 10 - 40 kpc scales, with a slope of $\sim$ 3.5.
This discovery provides evidence for the mixing of the stripped interstellar
medium with the hot intracluster medium as the origin of the multi-phase tails.
The established relation in stripped tails, also in comparison with the likely
related correlations in similar environments like galactic winds and X-ray cool
cores, provides an important test for models of energy transfer in the
multi-phase gas. It also indicates the importance of the H$\alpha$ data to
study clumping and turbulence in the intracluster medium.
|
Automated metaphor detection is a challenging task to identify metaphorical
expressions of words in a sentence. To tackle this problem, we adopt
pre-trained contextualized models, e.g., BERT and RoBERTa. To this end, we
propose a novel metaphor detection model, namely metaphor-aware late
interaction over BERT (MelBERT). Our model not only leverages contextualized
word representation but also benefits from linguistic metaphor identification
theories to distinguish between the contextual and literal meaning of words.
Our empirical results demonstrate that MelBERT outperforms several strong
baselines on four benchmark datasets, i.e., VUA-18, VUA-20, MOH-X, and TroFi.
|
Lending decisions are usually made with proprietary models that provide
minimally acceptable explanations to users. In a future world without such
secrecy, what decision support tools would one want to use for justified
lending decisions? This question is timely, since the economy has dramatically
shifted due to a pandemic, and a massive number of new loans will be necessary
in the short term. We propose a framework for such decisions, including a
globally interpretable machine learning model, an interactive visualization of
it, and several types of summaries and explanations for any given decision. The
machine learning model is a two-layer additive risk model, which resembles a
two-layer neural network, but is decomposable into subscales. In this model,
each node in the first (hidden) layer represents a meaningful subscale model,
and all of the nonlinearities are transparent. Our online visualization tool
allows exploration of this model, showing precisely how it came to its
conclusion. We provide three types of explanations that are simpler than, but
consistent with, the global model: case-based reasoning explanations that use
neighboring past cases, a set of features that were the most important for the
model's prediction, and summary-explanations that provide a customized sparse
explanation for any particular lending decision made by the model. Our
framework earned the FICO recognition award for the Explainable Machine
Learning Challenge, which was the first public challenge in the domain of
explainable machine learning.
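A minimal sketch of the model class described, with invented subscales, features, and weights: each subscale is a transparent additive score, and a second layer combines the subscale scores through a logistic link.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical subscales: each is an additive model over a few raw features
# with transparent monotone transforms, mimicking the decomposable design.
def payment_history_subscale(f):
    return (0.8 * np.tanh(f["months_since_delinquency"] / 24.0)
            - 0.5 * np.log1p(f["num_missed_payments"]))

def utilization_subscale(f):
    return -1.2 * f["credit_utilization"] + 0.3 * np.tanh(f["num_accounts"] / 10)

SUBSCALES = [payment_history_subscale, utilization_subscale]
SECOND_LAYER_W = np.array([1.0, 0.7])        # invented second-layer weights
BIAS = -0.2

def risk(features):
    """Two-layer additive risk model: subscale scores -> logistic link."""
    scores = np.array([s(features) for s in SUBSCALES])
    return sigmoid(SECOND_LAYER_W @ scores + BIAS), scores

p, subscale_scores = risk({"months_since_delinquency": 36,
                           "num_missed_payments": 1,
                           "credit_utilization": 0.4,
                           "num_accounts": 6})
print(p, subscale_scores)   # each subscale is individually inspectable
```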
|
The complex nature of lithium-ion battery degradation has led to many machine
learning based approaches to health forecasting being proposed in literature.
However, machine learning can be computationally intensive. Linear approaches
are faster but have previously been too inflexible for successful prognosis.
For both techniques, the choice and quality of the inputs is a limiting factor
of performance. Piecewise-linear models, combined with automated feature
selection, offer a fast and flexible alternative without being as
computationally intensive as machine learning. Here, a piecewise-linear
approach to battery health forecasting was compared to a Gaussian process
regression tool and found to perform equally well. The input feature selection
process demonstrated the benefit of limiting the correlation between inputs.
Further trials found that the piecewise-linear approach was robust to changing
input size and availability of training data.
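A sketch of the two ingredients named, with illustrative thresholds and synthetic data: correlation-capped feature selection followed by a continuous piecewise-linear least-squares fit on a hinge basis.

```python
import numpy as np

rng = np.random.default_rng(7)

def select_features(X, max_corr=0.8):
    """Greedy selection that caps pairwise correlation between inputs."""
    chosen = []
    C = np.corrcoef(X, rowvar=False)
    for j in range(X.shape[1]):
        if all(abs(C[j, k]) < max_corr for k in chosen):
            chosen.append(j)
    return chosen

def piecewise_linear_fit(x, y, knots):
    """Least-squares fit of a continuous piecewise-linear function."""
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])  # hinge basis
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef

X = rng.normal(size=(200, 5)); X[:, 3] = X[:, 0]   # feature 3 is redundant
print("kept features:", select_features(X))        # drops index 3

# Synthetic capacity-fade curve with a knee, plus noise.
cycles = np.linspace(0, 1000, 200)
capacity = 1.0 - 1e-4 * cycles - 4e-4 * np.maximum(cycles - 600, 0)
fit = piecewise_linear_fit(cycles, capacity + 0.005 * rng.normal(size=200),
                           knots=[300, 600])
print("max fit error:", np.abs(fit - capacity).max())
```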
|
Spatial structuring of an optical pulse can lead in some cases upon free
propagation to changes in its temporal profile. For example, introducing
conventional angular dispersion into the field results in the pulse
encountering group-velocity dispersion in free space. However, only limited
control is accessible via this strategy. Here we show that precise and
versatile control can be exercised in free space over the dispersion profile of
so-called `space-time' wave packets: a class of pulsed beams undergirded by
non-differentiable angular dispersion. This abstract mathematical feature
allows us to tune the magnitude and sign of the different dispersion orders
without introducing losses, thereby realizing arbitrary dispersion profiles,
and achieving dispersion values unattainable in optical materials away from
resonance. Unlike optical materials and photonic structures in which the values
of the different dispersion orders are not independent of each other, these
orders are addressable separately using our strategy. These results demonstrate
the versatility of space-time wave packets as a platform for structured light
and point towards their utility in nonlinear and quantum optics.
|
Collaborative filtering (CF) has achieved great success in the field of
recommender systems. In recent years, many novel CF models, particularly those
based on deep learning or graph techniques, have been proposed for a variety of
recommendation tasks, such as rating prediction and item ranking. These newly
published models usually demonstrate their performance in comparison to
baselines or existing models in terms of accuracy improvements. However, others
have pointed out that many newly proposed models are not as strong as expected
and are outperformed by very simple baselines.
This paper proposes a simple linear model based on Matrix Factorization (MF),
called UserReg, which regularizes users' latent representations with explicit
feedback information for rating prediction. We compare the effectiveness of
UserReg with three linear CF models that are widely-used as baselines, and with
a set of recently proposed complex models that are based on deep learning or
graph techniques. Experimental results show that UserReg achieves overall
better performance than the fine-tuned baselines considered and is highly
competitive when compared with other recently proposed models. We conclude that
UserReg can be used as a strong baseline for future CF research.
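A sketch of matrix factorization with an assumed user-side regularizer built from explicit feedback, pulling each user factor toward the centroid of that user's highly rated items; UserReg's exact regularizer may differ.

```python
import numpy as np

rng = np.random.default_rng(8)
n_users, n_items, d = 50, 80, 8
# Sparse synthetic explicit ratings in {1..5}, ~10% density.
R = (rng.uniform(size=(n_users, n_items)) < 0.1) \
    * rng.integers(1, 6, (n_users, n_items))
P = 0.1 * rng.normal(size=(n_users, d))      # user factors
Q = 0.1 * rng.normal(size=(n_items, d))      # item factors
lr, lam, beta = 0.01, 0.05, 0.1

for epoch in range(20):
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        liked = R[u] >= 4                    # explicit-feedback anchor set
        anchor = Q[liked].mean(axis=0) if liked.any() else 0.0
        # SGD step with the assumed user regularizer toward the centroid
        P[u] += lr * (err * Q[i] - lam * P[u] - beta * (P[u] - anchor))
        Q[i] += lr * (err * P[u] - lam * Q[i])

mask = R > 0
print("train RMSE:", np.sqrt((((P @ Q.T) - R)[mask] ** 2).mean()))
```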
|
We establish Arazy-Cwikel type properties for the family of couples
$(\ell^{p},\ell^{q})$, $0\le p<q\le\infty$, and show that $(\ell^{p},\ell^{q})
$ is a Calder\'on-Mityagin couple if and only if $q\ge1$. Moreover, we identify
interpolation orbits of elements with respect to this couple for all $p$ and
$q$ such that $0\le p<q\le\infty$ and obtain a simple positive solution of a
Levitina-Sukochev-Zanin problem, clarifying its connections with whether
$(\ell^{p},\ell^{q})$ has the Calder\'on-Mityagin property or not.
|
The online reconstruction of muon tracks in High Energy Physics experiments
is a highly demanding task, typically performed with programmable logic boards,
such as FPGAs. Complex analytical algorithms are executed in a quasi-real-time
environment to identify, select and reconstruct local tracks in often
noise-rich environments. A novel approach to the generation of local triggers
based on a hybrid combination of Artificial Neural Networks and analytical
methods is proposed, targeting the muon reconstruction for drift tube
detectors. The proposed algorithm exploits Neural Networks to solve otherwise
computationally expensive analytical tasks for the unique identification of
coherent signals and the removal of the geometrical ambiguities. The proposed
approach is deployed on a state-of-the-art FPGA and its performance is
evaluated on simulation and on data collected from cosmic rays.
|
Numerical simulations of monolayer dust crystals in an RF complex plasma were
performed to examine the crystal structure and quantify the effects of
including the collision enhanced ion current in the charging model. A GEC cell
similar to a previous experimental work was modeled for a range of RF voltages,
using a continuum description for the plasma and a particle description for
dust grains. The time history of each dust grain was monitored. The dust charge
was computed using both the OML and the collision enhanced charging (CEC) model
applicable to the sheath region. The dust model accounted for the electric
force, ion drag force, neutral drag force, gravity, and the ion wake. The CEC
model produced a lower charge and lower electric force which agreed better with
the experimental data. Then dust crystals composed of 40 to 100 grains were
modeled and the levitation height and inter-particle spacing of the resulting
crystals were examined. Including the collision enhanced current reduced the
inter-particle spacing but only had a minor effect on the levitation height.
|
We introduce an algebraic model based on the expansion of the determinant of
two matrices, one of which is generic, to check the additivity of Z^d-valued
set functions. Each individual term of the expansion is deformed through a
monomial factor in d indeterminates with exponents defined by the set function.
A family of sparse polynomials is derived from Grassmann-Plücker relations, and
their compatibility is linked to the factorisation of hyperdeterminants in the
ring of Laurent polynomials. It is proved that, in broad generality, this
deformation returns a determinantal expansion if and only if it is induced by a
diagonal matrix of monomials acting as a kernel into the initial determinant
expansion, which guarantees the additivity of the set function. The hypotheses
underlying this result are tested through the construction of counterexamples,
and their implications are explored in terms of complexity reduction, with
special attention to permutations of families of subsets.
|
The Kahn--Saks inequality is a classical result on the number of linear
extensions of finite posets. We give a new proof of this inequality for posets
of width two using explicit injections of lattice paths. As a consequence we
obtain a $q$-analogue, a multivariate generalization and an equality condition
in this case. We also discuss the equality conditions of the Kahn--Saks
inequality for general posets and prove several implications between conditions
conjectured to be equivalent.
|
In quantum metrology, nonlinear many-body interactions can enhance the
precision of Hamiltonian parameter estimation to surpass the Heisenberg
scaling. Here, we consider the estimation of the interaction strength in linear
systems with long-range interactions and using the Kitaev chains as a case
study, we establish a transition from the Heisenberg to super-Heisenberg
scaling in the quantum Fisher information by varying the interaction range. We
further show that quantum control can improve the prefactor of the quantum
Fisher information. Our results explore the advantage of optimal quantum
control and long-range interactions in many-body quantum metrology.
|
The optical domain is a promising field for physical implementation of neural
networks, due to the speed and parallelism of optics. Extreme Learning Machines
(ELMs) are feed-forward neural networks in which only output weights are
trained, while internal connections are randomly selected and left untrained.
Here we report on a photonic ELM based on a frequency-multiplexed fiber setup.
Multiplication by output weights can be performed either offline on a computer,
or optically by a programmable spectral filter. We present both numerical
simulations and experimental results on classification tasks and a nonlinear
channel equalization task.
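The ELM training principle is compact enough to state in code: fixed random hidden weights and a ridge-regression readout. In the photonic setup the random projection is implemented by the frequency-multiplexed fiber loop rather than the matrix below.

```python
import numpy as np

rng = np.random.default_rng(9)

def train_elm(X, Y, n_hidden=200, ridge=1e-3):
    """Extreme Learning Machine: only output weights are trained."""
    W_in = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    H = np.tanh(X @ W_in)                            # hidden-layer activations
    # Ridge-regression readout: W_out = (H^T H + rI)^-1 H^T Y
    W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W_in, W_out

# Toy classification: two Gaussian blobs, one-hot targets.
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
Y = np.repeat(np.eye(2), 100, axis=0)
W_in, W_out = train_elm(X, Y)
pred = np.tanh(X @ W_in) @ W_out
print("accuracy:", (pred.argmax(1) == Y.argmax(1)).mean())
```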
|
We study the practical consequences of dataset sampling strategies on the
performance of recommendation algorithms. Recommender systems are generally
trained and evaluated on samples of larger datasets. Samples are often taken in
a naive or ad-hoc fashion: e.g. by sampling a dataset randomly or by selecting
users or items with many interactions. As we demonstrate, commonly-used data
sampling schemes can have significant consequences on algorithm performance --
masking performance deficiencies in algorithms or altering the relative
performance of algorithms, as compared to models trained on the complete
dataset. Following this observation, this paper makes the following main
contributions: (1) characterizing the effect of sampling on algorithm
performance, in terms of algorithm and dataset characteristics (e.g. sparsity
characteristics, sequential dynamics, etc.); and (2) designing SVP-CF, which is
a data-specific sampling strategy, that aims to preserve the relative
performance of models after sampling, and is especially suited to long-tail
interaction data. Detailed experiments show that SVP-CF is more accurate than
commonly used sampling schemes in retaining the relative ranking of different
recommendation algorithms.
|
In this work we leverage commonsense knowledge in form of knowledge paths to
establish connections between sentences, as a form of explicitation of implicit
knowledge. Such connections can be direct (singlehop paths) or require
intermediate concepts (multihop paths). To construct such paths we combine two
model types in a joint framework we call Co-nnect: a relation classifier that
predicts direct connections between concepts; and a target prediction model
that generates target or intermediate concepts given a source concept and a
relation, which we use to construct multihop paths. Unlike prior work that
relies exclusively on static knowledge sources, we leverage language models
finetuned on knowledge stored in ConceptNet, to dynamically generate knowledge
paths, as explanations of implicit knowledge that connects sentences in texts.
As a central contribution we design manual and automatic evaluation settings
for assessing the quality of the generated paths. We conduct evaluations on two
argumentative datasets and show that a combination of the two model types
generates meaningful, high-quality knowledge paths between sentences that
reveal implicit knowledge conveyed in text.
|
Continuous practices that rely on automation in the software development
workflow have been widely adopted by industry for over a decade. Despite this
widespread use, software development remains a primarily human-driven activity
that is highly creative and collaborative. There has been extensive research on
how continuous practices rely on automation and its impact on software quality
and development velocity, but relatively little has been done to understand how
automation impacts developer behavior and collaboration. In this paper, we
introduce a socio-technical theory about continuous practices. The ADEPT theory
combines constructs that include humans, processes, documentation, automation
and the project environment, and describes propositions that relate these
constructs. The theory was derived from phenomena observed in previous
empirical studies. We show how the ADEPT theory can explain and describe
existing continuous practices in software development, and how it can be used
to generate new propositions for future studies to understand continuous
practices and their impact on the social and technical aspects of software
development.
|
In this paper, the relativistic quantum dynamics of a scalar particle under
the effect of Lorentz symmetry violation determined by a tensor
$(K_{F})_{\mu\,\nu\,\alpha\,\beta}$ out of the Standard Model Extension is
investigated. We see that the bound-state solution of the modified Klein-Gordon
equation can be obtained, and the spectrum of energy and the wave function
depends on the Lorentz symmetry breaking parameters
|
We study the fundamental problem of frequency estimation under both privacy
and communication constraints, where the data is distributed among $k$ parties.
We consider two application scenarios: (1) one-shot, where the data is static
and the aggregator conducts a one-time computation; and (2) streaming, where
each party receives a stream of items over time and the aggregator continuously
monitors the frequencies. We adopt the model of multiparty differential privacy
(MDP), which is more general than local differential privacy (LDP) and
(centralized) differential privacy. Our protocols achieve optimality (up to
logarithmic factors) permissible by the more stringent of the two constraints.
In particular, when specialized to the $\varepsilon$-LDP model, our protocol
achieves an error of $\sqrt{k}/(e^{\Theta(\varepsilon)}-1)$ using $O(k\max\{
\varepsilon, \frac{1}{\varepsilon} \})$ bits of communication and $O(k \log u)$
bits of public randomness, where $u$ is the size of the domain.
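When specialized to the local model, a standard building block is generalized randomized response; the sketch below implements one unbiased variant for a small domain and is the textbook primitive, not the full multiparty protocol of the paper.

```python
import numpy as np

rng = np.random.default_rng(10)
u, n, eps = 10, 100_000, 1.0                 # domain size, parties, privacy

# Randomized-response variant: keep the true item w.p. p, otherwise report a
# uniform item (possibly the true one); the likelihood ratio equals e^eps.
p = (np.exp(eps) - 1) / (np.exp(eps) - 1 + u)

data = rng.integers(u, size=n)
keep = rng.uniform(size=n) < p
reports = np.where(keep, data, rng.integers(u, size=n))

counts = np.bincount(reports, minlength=u)
est = (counts - n * (1 - p) / u) / p         # unbiased frequency estimates
true = np.bincount(data, minlength=u)
print("max abs error:", np.abs(est - true).max())
```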
|
Focal plane wavefront sensing (FPWFS) is appealing for several reasons.
Notably, it offers high sensitivity and does not suffer from non-common path
aberrations (NCPA). The price to pay is a high computational burden and the
need for diversity to lift any phase ambiguity. If those limitations can be
overcome, FPWFS is a great solution for NCPA measurement, a key limitation for
high-contrast imaging, and could be used as an adaptive optics wavefront sensor.
Here, we propose to use deep convolutional neural networks (CNNs) to measure
NCPA based on focal plane images. Two CNN architectures are considered:
ResNet-50 and U-Net, which are used respectively to estimate Zernike
coefficients or the phase directly. The models are trained on labelled datasets
and evaluated at various flux levels and for two spatial frequency contents (20
and 100 Zernike modes). In these idealized simulations we demonstrate that the
CNN-based models reach the photon noise limit in a large range of conditions.
We show, for example, that the root mean squared (rms) wavefront error (WFE)
can be reduced to < $\lambda$/1500 for $2 \times 10^6$ photons in one iteration
when estimating 20 Zernike modes. We also show that CNN-based models are
sufficiently robust to varying signal-to-noise ratio, under the presence of
higher-order aberrations, and under different amplitudes of aberrations.
Additionally, they display similar to superior performance compared to
iterative phase retrieval algorithms. CNNs therefore represent a compelling way
to implement FPWFS, which can leverage the high sensitivity of FPWFS over a
broad range of conditions.
|
We analyze the behavior of third-order in time linear and nonlinear sound
waves in thermally relaxing fluids and gases as the sound diffusivity vanishes.
The nonlinear acoustic propagation is modeled by the
Jordan--Moore--Gibson--Thompson equation both in its Westervelt and in its
Kuznetsov-type forms, that is, including quadratic nonlinearities of the type
$(u^2)_{tt}$ and $ (u_t^2 + |\nabla u|^2)_t$. As it turns out, sufficiently
smooth solutions of these equations converge in the energy norm to the
solutions of the corresponding inviscid models at a linear rate. Numerical
experiments illustrate our theoretical findings.
|
A growing number of people engage in online health forums, making it
important to understand the quality of the advice they receive. In this paper,
we explore the role of expertise in responses provided to help-seeking posts
regarding mental health. We study the differences between (1) interactions with
peers; and (2) interactions with self-identified mental health professionals.
First, we show that a classifier can distinguish between these two groups,
indicating that their language use does in fact differ. To understand this
difference, we perform several analyses addressing engagement aspects,
including whether their comments engage the support-seeker further as well as
linguistic aspects, such as dominant language and linguistic style matching.
Our work contributes to the ongoing efforts to understand how health
experts engage with health information- and support-seekers in social networks.
More broadly, it is a step toward a deeper understanding of the styles of
interactions that cultivate supportive engagement in online communities.
|
The NA62 experiment at CERN reports searches for $K^+\to\mu^+N$ and
$K^+\to\mu^+\nu X$ decays, where $N$ and $X$ are massive invisible particles,
using the 2016-2018 data set. The $N$ particle is assumed to be a heavy neutral
lepton, and the results are expressed as upper limits of ${\cal O}(10^{-8})$ of
the neutrino mixing parameter $|U_{\mu4}|^2$ for $N$ masses in the range
200-384 MeV/$c^2$ and lifetime exceeding 50 ns. The $X$ particle is considered
a scalar or vector hidden sector mediator decaying to an invisible final state,
and upper limits of the decay branching fraction for $X$ masses in the range
10-370 MeV/$c^2$ are reported for the first time, ranging from ${\cal
O}(10^{-5})$ to ${\cal O}(10^{-7})$. An improved upper limit of $1.0\times
10^{-6}$ is established at 90% CL on the $K^+\to\mu^+\nu\nu\bar\nu$ branching
fraction.
|
Spoken Language Understanding (SLU) aims to extract the semantics frame of
user queries, which is a core component in a task-oriented dialog system. With
the burst of deep neural networks and the evolution of pre-trained language
models, the research of SLU has obtained significant breakthroughs. However,
there remains a lack of a comprehensive survey summarizing existing approaches
and recent trends, which motivated the work presented in this article. In this
paper, we survey recent advances and new frontiers in SLU. Specifically, we
give a thorough review of this research field, covering different aspects
including (1) new taxonomy: we provide a new perspective on the SLU field,
including single model vs. joint model, implicit joint modeling vs. explicit
joint modeling in joint models, and non-pre-trained paradigm vs. pre-trained
paradigm; (2) new frontiers: some emerging areas in complex SLU as well as the
corresponding challenges; (3) abundant open-source resources: to help the
community, we have collected and organized the related papers, baseline projects
and a leaderboard on a public website where SLU researchers can directly access
the recent progress. We hope that this survey can shed light on future
research in the SLU field.
|
When detecting anomalies in audio, it can often be necessary to consider
concept drift: the distribution of the data may drift over time because of
dynamically changing environments, and anomalies may become normal as time
elapses. We propose to use adaptive Huffman coding for anomaly detection in
audio with concept drift. Compared with the existing method of adaptive
Gaussian mixture modeling (AGMM), adaptive Huffman coding does not require a
priori information about the clusters and can adjust the number of clusters
dynamically depending on the amount of variation in the audio. To control the
size of the Huffman tree, we propose to merge clusters that are close to each
other instead of replacing rare clusters with new data. This reduces redundancy
in the Huffman tree while ensuring that it never forgets past information. On a
dataset of audio with concept drift which we have curated ourselves, our
proposed method achieves higher area under the curve (AUC) compared with AGMM
and fixed-length Huffman trees. The proposed approach is also time-efficient
and can be easily extended to other types of time series data (e.g., video).
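The tree-size control step can be sketched independently of the coding itself: when the number of clusters exceeds a budget, the two closest clusters are merged by count-weighted averaging rather than discarding rare ones. The distance metric and budget below are illustrative.

```python
import numpy as np

def merge_closest(centers, counts, max_clusters):
    """Merge nearest cluster pairs until the budget is met; merged centres are
    count-weighted means, so past information is kept rather than forgotten."""
    centers, counts = list(centers), list(counts)
    while len(centers) > max_clusters:
        best, pair = np.inf, None
        for i in range(len(centers)):            # find the closest pair
            for j in range(i + 1, len(centers)):
                d = np.linalg.norm(centers[i] - centers[j])
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        w = counts[i] + counts[j]
        centers[i] = (counts[i] * centers[i] + counts[j] * centers[j]) / w
        counts[i] = w
        del centers[j], counts[j]
    return np.array(centers), np.array(counts)

rng = np.random.default_rng(11)
c, n = rng.normal(size=(12, 4)), rng.integers(1, 50, size=12)
centers, counts = merge_closest(c, n, max_clusters=8)
print(centers.shape, counts.sum() == n.sum())    # counts are conserved
```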
|
The Supergiant X-ray binary Vela X-1 represents one of the best astrophysical
sources to investigate the wind environment of an O/B star irradiated by an
accreting neutron star. Previous studies and hydrodynamic simulations of the
system revealed a clumpy environment and the presence of two wakes: an
accretion wake surrounding the compact object and a photoionisation wake
trailing it along the orbit. Our goal is to conduct, for the first time,
high-resolution spectroscopy on Chandra/HETG data at the orbital phase
$\varphi_\mathrm{orb} \approx 0.75$, when the line of sight is crossing the
photoionisation wake. We aim to conduct plasma diagnostics, inferring the
structure and the geometry of the wind. We perform a blind search employing a
Bayesian Block algorithm to find discrete spectral features and identify them
using the most recent laboratory results or atomic databases.
Plasma properties are inferred both with empirical techniques and with
photoionisation models within CLOUDY and SPEX. We detect and identify five
narrow radiative recombination continua (Mg XI-XII, Ne IX-X, O VIII) and
several emission lines from Fe, S, Si, Mg, Ne, Al, and Na, including four
He-like triplets (S XV, Si XIII, Mg XI, and Ne IX). Photoionisation models
reproduce the overall spectrum well, except for the near-neutral fluorescence lines
of Fe, S, and Si. We conclude that the plasma is mainly photoionised, but more
than one component is most likely present, consistent with a multi-phase plasma
scenario, where denser and colder clumps of matter are embedded in the hot,
photoionised wind of the companion star. Simulations with the future X-ray
satellites Athena and XRISM show that a few hundred seconds of exposure will be
sufficient to disentangle the lines of the Fe K$\alpha$ doublet and the He-like
Fe XXV, improving, in general, the determination of the plasma parameters.
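For intuition, here is a minimal Python sketch of a blind Bayesian-block feature search on event data using astropy's `bayesian_blocks`; the wavelengths, line position, and `p0` value are invented for illustration and are not taken from the analysis above.

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Toy blind search: photons from a flat continuum plus one narrow emission
# line; unusually narrow blocks flag candidate discrete features.
rng = np.random.default_rng(2)
continuum = rng.uniform(6.0, 25.0, 4000)   # photon wavelengths (Angstrom)
line = rng.normal(9.17, 0.01, 150)         # a hypothetical narrow feature
events = np.sort(np.concatenate([continuum, line]))
edges = bayesian_blocks(events, fitness='events', p0=0.01)
widths = np.diff(edges)
print(edges[np.argmin(widths)])            # narrowest block sits at the line
```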
|
We determine the asymptotic behaviour of certain incomplete Beta functions.
|
We generalise a result of Kazarian regarding Kadomtsev-Petviashvili
integrability for single Hodge integrals to general cohomological field
theories related to Hurwitz-type counting problems or hypergeometric
tau-functions. The proof uses recent results on the relations between
hypergeometric tau-functions and topological recursion, as well as the
Eynard-DOSS correspondence between topological recursion and cohomological
field theories. In particular, we recover the result of Alexandrov of KP
integrability for triple Hodge integrals with a Calabi-Yau condition.
|
We successfully compute the launching of two magnetic winds from two
circumbinary disks formed after a common envelope event. The launching is
produced by the increase of magnetic pressure due to the collapse of the disks.
The collapse is due to internal torques produced by a weak poloidal magnetic
field. The first wind can be described as a wide jet, with an average mass-loss
rate of $\sim 1.3 \times 10^{-7}$ \Moy\ and a maximum radial velocity of $\sim
230$ \kms. The outflow has a half-opening angle of $\sim 20^{\circ}$. Narrow
jets are also formed intermittently with velocities up to 3,000 \kms, with
mass-loss rates of $\sim 6 \times 10^{-12} $ \Moy\ during short periods of
time. The second wind can be described as a wide X-wind, with an average
mass-loss rate of $\sim 1.68 \times 10^{-7}$ \Moy\ and a velocity of $\sim 30$
\kms. A narrow jet is also formed with a velocity of 250 \kms, and a mass-loss
rate of $\sim 10^{-12}$ \Moy.
The computed jets are used to provide inflow boundary conditions for
simulations of proto-planetary nebulae. The wide jet evolves into a molecular
collimated outflow within a few astronomical units, producing proto-planetary
nebulae with bipolar, elongated shapes, whose kinetic energies reach $\sim 4
\times 10^{45}$ erg at 1,000 years. Similarities with observed features in
W43A, OH231.8+4.2, and Hen 3-1475 are discussed.
The computed wide X-wind produces proto-planetary nebulae with slower
expansion velocities, with bipolar and elliptical shapes, and possible starfish
type and quadrupolar morphology.
|
Data from Direct Numerical Simulations of disperse bubbly flows in a vertical
channel are used to study the effect of the bubbles on the carrier-phase
turbulence. A new method is developed, based on the barycentric map approach,
that allows the anisotropy and componentiality of the flow to be quantified at
any scale. Using this method, the bubbles are found to significantly enhance flow
anisotropy at all scales compared with the unladen case, and for some bubble
cases, very strong anisotropy persists down to the smallest flow scales. The
strongest anisotropy observed was for the cases involving small bubbles.
Concerning the inter-scale energy transfer, our results indicate that for the
bubble-laden cases, the energy transfer is from large to small scales, just as
for the unladen case. However, there is evidence of an upscale transfer when
considering the transfer of energy associated with particular components of the
velocity field. Although the direction of the energy transfer is the same with
and without the bubbles, the transfer is much stronger for the bubble-laden
cases, suggesting that the bubbles play a strong role in enhancing the activity
of the nonlinear term in the flow. The normalized forms of the fourth- and
sixth-order structure functions are also considered, and reveal that the
introduction of bubbles into the flow strongly enhances intermittency in the
dissipation range, but suppresses it at larger scales. This strong enhancement
of the dissipation scale intermittency has significant implications for
understanding how the bubbles might modify the mixing properties of turbulent
flows.
|
Preference judgments have been demonstrated as a better alternative to graded
judgments to assess the relevance of documents relative to queries. Existing
work has verified transitivity among preference judgments when collected from
trained judges, which reduced the number of judgments dramatically. Moreover,
strict preference judgments and weak preference judgments, where the latter
additionally allow judges to state that two documents are equally relevant for
a given query, are both widely used in the literature. However, it remains
unclear whether transitivity still holds when judgments are collected via
crowdsourcing, and whether the two kinds of preference judgments behave similarly. In this
work, we collect judgments from multiple judges using a crowdsourcing platform
and aggregate them to compare the two kinds of preference judgments in terms of
transitivity, time consumption, and quality. That is, we look into whether
aggregated judgments are transitive, how long it takes judges to make them, and
whether judges agree with each other and with judgments from TREC. Our key
findings are that only strict preference judgments are transitive, and that
weak preference judgments behave differently in terms of transitivity, time
consumption, and judgment quality.
|
In the past decades, DNA has been intensely studied and exploited in
different research areas of nanoscience and nanotechnology. At first glance,
DNA-based nanophotonics seems to deviate quite far from the original goal of
Nadrian Seeman, the founder of DNA nanotechnology, who hoped to organize
biological entities using DNA in high-resolution crystals. As a matter of fact,
DNA-based nanophotonics does closely follow his central spirit. That is, apart
from being a genetic material for inheritance, DNA is also an ideal material
for building molecular devices.
|
In this paper, we present an autonomous navigation system for goal-driven
exploration of unknown environments through deep reinforcement learning (DRL).
Points of interest (POI) for possible navigation directions are obtained from
the environment and an optimal waypoint is selected, based on the available
data. Following the waypoints, the robot is guided towards the global goal and
the local optimum problem of reactive navigation is mitigated. Then, a motion
policy for local navigation is learned through a DRL framework in a simulation.
We develop a navigation system where this learned policy is integrated into a
motion planning stack as the local navigation layer to move the robot between
waypoints towards a global goal. The fully autonomous navigation is performed
without any prior knowledge while a map is recorded as the robot moves through
the environment. Experiments show that the proposed method outperforms similar
exploration methods in complex static as well as dynamic environments, without
relying on a map or prior information.
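As a toy illustration of the waypoint-selection stage, the Python sketch below scores candidate POIs by a simple detour-plus-progress rule; this scoring rule and the `beta` weight are assumptions made for illustration, not the criterion used in the paper.

```python
import numpy as np

def select_waypoint(pois, robot, goal, beta=2.0):
    """Pick the next waypoint among candidate points of interest (POIs):
    trade off the detour through the POI against the remaining distance to
    the global goal; a learned DRL policy then drives locally to the winner."""
    pois, robot, goal = map(np.asarray, (pois, robot, goal))
    detour = np.linalg.norm(pois - robot, axis=1)
    remaining = np.linalg.norm(pois - goal, axis=1)
    return pois[np.argmin(detour + beta * remaining)]

print(select_waypoint([[1, 0], [0, 2], [3, 3]], robot=[0, 0], goal=[4, 4]))
```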
|
In nanopore sequencing, electrical signal is measured as DNA molecules pass
through the sequencing pores. Translating these signals into DNA bases (base
calling) is a highly non-trivial task, and its quality has a large impact on
the sequencing accuracy. The most successful nanopore base callers to date use
convolutional neural networks (CNN) to accomplish the task.
Convolutional layers in CNNs are typically composed of filters with constant
window size, performing best in analysis of signals with uniform speed.
However, the speed of nanopore sequencing varies greatly both within reads and
between sequencing runs. Here, we present dynamic pooling, a novel neural
network component, which addresses this problem by adaptively adjusting the
pooling ratio. To demonstrate the usefulness of dynamic pooling, we developed
two base callers: Heron and Osprey. Heron improves the accuracy beyond the
experimental high-accuracy base caller Bonito developed by Oxford Nanopore.
Osprey is a fast base caller that can compete in accuracy with Guppy
high-accuracy mode, but does not require GPU acceleration and achieves a near
real-time speed on common desktop CPUs.
Availability: https://github.com/fmfi-compbio/osprey,
https://github.com/fmfi-compbio/heron
Keywords: nanopore sequencing, base calling, convolutional neural networks,
pooling
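A minimal Python sketch of the dynamic-pooling idea, assuming the network supplies a non-negative per-timestep "advance" signal; the interface and pooling rule are illustrative, and the layers in Heron and Osprey differ in detail.

```python
import numpy as np

def dynamic_pool(features, advance, target_step=1.0):
    """Pool a (T, C) feature sequence with a data-dependent ratio.

    `advance[t]` is a non-negative per-timestep weight (e.g. a predicted
    translocation speed); a new output frame starts whenever the cumulative
    advance grows by `target_step`, so fast regions of the signal are
    compressed more than slow ones.
    """
    bins = np.floor(np.cumsum(advance) / target_step).astype(int)
    return np.stack([features[bins == b].mean(axis=0) for b in np.unique(bins)])

# A 10-frame signal whose second half moves twice as fast gets 8 output frames.
feats = np.random.randn(10, 4)
speed = np.array([0.5] * 5 + [1.0] * 5)
print(dynamic_pool(feats, speed).shape)  # (8, 4)
```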
|
Deep neural network (DNN)-based speech enhancement ordinarily requires clean
speech signals as the training target. However, collecting clean signals is
very costly because they must be recorded in a studio. This requirement
currently restricts the amount of training data for speech enhancement to less
than 1/1000 of that available for speech recognition, which does not need clean signals.
Increasing the amount of training data is important for improving the
performance, and hence the requirement of clean signals should be relaxed. In
this paper, we propose a training strategy that does not require clean signals.
The proposed method only utilizes noisy signals for training, which enables us
to use a variety of speech signals in the wild. Our experimental results showed
that the proposed method can achieve performance similar to that of a DNN
trained with clean signals.
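One hedged reading of training without clean targets is to corrupt already-noisy recordings further and use the original noisy signal as the regression target; the Python sketch below illustrates only that pair construction, and is not necessarily the paper's exact scheme.

```python
import torch

def noisy_target_batch(noisy_speech, noise_bank):
    """Build a training pair with no clean reference: the network sees a
    doubly-noisy input and is asked to reproduce the (merely) noisy signal."""
    idx = torch.randint(0, noise_bank.shape[0], (noisy_speech.shape[0],))
    inputs = noisy_speech + noise_bank[idx]   # extra corruption added
    targets = noisy_speech                    # noisy, not clean, target
    return inputs, targets

# Usage with any enhancement network `net`:
#   x, y = noisy_target_batch(batch, noises)
#   loss = torch.nn.functional.l1_loss(net(x), y)
```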
|
Deep Metric Learning (DML) is helpful in computer vision tasks. In this
paper, we first introduce DML into image co-segmentation. We propose a novel
triplet loss for image segmentation, called the IS-Triplet loss for short, and
combine it with traditional image segmentation losses. Different from the
general DML task, which learns a metric between pictures, we treat each pixel
as a sample and use its embedded feature in a high-dimensional space to form
triplets; by optimizing the IS-Triplet loss we force the distance between
pixels of different categories to be greater than that between pixels of the
same category, so that pixels from different categories are easier to
distinguish in the high-dimensional feature space. We further present an
efficient triplet sampling strategy to make the computation of the IS-Triplet
loss feasible.
IS-Triplet loss is combined with 3 traditional image segmentation losses to
perform image segmentation. We apply the proposed approach to image
co-segmentation and test it on the SBCoseg dataset and the Internet dataset.
The experimental results show that our approach effectively improves the
discrimination of pixel categories in the high-dimensional feature space and
thus helps the traditional losses achieve better segmentation performance with
fewer training epochs.
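A minimal PyTorch sketch of a pixel-level triplet loss in this spirit; the margin, the random sampling, and the tensor shapes are illustrative assumptions rather than the exact IS-Triplet formulation.

```python
import torch
import torch.nn.functional as F

def pixel_triplet_loss(embeddings, labels, margin=1.0, n_triplets=256):
    """Triplet loss over pixels: anchor and positive share a category,
    the negative comes from a different one.

    embeddings: (N, D) per-pixel features; labels: (N,) pixel categories.
    """
    n = embeddings.shape[0]
    idx = torch.randint(0, n, (n_triplets, 3))
    a, p, neg = idx[:, 0], idx[:, 1], idx[:, 2]
    # Keep triplets where anchor/positive match and the negative differs.
    valid = (labels[a] == labels[p]) & (labels[a] != labels[neg])
    a, p, neg = a[valid], p[valid], neg[valid]
    if a.numel() == 0:
        return embeddings.new_zeros(())
    d_ap = F.pairwise_distance(embeddings[a], embeddings[p])
    d_an = F.pairwise_distance(embeddings[a], embeddings[neg])
    # Push different-category pixels farther apart than same-category ones.
    return F.relu(d_ap - d_an + margin).mean()
```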
|
The domain of Embodied AI has recently witnessed substantial progress,
particularly in navigating agents within their environments. These early
successes have laid the building blocks for the community to tackle tasks that
require agents to actively interact with objects in their environment. Object
manipulation is an established research domain within the robotics community
and poses several challenges including manipulator motion, grasping and
long-horizon planning, particularly when dealing with oft-overlooked practical
setups involving visually rich and complex scenes, manipulation using mobile
agents (as opposed to tabletop manipulation), and generalization to unseen
environments and objects. We propose a framework for object manipulation built
upon the physics-enabled, visually rich AI2-THOR framework and present a new
challenge to the Embodied AI community known as ArmPointNav. This task extends
the popular point navigation task to object manipulation and offers new
challenges including 3D obstacle avoidance, manipulating objects in the
presence of occlusion, and multi-object manipulation that necessitates long
term planning. Popular learning paradigms that are successful on PointNav
challenges show promise, but leave significant room for improvement.
|
Houghton's groups $H_2, H_3, \ldots$ are certain infinite permutation groups
acting on a countably infinite set; they have been studied, among other things,
for their finiteness properties. In this note we describe all of the finite
index subgroups of each Houghton group, and their isomorphism types. Using the
standard notation that $d(G)$ denotes the minimal size of a generating set for
$G$ we then show, for each $n\in \{2, 3,\ldots\}$ and $U$ of finite index in
$H_n$, that $d(U)\in\{d(H_n), d(H_n)+1\}$ and characterise when each of these
cases occurs.
|
This article aims at developing a model based optimization for reduction of
temporal unwrapping and field estimation errors in multi-echo acquisition of
Gradient Echo sequence. Using the assumption that the phase is linear along the
temporal dimension, the field estimation is performed by application of unity
rank approximation to the Hankel matrix formed using the complex exponential of
the channel combined phase at each echo time. For the purpose of maintaining
consistency with the observed complex data, the linear phase evolution model is
formulated as an optimization problem with a cost function that involves a
fidelity term and a unity rank prior, implemented using alternating
minimization. Itoh's algorithm applied to the multi-echo phase estimated from
this linear phase evolution model is able to reduce the unwrapping errors as
compared to the unwrapping when directly applied to the measured phase.
Secondly, the improved accuracy of the frequency fit in comparison to
estimation using weighted least-squares regression (wLSR) and penalized maximum
likelihood is demonstrated using numerical simulation of field perturbation due
to magnetic susceptibility effect. It is shown that the field can be estimated
with 80 percent reduction in mean absolute error in comparison to wLSR and 66
percent reduction with respect to penalized maximum likelihood. The improvement
in performance becomes more pronounced with increasing strengths of field
gradient magnitudes and echo spacing.
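The rank-1 Hankel prior can be illustrated in a few lines of Python: for a perfectly linear phase the Hankel matrix of exp(i*phase) has rank one, so an SVD truncation projects noisy echoes onto the linear-phase model. This sketches only the prior, not the full alternating minimization with the data-fidelity term.

```python
import numpy as np

def rank1_linear_phase_fit(phase):
    """Estimate the per-echo frequency from multi-echo phase samples via a
    rank-1 approximation of the Hankel matrix of complex exponentials."""
    z = np.exp(1j * np.asarray(phase))
    n = len(z) // 2 + 1
    hankel = np.array([z[i:i + n] for i in range(len(z) - n + 1)])
    u, s, vh = np.linalg.svd(hankel, full_matrices=False)
    h1 = s[0] * np.outer(u[:, 0], vh[0])          # rank-1 projection
    # For a linear phase, consecutive rows differ by a constant ratio
    # exp(i * frequency); its angle is the estimated slope.
    return np.angle((h1[1:, 0] / h1[:-1, 0]).mean())

echoes = np.arange(6)
print(rank1_linear_phase_fit(0.8 * echoes + 0.05 * np.random.randn(6)))  # ~0.8
```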
|
Willems' fundamental lemma and system level synthesis both characterize a
linear dynamic system by its input/output sequences. In this work, we extend
the application of the fundamental lemma from deterministic to uncertain LTI
systems and then further prove this extension to be equivalent to system level
synthesis. Based on this uncertain extension, a robust closed-loop data-enabled
predictive control scheme is proposed, where a causal feedback control law is
further derived. Two numerical experiments, including the temperature control
of a single-zone building, are carried out to validate the effectiveness of the
proposed data-driven controller.
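For reference, a small Python sketch of the deterministic fundamental-lemma construction: block-Hankel matrices built from a single persistently exciting input/output experiment span all length-L trajectories of the system. The uncertain extension and the robust controller above are built on top of this object.

```python
import numpy as np

def block_hankel(w, L):
    """Stack a signal w of shape (T, m) into a depth-L block-Hankel matrix
    whose columns are the length-L windows of the data."""
    T, _ = w.shape
    return np.hstack([w[i:i + L].reshape(-1, 1) for i in range(T - L + 1)])

# One experiment on a first-order SISO system x+ = 0.9 x + 0.5 u, y = x.
rng = np.random.default_rng(0)
u = rng.standard_normal((40, 1))
y = np.zeros((40, 1))
for t in range(1, 40):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]
H = np.vstack([block_hankel(u, 5), block_hankel(y, 5)])
# Rank equals m*L + n = 5 + 1: the columns parametrize all 5-step trajectories.
print(np.linalg.matrix_rank(H))  # 6
```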
|
Pretrained language models have shown success in many natural language
processing tasks. Many works explore incorporating knowledge into language
models. In the biomedical domain, experts have taken decades of effort on
building large-scale knowledge bases. For example, the Unified Medical Language
System (UMLS) contains millions of entities with their synonyms and defines
hundreds of relations among entities. Leveraging this knowledge can benefit a
variety of downstream tasks such as named entity recognition and relation
extraction. To this end, we propose KeBioLM, a biomedical pretrained language
model that explicitly leverages knowledge from the UMLS knowledge bases.
Specifically, we extract entities from PubMed abstracts and link them to UMLS.
We then train a knowledge-aware language model that first applies a text-only
encoding layer to learn entity representations and then applies a text-entity
fusion encoding to aggregate them. In addition, we add two training
objectives: entity detection and entity linking. Experiments on named
entity recognition and relation extraction from the BLURB benchmark demonstrate
the effectiveness of our approach. Further analysis on a collected probing
dataset shows that our model has better ability to model medical knowledge.
|
Cloud computing has the capacity to transform many parts of the research
ecosystem, from particular research areas to overall strategic decision making
and policy. Scientometrics sits at the boundary between research and the
decision making and evaluation processes of research. One of the biggest
challenges in research policy and strategy is having access to data that allows
iterative analysis to inform decisions. Many of these decisions are based on
"global" measures such as benchmark metrics that are hard to source. In this
article, Cloud technologies are explored in this context. A novel visualisation
technique is presented and used as a means to explore the potential for scaling
scientometrics by democratising both access to data and compute capacity using
the Cloud.
|
We present photometric and spectroscopic observations of Supernova 2020oi (SN
2020oi), a nearby ($\sim$17 Mpc) type-Ic supernova (SN Ic) within the
grand-design spiral M100. We undertake a comprehensive analysis to characterize
the evolution of SN 2020oi and constrain its progenitor system. We detect flux
in excess of the fireball rise model $\delta t \approx 2.5$ days from the date
of explosion in multi-band optical and UV photometry from the Las Cumbres
Observatory and the Neil Gehrels Swift Observatory, respectively. The derived
SN bolometric luminosity is consistent with an explosion with $M_{\rm ej} =
0.81 \pm 0.03\, M_{\odot}$, $E_{k}= (0.79 \pm 0.09) \times 10^{51}\,\rm{erg}$,
and $M_{\rm Ni56} = 0.08 \pm 0.02\, M_{\odot}$. Inspection of the
event's decline reveals the highest $\Delta m_{15,\rm{bol}}$ reported for a
stripped-envelope event to date. Modeling of optical spectra near event peak
indicates a partially mixed ejecta comparable in composition to the ejecta
observed in SN 1994I, while the earliest spectrum shows signatures of a
possible interaction with material of a distinct composition surrounding the SN
progenitor. Further, Hubble Space Telescope (HST) pre-explosion imaging reveals
a stellar cluster coincident with the event. From the cluster photometry, we
derive the mass and age of the SN progenitor using stellar evolution models
implemented in the BPASS library. Our results indicate that SN 2020oi occurred
in a binary system from a progenitor of mass $M_{\rm ZAMS} \approx 9.5 \pm 1.0
M_{\odot}$, corresponding to an age of $27 \pm 7$ Myr. SN 2020oi is the dimmest
SN Ic event to date for which an early-time flux excess has been observed, and
the first in which an early excess is unlikely to be associated with
shock-cooling.
|
Following the increasing interest and adoption of FaaS systems, benchmarking
frameworks for determining non-functional properties have also emerged. While
existing (microbenchmark) frameworks only evaluate single aspects of FaaS
platforms, a more holistic, application-driven approach is still missing. In
this paper, we design and present BeFaaS, an extensible application-centric
benchmarking framework for FaaS environments that focuses on the evaluation of
FaaS platforms through realistic and typical examples of FaaS applications.
BeFaaS includes a built-in e-commerce benchmark, is extensible for new workload
profiles and new platforms, supports federated benchmark runs in which the
benchmark application is distributed over multiple providers, and supports a
fine-grained result analysis. Our evaluation compares three major FaaS
providers in single cloud provider setups and shows that BeFaaS is capable of
running each benchmark automatically with minimal configuration effort and
providing detailed insights for each interaction.
|
Confluent cell monolayers and epithelia tissues show remarkable patterns and
correlations in structural arrangements and actively-driven collective flows.
We simulate these properties using multiphase field models. The models are
based on cell deformations and cell-cell interactions, and we investigate how
the microscopic details used to incorporate active forces influence the
emerging phenomena. We compare four different approaches, one in which the activity is
determined by a random orientation, one where the activity is related to the
deformation of the cells and two models with subcellular details to resolve the
mechanochemical interactions underlying cell migration. The models are compared
with respect to generic features, such as solid-to-liquid phase transitions,
cell shape variability, emerging nematic properties, as well as vorticity
correlations and flow patterns in large confluent monolayers and confinements.
All results are compared with experimental data for a large variety of cell
cultures. The appearing qualitative differences of the models show the
importance of microscopic details and provide a route towards predictive
simulations of patterns and correlations in cell colonies.
|
In this paper we generalize the definition of rationalizability for square
roots of polynomials introduced by M. Besier and the first author to field
extensions. We then show that the rationalizability of a set of field
extensions is equivalent to the rationalizability of the compositum of the
field extensions, providing a new strategy to prove rationalizability of sets
of square roots of polynomials.
|
We show how to infer sharp partial regularity results for relaxed minimizers
of degenerate, nonuniformly elliptic quasiconvex functionals, using tools from
Nonlinear Potential Theory. In particular, in the setting of functionals with
$(p,q)$-growth - according to the terminology of Marcellini [52] - we derive
optimal local regularity criteria under minimal assumptions on the data.
|
Let $G$ be a simple graph with $2n$ vertices and a perfect matching. We
denote by $f(G)$ and $F(G)$ the minimum and maximum forcing number of $G$,
respectively.
Hetyei showed that the maximum number of edges of graphs $G$ with a unique
perfect matching is $n^2$. We know that $G$ has a unique perfect matching if
and only if $f(G)=0$. Along this line, we generalize the classical result to
all graphs $G$ with $f(G)=k$ for $0\leq k\leq n-1$, and characterize
corresponding extremal graphs as well. Hence we get a non-trivial lower bound
of $f(G)$ in terms of the order and size. For bipartite graphs, we obtain
corresponding stronger results. Further, we derive a new upper bound of $F(G)$.
For bipartite graphs $G$, Che and Chen (2013) obtained that $f(G)=n-1$ if and
only if $G$ is the complete bipartite graph $K_{n,n}$. We completely characterize
all bipartite graphs $G$ with $f(G)= n-2$.
|
We investigate ion-scale kinetic plasma instabilities at the collisionless
shock using linear theory and nonlinear Particle-in-Cell (PIC) simulations. We
focus on the Alfv\'en-ion-cyclotron (AIC), mirror, and Weibel instabilities,
which are all driven unstable by the effective temperature anisotropy induced
by the shock-reflected ions within the transition layer of a strictly
perpendicular shock. We conduct linear dispersion analysis with a homogeneous
plasma model to mimic the shock transition layer by adopting a ring
distribution with finite thermal spread to represent the velocity distribution
of the reflected ions. We find that, for wave propagation parallel to the
ambient magnetic field, the AIC instability at lower Alfv\'en Mach numbers
tends to transition to the Weibel instability at higher Alfv\'en Mach numbers.
The instability property is, however, also strongly affected by the sound Mach
number. We conclude that the instability at a strong shock with Alfv\'en and
sound Mach numbers both in excess of $\sim 20{\rm -}40$ may be considered
Weibel-like, in the sense that the reflected ions behave as essentially
unmagnetized. Two-dimensional PIC simulations confirm the linear theory and
find that, with typical parameters of young supernova remnant shocks, the ring
distribution model produces magnetic fluctuations of the order of the
background magnetic field, which is smaller than those observed in previous PIC
simulations for Weibel-dominated shocks. This indicates that the assumption of
the gyrotropic reflected ion distribution may not be adequate to quantitatively
predict nonlinear behaviors of the dynamics in high Mach number shocks.
|
The objective of this paper is to construct the accurate (say, to 11 decimal
places) frequencies of the quasinormal modes of the 5-dimensional
Schwarzschild-Tangherlini black hole using three major techniques: the Hill
determinant method, the continued fractions method and the WKB-Pad\'e method
and to discuss the limitations of each. It is shown that for the massless
scalar, gravitational tensor, gravitational vector and electromagnetic vector
perturbations considered in this paper, the Hill determinant method and the
method of continued fractions (both with the convergence acceleration) always
give identical results, whereas the WKB-Pad\'e method gives the results that
are remarkably accurate in most cases. A notable exception is the gravitational
vector perturbations ($j=2$ and $\ell=2$), for which the WKB-Pad\'e
approach apparently does not work. Here we have an interesting situation in which
the WKB-based methods (WKB-Pad\'e and WKB-Borel-Le Roy) give a complex
frequency that differs from the result obtained within the framework
of the continued fraction method and the Hill determinant method. For the
fundamental mode, deviation of the real part of frequency from the exact value
is $0.5\%$ whereas the deviation of the imaginary part is $2.7\%.$ For $\ell
\geq 3$ the accuracy of the WKB results is similar again to the accuracy
obtained for other perturbations. The case of the gravitational scalar
perturbations is briefly discussed.
|
In order to plan a safe maneuver, self-driving vehicles need to understand
the intent of other traffic participants. We define intent as a combination of
discrete high-level behaviors as well as continuous trajectories describing
future motion. In this paper, we develop a one-stage detector and forecaster
that exploits both 3D point clouds produced by a LiDAR sensor as well as
dynamic maps of the environment. Our multi-task model achieves better accuracy
than the respective separate modules while saving computation, which is
critical to reducing reaction time in self-driving applications.
|
We study the impact of Run 2 LHC data on general Composite Higgs
scenarios, where non-linear effects, mixing with additional scalars and new
fermionic degrees of freedom could simultaneously contribute to the
modification of Higgs properties. We obtain new experimental limits on the
scale of compositeness, the mixing with singlets and doublets with the Higgs,
and the mass and mixing angle of top-partners. We also show that for scenarios
where new fermionic degrees of freedom are involved in electroweak symmetry
breaking, there is an interesting interplay among Higgs coupling measurements,
boosted Higgs properties, SMEFT global analyses, and direct searches for
single- and double-production of vector-like quarks.
|
Compressive sensing (CS) is a mathematically elegant tool for reducing the
sampling rate, potentially bringing context-awareness to a wider range of
devices. Nevertheless, practical issues with the sampling and reconstruction
algorithms prevent further proliferation of CS in real world domains,
especially among heterogeneous ubiquitous devices. Deep learning (DL) naturally
complements CS for adapting the sampling matrix, reconstructing the signal, and
learning from the compressed samples. While the CS-DL integration has received
substantial research interest recently, it has not yet been thoroughly
surveyed, nor has light been shed on the practical issues of bringing
CS-DL to real-world implementations in the ubicomp domain. In this paper we
identify main possible ways in which CS and DL can interplay, extract key ideas
for making CS-DL efficient, identify major trends in CS-DL research space, and
derive guidelines for future evolution of CS-DL within the ubicomp domain.
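To make the interplay concrete, the Python toy below runs classical CS with a random Gaussian sampling matrix and ISTA reconstruction; the comments mark the two hooks where DL typically enters. The dimensions and sparsity level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

phi = rng.standard_normal((m, n)) / np.sqrt(m)  # DL hook 1: a learned matrix
y = phi @ x                                     # compressed measurements

# DL hook 2: a trained decoder would replace this classical ISTA solver.
alpha = 1.0 / np.linalg.norm(phi, 2) ** 2
lam = 0.01
x_hat = np.zeros(n)
for _ in range(500):
    r = x_hat + alpha * phi.T @ (y - phi @ x_hat)                # gradient step
    x_hat = np.sign(r) * np.maximum(np.abs(r) - alpha * lam, 0)  # shrinkage

print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # small relative error
```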
|
To respond to the increasing need for bone repair strategies, various types
of biomaterials have been developed. Among those, calcium phosphate ceramics
(CPCs) are promising since they possess a chemical composition similar to that
of bones. To be suitable for implants, CPCs need to fulfill a number of
biological and mechanical requirements. Fatigue resistance and toughness are
two key mechanical properties that are still challenging to obtain in CPCs.
This paper thus reviews and discusses current progress in the processing of
CPCs with bioinspired microstructures for load-bearing applications. First,
methods to obtain CPCs with bioinspired structure at individual lengthscales,
namely nano-, micro-, and macroscale are discussed. Then, approaches to attain
synergetic contribution of all lengthscales through a complex and biomimetic
hierarchical structure are reviewed. The processing methods and their design
capabilities are presented and the mechanical properties of the materials they
can produce are analysed. Their limitations and challenges are finally
discussed to suggest new directions for the fabrication of biomimetic bone
implants with satisfactory properties. The paper could help biomedical
researchers, materials scientists and engineers to join forces to create the
next generation of bone implants.
|
The supernova neutrino flavor evolution in the presence of the non-trivial
neutrino magnetic moment and strong magnetic field is numerically derived using
the two-flavor and single-angle approximation. The novel properties of
collective neutrino oscillations are studied and distinct patterns of flavor
and spin-flavor spectral splits are presented. Finally we also discuss how the
neutrino magnetic moment affects the observable supernova neutrino energy
spectra.
|
Under the federated learning paradigm, a set of nodes can cooperatively train
a machine learning model with the help of a centralized server. Such a server
is also tasked with assigning a weight to the information received from each
node, and often also to drop too-slow nodes from the learning process. Both
decisions have major impact on the resulting learning performance, and can
interfere with each other in counterintuitive ways. In this paper, we focus on
edge networking scenarios and investigate existing and novel approaches to such
model-weighting and node-dropping decisions. Leveraging a set of real-world
experiments, we find that popular, straightforward decision-making approaches
may yield poor performance, and that considering the quality of data in
addition to its quantity can substantially improve learning.
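A minimal Python sketch of quality-aware weighting at the server; weighting each node's update by the product of its sample count and a data-quality score is an illustrative choice, not the exact rule evaluated in the paper.

```python
import numpy as np

def aggregate(updates, n_samples, quality):
    """Server-side weighted aggregation of node model updates.

    Plain FedAvg weights by sample count alone; here each node is also
    scaled by a quality score in [0, 1], so a large but noisy dataset
    does not dominate the global model.
    """
    w = np.asarray(n_samples, float) * np.asarray(quality, float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

# Three nodes: the biggest one has the worst data and is down-weighted.
updates = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([5.0, 5.0])]
print(aggregate(updates, n_samples=[100, 120, 400], quality=[0.9, 0.8, 0.1]))
```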
|
We prove a myriad of results related to the stabilizer in an algebraic group
$G$ of a generic vector in a representation $V$ of $G$ over an algebraically
closed field $k$. Our results are on the level of group schemes, which carries
more information than considering both the Lie algebra of $G$ and the group
$G(k)$ of $k$-points. For $G$ simple and $V$ faithful and irreducible, we prove
the existence of a stabilizer in general position, sometimes called a principal
orbit type. We determine those $G$ and $V$ for which the stabilizer in general
position is smooth, or $\dim V/G < \dim G$, or there is a $v \in V$ whose
stabilizer in $G$ is trivial.
|
Recent development of lensless imagers has enabled three-dimensional (3D)
imaging through a thin piece of optics in close proximity to a camera sensor. A
general challenge of wide-field lensless imaging is the high computational
complexity and slow speed of reconstructing 3D objects through an iterative
optimization process. Here, we demonstrate GEOMScope, a lensless 3D microscope
that forms images through a single layer of microlens array and reconstructs
objects through a geometrical-optics-based pixel back-projection algorithm and
background suppression. Compared to other methods, ours allows local
reconstruction, which significantly reduces the required computation resource
and increases the reconstruction speed by orders of magnitude. This enables
near real-time object reconstructions across a large volume of 23x23x5 mm^3,
with a lateral resolution of 40 um and axial resolution of 300 um. Our system
opens new avenues for broad biomedical applications such as endoscopy, which
requires both miniaturized device footprint and real-time high resolution
visualization.
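As a heavily simplified illustration of geometrical back projection, the Python sketch below shifts each microlens sub-image by a depth-dependent parallax and sums the results; the shift model, lenslet coordinates, and parameters are invented for illustration and do not reflect GEOMScope's calibration.

```python
import numpy as np

def backproject(sensor, lenslet_dirs, shift_per_depth, depths):
    """Shift-and-sum back projection: for each candidate depth, undo the
    parallax of every microlens view and accumulate; in-focus structures
    reinforce at their true depth."""
    volume = np.zeros((len(depths),) + sensor.shape)
    for d, depth in enumerate(depths):
        for cy, cx in lenslet_dirs:
            dy = int(round(shift_per_depth * depth * cy))
            dx = int(round(shift_per_depth * depth * cx))
            volume[d] += np.roll(np.roll(sensor, -dy, axis=0), -dx, axis=1)
    return volume

vol = backproject(np.random.rand(64, 64), [(-1, 0), (0, 1), (1, 1)], 0.5, [1.0, 2.0])
print(vol.shape)  # (2, 64, 64)
```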
|
We study the additive functional $X_n(\alpha)$ on conditioned Galton-Watson
trees given, for arbitrary complex $\alpha$, by summing the $\alpha$th power of
all subtree sizes. Allowing complex $\alpha$ is advantageous, even for the
study of real $\alpha$, since it allows us to use powerful results from the
theory of analytic functions in the proofs.
For $\Re\alpha < 0$, we prove that $X_n(\alpha)$, suitably normalized, has a
complex normal limiting distribution; moreover, as processes in $\alpha$, the
weak convergence holds in the space of analytic functions in the left
half-plane. We establish, and prove similar process-convergence extensions of,
limiting distribution results for $\alpha$ in various regions of the complex
plane. We focus mainly on the case where $\Re\alpha > 0$, for which
$X_n(\alpha)$, suitably normalized, has a limiting distribution that is not
normal but does not depend on the offspring distribution $\xi$ of the
conditioned Galton-Watson tree, assuming only that $E[\xi] = 1$ and $0 <
\mathrm{Var} [\xi] < \infty$. Under a weak extra moment assumption on $\xi$, we
prove that the convergence extends to moments, ordinary and absolute and mixed,
of all orders.
At least when $\Re\alpha > \frac12$, the limit random variable $Y(\alpha)$
can be expressed as a function of a normalized Brownian excursion.
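For concreteness, the functional itself is elementary to compute on any fixed rooted tree, as in the Python sketch below; the results above concern its distribution over random conditioned Galton-Watson trees, and the adjacency-dict representation is only for illustration.

```python
def subtree_size_powers(children, alpha):
    """Compute X(alpha) = sum over vertices v of |T_v|**alpha, where |T_v|
    is the number of vertices in the subtree rooted at v (iterative
    post-order, so deep trees do not hit Python's recursion limit)."""
    order, stack = [], [0]
    while stack:                      # DFS from the root (vertex 0)
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    size, total = {}, 0.0
    for v in reversed(order):         # children resolved before parents
        size[v] = 1 + sum(size[c] for c in children.get(v, []))
        total += size[v] ** alpha     # alpha may be complex
    return total

# A path 0-1-2-3 has subtree sizes 4, 3, 2, 1, so X(2) = 16 + 9 + 4 + 1 = 30.
print(subtree_size_powers({0: [1], 1: [2], 2: [3]}, 2))
```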
|
MINERvA presents a new analysis of inclusive charged-current neutrino
interactions on a hydrocarbon target. We report single and double-differential
cross sections in muon transverse and longitudinal momentum. These measurements
are compared to neutrino interaction generator predictions from GENIE, NuWro,
GiBUU, and NEUT. In addition, comparisons against models with different
treatments of multi-nucleon correlations, nuclear effects, resonant pion
production, and deep inelastic scattering are presented. The recorded data
correspond to $10.61\times10^{20}$ protons on target with a peak neutrino
energy of approximately 6 GeV. The higher energy and larger statistics of these
data extend the kinematic range for model testing beyond previous MINERvA
inclusive charged-current measurements. The results are not well modeled by
several generator predictions using a variety of input models.
|
Square-root topology is a recently emerged subfield describing a class of
insulators and superconductors whose topological nature is only revealed upon
squaring their Hamiltonians, i.e., the finite energy edge states of the
starting square-root model inherit their topological features from the
zero-energy edge states of a known topological insulator/superconductor present
in the squared model. Focusing on one-dimensional models, we show how this
concept can be generalized to $2^n$-root topological insulators and
superconductors, with $n$ any positive integer, whose rules of construction are
systematized here. Borrowing from graph theory, we introduce the concept of
arborescence of $2^n$-root topological insulators/superconductors which
connects the Hamiltonian of the starting model for any $n$, through a series of
squaring operations followed by constant energy shifts, to the Hamiltonian of
the known topological insulator/superconductor, identified as the source of its
topological features. Our work paves the way for an extension of $2^n$-root
topology to higher-dimensional systems.
|
A subset of QuantISED Sensor PIs met virtually on May 26, 2020 to discuss a
response to a charge by the DOE Office of High Energy Physics. In this
document, we summarize the QuantISED sensor community discussion, including a
consideration of HEP science enabled by quantum sensors, describing the
distinction between Quantum 1.0 and Quantum 2.0, and discussing
synergies/complementarity with the new DOE NQI centers and with research
supported by other SC offices.
Quantum 2.0 advances in sensor technology offer many opportunities and new
approaches for HEP experiments. The DOE HEP QuantISED program could support a
portfolio of small experiments based on these advances. QuantISED experiments
could use sensor technologies that exemplify Quantum 2.0 breakthroughs. They
would strive to achieve new HEP science results, while possibly spinning off
other domain science applications or serving as pathfinders for future HEP
science targets. QuantISED experiments should be led by a DOE laboratory, to
take advantage of laboratory technical resources, infrastructure, and expertise
in the safe and efficient construction, operation, and review of experiments.
The QuantISED PIs emphasized that the quest for HEP science results under the
QuantISED program is distinct from the ongoing DOE HEP programs on the energy,
intensity, and cosmic frontiers. There is robust evidence for the existence of
particles and phenomena beyond the Standard Model, including dark matter, dark
energy, quantum gravity, and new physics responsible for neutrino masses,
cosmic inflation, and the cosmic preference for matter over antimatter. Where
is this physics and how do we find it? The QuantISED program can exploit new
capabilities provided by quantum technology to probe these kinds of science
questions in new ways and over a broader range of science parameters than can
be achieved with conventional techniques.
|
This article is concerned with the global H\"older regularity of weak solutions
to a class of problems involving the fractional $(p,q)$-Laplacian, denoted by
$(-\Delta)^{s_1}_{p}+(-\Delta)^{s_2}_{q}$, for $1<p,q<\infty$ and $s_1,s_2\in
(0,1)$. We use a suitable Caccioppoli inequality and local boundedness result
in order to prove the weak Harnack type inequality. Consequently, by employing
a suitable iteration process, we establish the interior H\"older regularity for
local weak solutions, which need not be assumed bounded. The global H\"older
regularity result we prove expands and improves the regularity results of
Giacomoni, Kumar and Sreenadh (arXiv: 2102.06080) to the subquadratic case
(that is, $q<2$) and more general right hand side, which requires a different
and new approach. Moreover, we establish a nonlocal Harnack type inequality for
weak solutions, which is of independent interest.
|
This work investigates the use of interactively updated label suggestions to
improve upon the efficiency of gathering annotations on the task of opinion
mining in German Covid-19 social media data. We develop guidelines to conduct a
controlled annotation study with social science students and find that
suggestions from a model trained on a small, expert-annotated dataset already
lead to a substantial improvement - in terms of inter-annotator agreement (+.14
Fleiss' $\kappa$) and annotation quality - compared to students who do not
receive any label suggestions. We further find that label suggestions from
interactively trained models do not lead to an improvement over suggestions
from a static model. Nonetheless, our analysis of suggestion bias shows that
annotators remain capable of reflecting upon the suggested label in general.
Finally, we confirm the quality of the annotated data in transfer learning
experiments between different annotator groups. To facilitate further research
in opinion mining on social media data, we release our collected data
consisting of 200 expert and 2,785 student annotations.
|
It is well-known that the training of Deep Neural Networks (DNN) can be
formalized in the language of optimal control. In this context, this paper
leverages classical turnpike properties of optimal control problems to attempt
a quantifiable answer to the question of how many layers should be considered
in a DNN. The underlying assumption is that the number of neurons per layer --
i.e., the width of the DNN -- is kept constant. Pursuing a different route than
the classical analysis of approximation properties of sigmoidal functions, we
prove explicit bounds on the required depths of DNNs based on asymptotic
reachability assumptions and a dissipativity-inducing choice of the
regularization terms in the training problem. Numerical results obtained for
the two-spiral classification task indicate that the proposed
estimates can provide non-conservative depth bounds.
|
The bulk properties of nodal line materials have been an important research
topic in recent years. In this paper, we study the orbital magnetic
susceptibility and the Hall conductivity of nodal line materials using the
thermal Green's function formalism and find that both exhibit characteristic
singular behaviors. It is shown that, in the vicinity of the gapless nodal line,
the orbital magnetic susceptibility shows a $\delta$-function singularity and
the Hall conductivity shows a step function behavior in their chemical
potential dependences. Furthermore, these singular behaviors are found to show
strong field angle dependences corresponding to the orientation of the nodal
line in the momentum space. These singular behaviors and strong angle
dependences will give clear evidence for the presence of the nodal line and its
orientation and can be used to experimentally detect nodal line materials.
|
The past 20 years have witnessed a renewal of interest in the subject of
double-diffusive processes in astrophysics, and their impact on stellar
evolution. This lecture aims to summarize the state of the field as of early
2019, although the reader should bear in mind that it is rapidly evolving. An
Annual Review of Fluid Mechanics article entitled "Double-diffusive convection
at low Prandtl number" (Garaud, 2018) contains a reasonably comprehensive
review of the topic, up to the summer of 2017. I focus here on presenting what
I hope are clear derivations of some of the most important results with an
astrophysical audience in mind, and discuss their implications for stellar
evolution, both in an observational context, and in relation to previous work
on the subject.
|
Variation of positional information, measured by the two-body excess entropy
$\mathsf{S}_\mathrm{2}$, is studied across the liquid-solid equilibrium
transition in a simple two-dimensional system. Analysis reveals a master
relation between $\mathsf{S}_\mathrm{2}$ and the freezing temperature
$T_{\mathrm{f}}$, from which a scaling law is extracted:
$-\mathsf{S}_\mathrm{2}\sim |T_{\mathrm{f}} - T| ^{-1/3}$. Theoretical and
practical implications of the observed universality are discussed.
|
Large pre-trained multilingual models like mBERT and XLM-R achieve
state-of-the-art results on language understanding tasks. However, they are not
well suited for latency-critical applications on both servers and edge devices.
It is important to reduce the memory and compute resources required by these models.
To this end, we propose pQRNN, a projection-based embedding-free neural encoder
that is tiny and effective for natural language processing tasks. Without
pre-training, pQRNNs significantly outperform LSTM models with pre-trained
embeddings despite being 140x smaller. With the same number of parameters, they
outperform transformer baselines thereby showcasing their parameter efficiency.
Additionally, we show that pQRNNs are effective student architectures for
distilling large pre-trained language models. We perform careful ablations
which study the effect of pQRNN parameters, data augmentation, and distillation
settings. On MTOP, a challenging multilingual semantic parsing dataset, pQRNN
students achieve 95.9\% of the performance of an mBERT teacher while being 350x
smaller. On mATIS, a popular parsing task, pQRNN students on average are able
to get to 97.1\% of the teacher while again being 350x smaller. Our strong
results suggest that our approach is well suited for latency-sensitive
applications while still being able to leverage large mBERT-like models.
|
In this paper, we present general one-loop form factors for $H\rightarrow
\gamma^* \gamma^*$ in $R_{\xi}$ gauge, considering all cases of two on-shell,
one on-shell, and two off-shell final photons. The calculations are performed
in the standard model and in arbitrary extensions of the standard model in
which charged scalar particles may be exchanged in one-loop diagrams. Analytic
results for the form factors are presented in general form, expressed in terms
of the Passarino-Veltman functions. We also confirm the results of previous
computations available for the case of two on-shell photons. The
$\xi$-independence of the result is also discussed. We find that the numerical
results are stable when varying $\xi=0,1$ and $\xi\rightarrow \infty$.
|