Saliency maps have been shown to be both useful and misleading for explaining model predictions, especially in the context of images. In this paper, we perform sanity checks for the text modality and show that the conclusions drawn for images do not directly transfer to text. We also analyze the effects of the
input multiplier in certain saliency maps using similarity scores,
max-sensitivity and infidelity evaluation metrics. Our observations reveal that
the input multiplier carries the input's structural patterns into explanation maps,
thus leading to similar results regardless of the choice of model parameters.
We also show that the smoothness of a Neural Network (NN) function can affect
the quality of saliency-based explanations. Our investigations reveal that
replacing ReLUs with Softplus and MaxPool with smoother variants such as
LogSumExp (LSE) can lead to explanations that are more reliable based on the
infidelity evaluation metric.
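To make the smoothing recipe concrete, here is a minimal PyTorch sketch (our illustration, not the authors' code): ReLU is swapped for Softplus, MaxPool for a LogSumExp-based pooling, and the saliency map is taken as the input-gradient, optionally with the input multiplier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSEPool2d(nn.Module):
    """Smooth stand-in for MaxPool2d: (1/t) * log-sum-exp over each window."""
    def __init__(self, kernel_size=2, t=10.0):
        super().__init__()
        self.kernel_size, self.t = kernel_size, t

    def forward(self, x):
        # max over a window ~ (1/t) log sum exp(t*x); avg_pool * n gives the window sum
        n = self.kernel_size ** 2
        window_sum = F.avg_pool2d(torch.exp(self.t * x), self.kernel_size) * n
        return torch.log(window_sum) / self.t

def smooth_model():
    # Softplus approaches ReLU as beta grows; a moderate beta keeps gradients smooth
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.Softplus(beta=5.0),
        LSEPool2d(2), nn.Flatten(), nn.Linear(8 * 16 * 16, 10))

def saliency(model, x, target, multiply_input=False):
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad * x if multiply_input else x.grad  # optional input multiplier

m = smooth_model()
print(saliency(m, torch.randn(1, 3, 32, 32), target=0).shape)
```

The LogSumExp temperature `t` and the Softplus `beta` are illustrative choices that trade smoothness against fidelity to the original max/ReLU behavior.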
|
The main result provides a common generalization of Ramsey-type theorems
concerning finite colorings of edge sets of complete graphs with vertices in
infinite semigroups. We capture the essence of theorems proved in different
fields: for natural numbers due to Milliken--Taylor, Deuber--Hindman,
Bergelson--Hindman, for combinatorial covering properties due to Scheepers and
Tsaban, and local properties in function spaces due to Scheepers. To this end,
we use idempotent ultrafilters in the \v{C}ech--Stone compactifications of
discrete infinite semigroups and topological games. The research is motivated
by the recent breakthrough work of Tsaban about colorings and the Menger
covering property.
|
Quantum effects fundamentally engender exotic physical phenomena in
macroscopic systems, which advance next-generation technological applications.
Rotational tunneling, the quantum counterpart of the librational motion of molecules, is ubiquitous in hydrogen-containing materials. However, its direct manifestation in macroscopic physical properties remains elusive.
Here we report an observation of reentrant ferroelectricity under low pressure
that is mediated by the rotational tunneling of ammonium ions in molecule-based
(NH$_4$)$_2$FeCl$_5 \cdot$H$_2$O. Applying a small pressure leads to a
transition from spin-driven ferroelectricity to paraelectricity coinciding with
the stabilization of a collinear magnetic phase. Such a transition is
attributed to the hydrogen bond fluctuations via the rotational tunneling of
ammonium groups, as supported by theoretical calculations. Higher pressure suppresses the quantum fluctuations and leads to a reentrant ferroelectric phase
concomitant with another incommensurate magnetic phase. These results
demonstrate that rotational tunneling emerges as a new route for controlling magnetism-related properties in soft magnets, opening avenues for designing
multi-functional materials and realizing potential quantum control.
|
How to obtain good value estimation is one of the key problems in
Reinforcement Learning (RL). Current value estimation methods, such as DDPG and
TD3, suffer from unnecessary over- or underestimation bias. In this paper, we
explore the potential of double actors, which have long been neglected, for better value function estimation in the continuous setting. First, we uncover and demonstrate the bias-alleviation property of double actors by building them upon a single critic and upon double critics, to handle the overestimation bias in DDPG and the underestimation bias in TD3, respectively. Next, we find, interestingly, that double actors improve the exploration ability of the agent. Finally, to mitigate the uncertainty of the value estimates from double critics, we propose to regularize the critic networks under the double-actors architecture, which gives rise to the Double Actors Regularized Critics (DARC) algorithm. Extensive experimental results on challenging
continuous control tasks show that DARC significantly outperforms
state-of-the-art methods with higher sample efficiency.
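The double-actor value estimation and critic regularization can be sketched as follows (a schematic PyTorch fragment under our own simplifications; the paper's exact combination rule may differ):

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))

state_dim, act_dim = 4, 2
actors  = [mlp(state_dim, act_dim) for _ in range(2)]      # double actors
critics = [mlp(state_dim + act_dim, 1) for _ in range(2)]  # double critics

def q(critic, s, a):
    return critic(torch.cat([s, a], dim=-1))

def target(next_s, r, gamma=0.99):
    # Illustrative combination: min over critics (pessimism per action),
    # then max over the two actors' proposed actions (counters underestimation).
    with torch.no_grad():
        qs = [torch.min(q(critics[0], next_s, a), q(critics[1], next_s, a))
              for a in (actors[0](next_s), actors[1](next_s))]
        return r + gamma * torch.max(qs[0], qs[1])

def critic_loss(s, a, next_s, r, nu=0.1):
    y = target(next_s, r)
    q1, q2 = q(critics[0], s, a), q(critics[1], s, a)
    td = ((q1 - y) ** 2 + (q2 - y) ** 2).mean()
    reg = nu * ((q1 - q2) ** 2).mean()  # pull the two critics toward agreement
    return td + reg

s, a = torch.randn(8, state_dim), torch.randn(8, act_dim)
print(critic_loss(s, a, torch.randn(8, state_dim), torch.randn(8, 1)))
```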
|
How often is a quintic polynomial solvable by radicals? We establish that the
number of such polynomials, monic and irreducible with integer coefficients in
$[-H,H]$, is $O(H^{3.91})$. More generally, we show that if $n \ge 3$ and $n
\notin \{ 7, 8, 10 \}$ then there are $O(H^{n-1.017})$ monic, irreducible
polynomials of degree $n$ with integer coefficients in $[-H,H]$ and Galois
group not containing $A_n$. Save for the alternating group and degrees
$7,8,10$, this establishes a 1936 conjecture of van der Waerden.
|
Epitaxial growth on a surface vicinal to a high-symmetry crystallographic
plane occurs through the propagation of atomic steps, a process called
step-flow growth. In some instances, the steps tend to form close groups (or
bunches), a phenomenon termed step bunching, which corresponds to an
instability of the equal-spacing step propagation. Over the last fifty years,
various mechanisms have been proposed to explain step bunching, the most
prominent of which are the inverse Ehrlich-Schwoebel effect (i.e., the
asymmetry which favors the attachment of adatoms from the upper terrace),
elastically mediated interactions between steps (in heteroepitaxy), step
permeability (in electromigration-controlled growth), and the chemical effect
(which couples the diffusion fields on all terraces). Beyond the discussion of
the influence of each of these mechanisms taken independently on the propensity
to bunching, we propose a unified treatment of the effect of these mechanisms
on the onset of the bunching instability, which also accounts for their
interplay. This is done in the setting of the so-called quasistatic
approximation, which, by permitting a mostly analytical treatment, offers a clear
view of the influence on stability of the combined mechanisms. In particular,
we find that the Ehrlich-Schwoebel effect, elastic step-interactions and the
chemical effect combine in a quasi-additive fashion, whereas step permeability
is neither stabilizing nor destabilizing per se but changes the relative
influence of the three aforementioned mechanisms. In a companion paper, we
demonstrate and discuss the importance of another mechanism, which we call the
dynamics effect, that emerges when relaxing the simplifying but questionable
quasistatic approximation.
|
Deep reinforcement learning (DRL) has recently shown its success in tackling
complex combinatorial optimization problems. When these problems are extended
to multiobjective ones, it becomes difficult for the existing DRL approaches to
flexibly and efficiently deal with multiple subproblems determined by weight
decomposition of objectives. This paper proposes a concise meta-learning-based
DRL approach. It first trains a meta-model by meta-learning. The meta-model is
fine-tuned with a few update steps to derive submodels for the corresponding
subproblems. The Pareto front is built accordingly. The computational
experiments on multiobjective traveling salesman problems demonstrate the
superiority of our method over most learning-based and iteration-based approaches.
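The fine-tuning step can be sketched as follows (an illustrative PyTorch fragment; `scalarized_loss`, which weights the objectives by a given weight vector, is our assumed placeholder):

```python
import copy
import torch

def derive_submodels(meta_model, weight_vectors, scalarized_loss,
                     k_steps=5, lr=1e-3):
    """Fine-tune a copy of the meta-model for a few steps per weight vector.

    `scalarized_loss(model, w)` is assumed to return the training loss of the
    subproblem obtained by weighting the objectives with `w` (a placeholder)."""
    submodels = []
    for w in weight_vectors:
        model = copy.deepcopy(meta_model)  # start each subproblem from the meta-parameters
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(k_steps):           # only a few update steps are needed
            opt.zero_grad()
            scalarized_loss(model, w).backward()
            opt.step()
        submodels.append(model)
    return submodels  # evaluating these yields the Pareto front
```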
|
Let $E^*$ be a finite complex of locally free sheaves on a complex manifold
$X$. We prove that to every connection of type $(1,0)$ on $E^*$ there is canonically associated an $L_{\infty}$ morphism $g\colon A^{0,*}_X(\mathcal{H}om^*_{O_X}(E^*,E^*))\to \dfrac{A^{*,*}_X}{A^{\ge 2,*}_X}[2]$ that lifts the 1-component of the Buchweitz--Flenner semiregularity map. An
application to deformations of coherent sheaves on projective manifolds is
given.
|
Single image generative models perform synthesis and manipulation tasks by
capturing the distribution of patches within a single image. The classical (pre-deep-learning) approaches prevailing for these tasks are based on an optimization process that maximizes patch similarity between the input and the generated output. Recently, however, Single Image GANs were introduced both as a superior solution to such manipulation tasks and as a means for remarkable novel generative tasks. Despite their impressive results, single image GANs require a long training time (usually hours) for each image and each task. They often suffer from artifacts and are prone to optimization issues such as mode collapse. In
this paper, we show that all of these tasks can be performed without any
training, within several seconds, in a unified, surprisingly simple framework.
We revisit and cast the "good-old" patch-based methods into a novel
optimization-free framework. We start with an initial coarse guess, and then
simply refine the details coarse-to-fine using patch-nearest-neighbor search.
This allows generating random novel images better and much faster than GANs. We
further demonstrate a wide range of applications, such as image editing and
reshuffling, retargeting to different sizes, structural analogies, image
collage and a newly introduced task of conditional inpainting. Not only is our
method faster ($10^3$-$10^4\times$ relative to a GAN), it produces superior results (confirmed by quantitative and qualitative evaluation), fewer artifacts, and more realistic global structure than any of the previous approaches
(whether GAN-based or classical patch-based).
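The core refinement loop can be sketched in a few lines of NumPy (a brute-force, single-scale illustration of the patch-nearest-neighbor idea; the actual method works coarse-to-fine over a pyramid with efficient nearest-neighbor search):

```python
import numpy as np

def extract_patches(img, p):
    H, W = img.shape[:2]
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def pnn_refine(guess, reference, p=5):
    """One refinement pass: replace each patch of `guess` by its nearest
    patch in `reference` and average the overlapping votes."""
    ref = extract_patches(reference, p)
    out, cnt = np.zeros_like(guess), np.zeros_like(guess)
    H, W = guess.shape[:2]
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            query = guess[i:i + p, j:j + p].ravel()
            nn = ref[np.argmin(((ref - query) ** 2).sum(1))]
            out[i:i + p, j:j + p] += nn.reshape(guess[i:i + p, j:j + p].shape)
            cnt[i:i + p, j:j + p] += 1
    return out / cnt

reference = np.random.rand(32, 32)
coarse = reference + 0.5 * np.random.rand(32, 32)  # stand-in for an initial coarse guess
refined = pnn_refine(coarse, reference)
```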
|
We provide a semigroup approach to the viscous Hamilton-Jacobi equation. It
turns out that exponential Orlicz hearts are suitable spaces to handle the
(quadratic) non-linearity of the Hamiltonian. Based on an abstract extension
result for nonlinear semigroups on spaces of continuous functions, we represent
the solution of the viscous Hamilton-Jacobi equation as a strongly continuous
convex semigroup on an exponential Orlicz heart. As a result, the solution
depends continuously on the initial data. We further determine the symmetric
Lipschitz set which is invariant under the semigroup. This automatically yields
a priori estimates and regularity in Sobolev spaces. In particular, on the
domain restricted to the symmetric Lipschitz set, the generator can be
explicitly determined and linked with the viscous Hamilton-Jacobi equation.
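For concreteness, the prototypical equation in question can be written as follows (an illustrative normalization; the paper's setting may be more general):

```latex
% Prototypical viscous Hamilton--Jacobi equation with quadratic Hamiltonian
% (an illustrative normalization; the paper's setting may be more general):
\begin{equation*}
  \partial_t u = \Delta u + |\nabla u|^2 , \qquad u(0,\cdot) = u_0 ,
\end{equation*}
% whose solution operator $u_0 \mapsto u(t,\cdot)$ is represented as a strongly
% continuous convex semigroup on an exponential Orlicz heart.
```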
|
Searching for a more compact network width recently serves as an effective
way of channel pruning for the deployment of convolutional neural networks
(CNNs) under hardware constraints. To fulfill the searching, a one-shot
supernet is usually leveraged to efficiently evaluate the performance
w.r.t.\ different network widths. However, current methods mainly follow a \textit{unilaterally augmented} (UA) principle for the evaluation of each width, which induces training unfairness among the channels in the supernet. In this
paper, we introduce a new supernet called Bilaterally Coupled Network (BCNet)
to address this issue. In BCNet, each channel is fairly trained and responsible
for the same amount of network widths, thus each network width can be evaluated
more accurately. Besides, we leverage a stochastic complementary strategy for
training the BCNet, and propose a prior initial population sampling method to
boost the performance of the evolutionary search. Extensive experiments on
the benchmark CIFAR-10 and ImageNet datasets indicate that our method achieves state-of-the-art or competitive performance compared with other baseline methods. Moreover, our method turns out to further boost the performance of NAS models by refining their network widths. For example, with the same FLOPs budget, our obtained EfficientNet-B0 achieves 77.36\% Top-1 accuracy on the ImageNet dataset, surpassing the original setting by 0.48\%.
|
We propose here, for the first time in the literature to the best of our knowledge, an electroweak unification based on the $SU(5)_{L}\times U(1)_{Y}$ gauge
group. The spontaneous symmetry breaking takes place in the manner
$SU(5)_{L}\times U(1)_{Y} \rightarrow U(1)_{em}$, due to a particular Higgs
sector consisting of five scalar quintuplets. Each scalar quintuplet acquires
its own vacuum expectation value, by means of a proper parametrization which is
worked out once the overall vacuum expectation value in the model is
established. The decoupling of the low energy regime (corresponding to the
Standard Model) from the high scale (required by our model here) is
straightforwardly achieved in order to preserve the consistency with the
present experimental data. Finally, a promising phenomenological outcome is
derived by tuning a single free parameter. Our results include, besides a viable one-parameter mass spectrum, the prediction of precisely three generations in the fermion sector and the quantization of electric charge.
|
We study strategic interactions between firms with heterogeneous beliefs
about future climate impacts. To that end, we propose a Cournot-type
equilibrium model where firms choose mitigation efforts and production
quantities so as to maximize their expected profits under their subjective
beliefs. It is shown that optimal mitigation efforts are increased by the
presence of uncertainty and act as substitutes; i.e., one firm's lack of
mitigation incentivizes others to act more decidedly, and vice versa.
|
Several of the latest GAN-based vocoders show remarkable achievements,
outperforming autoregressive and flow-based competitors in both qualitative and
quantitative measures while synthesizing orders of magnitude faster. In this
work, we hypothesize that the common factor underlying their success is the
multi-resolution discriminating framework, not the minute details in
architecture, loss function, or training strategy. We experimentally test the
hypothesis by evaluating six different generators paired with one shared
multi-resolution discriminating framework. For all evaluative measures with
respect to text-to-speech synthesis and for all perceptual metrics, the generators' performances are indistinguishable from one another, which supports our
hypothesis.
|
In this perspective piece, I benchmark gallium arsenide, silicon, and
germanium as material platforms for gate-defined quantum dot spin qubits. I
focus on materials stacks, quantum dot architectures, bandstructure properties
and qualifiers for disorder from electrical transport. This brief note is far
from being exhaustive and should be considered a first introduction to the
materials challenges and opportunities towards a larger spin qubit quantum
processor.
|
Clustering using deep neural network models has been extensively studied in recent years. Among the most popular frameworks are the VAE and GAN frameworks, which learn latent feature representations of data through encoder/decoder neural net structures. This is a suitable basis for clustering tasks, as the
latent space often seems to effectively capture the inherent essence of data,
simplifying its manifold and reducing noise. In this article, the VAE framework
is used to investigate how probability function gradient ascent over data
points can be used to process data in order to achieve better clustering.
Improvements in classification are observed compared with unprocessed data, although state-of-the-art results are not obtained. Gradient processing of the data, however, results in more distinct cluster separation, making it simpler to investigate suitable hyperparameter settings such as the number of clusters. We propose a simple yet effective method for investigating a suitable
number of clusters for data, based on the DBSCAN clustering algorithm, and
demonstrate that cluster number determination is facilitated with gradient
processing. As an additional curiosity, we find that our baseline model used for comparison, a GMM on the t-SNE latent space of a VAE structure trained with weight one on reconstruction (an autoencoder), yields state-of-the-art results on the MNIST data, to our knowledge not beaten by any other existing model.
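The gradient-processing step can be sketched as follows (an illustrative PyTorch fragment; `log_density` is our assumed placeholder for a differentiable density surrogate, such as the ELBO of a trained VAE):

```python
import torch

def gradient_process(x, log_density, steps=50, lr=0.1):
    """Move data points uphill on an estimated log-density before clustering.

    `log_density(x)` is an assumed placeholder returning a differentiable
    per-point estimate, e.g. a trained VAE's ELBO used as a log p(x) surrogate."""
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        (grad,) = torch.autograd.grad(log_density(x).sum(), x)
        with torch.no_grad():
            x += lr * grad  # ascent step toward the nearest density mode
    return x.detach()       # processed points form tighter, better-separated clusters
```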
|
We studied the impact of field heterogeneity on entrainment in a system of
uniformly interacting phase oscillators. Field heterogeneity is shown to induce
dynamical heterogeneity in the system. In effect, the heterogeneous field
partitions the system into interacting groups of oscillators that feel the same
local field strength and phase. Based on numerical and analytical analysis of
the explicit dynamical equations derived from the periodically forced Kuramoto
model, we found that the heterogeneous field can disrupt entrainment at
different field frequencies when compared to the homogeneous field. This
transition occurs when the phase- and frequency-locked synchronization between
groups of oscillators is broken at a critical field frequency, causing each
group to enter a new dynamical state (disrupted state). Strikingly, it is shown
that disrupted dynamics can differ between groups.
|
Large-scale machine learning and data mining methods routinely distribute
computations across multiple agents to parallelize processing. The time
required for the computations at the agents is affected by the availability of
local resources and/or poor channel conditions, giving rise to the "straggler
problem". As a remedy to this problem, we employ Unequal Error Protection (UEP)
codes to obtain an approximation of the matrix product in the distributed
computation setting to provide higher protection for the blocks with higher
effect on the final result. We characterize the performance of the proposed
approach from a theoretical perspective by bounding the expected reconstruction
error for matrices with uncorrelated entries. We also apply the proposed coding strategy to the evaluation of gradients in the back-propagation step of training a Deep Neural Network (DNN) for an image classification task. Our numerical experiments show that it is indeed possible to
obtain significant improvements in the overall time required to achieve the DNN
training convergence by producing approximations of matrix products using UEP
codes in the presence of stragglers.
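The unequal-protection idea can be illustrated with a toy NumPy sketch (our own simplification: block importance is ranked by a norm proxy, and dropping low-priority terms stands in for a true UEP code under a straggler deadline):

```python
import numpy as np

def split_and_rank(A, B, k=4):
    """Split A (m x n) by columns and B (n x p) by rows, so A @ B equals the sum
    of block products A_i @ B_i; rank the terms by a norm proxy of their impact."""
    A_blocks = np.array_split(A, k, axis=1)
    B_blocks = np.array_split(B, k, axis=0)
    scores = [np.linalg.norm(Ai) * np.linalg.norm(Bi)
              for Ai, Bi in zip(A_blocks, B_blocks)]
    return A_blocks, B_blocks, np.argsort(scores)[::-1]

A, B = np.random.randn(60, 40), np.random.randn(40, 50)
A_b, B_b, order = split_and_rank(A, B)
# Stronger protection for high-impact blocks means they are the ones most likely
# to be recovered before the deadline; here we keep the top 3 of 4 terms.
approx = sum(A_b[i] @ B_b[i] for i in order[:3])
print(np.linalg.norm(A @ B - approx) / np.linalg.norm(A @ B))
```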
|
Classes of branched surfaces extend the classes of surfaces or 2-dimensional
manifolds satisfying suitable properties and defined in various manners. Reeb
spaces of smooth maps of suitable classes into surfaces whose codimensions are
negative are regarded as branched surfaces. They are the spaces of all
connected components of preimages and natural quotient spaces of the manifolds
of the domains. They are defined for general smooth maps and are important topological objects in differential topology. They also play important roles in applied mathematics, such as projections in data analysis and visualization.
The present paper concerns global topologies of branched surfaces and
explicit construction of canonically obtained maps from the branched surfaces
into surfaces of the targets via fundamental operations. The class of these
induced maps extends the class of smooth immersions of compact surfaces into
surfaces with no boundaries. It is also regarded as a variant of the class of
so-called generic smooth maps between these surfaces. We study so-called
"geography" of such maps as a natural, important and new study and also study
global topological properties of the branched surfaces such as embeddability
into 3-dimensional closed and connected manifolds as a well-known variant of
embeddability of graphs into surfaces.
The author has long believed that understanding the global topologies of Reeb spaces of suitable smooth maps is important, and has obtained several Reeb spaces together with information on their global algebraic-topological or differential-topological properties. The present study can also be regarded as a study of these objects in the 2-dimensional case for {\it simple} fold maps, generalized versions of so-called Morse functions, and can be seen as a new work motivated by various theories of Morse functions and their higher-dimensional variants.
|
A thorough investigation of the local structure, which influences the macroscopic properties of a solid, is of potential interest. We investigated the local
structure of GaN nanowires (NWs) with different native defect concentration
synthesized by the chemical vapor deposition technique. Extended X-ray
absorption fine structure (EXAFS) analysis, together with semi-empirical and density functional theory (DFT) calculations, was used to address the effect of dopant incorporation, along with other defects, on the coordination number and bond
length values. The decrease of the bond length values along preferential
crystal axes in the local tetrahedral structure of GaN emphasizes the preferred
lattice site for oxygen doping. The preferential bond length contraction is
corroborated by the simulations. We have also studied the impact on the local
atomic configuration of GaN NWs with Al incorporation. Al$_x$Ga$_{1-x}$N NWs are synthesized via the novel ion-beam techniques of ion-beam mixing and post-irradiation diffusion. The change in the local tetrahedral
structure of GaN with Al incorporation is investigated by EXAFS analysis. The
analysis provides a clear understanding of choosing a suitable process for
ternary III-nitride random alloy formation. The local structure study via EXAFS analysis is corroborated by the macroscopic properties observed using Raman spectroscopy.
|
Let $V$ be a vector space with countable dimension over a field, and let $u$
be an endomorphism of it which is locally finite, i.e. $(u^k(x))_{k \geq 0}$ is
linearly dependent for all $x$ in $V$. We give several necessary and sufficient
conditions for the decomposability of $u$ into the sum of two square-zero
endomorphisms. Moreover, if $u$ is invertible, we give necessary and sufficient
conditions for the decomposability of $u$ into the product of two involutions,
as well as for the decomposability of $u$ into the product of two unipotent
endomorphisms of index $2$. Our results essentially extend the ones that are
known in the finite-dimensional setting.
In particular, we obtain that every strictly upper-triangular infinite matrix
with entries in a field is the sum of two square-zero infinite matrices
(potentially non-triangular, though), and that every upper-triangular infinite
matrix (with entries in a field) with only $\pm 1$ on the diagonal is the
product of two involutory infinite matrices.
|
In this paper, we explore the dynamics of a Hamiltonian system after a double
van der Waals potential energy surface degenerates into a single well. The
energy of the system is increased from the bottom of the potential well up to
the dissociation energy, which occurs when the system becomes open. In
particular, we study the bifurcations of the basic families of periodic orbits
of this system as the energy increases using Lagrangian descriptors and
Poincar\'e maps. We investigate the capability of Lagrangian descriptors to
find periodic orbits of bifurcating families for the case of resonant,
saddle-node and pitchfork bifurcations.
|
In this paper, we propose a novel approach for the transcription of speech
conversations with natural speaker overlap, from single channel recordings. We
propose a combination of a speaker diarization system and a hybrid automatic
speech recognition (ASR) system with speaker activity assisted acoustic model
(AM). An end-to-end neural network system is used for speaker diarization. Two
architectures, (i) input conditioned AM, and (ii) gated features AM, are
explored to incorporate the speaker activity information. The models output speaker-specific senones. The experiments on Switchboard telephone
conversations show the advantage of incorporating speaker activity information
in the ASR system for recordings with overlapped speech. In particular, an
absolute improvement of $11\%$ in word error rate (WER) is seen for the
proposed approach on natural conversation speech with automatic diarization.
|
Recent studies have provided both empirical and theoretical evidence
illustrating that heavy tails can emerge in stochastic gradient descent (SGD)
in various scenarios. Such heavy tails potentially result in iterates with
diverging variance, which hinders the use of conventional convergence analysis
techniques that rely on the existence of the second-order moments. In this
paper, we provide convergence guarantees for SGD under a state-dependent and
heavy-tailed noise with a potentially infinite variance, for a class of
strongly convex objectives. In the case where the $p$-th moment of the noise
exists for some $p\in [1,2)$, we first identify a condition on the Hessian,
coined '$p$-positive (semi-)definiteness', that leads to an interesting
interpolation between positive semi-definite matrices ($p=2$) and diagonally
dominant matrices with non-negative diagonal entries ($p=1$). Under this
condition, we then provide a convergence rate for the distance to the global
optimum in $L^p$. Furthermore, we provide a generalized central limit theorem,
which shows that the properly scaled Polyak-Ruppert averaging converges weakly
to a multivariate $\alpha$-stable random vector. Our results indicate that even
under heavy-tailed noise with infinite variance, SGD can converge to the global
optimum without requiring any modification to either the loss function or the algorithm itself, as is typically required in robust statistics. We
demonstrate the implications of our results to applications such as linear
regression and generalized linear models subject to heavy-tailed data.
|
During the research cruise AL547 with RV ALKOR (October 20-31, 2020), a
collaborative underwater network of ocean observation systems was deployed in
Boknis Eck (SW Baltic Sea, German exclusive economic zone (EEZ)) in the context
of the project ARCHES (Autonomous Robotic Networks to Help Modern Societies).
This network was realized via a Digital Twin Prototype approach. During that
period, different scenarios were executed to demonstrate the feasibility of Digital Twins in an extreme environment such as underwater. One of the scenarios showed the collaboration of stage IV Digital Twins with their physical counterparts on the seafloor. In this way, we address the research question of whether Digital Twins represent a feasible approach to operating mobile ad hoc networks for ocean and coastal observation.
|
This paper introduces an adaptive, model-free deep reinforcement learning approach that can recognize and adapt to the diurnal patterns of a ride-sharing environment with car-pooling. Deep Reinforcement Learning (RL) suffers from catastrophic
forgetting due to being agnostic to the timescale of changes in the
distribution of experiences. Although RL algorithms are guaranteed to converge
to optimal policies in Markov decision processes (MDPs), this only holds in the
presence of static environments. However, this assumption is very restrictive.
In many real-world problems like ride-sharing, traffic control, etc., we are
dealing with highly dynamic environments, where RL methods yield only
sub-optimal decisions. To mitigate this problem in highly dynamic environments,
we (1) adopt an online Dirichlet change point detection (ODCP) algorithm to
detect the changes in the distribution of experiences, (2) develop a Deep Q
Network (DQN) agent that is capable of recognizing diurnal patterns and making
informed dispatching decisions according to the changes in the underlying
environment. Rather than fixing patterns by time of week, the proposed approach
automatically detects that the MDP has changed, and uses the results of the new
model. In addition to the adaptation logic in dispatching, this paper also
proposes a dynamic, demand-aware vehicle-passenger matching and route planning
framework that dynamically generates optimal routes for each vehicle based on
online demand, vehicle capacities, and locations. Evaluation on the public New York City Taxi dataset shows the effectiveness of our approach in improving fleet utilization: less than 50% of the fleet is utilized to serve up to 90% of the requests, while maximizing profits and minimizing idle times.
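The adaptation loop can be sketched as follows (a schematic Python fragment; the z-score test is a crude stand-in for ODCP, and `agent.adapt()` is a hypothetical hook for the paper's update machinery):

```python
import numpy as np

class ChangeAwareDispatcher:
    """Watch a stream of experience statistics (e.g. mean reward per interval)
    and flag distribution shifts; a crude z-score test stands in for ODCP."""
    def __init__(self, window=200, recent=20, z_thresh=4.0):
        self.window, self.recent, self.z_thresh = window, recent, z_thresh
        self.history = []

    def shift_detected(self, stat):
        self.history.append(stat)
        h = self.history[-self.window:]
        if len(h) < self.window:
            return False
        base = h[:-self.recent]
        z = abs(np.mean(h[-self.recent:]) - np.mean(base)) / (np.std(base) + 1e-8)
        return z > self.z_thresh

def run(stats_stream, agent):
    detector = ChangeAwareDispatcher()
    for stat in stats_stream:
        if detector.shift_detected(stat):
            agent.adapt()  # hypothetical hook: refresh the DQN for the new MDP
```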
|
There is a growing concern that e-commerce platforms are amplifying vaccine misinformation. To investigate, we conduct two sets of algorithmic audits for vaccine misinformation on the search and recommendation algorithms of Amazon, the world's leading e-retailer. First, we systematically audit search results for vaccine-related search queries without logging into the platform (unpersonalized audits). We find that 10.47% of search results promote misinformative health products. We also observe a ranking bias, with Amazon ranking misinformative search results higher than debunking ones.
Next, we analyze the effects of personalization due to account-history, where
history is built progressively by performing various real-world user-actions,
such as clicking a product. We find evidence of a filter-bubble effect in Amazon's recommendations: accounts performing actions on misinformative products are presented with more misinformation than accounts performing actions on neutral or debunking products. Interestingly, once a user clicks on a misinformative product, homepage recommendations become more contaminated than when the user shows an intention to buy that product.
|
With the advent of deep learning, the number of works proposing new methods or improving existing ones has grown exponentially in recent years. In this scenario, "very deep" models emerged, since they were expected to extract more intrinsic and abstract features while supporting better performance.
However, such models suffer from the gradient vanishing problem, i.e.,
backpropagation values become too close to zero in their shallower layers,
ultimately causing learning to stagnate. Such an issue was overcome in the
context of convolutional neural networks by creating "shortcut connections"
between layers, in a so-called deep residual learning framework. Nonetheless, a
very popular deep learning technique called Deep Belief Network still suffers
from gradient vanishing when dealing with discriminative tasks. Therefore, this
paper proposes the Residual Deep Belief Network, which reinforces information layer by layer to improve feature extraction and knowledge retention, thereby supporting better discriminative performance.
Experiments conducted over three public datasets demonstrate its robustness
concerning the task of binary image classification.
|
Nanowires (NWs) with their quasi-one-dimensionality often present different
structural and opto-electronic properties than their thin-film counterparts.
The thinner they are, the larger these differences become, in particular in carrier-phonon scattering and thermal conductivity. In this work, we present femtosecond transient absorbance measurements on GaAs$_{0.8}$P$_{0.2}$ NWs of two different diameters, 36 and 51 nm. The results show that thinner NWs sustain hot carriers at a higher temperature for longer times than thicker NWs. We
explain this observation by suggesting that in thinner NWs, the build-up of a
hot-phonon bottleneck is easier than in thicker NWs because of the increased
phonon scattering at the NW sidewalls which facilitates the build-up of a large
phonon density. The large number of optical phonons emitted during the carrier relaxation processes generates a non-equilibrium population of acoustic phonons that propagates less efficiently in thin NWs. This facilitates the possible acoustic-to-optical phonon up-conversion process, which prolongs the LO phonon lifetime, resulting in a slowdown of the carrier cooling. The important
observation that the carrier temperature in thin NWs is higher than in thick
NWs already at the beginning of the hot carrier regime suggests that the
phonon-mediated scattering processes in the non-thermal regime play a major
role at least for the carrier densities investigated here ($8\times10^{18}$-$4\times10^{19}$ cm$^{-3}$).
Our results also suggest that the boundary scattering of phonons at crystal
defects is negligible compared to the surface scattering at the NW sidewalls.
|
The uncertainty principle states that a measurement inevitably disturbs the
system, while it is often supposed that a quantum system is not disturbed
without state change. Korzekwa, Jennings, and Rudolph [Phys. Rev. A 89, 052108
(2014)] pointed out a conflict between those two views, and concluded that
state-dependent formulations of error-disturbance relations are untenable.
Here, we reconcile the conflict by showing that a quantum system is disturbed
without state change, in favor of the recently obtained universally valid
state-dependent error-disturbance relations.
|
This work focuses on comparing different solutions for machine translation on
low resource language pairs, namely, with zero-shot transfer learning and
unsupervised machine translation. We discuss how the data size affects the
performance of both unsupervised MT and transfer learning. Additionally, we look at how the domain of the data affects the results of unsupervised MT. The code for all the experiments performed in this project is accessible on GitHub.
|
This paper introduces properties and relations for proximal homotopy as well
as descriptive proximal homotopy. A number of results are given for the
homotopy between Lodato-proximally continuous maps and for the homotopy between
descriptive Lodato proximally continuous maps. This paper also introduces
homotopic cycles in conjunction with paths in either proximity spaces in
general or in descriptive proximity spaces. Three main results in this paper
are (1) that the product of descriptive Lodato proximity (dlp) spaces is also a
dlp space, (2) every descriptive dlp relation is an equivalence relation and
(3) every homotopic cycle has a free group representation.
|
The analytical theory of our earlier study (Mortensen et al. (2021),
Mathematical Medicine and Biology, 38(1), pp. 106-131) is extended to address
the outstanding cases of fibroblast barrier distribution and myocyte strait
distribution. In particular, closed-form approximations to the resting membrane
potential and to the critical parameter values for propagation are derived for
these two non-uniform fibroblast distributions and are in good agreement with
numerical estimates.
|
Additive robotic construction of building-scale discrete bar structures, such
as trusses and space frames, is increasingly attractive due to the potential
improvements in efficiency, safety, and design possibilities. However,
programming complex robots, such as manipulators with seven degrees of freedom,
to successfully complete construction tasks can be tedious, challenging, or
impossible for a human to do manually. Namely, the structure must be
constructed in a sequence that preserves structural properties, such as
stiffness, at each step. At the same time, this sequence must allow for the
robot to precisely manipulate elements within the in-progress structure while
respecting geometric constraints that, for example, ensure the robot does not
collide with what it has built. In this work, we present an automated and newly
generalized planning approach for jointly finding a construction sequence and
robot motion plan for additive construction that satisfies these requirements.
Our approach can be applied in a variety of additive construction processes,
and we demonstrate it specifically on spatial extrusion and discrete bar
assembly in this paper. We demonstrate the effectiveness of our approach on
several simulated and real-world extrusion and assembly tasks, including a
human-scale physical prototype, for which our algorithm is deployed for the
first time to plan the assembly of a complicated double tangent bar system
design.
|
Although previous research on Aspect-based Sentiment Analysis (ABSA) for Indonesian reviews in the hotel domain has been conducted using CNN and XGBoost, the resulting model did not generalize well on test data, and a high number of out-of-vocabulary (OOV) words contributed to misclassification cases. Nowadays, most state-of-the-art results for a wide array of NLP tasks are achieved by utilizing pretrained language
representations. In this paper, we incorporate one of the foremost language representation models, BERT, to perform ABSA on an Indonesian review dataset. By combining multilingual BERT (m-BERT) with a task transformation method, we achieve a significant improvement of 8% in F1-score compared to the result from our previous study.
|
We report a Karl G. Jansky Very Large Array (JVLA) search for redshifted
CO(1-0) or CO(2-1) emission, and a Hubble Space Telescope Wide Field Camera~3
(HST-WFC3) search for rest-frame near-ultraviolet (NUV) stellar emission, from
seven HI-selected galaxies associated with high-metallicity ([M/H]~$\geq -1.3$)
damped Ly$\alpha$ absorbers (DLAs) at $z\approx 4$. The galaxies were earlier
identified by ALMA imaging of their [CII]~158$\mu$m emission. We also used the
JVLA to search for CO(2-1) emission from the field of a low-metallicity
([M/H]~$=-2.47$) DLA at $z\approx 4.8$. No statistically significant CO
emission is detected from any of the galaxies, yielding upper limits of
$M_{mol}<(7.4 - 17.9)\times 10^{10}\times (\alpha_{CO}/4.36) M_\odot$ on their
molecular gas mass. We detect rest-frame NUV emission from four of the seven
[CII]~158$\mu$m-emitting galaxies, the first detections of the stellar
continuum from HI-selected galaxies at $z\gtrsim 4$. The HST-WFC3 images yield
typical sizes of the stellar continua of $\approx 2-4$~kpc and inferred
dust-unobscured star-formation rates (SFRs) of $\approx 5.0-17.5 M_\odot$/yr,
consistent with, or slightly lower than, the total SFRs estimated from the
far-infrared (FIR) luminosity. We further stacked the CO(2-1) emission signals
of six [CII]~158$\mu$m-emitting galaxies in the image plane. Our non-detection
of CO(2-1) emission in the stacked image yields the limit $M_{mol}<4.1 \times
10^{10}\times (\alpha_{CO}/4.36) M_\odot$ on the average molecular gas mass of
the six galaxies. Our molecular gas mass estimates and NUV SFR estimates in
HI-selected galaxies at $z\approx 4$ are consistent with those of main-sequence
galaxies with similar [CII]~158$\mu$m and FIR luminosities at similar
redshifts. However, the NUV emission in the HI-selected galaxies appears more
extended than that in main-sequence galaxies at similar redshifts.
|
In a recent paper by Jafarov, Nagiyev, Oste and Van der Jeugt (2020 {\sl J.\
Phys.\ A} {\bf 53} 485301), a confined model of the non-relativistic quantum
harmonic oscillator, where the effective mass and the angular frequency are
dependent on the position, was constructed and it was shown that the
confinement parameter gets quantized. By using a point canonical transformation
starting from the constant-mass Schr\"odinger equation for the Rosen-Morse II
potential, it is shown here that similar results can be easily obtained without
quantizing the confinement parameter. In addition, an extension to a confined
shifted harmonic oscillator directly follows from the same point canonical
transformation.
|
We describe the design of a soldered sapphire optical viewport, useful for
spectroscopic applications of samples at high temperatures and high pressures.
The sapphire window is bonded via active soldering to a metal flange with a
structure of two c-shaped rings made of different metallic materials in
between, so as to mitigate thermally induced stress. A spectroscopic cell equipped
with two of the optical viewports has been successfully operated with alkali
metals in a noble gas environment at temperatures in the range $20\,${\deg}C to
$450\,${\deg}C at noble gas pressures from $10^{-6}\,$mbar to $330\,$bar. At
the upper pressure range, we observe a leakage rate smaller than our readout
accuracy of $30\,$mbar per day.
|
We propose new width-based planning and learning algorithms inspired by a
careful analysis of the design decisions made by previous width-based planners.
The algorithms are applied over the Atari-2600 games and our best performing
algorithm, Novelty guided Critical Path Learning (N-CPL), outperforms the
previously introduced width-based planning and learning algorithms $\pi$-IW(1),
$\pi$-IW(1)+ and $\pi$-HIW(n, 1). Furthermore, we present a taxonomy of the
Atari-2600 games according to some of their defining characteristics. This
analysis of the games provides further insight into the behaviour and
performance of the algorithms introduced. Namely, for games with large
branching factors, and games with sparse meaningful rewards, N-CPL outperforms
$\pi$-IW(1), $\pi$-IW(1)+ and $\pi$-HIW(n, 1).
|
Human social behavior plays a crucial role in how pathogens like SARS-CoV-2
or fake news spread in a population. Social interactions determine the contact
network among individuals, while spreading, requiring individual-to-individual
transmission, takes place on top of the network. Studying the topological
aspects of a contact network, therefore, not only has the potential of leading
to valuable insights into how the behavior of individuals impacts spreading
phenomena, but it may also open up possibilities for devising effective
behavioral interventions. Because of the temporal nature of interactions -
since the topology of the network, containing who is in contact with whom,
when, for how long, and in which precise sequence, varies (rapidly) in time -
analyzing them requires developing network methods and metrics that respect
temporal variability, in contrast to those developed for static (i.e.,
time-invariant) networks. Here, by means of event mapping, we propose a method
to quantify how quickly agents mingle by transforming temporal network data of
agent contacts. We define a novel measure called 'contact sequence centrality',
which quantifies the impact of an individual on the contact sequences,
reflecting the individual's behavioral potential for spreading. Comparing
contact sequence centrality across agents allows for ranking the impact of
agents and identifying potential 'behavioral super-spreaders'. The method is
applied to social interaction data collected at an art fair in Amsterdam. We
relate the measure to the existing network metrics, both temporal and static,
and find that (mostly at longer time scales) traditional metrics lose their
resemblance to contact sequence centrality. Our work highlights the importance
of accounting for the sequential nature of contacts when analyzing social
interactions.
|
We propose deep neural network algorithms to calculate the efficient frontier in some Mean-Variance and Mean-CVaR portfolio optimization problems. We show that
we are able to deal with such problems when both the dimension of the state and
the dimension of the control are high. Adding some additional constraints, we
compare different formulations and show that a new projected feedforward
network is able to deal with some global constraints on the weights of the
portfolio while outperforming classical penalization methods. All developed formulations are compared with one another. Depending on the problem and its dimension, some formulations may be preferred.
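One plausible reading of a projected feedforward network is sketched below (our illustration in PyTorch): the output layer maps logits through a softmax, so the portfolio weights satisfy the full-investment and no-short-selling constraints by construction rather than via penalties.

```python
import torch
import torch.nn as nn

class ProjectedAllocator(nn.Module):
    """Feedforward net whose output passes through a softmax, a smooth surrogate
    for projecting onto the simplex: weights are nonnegative and sum to one by
    construction, so no penalty term is needed for these global constraints."""
    def __init__(self, n_features, n_assets):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_assets))

    def forward(self, x):
        return torch.softmax(self.body(x), dim=-1)

net = ProjectedAllocator(n_features=10, n_assets=5)
weights = net(torch.randn(3, 10))
print(weights.sum(dim=-1))  # each portfolio sums to 1
```

Other global constraints (boxes, leverage limits) would call for a different projection, e.g. a Euclidean projection layer in place of the softmax.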
|
We show that the assumption of a weak form of the Hardy-Littlewood conjecture
on the Goldbach problem suffices to disprove the possible existence of
exceptional zeros of Dirichlet L-functions. This strengthens a result of the
authors named in the title.
|
Imaging in clinical routine is subject to changing scanner protocols,
hardware, or policies in a typically heterogeneous set of acquisition hardware.
Accuracy and reliability of deep learning models suffer from those changes as
data and targets become inconsistent with their initial static training set.
Continual learning can adapt to a continuous data stream of a changing imaging
environment. Here, we propose a method for continual active learning on a data
stream of medical images. It recognizes shifts in, or additions of, new imaging sources (domains), adapts training accordingly, and selects optimal examples
for labelling. Model training has to cope with a limited labelling budget,
resembling typical real world scenarios. We demonstrate our method on
T1-weighted magnetic resonance images from three different scanners with the
task of brain age estimation. Results demonstrate that the proposed method
outperforms naive active learning while requiring less manual labelling.
|
We investigate the large-scale clustering of the final spectroscopic sample
of quasars from the recently completed extended Baryon Oscillation
Spectroscopic Survey (eBOSS). The sample contains $343708$ objects in the
redshift range $0.8<z<2.2$ and $72667$ objects with redshifts $2.2<z<3.5$,
covering an effective area of $4699~{\rm deg}^{2}$. We develop a neural
network-based approach to mitigate spurious fluctuations in the density field
caused by spatial variations in the quality of the imaging data used to select
targets for follow-up spectroscopy. Simulations are used with the same angular
and radial distributions as the real data to estimate covariance matrices,
perform error analyses, and assess residual systematic uncertainties. We
measure the mean density contrast and cross-correlations of the eBOSS quasars
against maps of potential sources of imaging systematics to assess algorithm effectiveness, finding that the neural network-based approach outperforms
standard linear regression. Stellar density is one of the most important
sources of spurious fluctuations, and a new template constructed using data
from the Gaia spacecraft provides the best match to the observed quasar
clustering. The end-product from this work is a new value-added quasar
catalogue with improved weights to correct for nonlinear imaging systematic
effects, which will be made public. Our quasar catalogue is used to measure the
local-type primordial non-Gaussianity in our companion paper, Mueller et al. in
preparation.
|
Advancements in heterogeneous computing technologies unlock the significant potential of virtual reality (VR) applications. To offer the best user
experience (UX), a system should adopt an untethered, wireless-network-based
architecture to transfer VR content between the user and the content generator.
However, modern wireless network technologies make implementing such an
architecture challenging, as VR applications require superior video quality --
with high resolution, high frame rates, and very low latency.
This paper presents OpenUVR, an open-source framework that uses commodity
hardware components to satisfy the demands of interactive, real-time VR
applications. OpenUVR significantly improves UX through a redesign of the
system stack and addresses the most time-sensitive issues associated with
redundant memory copying in modern computing systems. OpenUVR presents a
cross-layered VR datapath to avoid redundant data operations and computation among system components; it customizes the network stack to eliminate unnecessary memory operations incurred by mismatched data formats in each layer; and it uses feedback from mobile devices to remove memory buffers. Together, these modifications allow OpenUVR to reduce VR application delays to 14.32 ms, meeting the 20 ms latency bound for avoiding motion sickness. As
an open-source system that is fully compatible with commodity hardware, OpenUVR
offers the research community an opportunity to develop, investigate, and
optimize applications for untethered, high-performance VR architectures.
|
Magnetic field amplification by relativistic streaming plasma instabilities
is central to a wide variety of high-energy astrophysical environments as well
as to laboratory scenarios associated with intense lasers and electron beams.
We report on a new secondary nonlinear instability which arises for
relativistic dilute electron beams after the saturation of the linear Weibel
instability. This instability grows due to the transverse magnetic pressure
associated with the beam current filaments, which cannot be quickly neutralized
due to the inertia of background ions. We show that it can amplify the magnetic
field strength and spatial scale by orders of magnitude, leading to large-scale
plasma cavities with strong magnetic field and to very efficient conversion of
the beam kinetic energy into magnetic energy. The instability growth rate,
saturation level, and scale length are derived analytically and shown to be in
good agreement with fully-kinetic simulations.
|
We introduce a novel data-driven framework for the design of targeted gene
panels for estimating exome-wide biomarkers in cancer immunotherapy. Our first
goal is to develop a generative model for the profile of mutation across the
exome, which allows for gene- and variant type-dependent mutation rates. Based
on this model, we then propose a new procedure for estimating biomarkers such
as Tumour Mutation Burden and Tumour Indel Burden. Our approach allows the
practitioner to select a targeted gene panel of a prespecified size, and then
construct an estimator that only depends on the selected genes. Alternatively,
the practitioner may apply our method to make predictions based on an existing
gene panel, or to augment a gene panel to a given size. We demonstrate the
excellent performance of our proposal using an annotated mutation dataset from
1144 Non-Small Cell Lung Cancer patients.
|
Mechanically interlocked molecules have marked a breakthrough in the field of
topological chemistry and boosted the vigorous development of molecular
machinery. As an archetypal example of the interlocked molecules, catenanes
comprise macrocycles that are threaded through one another like links in a
chain. Inspired by the transition-metal-templated approach to catenane synthesis, the hierarchical assembly of DNA origami catenanes templated by gold nanoparticles is demonstrated in this work. DNA origami catenanes, which
contain two, three or four interlocked rings are successfully created. In
particular, the origami rings within the individual catenanes can be set free
with respect to one another by releasing the interconnecting gold
nanoparticles. This work will set the basis for rich progress toward DNA-based
molecular architectures with unique structural programmability and well-defined
topology.
|
Speech enhancement algorithms based on deep learning have greatly surpassed
their traditional counterparts and are now being considered for the task of
removing acoustic echo from hands-free communication systems. This is a
challenging problem due to both real-world constraints like loudspeaker
non-linearities, and to limited compute capabilities in some communication
systems. In this work, we propose a system combining a traditional acoustic
echo canceller, and a low-complexity joint residual echo and noise suppressor
based on a hybrid signal processing/deep neural network (DSP/DNN) approach. We
show that the proposed system outperforms both traditional and other neural
approaches, while requiring only 5.5% CPU for real-time operation. We further
show that the system can scale to even lower complexity levels.
|
Modern software systems rely on Deep Neural Networks (DNN) when processing
complex, unstructured inputs, such as images, videos, natural language texts or
audio signals. Given the intractably large size of such input spaces, the intrinsic limitations of learning algorithms, and the ambiguity about the expected predictions for some of the inputs, not only is there no guarantee that a DNN's predictions are always correct, but developers must also safely assume a low, though not negligible, error probability. A fail-safe Deep
Learning based System (DLS) is one equipped to handle DNN faults by means of a
supervisor, capable of recognizing predictions that should not be trusted and
that should activate a healing procedure bringing the DLS to a safe state. In
this paper, we propose an approach to use DNN uncertainty estimators to
implement such a supervisor. We first discuss the advantages and disadvantages
of existing approaches to measure uncertainty for DNNs and propose novel
metrics for the empirical assessment of the supervisor that rely on such
approaches. We then describe our publicly available tool UNCERTAINTY-WIZARD,
which allows transparent estimation of uncertainty for regular tf.keras DNNs.
Lastly, we discuss a large-scale study conducted on four different subjects to
empirically validate the approach, reporting the lessons learned as guidance
for software engineers who intend to monitor uncertainty for fail-safe
execution of DLS.
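A minimal supervisor along these lines can be sketched as follows (a generic NumPy illustration of uncertainty thresholding, not UNCERTAINTY-WIZARD's API):

```python
import numpy as np

def supervise(sampled_probs, entropy_threshold=0.5):
    """Trust a prediction only when the predictive entropy of the averaged
    softmax outputs (over several stochastic forward passes) is low.

    sampled_probs: array of shape (n_passes, n_inputs, n_classes)."""
    mean_probs = sampled_probs.mean(axis=0)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    predictions = mean_probs.argmax(axis=1)
    trusted = entropy < entropy_threshold  # untrusted inputs trigger the healing procedure
    return predictions, trusted

probs = np.random.dirichlet(np.ones(10), size=(20, 4))  # 20 passes, 4 inputs, 10 classes
preds, ok = supervise(probs)
```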
|
The leading difficulty in achieving the contrast necessary to directly image exoplanets and associated structures (e.g., protoplanetary disks) at wavelengths ranging from the visible to the infrared is quasi-static speckles, which are hard to distinguish from planets at the necessary level of precision. The
source of the quasi-static speckles is hardware aberrations that are not
compensated by the adaptive optics system. These aberrations are called
non-common path aberrations (NCPA). In 2013, Frazin showed how, in principle,
simultaneous millisecond (ms) telemetry from the wavefront sensor (WFS) and the
science camera behind a stellar coronagraph can be used as input into a
regression scheme that simultaneously and self-consistently estimates the NCPA
and the sought-after image of the planetary system (the exoplanet image). The
physical principle underlying the regression method is rather simple: the
wavefronts, which are measured by the WFS, modulate the speckles caused by the
NCPA and therefore can be used as probes of the optical system. The most
important departure from realism in the author's 2013 article was the
assumption that the WFS made error-free measurements. The simulations in Part I
provide results on the joint regression on the NCPA and the exoplanet image
from three different methods, called the ideal, the naive, and the
bias-corrected estimators. The ideal estimator is not physically realizable but is useful as a benchmark for simulation studies; the other two are, at least in principle. This article provides the regression equations for all
three of these estimators as well as a supporting technical discussion.
Briefly, the naive estimator simply uses the noisy WFS measurements without any
attempt to account for the errors, and the bias-corrected estimator uses
statistical knowledge of the wavefronts to treat errors in the WFS
measurements.
|
As a result of Hund's coupling, the band structure of the conducting electrons in the skyrmion crystal (SkX) shares topological properties with that of graphene, such as its cone-like shape, nonzero band Chern number, and edge states. In this work, we rigorously demonstrate that the Klein tunneling phenomenon is also shared by the two. We use the Green's function technique to calculate the transmission probability of electrons tunneling through an electrostatic barrier in the SkX, described by the double-exchange model. Numerical results for the SkX reproduce those of the Dirac model obtained by linearly fitting the two-dimensional band structure of the SkX.
|
Goods trade is a supply chain transaction that involves shippers buying goods
from suppliers and carriers providing goods transportation. Shippers are issued
invoices from suppliers and carriers. Shippers carry out goods receiving and
invoice processing before payment processing of bills for suppliers and
carriers, where invoice processing includes tasks like processing claims and
adjusting the bill payments. Goods receiving involves verification of received
goods by the Shipper's receiving team. Invoice processing is carried out by the
Shipper's accounts payable team, which in turn is verified by the accounts
receivable teams of suppliers and carriers. This paper presents a
blockchain-based accounts payable system that generates claims for the
deficiency in the goods received and accordingly adjusts the payment in the
bills for suppliers and carriers. Primary motivations for these supply chain
organizations to adopt blockchain-based accounts payable systems are to
eliminate the process redundancies (accounts payable vs. accounts receivable),
to reduce the number of disputes among the transacting participants, and to
accelerate the accounts payable processes via optimizations in the claims
generation and blockchain-based dispute reconciliation.
|
This study demonstrates the implementation of the stochastic ruler discrete
simulation optimization method for calibrating an agent-based model (ABM)
developed to simulate hepatitis C virus (HCV) transmission. The ABM simulates
HCV transmission between agents interacting in multiple environments relevant
for HCV transmission in the Indian context. Key outcomes of the ABM are HCV and
injecting drug user (IDU) prevalences among the simulated cohort. Certain input
parameters of the ABM need to be calibrated so that simulation outcomes attain
values as close as possible to real-world HCV and IDU prevalences. We
conceptualize the calibration process as a discrete simulation optimization
problem by discretizing the calibration parameter ranges, defining an
appropriate objective function, and then applying the stochastic ruler random
search method to solve this problem. We also present a method that exploits the
monotonic relationship between the simulation outcomes and calibration
parameters to yield improved calibration solutions with less computational
effort.
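For readers unfamiliar with the stochastic ruler method, the following minimal
sketch shows the basic accept/reject loop; the objective sampler, neighborhood
structure, ruler range (a, b), and test-count schedule are assumptions of this
illustration, not the study's configuration:

```python
# Minimal stochastic ruler sketch for minimizing E[h(x)] over a discrete set.
import random

def stochastic_ruler(x0, h_sample, neighbors, a, b, n_iters=1000):
    """h_sample(x): one noisy observation of the objective at x.
    neighbors(x): list of candidate moves from x.
    (a, b): range of the uniform 'ruler' against which samples are tested."""
    x = x0
    for k in range(n_iters):
        z = random.choice(neighbors(x))
        m_k = 1 + k // 100            # slowly growing number of ruler tests
        accept = True
        for _ in range(m_k):
            theta = random.uniform(a, b)   # the "ruler" draw
            if h_sample(z) > theta:        # candidate fails this test
                accept = False
                break
        if accept:                         # passed all m_k tests: move
            x = z
    return x
```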
|
We study the propagation of a specific class of instrumental systematics to
the reconstruction of the B-mode power spectrum of the cosmic microwave
background (CMB). We focus on the non-idealities of the half-wave plate (HWP),
a polarization modulator that is to be deployed by future CMB experiments, such
as the phase-A satellite mission LiteBIRD. We study the effects of non-ideal
HWP properties, such as transmittance, phase shift, and cross-polarization. To
this end, we developed a simple, yet stand-alone end-to-end simulation pipeline
adapted to LiteBIRD. We analyzed the effects of a possible mismatch between the
measured frequency profiles of HWP properties (used in the mapmaking stage of
the pipeline) and the actual profiles (used in the sky-scanning step). We
simulated single-frequency, CMB-only observations to emphasize the effects of
non-idealities on the BB power spectrum. We also considered multi-frequency
observations to account for the frequency dependence of HWP properties and the
contribution of foreground emission. We quantified the systematic effects in
terms of a bias $\Delta r$ on the tensor-to-scalar ratio, $r$, with respect to
the ideal case without systematic effects. We derived the accuracy requirements
on the measurements of HWP properties by requiring $\Delta r < 10^{-5}$ (1% of
the expected LiteBIRD sensitivity on $r$). Our analysis is introduced by a
detailed presentation of the mathematical formalism employed in this work,
including the use of the Jones and Mueller matrix representations.
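For orientation, the ideal HWP against which these non-idealities are measured
has Jones matrix $\mathrm{diag}(1,-1)$ and Mueller matrix
\[ M_{\mathrm{HWP}}^{\mathrm{ideal}} = \mathrm{diag}(1,\,1,\,-1,\,-1), \]
i.e., it flips the sign of the $U$ and $V$ Stokes parameters; non-ideal
transmittance, phase shift, and cross-polarization perturb these entries.
(This is the textbook idealization, not the parametrization adopted in the
paper.)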
|
We describe the class of graphs for which all metric spaces with diametrical
graphs belonging to this class are ultrametric. It is shown that a metric space
$(X, d)$ is ultrametric iff the diametrical graph of the metric
$d_{\varepsilon}(x, y) = \max\{d(x, y), \varepsilon\}$ is either empty or
complete multipartite for every $\varepsilon > 0$. A refinement of the last
result is obtained for totally bounded spaces. Moreover, using complete
multipartite graphs we characterize the compact ultrametrizable topological
spaces. The bounded ultrametric spaces, which are weakly similar to unbounded
ones, are also characterized via complete multipartite graphs.
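As a reminder of the definitions in play: a metric $d$ on $X$ is an ultrametric
if it satisfies the strong triangle inequality
\[ d(x,z) \le \max\{d(x,y),\, d(y,z)\} \quad \text{for all } x, y, z \in X, \]
and the diametrical graph of a metric space joins exactly those pairs of points
whose distance equals the diameter of the space (our gloss of the standard
usage, not a definition quoted from the paper).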
|
Non-maximum Suppression (NMS) is an essential postprocessing step in modern
convolutional neural networks for object detection. Unlike convolutions which
are inherently parallel, the de-facto standard for NMS, namely GreedyNMS,
cannot be easily parallelized and thus could be the performance bottleneck in
convolutional object detection pipelines. MaxpoolNMS was introduced as a
parallelizable alternative to GreedyNMS that runs faster at comparable
accuracy. However, MaxpoolNMS is only capable of
replacing the GreedyNMS at the first stage of two-stage detectors like
Faster-RCNN. There is a significant drop in accuracy when applying MaxpoolNMS
at the final detection stage, due to the fact that MaxpoolNMS fails to
approximate GreedyNMS precisely in terms of bounding box selection. In this
paper, we propose a general, parallelizable and configurable approach
PSRR-MaxpoolNMS, to completely replace GreedyNMS at all stages in all
detectors. By introducing a simple Relationship Recovery module and a Pyramid
Shifted MaxpoolNMS module, our PSRR-MaxpoolNMS is able to approximate GreedyNMS
more precisely than MaxpoolNMS. Comprehensive experiments show that our
approach outperforms MaxpoolNMS by a large margin while remaining faster than
GreedyNMS at comparable accuracy. For the first time, PSRR-MaxpoolNMS
provides a fully parallelizable solution for customized hardware design, which
can be reused for accelerating NMS everywhere.
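Because the proposed modules are judged by how closely they approximate
GreedyNMS, the sequential baseline is worth stating precisely. The following is
a standard NumPy reference implementation of GreedyNMS, not the proposed
PSRR-MaxpoolNMS:

```python
# Standard GreedyNMS: keep the highest-scoring box, suppress overlaps, repeat.
import numpy as np

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]            # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # suppress overlapping boxes
    return keep
```

The data dependency between iterations (each kept box determines which boxes
survive to the next round) is exactly what makes this loop hard to parallelize.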
|
We investigate two inequalities of Bugeaud and Laurent, each involving
triples of classical exponents of Diophantine approximation associated to
$\ux\in\mathbb{R}^n$. We provide a complete description of parameter triples
that admit equality for suitable $\ux$, which turns out to be rather surprising. For
$n=2$ our results agree with work of Laurent. Moreover, we establish lower
bounds for the Hausdorff and packing dimensions of the involved $\ux$, and in
special cases we can show they are sharp. Proofs are based on the variational
principle in parametric geometry of numbers; we include sketches of the
associated combined graphs (templates) where equality is feasible. A twist of our
construction provides refined information on the joint spectrum of the
respective exponent triples.
|
Let ${\mathcal A}$ be the class of functions that are analytic in the unit
disc ${\mathbb D}$, normalized such that $f(z)=z+\sum_{n=2}^\infty a_nz^n$, and
let the class ${\mathcal U}(\lambda)$, $0<\lambda\le1$, consist of functions
$f\in{\mathcal A}$, such that \[ \left |\left (\frac{z}{f(z)} \right
)^{2}f'(z)-1\right | < \lambda\quad (z\in {\mathbb D}). \] In this paper we
determine the sharp upper bounds for the Hankel determinants of second and
third order for the inverse functions of functions from the class ${\mathcal
U}(\lambda)$.
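For concreteness, these are the standard Hankel determinant definitions,
applied in the paper to the coefficients of the inverse functions: $H_q(n)$ is
the $q \times q$ determinant with $(i,j)$ entry $a_{n+i+j-2}$ (with $a_1 = 1$),
so that
\[ H_2(2) = a_2 a_4 - a_3^2, \qquad
H_3(1) = a_3(a_2 a_4 - a_3^2) - a_4(a_4 - a_2 a_3) + a_5(a_3 - a_2^2). \]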
|
Laminar flow velocity profiles depend heavily on fluid rheology. Developing
methods of laminar flow characterization, based on low-field magnetic resonance
(MR), contributes to the widespread industrial application of the MR technique
in rheology. In this paper, we designed a low-cost, palm-sized permanent magnet
with a 1H resonance frequency of 20.48 MHz to measure laminar flow. The magnet
consists of two disk magnets, each tilted at an angle of 1° from a starting
separation of 1.4 cm to generate a constant gradient of 65 gauss/cm in the
direction of flow. Subsequently, a series of processing methods for MR
measurements is proposed to characterize Newtonian and non-Newtonian fluid
flows in a pipe, including a phase-based method, a magnitude-based method, and
a velocity-spectrum method. The accuracies of the proposed methods were
validated by simulations and by experiments on Poiseuille flow and
shear-thinning flow using the designed magnet. The new velocity profile methods
are advantageous because the MR instrumentation and measurement methods are
simple and portable. The sophistication lies in the analysis, although the
physical principles are straightforward.
|
Modern intelligent urban mobility applications are underpinned by
large-scale, multivariate, spatiotemporal data streams. Working with this data
presents unique challenges of data management, processing, and presentation
that are often overlooked by researchers. Therefore, in this work we present an
integrated data management and processing framework for intelligent urban
mobility systems currently in use by our partner transit agencies. We discuss
the available data sources and outline our cloud-centric data management and
stream processing architecture built upon open-source publish-subscribe and
NoSQL data stores. We then describe our data-integrity monitoring methods and
present a set of visualization dashboards designed for our transit agency
partners. Lastly, we discuss how these tools currently support AI-driven urban
mobility applications.
|
Most existing graph neural networks (GNNs) learn node embeddings using the
framework of message passing and aggregation. Such GNNs are incapable of
learning relative positions between graph nodes within a graph. To empower GNNs
with the awareness of node positions, some nodes are set as anchors. Then,
using the distances from a node to the anchors, GNNs can infer relative
positions between nodes. However, existing position-aware methods such as
P-GNNs select anchors arbitrarily, compromising both position-awareness and
feature extraction. To eliminate this compromise, we demonstrate that selecting
evenly distributed and asymmetric anchors is essential. On the other hand, we
show that choosing anchors that can aggregate the embeddings of all nodes
within a graph is NP-hard, so devising efficient deterministic algorithms that
are optimal is practically infeasible. To ensure position-awareness while
bypassing this NP-hardness, we propose Position-Sensing Graph Neural Networks (PSGNNs),
learning how to choose anchors in a back-propagatable fashion. Experiments
verify the effectiveness of PSGNNs against state-of-the-art GNNs, substantially
improving performance on various synthetic and real-world graph datasets while
enjoying stable scalability. Specifically, PSGNNs on average boost AUC more
than 14% for pairwise node classification and 18% for link prediction over the
existing state-of-the-art position-aware methods. Our source code is publicly
available at: https://github.com/ZhenyueQin/PSGNN
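To make the role of anchors concrete, here is a minimal sketch of anchor-based
positional features via shortest-path distances; the anchors are chosen at
random for illustration, which is precisely the arbitrariness PSGNNs learn to
avoid, and the networkx usage is an assumption of this sketch:

```python
# Positional features as distances to a set of anchor nodes.
import random
import networkx as nx

def anchor_position_features(G, num_anchors=4, seed=0):
    """Return {node: [d(node, a1), ..., d(node, ak)]} using
    shortest-path distances to randomly chosen anchor nodes."""
    random.seed(seed)
    anchors = random.sample(list(G.nodes), num_anchors)
    dist = {a: nx.single_source_shortest_path_length(G, a) for a in anchors}
    n = G.number_of_nodes()
    # unreachable nodes get distance n (an upper bound on any path length)
    return {v: [dist[a].get(v, n) for a in anchors] for v in G.nodes}

G = nx.karate_club_graph()
feats = anchor_position_features(G)
print(feats[0])  # distances from node 0 to the four anchors
```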
|
Given that symbolic and ordinary powers of an ideal do not always coincide,
we look for conditions on the ideal such that equality holds for every natural
number. This paper focuses on studying the equality for Derksen ideals defined
by finite groups acting linearly on a polynomial ring.
|
Recently many algorithms were devised for reinforcement learning (RL) with
function approximation. While they have clear algorithmic distinctions, they
also have many implementation differences that are algorithm-independent and
sometimes under-emphasized. Such mixing of algorithmic novelty and
implementation craftsmanship makes rigorous analyses of the sources of
performance improvements across algorithms difficult. In this work, we focus on
a series of off-policy inference-based actor-critic algorithms -- MPO, AWR, and
SAC -- to decouple their algorithmic innovations and implementation decisions.
We present unified derivations through a single control-as-inference objective,
where we can categorize each algorithm as based on either
Expectation-Maximization (EM) or direct Kullback-Leibler (KL) divergence
minimization, and treat the remaining specifications as implementation details. We
performed extensive ablation studies, and identified substantial performance
drops whenever implementation details are mismatched for algorithmic choices.
These results show which implementation or code details are co-adapted and
co-evolved with algorithms, and which are transferable across algorithms: as
examples, we identified that tanh Gaussian policy and network sizes are highly
adapted to algorithmic types, while layer normalization and ELU are critical
for MPO's performance but also transfer to noticeable gains in SAC. We hope
our work can inspire future work to further demystify sources of performance
improvements across multiple algorithms and allow researchers to build on one
another's algorithmic and implementation innovations.
|
Understanding how attitudes towards the Climate Emergency vary can hold the
key to driving policy changes for effective action to mitigate climate related
risk. The Oil and Gas industry accounts for a significant proportion of global
emissions and so it could be speculated that there is a relationship between
Crude Oil Futures and sentiment towards the Climate Emergency. Using Latent
Dirichlet Allocation for Topic Modelling on a bespoke Twitter dataset, this
study shows that it is possible to split the conversation surrounding the
Climate Emergency into 3 distinct topics. Forecasting Crude Oil Futures using
Seasonal AutoRegressive Integrated Moving Average Modelling gives promising
results with a root mean squared error of 0.196 and 0.209 on the training and
testing data respectively. Our attempt to explain variation in attitudes
towards the Climate Emergency yields inconclusive results, which could be
improved using spatio-temporal analysis methods such as Density-Based
Clustering (DBSCAN).
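As an illustration of the forecasting step, here is a minimal SARIMA sketch
with statsmodels; the synthetic series and the (p, d, q)(P, D, Q, s) orders are
placeholders, not the study's data or fitted model:

```python
# Minimal SARIMA forecast on a synthetic price series.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
prices = 60 + np.cumsum(rng.normal(0, 0.5, 300))  # synthetic random walk

train, test = prices[:250], prices[250:]
model = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=len(test))
rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"test RMSE: {rmse:.3f}")
```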
|
We explore the possibility of probing flavor violations in the charged-lepton
sector by means of high-luminosity lepton-photon and electron-muon collisions,
by inverting initial and final states in a variety of decay channels presently
used to bound such violations. In particular, we analyse the resonant lepton,
$\gamma\, \ell \to \ell^{\prime}$, and neutral-meson, $e^- \mu ^+ \to
\phi,\eta,\pi^0\!$, scattering channels, whose cross sections are critically
dependent on the colliding-beams energy spread, being particularly demanding in
the case of leptonic processes. For these processes, we compute upper bounds on
the cross sections corresponding to present limits on the inverse decay channel
rates. In order to circumvent the beam energy spread limitations we extend the
analysis to processes in which a photon accompanies the resonance in the final
state, compensating the off-shellness effects by radiative return. These
processes might be studied at future facilities with moderate energies, should
lepton-photon and electron-muon collisions with sufficiently high luminosity
become available.
|
Deep generative models can synthesize photorealistic images of human faces
with novel identities. However, a key challenge to the wide applicability of
such techniques is to provide independent control over semantically meaningful
parameters: appearance, head pose, face shape, and facial expressions. In this
paper, we propose VariTex - to the best of our knowledge the first method that
learns a variational latent feature space of neural face textures, which allows
sampling of novel identities. We combine this generative model with a
parametric face model and gain explicit control over head pose and facial
expressions. To generate complete images of human heads, we propose an additive
decoder that adds plausible details such as hair. A novel training scheme
enforces a pose-independent latent space and in consequence, allows learning a
one-to-many mapping between latent codes and pose-conditioned exterior regions.
The resulting method can generate geometrically consistent images of novel
identities under fine-grained control over head pose, face shape, and facial
expressions. This facilitates a broad range of downstream tasks, like sampling
novel identities, changing the head pose, expression transfer, and more. Code
and models are available for research on https://mcbuehler.github.io/VariTex.
|
Quadratic Unconstrained Binary Optimization (QUBO) is a general-purpose
modeling framework for combinatorial optimization problems and is a requirement
for quantum annealers. This paper utilizes the eigenvalue decomposition of the
underlying Q matrix to alter and improve the search process, extracting
information from the dominant eigenvalues and eigenvectors to implicitly guide the
search towards promising areas of the solution landscape. Computational results
on benchmark datasets illustrate the efficacy of our routine demonstrating
significant performance improvements on problems with dominant eigenvalues.
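A hedged sketch of the underlying idea follows: use the dominant eigenvector of
the (symmetrized) Q matrix to seed the binary search space; this is an
illustration only, not the paper's routine:

```python
# Seed a QUBO search from the dominant eigenvector's sign pattern.
import numpy as np

def qubo_value(Q, x):
    return x @ Q @ x

def eigen_seed(Q):
    """Round the dominant eigenvector of the symmetric part of Q
    to a binary starting point."""
    S = (Q + Q.T) / 2
    w, V = np.linalg.eigh(S)
    v = V[:, np.argmax(np.abs(w))]   # eigenvector of largest-|.| eigenvalue
    return (v > 0).astype(int)

rng = np.random.default_rng(1)
Q = rng.normal(size=(20, 20))
x0 = eigen_seed(Q)
print(qubo_value(Q, x0))  # starting objective for a local search / annealer
```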
|
We present a synthesis of fast radio burst (FRB) morphology (the change in
flux as a function of time and frequency) as detected in the 400-800 MHz octave
by the FRB project on the Canadian Hydrogen Intensity Mapping Experiment
(CHIME/FRB), using events from the first CHIME/FRB catalog. The catalog
consists of 61 bursts from 18 repeating sources, plus 474 one-off FRBs,
detected between 2018 July 25 and 2019 July 2. We identify four observed
archetypes of burst morphology ("simple broadband," "simple narrowband,"
"temporally complex" and "downward drifting") and describe relevant
instrumental biases that are essential for interpreting the observed
morphologies. Using the catalog properties of the FRBs, we confirm that bursts
from repeating sources, on average, have larger widths and we show, for the
first time, that bursts from repeating sources, on average, are narrower in
bandwidth. This difference could be due to beaming or propagation effects, or
it could be intrinsic to the populations. We discuss potential implications of
these morphological differences for using FRBs as astrophysical tools.
|
Influenced by the great success of deep learning in computer vision and
language understanding, research in recommendation has shifted to inventing new
recommender models based on neural networks. In recent years, we have witnessed
significant progress in developing neural recommender models, which generalize
and surpass traditional recommender models owing to the strong representation
power of neural networks. In this survey paper, we conduct a systematic review
on neural recommender models from the perspective of recommendation modeling
with the accuracy goal, aiming to summarize this field to facilitate
researchers and practitioners working on recommender systems. Specifically,
based on the data usage during recommendation modeling, we divide the work into
three categories: 1) collaborative filtering, which leverages the key source of
user-item interaction data; 2) content-enriched recommendation, which
additionally utilizes the side
information associated with users and items, like user profile and item
knowledge graph; and 3) temporal/sequential recommendation, which accounts for
the contextual information associated with an interaction, such as time,
location, and the past interactions. After reviewing representative work for
each type, we finally discuss some promising directions in this field.
|
Given a graph $G = (V,E)$ with vertex weights $w(v)$ and a desired number of
parts $k$, the goal in graph partitioning problems is to partition the vertex
set V into parts $V_1,\ldots,V_k$. Metrics for compactness, contiguity, and
balance of the parts $V_i$ are frequent objectives, with much existing
literature focusing on compactness and balance. Revisiting an old method known
as striping, we give the first polynomial-time algorithms with guaranteed
contiguity and provable bicriteria approximations for compactness and balance
for planar grid graphs. We consider several types of graph partitioning,
including when vertex weights vary smoothly or are stochastic, reflecting
concerns in various real-world instances. We show significant improvements in
experiments for balancing workloads for the fire department and reducing
over-policing using 911 call data from South Fulton, GA.
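To convey the flavor of striping, here is a toy one-dimensional version on a
grid: sweep the rows in order and cut them into k contiguous stripes of
near-equal weight, so contiguity holds by construction. The weights and shapes
are illustrative, and the paper's algorithms and guarantees are considerably
more refined:

```python
# Toy striping: contiguous row stripes of near-equal total weight.
import numpy as np

def stripe_partition(weights, k):
    """weights: (rows, cols) nonnegative vertex weights.
    Returns part[r] in {0..k-1}: the stripe assigned to each row."""
    row_w = weights.sum(axis=1)
    target = row_w.sum() / k
    part = np.zeros(len(row_w), dtype=int)
    stripe, acc = 0, 0.0
    for r, w in enumerate(row_w):
        if acc >= target and stripe < k - 1:  # close the current stripe
            stripe, acc = stripe + 1, 0.0
        part[r] = stripe
        acc += w
    return part

w = np.random.default_rng(2).uniform(size=(12, 8))
print(stripe_partition(w, 3))  # rows grouped into 3 contiguous stripes
```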
|
We study topological defect lines in two-character rational conformal field
theories. Among them, one set of two-character theories forms commutant pairs
in the $E_{8,1}$ conformal field theory. Using these defect lines, we construct
the defect partition function in the $E_8$ theory. We find that the defects
preserve only a part of the $E_8$ current algebra symmetry. We also determine
the defect partition function in the $c=24$ CFT using these defect lines of
two-character theories, and we find that these defects preserve all current
algebra symmetries of the $c=24$ CFT.
|
A common way to evaluate electronic integrals for polyatomic molecules is to
use Becke's partitioning scheme [J. Chem. Phys. 88, 2547 (1988)] in conjunction
with overlapping grids centered at each atomic site. The Becke scheme was
designed for integrands that fall off rapidly at large distances, such as those
approximating bound electronic states. When applied to states in the electronic
continuum, however, the Becke scheme exhibits slow convergence and is highly
redundant. Here, we present a modified version of the Becke scheme that is
applicable to functions of the electronic continuum, such as those involved in
molecular photoionization and electron-molecule scattering, and which ensures
convergence and efficiency comparable to those realized in the calculation of
bound states. In this modified scheme, the atomic weights already present in
Becke's partition are smoothly switched off within a range of few bond lengths
from their respective nuclei, and complemented by an asymptotically unitary
weight. The atomic integrals are evaluated on small spherical grids, centered
on each atom, with sizes commensurate with the support of the corresponding atomic
weight. The residual integral of the interstitial and long-range region is
evaluated with a central master grid. The accuracy of the method is
demonstrated by evaluating integrals involving integrands containing Gaussian
Type Orbitals and Yukawa potentials, on the atomic sites, as well as spherical
Bessel functions centered on the master grid. These functions are
representative of those encountered in realistic electron-scattering and
photoionization calculations in polyatomic molecules.
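The following sketch illustrates a smooth radial switch-off of an atomic weight
in the spirit described above; reusing Becke's thrice-iterated cutoff
polynomial as the smoothing profile is an assumption of this illustration, not
necessarily the authors' exact switching function:

```python
# Smooth switch-off of an atomic weight between radii r_on and r_off.
import numpy as np

def becke_poly(x):
    """Becke's smoothing polynomial p(x) = 1.5x - 0.5x**3, iterated
    three times to sharpen the step while staying smooth."""
    p = 1.5 * x - 0.5 * x**3
    p = 1.5 * p - 0.5 * p**3
    return 1.5 * p - 0.5 * p**3

def atomic_switch(r, r_on, r_off):
    """1 near the nucleus (r <= r_on), 0 beyond r_off, smooth in between."""
    x = np.clip(2 * (r - r_on) / (r_off - r_on) - 1, -1, 1)
    return 0.5 * (1 - becke_poly(x))

r = np.linspace(0, 6, 7)  # distances in bond-length units
print(atomic_switch(r, r_on=1.0, r_off=4.0))
```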
|
Typical high quality text-to-speech (TTS) systems today use a two-stage
architecture, with a spectrum model stage that generates spectral frames and a
vocoder stage that generates the actual audio. High-quality spectrum models
usually incorporate the encoder-decoder architecture with self-attention or
bi-directional long short-term memory (BLSTM) units. While these models can
produce high-quality speech, they often incur an O($L$) increase in both
latency and real-time factor (RTF) with respect to input length $L$. In other
words, longer inputs lead to longer delays and slower synthesis, limiting their use in
real-time applications. In this paper, we propose a multi-rate attention
architecture that breaks the latency and RTF bottlenecks by computing a compact
representation during encoding and recurrently generating the attention vector
in a streaming manner during decoding. The proposed architecture achieves high
audio quality (MOS of 4.31 compared to ground truth 4.48), low latency, and low
RTF at the same time. Meanwhile, both latency and RTF of the proposed system
stay constant regardless of input lengths, making it ideal for real-time
applications.
|
Pre-trained text-to-text transformers such as BART have achieved impressive
performance across a range of NLP tasks. Recent studies further show that they
can learn to generalize to novel tasks, by including task descriptions as part
of the source sequence and training the model with (source, target) examples.
At test time, these fine-tuned models can make inferences on new tasks using
the new task descriptions as part of the input. However, this approach has
potential limitations, as the model learns to solve individual (source, target)
examples (i.e., at the instance level), instead of learning to solve tasks by
taking all examples within a task as a whole (i.e., at the task level). To this
end, we introduce Hypter, a framework that improves text-to-text transformer's
generalization ability to unseen tasks by training a hypernetwork to generate
task-specific, light-weight adapters from task descriptions. Experiments on
ZEST dataset and a synthetic SQuAD dataset demonstrate that Hypter improves
upon fine-tuning baselines. Notably, when using BART-Large as the main network,
Hypter brings an 11.3% relative improvement on the ZEST dataset.
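To make the hypernetwork-to-adapter idea concrete, here is a toy PyTorch sketch
that generates bottleneck-adapter weights from a task-description embedding;
the dimensions and module names are illustrative, not Hypter's actual
architecture:

```python
# A hypernetwork that emits adapter parameters from a task embedding.
import torch
import torch.nn as nn

d_model, d_adapter, d_task = 64, 8, 32

class HyperAdapter(nn.Module):
    def __init__(self):
        super().__init__()
        n_params = 2 * d_model * d_adapter        # down- and up-projection
        self.hyper = nn.Linear(d_task, n_params)  # the hypernetwork

    def forward(self, h, task_emb):
        w = self.hyper(task_emb)
        w_down = w[: d_model * d_adapter].view(d_adapter, d_model)
        w_up = w[d_model * d_adapter:].view(d_model, d_adapter)
        # bottleneck adapter with a residual connection
        return h + torch.relu(h @ w_down.T) @ w_up.T

m = HyperAdapter()
out = m(torch.randn(4, d_model), torch.randn(d_task))
print(out.shape)  # torch.Size([4, 64])
```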
|
In this letter, we propose a magnet-less non-reciprocal isolating system
based on time-varying metasurfaces. Two parallel time-varying metasurfaces, one
for frequency up-conversion and one for down-conversion by the same amount, are
used for realizing a region of space where incident waves from opposite
directions experience an opposite Doppler frequency shift. As a result, any
device within this region becomes sensitive to the illumination direction,
exhibiting a different scattering response from opposite directions and thus
breaking reciprocity. Very importantly, thanks to the opposite frequency shift
of the metasurfaces, the frequency of the transmitted electromagnetic field is
the same as for the incident one. Here, we demonstrate this general approach by
using a Bragg grating as the device between the time-varying metasurfaces. The
combined structure of the metasurfaces and the grating exhibits different
transmission and reflection properties for opposite illumination directions,
thereby realizing an isolator. More broadly, this letter presents a strategy
for converting any conventional electromagnetic device to a non-reciprocal one
by placing it between two time-varying metasurfaces. This approach opens the
door to several new non-reciprocal components based on thin and lightweight
metasurfaces, which are simpler to realize compared to their volumetric
counterparts.
|
Following the Hu-Kriz method of computing the $C_2$ genuine dual Steenrod
algebra $(H\mathbf F_2)_{\bigstar}(H\mathbf F_2)$, we calculate the $C_4$
equivariant Bredon cohomology of the classifying space $\mathbf R P^{\infty
\rho}=B_{C_4}\Sigma_{2}$ as an $RO(C_4)$ graded Green-functor. We prove that as
a module over the homology of a point (which we also compute), this cohomology
is not flat. As a result, it cannot be used as a test module for obtaining
generators in $(H\mathbf F_2)_{\bigstar}(H\mathbf F_2)$ as Hu-Kriz do in the
$C_2$ case.
|
The paper introduces the cycles cross ratio, which extends the classic cross
ratio of four points to various settings: conformal geometry, Lie sphere
geometry, etc. Just like its classic counterpart, the cycles cross ratio is a
measure of anharmonicity between spheres with respect to inversion. It also
provides a M\"obius-invariant distance between spheres. Many further properties
of the cycles cross ratio await exploration. In an abstract framework, the new
invariant can be considered in any projective space with a bilinear pairing.
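For comparison, the classic cross ratio of four points that this construction
generalizes is
\[ [z_1, z_2; z_3, z_4] = \frac{(z_1 - z_3)(z_2 - z_4)}{(z_1 - z_4)(z_2 - z_3)}, \]
which is invariant under M\"obius transformations (stated here in one of
several equivalent ordering conventions).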
|
Self-testing results allow us to infer the underlying quantum mechanical
description of states and measurements from classical outputs produced by
non-communicating parties. The standard definition of self-testing does not
apply in situations when there are two or more inequivalent optimal strategies.
To address this, we introduce the notion of self-testing convex combinations of
reference strategies, which is a generalisation of self-testing to multiple
strategies. We show that the Glued Magic Square game [Quantum 4 (2020), p. 346]
self-tests a convex combination of two inequivalent strategies. As a corollary,
we obtain that the Glued Magic square game self-tests two EPR pairs thus
answering an open question from [Quantum 4 (2020), p. 346]. Our self-test is
robust and extends to natural generalisations of the Glued Magic Square game.
|
Nowadays, live-stream and short video shopping in E-commerce have grown
exponentially. However, the sellers are required to manually match images of
the selling products to the timestamp of exhibition in the untrimmed video,
resulting in a complicated process. To solve the problem, we present an
innovative demonstration of a multi-modal retrieval system called "Fashion
Focus", which exactly localizes the product images in the online video as focal
points. Different modalities contribute to the localization: visual content,
linguistic features, and interaction context are jointly investigated via the
presented multi-modal learning. Our system
employs two procedures for analysis, including video content structuring and
multi-modal retrieval, to automatically achieve accurate video-to-shop
matching. Fashion Focus presents a unified framework that can orientate the
consumers towards relevant product exhibitions during watching videos and help
the sellers to effectively deliver the products over search and recommendation.
|
Human-computer interaction (HCI) is significantly impacted by delayed
responses from a spoken dialogue system. Hence, end-to-end (e2e) spoken
language understanding (SLU) solutions have recently been proposed to decrease
latency. Such approaches allow for the extraction of semantic information
directly from the speech signal, thus bypassing the need for a transcript from
an automatic speech recognition (ASR) system. In this paper, we propose a
compact e2e SLU architecture for streaming scenarios, where chunks of the
speech signal are processed continuously to predict intent and slot values. Our
model is based on a 3D convolutional neural network (3D-CNN) and a
unidirectional long short-term memory (LSTM). We compare the performance of two
alignment-free losses: the connectionist temporal classification (CTC) method
and its adapted version, namely connectionist temporal localization (CTL). The
latter performs not only the classification but also localization of sequential
audio events. The proposed solution is evaluated on the Fluent Speech Command
dataset, and the results show our model's ability to process the incoming speech signal,
reaching accuracy as high as 98.97 % for CTC and 98.78 % for CTL on
single-label classification, and as high as 95.69 % for CTC and 95.28 % for CTL
on two-label prediction.
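For reference, the alignment-free CTC training used in the comparison above can
be exercised with PyTorch's built-in loss; the shapes and vocabulary size below
are illustrative, and CTL (the paper's adaptation) is not shown:

```python
# Minimal CTC loss usage: no frame-level alignment labels are needed.
import torch
import torch.nn as nn

T, N, C = 50, 4, 12  # input frames, batch size, classes (blank = 0)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)
targets = torch.randint(1, C, (N, 8), dtype=torch.long)  # label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 8, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients flow without any per-frame alignment
print(loss.item())
```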
|
The so-called block-term decomposition (BTD) tensor model, especially in its
rank-$(L_r,L_r,1)$ version, has been recently receiving increasing attention
due to its enhanced ability of representing systems and signals that are
composed of \emph{block} components of rank higher than one, a scenario
encountered in numerous and diverse applications. Its uniqueness and
approximation have thus been thoroughly studied. The challenging problem of
estimating the BTD model structure, namely the number of block terms (rank) and
their individual (block) ranks, is of crucial importance in practice and has
only recently started to attract significant attention. In data-streaming
scenarios and/or big data applications, where the tensor dimension in one of
its modes grows in time or can only be processed incrementally, it is essential
to be able to perform model selection and computation in a recursive
(incremental/online) manner. To date there is only one such work in the
literature concerning the (general rank-$(L,M,N)$) BTD model, which proposes an
incremental method, albeit with the BTD rank and block ranks assumed to be a
priori known and time-invariant. In this preprint, a novel approach to
rank-$(L_r,L_r,1)$ BTD model selection and tracking is proposed, based on the
idea of imposing column sparsity jointly on the factors and estimating the
ranks as the numbers of factor columns of nonnegligible magnitude. An online
method of the alternating iteratively reweighted least squares (IRLS) type is
developed and shown to be computationally efficient and fast converging, also
allowing the model ranks to change in time. Its time and memory efficiency are
evaluated and favorably compared with those of the batch approach. Simulation
results are reported that demonstrate the effectiveness of the proposed scheme
in both selecting and tracking the correct BTD model.
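A toy matrix (rather than tensor) illustration of the rank-revealing mechanism
may help: impose column sparsity on a factor via IRLS and read the rank off as
the number of columns of non-negligible norm. This is a sketch under simplified
assumptions, not the proposed BTD algorithm:

```python
# Column-sparse IRLS on A in Y ~ A B; surviving columns reveal the rank.
import numpy as np

def irls_column_sparse(Y, B, lam=0.1, n_iter=50, eps=1e-8):
    """Minimize ||Y - A B||_F^2 + lam * sum_j w_j ||a_j||^2, with the
    column weights w_j = 1 / (||a_j|| + eps) refreshed each iteration."""
    A = Y @ B.T @ np.linalg.inv(B @ B.T)  # unregularized start
    for _ in range(n_iter):
        w = 1.0 / (np.linalg.norm(A, axis=0) + eps)
        A = Y @ B.T @ np.linalg.inv(B @ B.T + lam * np.diag(w))
    return A

rng = np.random.default_rng(3)
A_true = np.hstack([rng.normal(size=(10, 2)), np.zeros((10, 3))])  # rank 2
B = rng.normal(size=(5, 40))
Y = A_true @ B + 0.01 * rng.normal(size=(10, 40))
A = irls_column_sparse(Y, B)
print(np.linalg.norm(A, axis=0).round(2))  # ~2 nonzero columns => rank 2
```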
|
Phase I clinical trials are designed to test the safety (non-toxicity) of
drugs and find the maximum tolerated dose (MTD). This task becomes
significantly more challenging when multiple-drug dose-combinations (DC) are
involved, due to the inherent conflict between the exponentially increasing DC
candidates and the limited patient budget. This paper proposes a novel Bayesian
design, SDF-Bayes, for finding the MTD for drug combinations in the presence of
safety constraints. Rather than the conventional principle of escalating or
de-escalating the current dose of one drug (perhaps alternating between drugs),
SDF-Bayes proceeds by cautious optimism: it chooses the next DC that, on the
basis of current information, is most likely to be the MTD (optimism), subject
to the constraint that it only chooses DCs that have a high probability of
being safe (caution). We also propose an extension, SDF-Bayes-AR, that accounts
for patient heterogeneity and enables heterogeneous patient recruitment.
Extensive experiments based on both synthetic and real-world datasets
demonstrate the advantages of SDF-Bayes over state-of-the-art DC trial designs
in terms of accuracy and safety.
|
Let $R$ be a finitely generated positively graded algebra over a Noetherian
local ring $B$, and $\mathfrak{m} = [R]_+$ be the graded irrelevant ideal of
$R$. We provide a local criterion characterizing the $B$-freeness of all the
local cohomology modules $\text{H}_\mathfrak{m}^i(M)$ of a finitely generated
graded $R$-module $M$. We show that fiber-full modules are exactly the ones
that satisfy this criterion. When we replace $B$ with an arbitrary Noetherian ring
$A$, we study the fiber-full locus of a module in $\text{Spec}(A)$: we show
that the fiber-full locus is always an open subset of $\text{Spec}(A)$ and that
it is dense when $A$ is generically reduced.
|
I extend the calculations presented in \cite{konav} regarding the resistivity
in Kondo lattice materials from $3d$ systems to $2d$ systems. In the present
work I consider a $2d$ system and compute the memory function. The results
found in the $2d$ case differ from those in the $3d$ system. I find that in
$2d$, in the low-temperature regime ($k_{B}T\ll \mu_d$), the resistivity shows
power-law ($\frac{1}{T}$) behaviour, and in the high-temperature regime
($k_{B}T\gg\mu_d$) the resistivity varies linearly with temperature. In $3d$
these behaviours are $\frac{1}{T}$ and $T^{\frac{3}{2}}$, respectively.
|
In online domain-specific customer service applications, many companies
struggle to deploy advanced NLP models successfully, due to the limited
availability of and noise in their datasets. While prior research demonstrated
the potential of migrating large open-domain pretrained models for
domain-specific tasks, the appropriate (pre)training strategies have not yet
been rigorously evaluated in such social media customer service settings,
especially under multilingual conditions. We address this gap by collecting a
multilingual social media corpus containing customer service conversations
(865k tweets), comparing various pipelines of pretraining and finetuning
approaches, and applying them to five different end tasks. We show that pretraining a
generic multilingual transformer model on our in-domain dataset, before
finetuning on specific end tasks, consistently boosts performance, especially
in non-English settings.
|
Robust visualization of complex data is critical for the effective use of NLP
for event classification, as the volume of data is large and the
high-dimensional structure of text makes data challenging to summarize
succinctly. In event extraction tasks in particular, visualization can aid in
understanding and illustrating the textual relationships from which machine
learning tools produce insights. Through our case study which seeks to identify
potential triggers of state-led mass killings from news articles using NLP, we
demonstrate how visualizations can aid in each stage, from exploratory analysis
of raw data, to machine learning training analysis, and finally post-inference
validation.
|
We identified 8 additional stars as members of the Helmi stream (HStr) in the
combined GALAH+ DR3 and $Gaia$ EDR3 catalog. By consistently reevaluating
claimed members from the literature, we consolidate a sample of 22 HStr stars
with parameters determined from high-resolution spectroscopy and spanning a
considerably wider (by $\sim$0.5 dex) metallicity interval ($-2.5 \lesssim
\rm[Fe/H] < -1.0$) than previously reported. Our study focuses on $\alpha$ (Mg
and Ca) and neutron-capture (Ba and Eu) elements. We find that the chemistry of
HStr is typical of dwarf spheroidal (dSph) galaxies, in good agreement with
previous $N$-body simulations of this merging event. Stars of HStr constitute a
clear declining sequence in $\rm[\alpha/Fe]$ for increasing metallicity up to
$\rm[Fe/H] \sim -1.0$. Moreover, stars of HStr show a median value of $+$0.5
dex for $\rm[Eu/Fe]$ with a small dispersion ($\pm$0.1 dex). Every star
analyzed with $\rm[Fe/H] < -1.2$ belongs to the $r$-process-enhanced
($\rm[Eu/Fe] > +0.3$ and $\rm[Ba/Eu] < 0.0$) metal-poor category, providing
remarkable evidence that, at such low-metallicity regime, stars of HStr
experienced enrichment in neutron-capture elements predominantly via
$r$-process nucleosynthesis. Finally, the extended metallicity range also
suggests an increase in $\rm[Ba/Eu]$ for higher $\rm[Fe/H]$, in conformity with
other surviving dwarf satellite galaxies of the Milky Way.
|
This paper develops a Closed-Loop Error Learning Control (CLELC) algorithm
for feedback linearizable systems with experimental validation on a mobile
robot. Traditional feedback and feedforward controllers are designed based on
the nominal model by using Feedback Linearization Control (FLC) method. Then,
an intelligent controller is designed based on sliding mode learning algorithm
that utilizes closed-loop error dynamics to learn the system behavior. The
controllers work in parallel, and the intelligent controller can gradually take
over control of the system from the feedback controller. In
addition to the stability of the sliding mode learning algorithm, the
closed-loop stability of an $n$th order feedback linearizable system is proven.
The simulation results demonstrate that the CLELC algorithm can improve control
performance (e.g., smaller rise time, settling time, and overshoot) in the
absence of uncertainties, and also provides robust control performance in the
presence of uncertainties compared to the traditional FLC method. To test the
efficiency and efficacy of the CLELC algorithm, the trajectory tracking problem
of a tracked mobile robot is studied in real time. The experimental results
demonstrate that the CLELC algorithm achieves more accurate trajectory tracking
performance than the traditional FLC method.
|
Tyshkovskiy and Panchin have recently published a commentary on our paper in
which they outline several "points of disagreement with the Segreto/Deigin
hypothesis". As our paper is titled "The genetic structure of SARS-CoV-2 does
not rule out a laboratory origin", points of disagreement should provide
evidence that rules out a laboratory origin. However, Tyshkovskiy and Panchin
provide no such evidence and instead attempt to criticize our arguments that
highlight aspects of SARS-CoV-2 that could be consistent with the lab leak
hypothesis. Strikingly, Tyshkovskiy and Panchin's main point of criticism is
based on a false premise that we have claimed RaTG13 to be a direct progenitor
of SARS-CoV-2, and their other points of criticism are either incorrect or
irrelevant to our hypotheses. Thus, the genetic structure of SARS-CoV-2 remains
consistent with both a natural and a laboratory origin, which means that both the
zoonotic and the lab leak hypothesis need to be investigated equally
thoroughly.
|
Scanning transmission electron microscopy (STEM) is the most widespread
adopted tool for atomic scale characterization of two-dimensional (2D)
materials. Many 2D materials remain susceptible to electron beam damage,
despite the standardized practice to reduce the beam energy from 200 keV to 80
or 60 keV. Although all elements present can be detected by atomic
electrostatic potential imaging using integrated differential phase contrast
(iDPC) STEM or electron ptychography, capturing dynamics with atomic resolution
and enhanced sensitivity has remained a challenge. Here, by using iDPC-STEM, we
capture defect dynamics in 2D WS$_2$ by atomic electrostatic potential imaging
with a beam energy of only 30 keV. The direct imaging of atomic electrostatic
potentials with high framerate reveals the presence and motion of single atoms
near defects and edges in WS$_2$ that are otherwise invisible with conventional
annular dark-field STEM or cannot be captured sufficiently fast by electron
ptychography.
|
Low-dimensional ferroelectrics are highly desired for applications and rich in
exotic physics. Here a functionalized MXene Hf$_2$CF$_2$ monolayer is
theoretically studied, which manifests a nonpolar to polar transition upon
moderate biaxial compressive strain. Accompanying this structural transition, a
metal-semiconductor transition occurs. The in-plane shift of unilateral
fluorine layer leads to a polarization pointing out-of-plane. Such
ferroelectricity is unconventional, similar to the recently-proposed
interlayer-sliding ferroelectricity but not identical. Due to its specific
hexapetalous potential energy profile, the possible ferroelectric switching
paths and domain walls are nontrivial, which are mediated via the metallic
paraelectric state. In this sense, the metallic walls can be manipulated by
reshaping the ferroelectric domains.
|
In this article we study a particular class of compact connected orientable
PL $4$-manifolds with empty or connected boundary which have infinite cyclic
fundamental group. We show that each manifold in this class admits a handle
decomposition in which the number of $2$-handles depends on its second Betti
number, while there are at most two $h$-handles for each of the other $h$
($h \leq 4$). In particular, our main result is that if $M$ is a closed
connected orientable PL $4$-manifold with fundamental group $\mathbb{Z}$, then
$M$ admits either of the
following handle decompositions:
(1) one $0$-handle, two $1$-handles, $1+\beta_2(M)$ $2$-handles, one
$3$-handle and one $4$-handle,
(2) one $0$-handle, one $1$-handle, $\beta_2(M)$ $2$-handles, one $3$-handle
and one $4$-handle, where $\beta_2(M)$ denotes the second Betti number of
manifold $M$ with $\mathbb{Z}$ coefficients. Further, we extend this result to
any compact connected orientable $4$-manifold $M$ with boundary and give three
possible representations of $M$ in terms of handles.
|
Measurements of Higgs boson production cross sections and couplings in events
where the Higgs boson decays into a pair of photons are reported. Events are
selected from a sample of proton-proton collisions at $\sqrt{s} =$ 13 TeV
collected by the CMS detector at the LHC from 2016 to 2018, corresponding to an
integrated luminosity of 137 fb$^{-1}$. Analysis categories enriched in Higgs
boson events produced via gluon fusion, vector boson fusion, vector boson
associated production, and production associated with top quarks are
constructed. The total Higgs boson signal strength, relative to the standard
model (SM) prediction, is measured to be 1.12 $\pm$ 0.09. Other properties of
the Higgs boson are measured, including SM signal strength modifiers,
production cross sections, and its couplings to other particles. These include
the most precise measurements of gluon fusion and vector boson fusion Higgs
boson production in several different kinematic regions, the first measurement
of Higgs boson production in association with a top quark pair in five regions
of the Higgs boson transverse momentum, and an upper limit on the rate of Higgs
boson production in association with a single top quark. All results are found
to be in agreement with the SM expectations.
|
Recent progress in pretraining language models on large corpora has resulted
in large performance gains on many NLP tasks. These large models acquire
linguistic knowledge during pretraining, which helps to improve performance on
downstream tasks via fine-tuning. To assess what kind of knowledge is acquired,
language models are commonly probed by querying them with `fill in the blank'
style cloze questions. Existing probing datasets mainly focus on knowledge
about relations between words and entities. We introduce WDLMPro (Word
Definition Language Model Probing) to evaluate word understanding directly
using dictionary definitions of words. In our experiments, three popular
pretrained language models struggle to match words and their definitions. This
indicates that they understand many words poorly and that our new probing task
is a difficult challenge that could help guide research on LMs in the future.
|
In this paper, we study a new operation named pushforward on diffeological
vector pseudo-bundles, which is left adjoint to the pullback. We show how to
pushforward projective diffeological vector pseudo-bundles to get projective
diffeological vector spaces, producing many concrete new examples. This brings
new objects to diffeology from classical vector bundle theory.
|
Semi-supervised learning (SSL) is an effective means to leverage unlabeled
data to improve a model's performance. Typical SSL methods like FixMatch assume
that labeled and unlabeled data share the same label space. However, in
practice, unlabeled data can contain categories unseen in the labeled set,
i.e., outliers, which can significantly harm the performance of SSL algorithms.
To address this problem, we propose a novel Open-set Semi-Supervised Learning
(OSSL) approach called OpenMatch. Learning representations of inliers while
rejecting outliers is essential for the success of OSSL. To this end, OpenMatch
unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers.
The OVA-classifier outputs the confidence score of a sample being an inlier,
providing a threshold to detect outliers. Another key contribution is an
open-set soft-consistency regularization loss, which enhances the smoothness of
the OVA-classifier with respect to input transformations and greatly improves
outlier detection. OpenMatch achieves state-of-the-art performance on three
datasets, and even outperforms a fully supervised model in detecting outliers
unseen in unlabeled data on CIFAR10.
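A minimal sketch of the OVA inlier-scoring idea follows, simplified to one
sigmoid logit per class head; OpenMatch's actual classifier heads and training
losses differ:

```python
# One-vs-all inlier scoring: flag samples whose best class score is low.
import torch
import torch.nn as nn

num_classes, feat_dim = 10, 64
ova_heads = nn.Linear(feat_dim, num_classes)  # one inlier logit per class

def ova_outlier_mask(features, threshold=0.5):
    """Return a boolean mask marking samples detected as outliers."""
    inlier_prob = torch.sigmoid(ova_heads(features))  # (B, num_classes)
    best = inlier_prob.max(dim=1).values              # best class's score
    return best < threshold

x = torch.randn(8, feat_dim)
print(ova_outlier_mask(x))
```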
|
The competition between thermal fluctuations and potential forces is the
foundation of our understanding of phase transitions and matter in equilibrium.
Driving matter out of equilibrium allows for a new class of interactions which
are neither attractive nor repulsive but transverse. The existence of such
transverse forces immediately raises the question of how they interfere with
basic principles of material self-organization. Despite a recent surge of
interest, this question remains open. Here, we show that activating transverse
forces by homogeneous rotation of colloidal units generically turns otherwise
quiescent solids into a crystal whorl state dynamically shaped by
self-propelled dislocations. Simulations of both a minimal model and a full
hydrodynamics model establish the generic nature of the chaotic dynamics of
these self-kneading polycrystals. Using a continuum theory, we explain how odd
and Hall stresses conspire to destabilize chiral crystals from within. This
chiral instability produces dislocations that are unbound by their
self-propulsion. Their proliferation eventually leads to a crystalline whorl
state out of reach of equilibrium matter.
|
We study stability of the sharp Poincar{\'e} constant of the invariant
probability measure of a reversible diffusion process satisfying some natural
conditions. The proof is based on the spectral interpretation of Poincar{\'e}
inequalities and Stein's method. In particular, these results are applied to
the gamma distributions and to strictly log-concave measures in dimension one,
giving stability for Brascamp-Lieb inequalities.
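Recall the inequality at stake: the sharp Poincar{\'e} constant $C_P$ of $\mu$
is the smallest constant such that
\[ \operatorname{Var}_\mu(f) \le C_P \int |\nabla f|^2 \, d\mu \]
for all smooth $f$; spectrally, $1/C_P$ is the spectral gap of the generator of
the reversible diffusion, which is the interpretation the proof exploits.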
|
In order to apply Optical Character Recognition (OCR) to historical printings
of Latin script fully automatically, we report on our efforts to construct a
widely-applicable polyfont recognition model yielding text with a Character
Error Rate (CER) around 2% when applied out-of-the-box. Moreover, we show how
this model can be further finetuned to specific classes of printings with
little manual and computational effort. The mixed or polyfont model is trained
on a wide variety of materials, in terms of age (from the 15th to the 19th
century), typography (various types of Fraktur and Antiqua), and languages
(among others, German, Latin, and French). To optimize the results we combined
established techniques of OCR training like pretraining, data augmentation, and
voting. In addition, we used various preprocessing methods to enrich the
training data and obtain more robust models. We also implemented a two-stage
approach which first trains on all available, considerably unbalanced data and
then refines the output by training on a selected more balanced subset.
Evaluations on 29 previously unseen books resulted in a CER of 1.73%,
outperforming a widely used standard model with a CER of 2.84% by almost 40%.
Training a more specialized model for some unseen Early Modern Latin books
starting from our mixed model led to a CER of 1.47%, an improvement of up to
50% compared to training from scratch and up to 30% compared to training from
the aforementioned standard model. Our new mixed model is made openly available
to the community.
|