We analyze the optimal information design in a click-through auction with
fixed valuations per click, but stochastic click-through rates. While the
auctioneer takes as given the auction rule of the click-through auction, namely
the generalized second-price auction, the auctioneer can design the information
flow regarding the click-through rates among the bidders. A natural requirement
in this context is to ask for the information structure to be calibrated in the
learning sense. With this constraint, the auction needs to rank the ads by a
product of the bid and an unbiased estimator of the click-through rates, and
the task of designing an optimal information structure is thus reduced to the
task of designing an optimal unbiased estimator.
We show that in a symmetric setting with uncertainty about the click-through
rates, the optimal information structure attains both social efficiency and
surplus extraction. The optimal information structure requires private (rather
than public) signals to the bidders. It also requires correlated (rather than
independent) signals, even when the underlying uncertainty regarding the
click-through rates is independent. Beyond symmetric settings, we show that the
optimal information structure requires partial information disclosure.
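To make the reduction concrete (with notation we introduce for illustration: $b_i$ for bids, $c_i$ for click-through rates, $\hat{c}_i$ for the signal-based estimates), calibration in the learning sense and the induced ranking read
\[
\mathbb{E}\bigl[c_i \mid \hat{c}_i\bigr] = \hat{c}_i,
\qquad
\text{rank ads by the score } b_i\,\hat{c}_i ,
\]
so designing the information structure amounts to choosing the joint distribution of the unbiased estimators $(\hat{c}_1,\dots,\hat{c}_n)$.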
|
We study the analytic expression for four flavor neutrino oscillation in the
presence of matter. We calculate the time evolution operator in the flavor and
mass bases. We find the matter-dependent mass-squared differences and neutrino
transition probabilities for (3+1) four flavor neutrino oscillation.
|
We study the dynamics of $SU(N-4)$ gauge theories with fermions in the rank-2
symmetric tensor and $N$ anti-fundamental representations, by perturbing
supersymmetric theories with anomaly-mediated supersymmetry breaking. We find
the $SU(N)\times U(1)$ global symmetry is dynamically broken to $SO(N)$ for
$N\geq 17$, a different result from conjectures in the literature. For $N<17$,
theories flow to infrared fixed points.
|
In this paper we discuss contrastive explanations for formal argumentation -
the question why a certain argument (the fact) can be accepted, whilst another
argument (the foil) cannot be accepted under various extension-based semantics.
The recent work on explanations for argumentation-based conclusions has mostly
focused on providing minimal explanations for the (non-)acceptance of
arguments. What is still lacking, however, is a proper argumentation-based
interpretation of contrastive explanations. We show under which conditions
contrastive explanations in abstract and structured argumentation are
meaningful, and how argumentation allows us to make implicit foils explicit.
|
Deep learning has demonstrated its strengths in numerous binary analysis
tasks, including function boundary detection, binary code search, function
prototype inference, value set analysis, etc. When applying deep learning to
binary analysis tasks, we need to decide what input should be fed into the
neural network model. More specifically, we need to answer how to represent an
instruction in a fixed-length vector. The idea of automatically learning
instruction representations is intriguing; however, existing schemes fail to
capture the unique characteristics of disassembly. These schemes ignore the
complex intra-instruction structures and mainly rely on control flow in which
the contextual information is noisy and can be influenced by compiler
optimizations.
In this paper, we propose to pre-train an assembly language model called
PalmTree for generating general-purpose instruction embeddings by conducting
self-supervised training on large-scale unlabeled binary corpora. PalmTree
utilizes three pre-training tasks to capture various characteristics of
assembly language. These training tasks overcome the problems in existing
schemes, thus can help to generate high-quality representations. We conduct
both intrinsic and extrinsic evaluations, and compare PalmTree with other
instruction embedding schemes. PalmTree has the best performance for intrinsic
metrics, and outperforms the other instruction embedding schemes for all
downstream tasks.
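To illustrate what self-supervised pre-training on unlabeled disassembly can look like, here is a minimal Python/PyTorch sketch of a masked-token objective over instruction tokens; the tokenizer, vocabulary, and toy model are our assumptions for illustration, not PalmTree's actual design or its three specific tasks.

```python
# Hedged sketch: a masked-token objective over tokenized instructions, in the
# general spirit of self-supervised pre-training on binaries. The tokenizer,
# vocabulary, and architecture here are toy assumptions, not PalmTree's.
import torch
import torch.nn as nn

VOCAB = {"[PAD]": 0, "[MASK]": 1, "mov": 2, "rax": 3, "rbx": 4, "add": 5, "0x8": 6}

def tokenize(instr):
    # Split opcode and operands into separate tokens so the model can see
    # intra-instruction structure.
    return [VOCAB[t] for t in instr.replace(",", " ").split()]

class TinyInstructionLM(nn.Module):
    def __init__(self, vocab_size=len(VOCAB), d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids)))

model = TinyInstructionLM()
ids = torch.tensor([tokenize("mov rax, rbx")])
masked = ids.clone()
masked[0, 1] = VOCAB["[MASK]"]                          # hide the first operand
logits = model(masked)
loss = nn.functional.cross_entropy(logits[0], ids[0])   # recover original tokens
loss.backward()                                         # one self-supervised step
```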
|
Recent work (Takanobu et al., 2020) proposed the system-wise evaluation on
dialog systems and found that improvement on individual components (e.g., NLU,
policy) in prior work may not necessarily bring benefit to pipeline systems in
system-wise evaluation. To improve the system-wise performance, in this paper,
we propose new joint system-wise optimization techniques for the pipeline
dialog system. First, we propose a new data augmentation approach which
automates the labeling process for NLU training. Second, we propose a novel
stochastic policy parameterization with Poisson distribution that enables
better exploration and offers a principled way to compute policy gradient.
Third, we propose a reward bonus to help policy explore successful dialogs. Our
approaches outperform the competitive pipeline systems from Takanobu et al.
(2020) by large margins of 12% success rate in automatic system-wise
evaluation and 16% success rate in human evaluation on the standard
multi-domain
benchmark dataset MultiWOZ 2.1, and also outperform the recent state-of-the-art
end-to-end trained model from DSTC9.
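As a hedged illustration of why a Poisson parameterization "offers a principled way to compute policy gradient", the toy REINFORCE sketch below uses the exact closed-form score of the Poisson pmf; the reward, rate, and all names are our illustrative assumptions, not the paper's dialog policy.

```python
# Hedged sketch of a Poisson-parameterized stochastic policy with a
# score-function (REINFORCE) gradient. The key point: for p(k; lam) =
# lam^k e^{-lam} / k!, the score is d/d(lam) log p = k/lam - 1, exactly.
import numpy as np

rng = np.random.default_rng(0)

def sample_action(lam):
    # Discrete action count drawn from Poisson(lambda).
    return rng.poisson(lam)

def grad_log_prob(k, lam):
    # Exact score of the Poisson pmf.
    return k / lam - 1.0

lam, lr = 3.0, 0.05
for step in range(200):
    k = sample_action(lam)
    reward = -abs(k - 5)       # toy reward: prefer about 5 "actions" per turn
    lam += lr * reward * grad_log_prob(k, lam)
    lam = max(lam, 1e-3)       # keep the rate positive
print(f"learned rate: {lam:.2f}")  # drifts toward ~5 under this toy reward
```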
|
Orthogonal frequency division multiplexing (OFDM) is one of the dominant
waveforms in wireless communication systems due to its efficient
implementation. However, it suffers from a loss of spectral efficiency as it
requires a cyclic prefix (CP) to mitigate inter-symbol interference (ISI) and
pilots to estimate the channel. We propose in this work to address these
drawbacks by learning a neural network (NN)-based receiver jointly with a
constellation geometry and bit labeling at the transmitter, which allows
CP-less and pilotless communication on top of OFDM without a significant loss
in bit
error rate (BER). Our approach enables at least 18% throughput gains compared
to a pilot and CP-based baseline, and at least 4% gains compared to a system
that uses a neural receiver with pilots but no CP.
|
The theory of choreographic languages typically includes a number of complex
results that are proved by structural induction. The high number of cases and
the subtle details in some of them lead to long reviewing processes, and
occasionally to errors being found in published proofs. In this work, we take a
published proof of Turing completeness of a choreographic language and
formalise it in Coq. Our development includes formalising the choreographic
language and its basic properties, Kleene's theory of partial recursive
functions, the encoding of these functions as choreographies, and proving this
encoding correct.
With this effort, we show that theorem proving can be a very useful tool in
the field of choreographic languages: besides the added degree of confidence
that we get from a mechanised proof, the formalisation process led us to a
significant simplification of the underlying theory. Our results offer a
foundation for the future formal development of choreographic languages.
|
In this paper, we propose a novel fault attack termed the Single Event
Transient Fault Analysis (SETFA) attack, which is well suited for hardware
implementations. The proposed approach pinpoints hotspots in the cipher's S-box
combinational logic circuit that significantly reduce the key entropy when
subjected to faults. ELEPHANT is a parallel authenticated encryption with
associated data (AEAD) scheme targeted at hardware implementations and a
finalist in the Lightweight Cryptography (LWC) competition launched by NIST. In
this
work, we investigate vulnerabilities of ELEPHANT against fault analysis. We
observe that the use of a 128-bit random nonce makes it resistant to many
cryptanalysis techniques, such as differential and linear cryptanalysis and
their variants.
However, the relaxed nature of Statistical Fault Analysis (SFA) methods makes
them widely applicable in restrictive environments. We propose a SETFA-based
key recovery attack on Elephant. We performed single-fault experiments with
random plaintexts and keys on Dumbo, a sponge-based instance of the
Elephant-AEAD scheme. Our proposed approach could recover the secret key from
85-250
ciphertexts. In essence, this work investigates new vulnerabilities towards
fault analysis that may need to be addressed to ensure secure computations
and communications in IoT scenarios.
|
This paper proposes a robust beamforming (BF) scheme to enhance physical
layer security (PLS) of the downlink of a multibeam satellite system in the
presence of either uncoordinated or coordinated eavesdroppers (Eves).
Specifically, knowing only the approximate locations of the Eves, we aim
at maximizing the worst-case achievable secrecy rate (ASR) of the legitimate
user (LU), subject to the constraints of per-antenna transmit power and quality
of service (QoS) requirement of the LU. Since the optimization problem is
non-convex, we first adopt the discretization method to deal with the unknown
regions of the Eves and then exploit the log-sum-exp function to approximate
the objective function. Afterwards, a BF method combining the alternating direction
method of multipliers (ADMM) with Dinkelbach iteration is presented to solve
this non-convex problem. Finally, simulation results verify that our robust BF
algorithm can effectively improve the security of multibeam satellite systems.
|
We investigate the role of the Higgs \emph{doublet} in the thermal decoupling
of multi-TeV dark matter coupled to the Weak interactions of the Standard Model
and the Higgs. The Higgs doublet can mediate a long-range force that affects
the annihilation processes and binds dark matter into bound states. More
importantly, the emission of a Higgs doublet by a pair of dark matter particles
can give rise to extremely rapid monopole bound-state formation processes and
bound-to-bound transitions. We compute these effects in the unbroken
electroweak phase. To this end, we consider the simplest renormalisable
fermionic model, consisting of a singlet and a doublet under $SU_{L}(2)$ that
are stabilised by a $\mathbb{Z}_2$ symmetry, in the regime where the two
multiplets coannihilate. In a companion paper, we use the results to show that
the formation of metastable bound states via Higgs-doublet emission and their
decay decrease the relic density very significantly.
|
The proton radiography diagnostic is widely used in laser-plasma experiments
to make magnetic field measurements. Recent developments in analysis have
enabled quantitative reconstruction of path-integrated magnetic field values,
but making conclusions about the three-dimensional structure of the fields
remains challenging. In this Letter we propose and demonstrate in kinetic
simulations a novel target geometry which makes possible the production of
multiple proton beams from a single laser pulse, enabling the application of
tomographic methods to proton radiography.
|
In the current work, we consider diversion of childbirth patients who arrive
seeking emergency admission to public primary health centers (PHCs). PHCs are
the first point of contact for an Indian patient with formal medical care, and
offer medical care on an outpatient basis, and limited inpatient and childbirth
care. In this context, real-time prediction of the wait time of the arriving
patient becomes important in order to determine whether the patient must be
diverted to another PHC or not. We study this problem using a discrete event
simulation of medical care operations that we develop for two PHCs in India. We
approximate the labour room service at each PHC as an M/G/1 queueing system and
show how the accuracy of real-time delay predictors impacts the extent of the
change in operational outcomes at each PHC. We simulate patient diversion using
actual delays as well as the delay estimates generated by various delay
predictors based on the state of the system such as queue-length, elapsed
service time, and observed delay histories. The simulation of the diversion
process also incorporates travel time between the PHCs. We also propose a new
delay predictor that incorporates information regarding the system state as
well as the service time distribution. We compare the operational outcomes at
both PHCs without diversion and with diversion using the above delay
predictors. We show numerically that more accurate delay predictors lead to
more equitable distribution of resources involved in provision of childbirth
care across both PHCs.
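As a hedged sketch of how such state-based predictors differ, the Python fragment below contrasts a plain queue-length predictor with one that also uses the elapsed service time and the service-time distribution via the mean residual service time $E[S-t \mid S>t]$; the lognormal service law and all numbers are illustrative assumptions, not the paper's calibrated model.

```python
# Hedged sketch of state-based delay predictors for an M/G/1 labour-room
# queue. The service-time law (lognormal) is an assumption for illustration.
import numpy as np
from scipy import integrate, stats

service = stats.lognorm(s=0.6, scale=1.0)  # assumed service-time law (hours)

def predict_queue_length(n_waiting, server_busy):
    # Queue-length (QL) predictor: mean service for everyone ahead,
    # ignoring how long the in-service patient has already been served.
    return (n_waiting + (1 if server_busy else 0)) * service.mean()

def predict_with_elapsed(n_waiting, elapsed):
    # Distribution-aware predictor: replace the in-service patient's term
    # by the mean residual service E[S - t | S > t] = int_t^inf S'(u)du / S'(t),
    # where S'(.) is the survival function.
    tail = service.sf(elapsed)
    if tail < 1e-12:
        residual = 0.0
    else:
        integral, _ = integrate.quad(service.sf, elapsed, np.inf)
        residual = integral / tail
    return residual + n_waiting * service.mean()

print(predict_queue_length(3, server_busy=True))  # naive estimate
print(predict_with_elapsed(3, elapsed=0.5))       # distribution-aware estimate
```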
|
We study eight different gamma-ray burst (GRB) data sets to examine whether
current GRB measurements -- that probe a largely unexplored part of
cosmological redshift ($z$) space -- can be used to reliably constrain
cosmological model parameters. We use three Amati-correlation samples and five
Combo-correlation samples to simultaneously derive correlation and cosmological
model parameter constraints. The intrinsic dispersion of each GRB data set is
taken as a measure of goodness of fit. We examine the consistency between the
cosmological bounds from GRBs with those determined from better-established
cosmological probes, such as baryonic acoustic oscillation (BAO) and Hubble
parameter $H(z)$ measurements. We use the Markov chain Monte Carlo method
implemented in \textsc{MontePython} to find best-fit correlation and
cosmological parameters, in six different cosmological models, for the eight
GRB samples, alone or in conjunction with BAO and $H(z)$ data. For the Amati
correlation case, we compile a data set of 118 bursts, the A118 sample, which
is the largest -- about half of the total Amati-correlation GRBs -- current
collection of GRBs suitable for constraining cosmological parameters. This
updated GRB compilation has the smallest intrinsic dispersion of the three
Amati-correlation GRB data sets we examined. We are unable to define a
collection of reliable bursts for current Combo-correlation GRB data.
Cosmological constraints determined from the A118 sample are consistent with --
but significantly weaker than -- those from BAO and $H(z)$ data. They also are
consistent with the spatially-flat $\Lambda$CDM model as well as with dynamical
dark energy models and non-spatially-flat models. Since GRBs probe a largely
unexplored region of $z$, it is well worth acquiring more and better-quality
burst data which will give a more definitive answer to the question of the
title.
|
Interactive technologies are getting closer to our bodies and permeate the
infrastructure of our homes. While such technologies offer many benefits, they
can also cause an initial feeling of unease in users. It is important for
Human-Computer Interaction to manage first impressions and avoid designing
technologies that appear creepy. To that end, we developed the Perceived
Creepiness of Technology Scale (PCTS), which measures how creepy a technology
appears to a user in an initial encounter with a new artefact. The scale was
developed based on past work on creepiness and a set of ten focus groups
conducted with users from diverse backgrounds. We followed a structured process
of analytically developing and validating the scale. The PCTS is designed to
enable designers and researchers to quickly compare interactive technologies
and ensure that they do not design technologies that produce initial feelings
of creepiness in users.
|
Understanding player strategies is a key question when analyzing player
behavior both for academic researchers and industry practitioners. For game
designers and game user researchers, it is important to gauge the distance
between intended strategies and emergent strategies; this comparison allows
identification of glitches or undesirable behaviors. For academic researchers
using games for serious purposes such as education, the strategies adopted by
players are indicative of their cognitive progress in relation to serious
goals, such as the learning process. Current techniques and systems created to
address these needs present a few drawbacks. Qualitative methods are difficult
to scale up to large numbers of players and are prone to subjective
biases. Other approaches such as visualization and analytical tools are either
designed to provide an aggregated overview of the data, losing the nuances of
individual player behaviors, or, in attempting to account for individual
behavior, are not specifically designed to reduce the visual cognitive load. In
this work, we propose a novel visualization technique that specifically
addresses the tasks of comparing behavior sequences in order to capture an
overview of the strategies enacted by players and at the same time examine
individual player behaviors to identify differences and outliers. This approach
allows users to form hypotheses about player strategies and verify them. We
demonstrate the effectiveness of the technique through a case study: utilizing
a prototype system to investigate data collected from a commercial educational
puzzle game. While the prototype's usability can be improved, initial testing
results show that core features of the system proved useful to our potential
users for understanding player strategies.
|
In this analysis, we work with the data set that was compiled by Darren
Linvill and Patrick Warren, along with a representative sample of Facebook ads
that were released by the House Intelligence Committee Minority. The goal of
this analysis is to use the categories defined by Linvill and Warren in the
Twitter data and investigate whether these categories exist in Facebook ads.
This begins to give us insight into the tactics used across the two social
media services. Further, we try to replicate Linvill and Warren's original
categorization of the Twitter data. Lastly, we investigate what categories may
exist in the Facebook data.
|
This paper presents a novel hierarchical motion planning approach based on
Rapidly-Exploring Random Trees (RRT) for global planning and Model Predictive
Control (MPC) for local planning. The approach targets a three-wheeled cycle
rickshaw (trishaw) used for autonomous urban transportation in shared spaces.
Due to the nature of the vehicle, the algorithms had to be adapted in order to
adhere to non-holonomic kinematic constraints using the Kinematic Single-Track
Model.
The vehicle is designed to offer transportation for people and goods in
shared environments such as roads, sidewalks, and bicycle lanes, but also open
spaces that are often occupied by other traffic participants. Therefore, the
algorithm presented in this paper needs to anticipate and avoid dynamic
obstacles, such as pedestrians or bicycles, but also be fast enough in order to
work in real-time so that it can adapt to changes in the environment. Our
approach uses an RRT variant for global planning that has been modified for
single-track kinematics and improved by exploiting dead-end nodes. This allows
us to compute global paths in unstructured environments very fast. In a second
step, our MPC-based local planner makes use of the global path to compute the
vehicle's trajectory while incorporating dynamic obstacles such as pedestrians
and other road users.
Our approach has been shown to work both in simulation and in first real-life
tests, and it can be easily extended for more sophisticated behaviors.
|
AlphaZero has achieved impressive performance in deep reinforcement learning
by utilizing an architecture that combines search and training of a neural
network in self-play. Many researchers are looking for ways to reproduce and
improve results for other games/tasks. However, the architecture is designed to
learn from scratch, tabula rasa, accepting a cold-start problem in self-play.
Recently, a warm-start enhancement method for Monte Carlo Tree Search was
proposed to improve the self-play starting phase. It employs a fixed parameter
$I^\prime$ to control the warm-start length. Improved performance was reported
in small board games. In this paper we present results with an adaptive switch
method. Experiments show that our approach works better than the fixed
$I^\prime$, especially for "deep" tactical games (Othello and Connect Four).
We conjecture that the adaptive value for $I^\prime$ is also influenced by the
size of the game, and that on average $I^\prime$ will increase with game size.
We conclude that AlphaZero-like deep reinforcement learning benefits from
adaptive rollout based warm-start, as Rapid Action Value Estimate did for
rollout-based reinforcement learning 15 years ago.
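One plausible instantiation of such an adaptive switch is sketched below in Python; the head-to-head evaluation criterion and all interfaces are our assumptions, intended only to convey the control flow of replacing a fixed warm-start length $I^\prime$ with a data-driven one.

```python
# Hedged sketch of an adaptive warm-start switch for AlphaZero-style
# self-play. The concrete switching criterion (arena comparison of the two
# search variants) is an illustrative assumption, not necessarily the
# paper's rule.
def self_play_iterations(num_iters, mcts_rollout, mcts_neural, net, arena):
    use_rollout = True  # warm start: rollout-based MCTS while the net is weak
    for i in range(num_iters):
        search = mcts_rollout if use_rollout else mcts_neural
        games = search.self_play(net)
        net.train(games)
        if use_rollout:
            # Switch once the neural-guided search beats the rollout-guided
            # search head-to-head; the iteration i reached here plays the
            # role of the adaptive I'.
            win_rate = arena.play(mcts_neural, mcts_rollout, net, games=20)
            if win_rate > 0.55:
                use_rollout = False
    return net
```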
|
For $n\geq s> r\geq 1$ and $k\geq 2$, write $n \rightarrow (s)_{k}^r$ if
every hyperedge colouring with $k$ colours of the complete $r$-uniform
hypergraph on $n$ vertices has a monochromatic subset of size $s$. Improving
upon previous results by \textcite{AGLM14} and \textcite{EHMR84} we show that
\[ \text{if } r \geq 3 \text{ and } n \nrightarrow (s)_k^r \text{ then } 2^n
\nrightarrow (s+1)_{k+3}^{r+1}. \] This yields an improvement for some of the
known lower bounds on multicolour hypergraph Ramsey numbers.
Given a hypergraph $H=(V,E)$, we consider the Ramsey-like problem of
colouring all $r$-subsets of $V$ such that no hyperedge of size $\geq r+1$ is
monochromatic. We provide upper and lower bounds on the number of colours
necessary in terms of the chromatic number $\chi(H)$. In particular we show
that this number is $O(\log^{(r-1)} (r \chi(H)) + r)$.
|
Reflecting our experiences in areas like Algebraic Specifications, Abstract
Model Theory, Graph Transformations, and Model Driven Software Engineering
(MDSE), we present a general, category independent approach to Logics of
First-Order Constraints (LFOC). Traditional First-Order Logic, Description
Logic and the sketch framework are discussed as examples. We use the concept of
institution [Diaconescu08,GoguenBurstall92] as a guideline to describe LFOC's.
The main result states that any choice of the six parameters we are going to
describe gives us a corresponding "institution of constraints". The
"presentations" for an institution of constraints can be characterized as
"first-order sketches". As a corresponding variant of the "sketch-entailments"
in [Makkai97], we finally introduce "sketch rules" to equip LFOC's with the
necessary expressive power.
|
Podcasts are spoken documents across a wide range of genres and styles, with
growing listenership across the world, and a rapidly lowering barrier to entry
for both listeners and creators. The great strides in search and recommendation
in research and industry have yet to see impact in the podcast space, where
recommendations are still largely driven by word of mouth. In this perspective
paper, we highlight the many differences between podcasts and other media, and
discuss our perspective on challenges and future research directions in the
domain of podcast information access.
|
Pairwise comparison matrices are increasingly used in settings where some
pairs are missing. However, few inconsistency indices exist for such
incomplete data sets, and no reasonable measure has an associated threshold.
This paper generalises the famous rule of thumb for the acceptable level of
inconsistency, proposed by Saaty, to incomplete pairwise comparison matrices.
The extension is based on choosing the missing elements such that the maximal
eigenvalue of the incomplete matrix is minimised. Consequently, the
well-established values of the random index cannot be adopted: the
inconsistency of random matrices is found to be the function of matrix size and
the number of missing elements, with a nearly linear dependence in the case of
the latter variable. Our results can be directly built into decision-making
software and used by practitioners as a statistical criterion for accepting or
rejecting an incomplete pairwise comparison matrix.
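A minimal computational sketch of the construction described above follows; the matrix, missing pattern, and optimizer choice are illustrative assumptions, not the paper's procedure verbatim.

```python
# Hedged sketch: complete the missing entries of a pairwise comparison
# matrix so that the principal eigenvalue is minimal, then form Saaty's
# consistency index CI. The 4x4 example is made up for illustration.
import numpy as np
from scipy.optimize import minimize

n = 4
A = np.array([[1.0,    2.0,    np.nan, 4.0],
              [0.5,    1.0,    3.0,    np.nan],
              [np.nan, 1/3,    1.0,    2.0],
              [0.25,   np.nan, 0.5,    1.0]])
missing = [(0, 2), (1, 3)]  # upper-triangle positions left unanswered

def lambda_max(log_x):
    B = A.copy()
    for (i, j), lx in zip(missing, log_x):
        B[i, j] = np.exp(lx)    # fill the entry ...
        B[j, i] = np.exp(-lx)   # ... and its reciprocal
    return np.max(np.linalg.eigvals(B).real)

res = minimize(lambda_max, x0=np.zeros(len(missing)), method="Nelder-Mead")
lam = res.fun
CI = (lam - n) / (n - 1)        # Saaty's consistency index
print(f"minimal lambda_max = {lam:.4f}, CI = {CI:.4f}")
# The paper's point: the random index normalizing CI must depend on both n
# and the number of missing entries, so Saaty's tabulated values cannot be
# reused directly for incomplete matrices.
```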
|
The recent discovery of AV$_3$Sb$_5$ (A=K,Rb,Cs) has uncovered an intriguing
arena for exotic Fermi surface instabilities in a kagome metal. Among them,
superconductivity is found in the vicinity of multiple van Hove singularities,
exhibiting indications of unconventional pairing. We show that the sublattice
interference mechanism is central to understanding the formation of
superconductivity in a kagome metal. Starting from an appropriately chosen
minimal tight-binding model with multiple van Hove singularities
close to the Fermi level for AV$_3$Sb$_5$, we provide a random phase
approximation analysis of superconducting instabilities. Non-local Coulomb
repulsion, the sublattice profile of the van Hove bands, and the bare
interaction strength turn out to be the crucial parameters to determine the
preferred pairing symmetry. Implications for potentially topological surface
states are discussed, along with a proposal for additional measurements to pin
down the nature of superconductivity in AV$_3$Sb$_5$.
|
Silicon can be isotopically enriched, allowing for the fabrication of highly
coherent semiconductor spin qubits. However, the conduction band of bulk Si
exhibits a six-fold valley degeneracy, which may adversely impact the
performance of silicon quantum devices. To date, the spatial characterization
of valley states in Si remains limited. Moreover, techniques for probing valley
states in functional electronic devices are needed. We describe here a
cryogen-free scanning gate microscope for the characterization of
Si/Si$_{0.7}$Ge$_{0.3}$ quantum devices at mK temperatures. The microscope is
based on the Pan-walker design, with coarse positioning piezo stacks and a fine
scanning piezo tube. A tungsten microscope tip is attached to a tuning fork for
active control of the tip-to-sample distance. To reduce vibration noise from
the pulse tube cooler, we utilize both active and passive vibration isolation
mechanisms, and achieve a root-mean-square noise in $z$ of $\sim$ 2 nm. Our
microscope is designed to characterize fully functioning
Si/Si$_{0.7}$Ge$_{0.3}$ quantum devices. As a proof of concept, we use the
microscope to manipulate the charge occupation of a Si quantum dot, opening up
a range of possibilities for the exploration of quantum devices and materials.
|
We develop a theory of the spin battery effect in
superconductor/ferromagnetic insulator (SC/FI) systems taking into account the
magnetic proximity effect. We demonstrate that the spin-energy mixing enabled
by the superconductivity leads to the enhancement of spin accumulation by
several orders of magnitude relative to the normal state. This finding can
explain the recently observed giant inverse spin Hall effect generated by
thermal magnons in the SC/FI system. We suggest a non-local electrical
detection scheme which can directly probe the spin accumulation driven by the
magnetization dynamics. We predict a giant Seebeck effect converting the magnon
temperature bias into the non-local voltage signal. We also show how this can
be used to enhance the sensitivity of magnon detection even up to the
single-magnon level.
|
Deep neural networks (DNNs) have been widely used for medical image analysis.
However, the lack of access to a large-scale annotated dataset poses a great
challenge, especially in the case of rare diseases or new domains for the
research community. Transferring pre-trained features from a relatively large
dataset is a reasonable solution. In this paper, we have explored supervised
segmentation using domain adaptation for optic nerve and orbital tumor, when
only small sampled CT images are given. Although the Lung Image Database
Consortium image collection (LIDC-IDRI) is cross-domain with respect to orbital
CT, the proposed domain adaptation method improved the performance of attention
U-Net for segmentation on a public optic nerve dataset and our clinical orbital
tumor dataset. The code and dataset are available at
https://github.com/cmcbigdata.
|
The increasing amount of distributed energy resources including renewable
energy systems and electric vehicles is expected to change electric power grids
significantly, where conventional consumers are transformed to prosumers since
they can produce electricity as well. In such an ecosystem, prosumers can start
offering their excess energy to supply demands of the other customers on the
grids behind the meter without interference of distribution system operators
(DSO). Besides, DSOs require more accurate and more frequent data from
prosumers' net demand to be able to operate their network efficiently. The main
challenge in these new distribution grids is that the amount of data that needs
to be collected on this platform is extremely high, and more importantly,
prosumers will likely refuse to share their information with DSOs due to their
potential privacy and economic concerns. Blockchain technology as an efficient
distributed solution for management of data and financial transactions, has
been considered to solve this trust issue. With blockchain-based solutions,
data and financial transactions between all parties will take place through
distributed ledgers without any interference from an intermediary. In this
paper, the impacts of blockchain technologies on the electric power industry
are studied. The paper specifically focuses on LO3 Energy -- one of the
startups
applying blockchain to electric power grids -- their blockchain-based solution
called Exergy, and their use cases to implement such solutions.
|
Mean dimension is a topological invariant of dynamical systems, which
originates with Mikhail Gromov in 1999 and which was studied with deep
applications around 2000 by Elon Lindenstrauss and Benjamin Weiss within the
framework of amenable group actions. Let a countable discrete amenable group
$G$ act continuously on compact metrizable spaces $X$ and $Y$. Consider the
product action of $G$ on the product space $X\times Y$. The product inequality
for mean dimension is well known: $\mathrm{mdim}(X\times
Y,G)\le\mathrm{mdim}(X,G)+\mathrm{mdim}(Y,G)$, while it was unknown for a long
time if the product inequality could be an equality. In 2019, Masaki Tsukamoto
constructed the first example of two different continuous actions of $G$ on
compact metrizable spaces $X$ and $Y$, respectively, such that the product
inequality becomes strict. However, there is still one longstanding problem
which remains open in this direction, asking if there exists a continuous
action of $G$ on some compact metrizable space $X$ such that
$\mathrm{mdim}(X\times X,G)<2\cdot\mathrm{mdim}(X,G)$. We solve this problem.
Somewhat surprisingly, we prove, in contrast to (topological) dimension theory,
a rather satisfactory theorem: If an infinite (countable discrete) amenable
group $G$ acts continuously on a compact metrizable space $X$, then we have
$\mathrm{mdim}(X^n,G)=n\cdot\mathrm{mdim}(X,G)$, for any positive integer $n$.
Our product formula for mean dimension, together with the example and
inequality (stated previously), eventually allows mean dimension of product
actions to be fully understood.
|
We study homogeneous quenching processes in a holographic s+p model with
reentrant phase transitions. We first realize the reentrant phase transition in
the holographic model in the probe limit and draw the phase diagram. Next, we
compare the time evolution of the two condensates in two groups of numerical
quenching experiments across the reentrant region, with different quenching
speeds and different widths of the reentrant region, respectively. We also
study the dynamical competition between the two orders in quenching processes
from the normal phase to the superconductor phase.
|
We propose and discuss sensitivity metrics for reliability analysis, which
are based on the value of information. These metrics are easier to interpret
than other existing sensitivity metrics in the context of a specific decision
and they are applicable to any type of reliability assessment, including those
with dependent inputs. We develop computational strategies that enable
efficient evaluation of these metrics, in some scenarios without additional
runs of the deterministic model. The metrics are investigated by application to
numerical examples.
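As a hedged illustration of the kind of quantity such metrics can build on (our generic notation, not necessarily the paper's), the value of learning an input $X_i$ before choosing an action $a$ with utility $u$ is
\[
\mathrm{VoI}(X_i)
  = \mathbb{E}_{X_i}\!\Bigl[\max_{a}\,\mathbb{E}\bigl[u(a,\mathbf{X}) \mid X_i\bigr]\Bigr]
  - \max_{a}\,\mathbb{E}\bigl[u(a,\mathbf{X})\bigr],
\]
a nonnegative number directly interpretable as the average improvement of the optimal decision if $X_i$ were revealed before acting, which is the sense in which such metrics are easy to read in the context of a specific decision.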
|
Synonymous keyword retrieval has become an important problem for sponsored
search ever since major search engines relaxed the exact-match product's
matching requirement to the synonymous level. Since the synonymous relations
between
queries and keywords are quite scarce, the traditional information retrieval
framework is inefficient in this scenario. In this paper, we propose a novel
quotient space-based retrieval framework to address this problem. Considering
the synonymy among keywords as a mathematical equivalence relation, we can
compress the synonymous keywords into one representative, and the corresponding
quotient space would greatly reduce the size of the keyword repository. Then an
embedding-based retrieval is directly conducted between queries and the keyword
representatives. To mitigate the semantic gap of the quotient space-based
retrieval, a single semantic siamese model is utilized to detect both the
keyword-keyword and query-keyword synonymous relations. The experiments show
that with our quotient space-based retrieval method, the synonymous keyword
retrieving performance can be greatly improved in terms of memory cost and
recall efficiency. This method has been successfully implemented in Baidu's
online sponsored search system and has yielded a significant improvement in
revenue.
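A minimal sketch of the quotient-space idea in Python follows; the keywords, synonym pairs, and union-find implementation are illustrative, not Baidu's production system.

```python
# Hedged sketch: treat keyword synonymy as an equivalence relation, collapse
# each equivalence class to one representative with union-find, and retrieve
# against representatives only. All data here is made up for illustration.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

synonym_pairs = [("cheap flights", "low cost flights"),
                 ("low cost flights", "budget flights"),
                 ("hotel deals", "hotel discounts")]
keywords = ["cheap flights", "low cost flights", "budget flights",
            "hotel deals", "hotel discounts", "car rental"]

uf = UnionFind()
for a, b in synonym_pairs:
    uf.union(a, b)

quotient = {}  # representative -> members of its equivalence class
for kw in keywords:
    quotient.setdefault(uf.find(kw), []).append(kw)

# Embedding-based retrieval now runs over len(quotient) representatives
# instead of len(keywords) raw keywords; matching a representative recalls
# its whole class at once.
print(len(keywords), "keywords ->", len(quotient), "representatives")
```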
|
We compute the motivic Euler characteristic of Ayoub's nearby cycles by
strata of a semi-stable reduction, for a degeneration to multiple isolated
quasi-homogeneous singularities resolved by a single weighted blow-up. This
allows us to compare the local picture at the singularities with the global
conductor formula for hypersurfaces developed by Levine, Pepin Lehalleur and
Srinivas, revealing that the formula is local in nature, thus extending it to
the more general setting considered in this paper. This gives a quadratic
refinement for the classical Milnor number formula with multiple singularities
of a certain type.
|
We propose a robust in-time predictor for an in-hospital COVID-19 patient's
probability of requiring mechanical ventilation. A challenge in the risk
prediction for COVID-19 patients lies in the great variability and irregular
sampling of patient's vitals and labs observed in the clinical setting.
Existing methods have strong limitations in handling time-dependent features'
complex dynamics, either oversimplifying temporal data with summary statistics
that lose information or over-engineering features that lead to less robust
outcomes. We propose a novel in-time risk trajectory predictive model to handle
the irregular sampling rate in the data, which follows the dynamics of risk of
performing mechanical ventilation for individual patients. The model
incorporates the Multi-task Gaussian Process using observed values to learn the
posterior joint multivariate conditional probability and infer the missing
values on a unified time grid. The temporally imputed data is fed into a
multi-objective self-attention network for the prediction task. A novel
positional encoding layer is proposed and added to the network for producing
in-time predictions. The positional layer outputs a risk score at each
user-defined time point during the entire hospital stay of an inpatient. We
frame the prediction task into a multi-objective learning framework, and the
risk scores at all time points are optimized altogether, which adds robustness
and consistency to the risk score trajectory prediction. Our experimental
evaluation on a large database with nationwide in-hospital patients with
COVID-19 also demonstrates that it improved the state-of-the-art performance in
terms of AUC (Area Under the receiver operating characteristic Curve) and AUPRC
(Area Under the Precision-Recall Curve) performance metrics, especially at
early times after hospital admission.
|
Interference is the cornerstone of Huygens source design for reshaping and
controlling scattering patterns. The conventional underpinning principle, such
as for the Kerker effect, is the interference of electric and magnetic dipole
and quadrupole modes. Here a route to realize transverse Kerker scattering
through employing only the interference between the electric dipole and
magnetic quadrupole is demonstrated. The proposed approach is numerically
validated in an ultra-thin silicon square nanoplate metasurface, and is further
verified by multipole decomposition. The metasurface is shown to be invisible
for near-infrared wavelengths, with an enhanced electric field in the region
of the nanoparticle. Additionally, we develop further the proposed approach
with practical implementation for invisibility applications by exploring the
effects of the aspect ratio of the square plate nanoresonator, the
inter-particle separation, and the presence of a substrate. Further, it is
demonstrated that invisibility can be observed at oblique incidence up to
60{\deg} for a transverse magnetic plane wave. The results are relevant for
Huygens metasurface design for perfect reflectors, invisibility and devices for
harmonic generation manipulation.
|
We study a one dimensional quantum XY spin chain driven by a local noisy spin
impurity with finite correlation time, along the transverse field direction. We
recover the celebrated Zeno crossover and we show that entanglement can be used
as a proxy for the heating and strong-measurement regimes. We compute the
entanglement entropy of a block of spins and we observe that its spreading
velocity decreases at strong dissipation, as a result of the Zeno effect. Upon
increasing the correlation time of the noise, the location of the Zeno
crossover shifts at stronger dissipation rates opening up a broader heating
phase. We offer insight on the mechanisms underlying the dynamics of the
entanglement entropy by monitoring different time traces of the local
transverse magnetisation profile. Our results aim at opening a complementary
viewpoint on the field of dissipative quantum impurities, based on a
theoretical quantum information perspective.
|
In this short article we show a particular version of the Hedberg inequality
which can be used to derive, in a very simple manner, functional inequalities
involving Sobolev and Besov spaces in the general setting of Lebesgue spaces of
variable exponents and in the framework of Orlicz spaces.
|
Recently, two-dimensional monolayer MoSi2N4 with hexagonal structure was
successfully synthesized in experiment (Hong et al. 2020 Science 369, 670). The
fabricated monolayer MoSi2N4 is predicted to have excellent mechanical
properties. Motivated by the experiment, we perform first-principles calculations
to investigate the mechanical properties of monolayer MoSi2N4, including its
ideal tensile strengths, critical strains, and failure mechanisms. Our results
demonstrate that monolayer MoSi2N4 can withstand stresses up to 51.6 and 49.2
GPa along zigzag and armchair directions, respectively. The corresponding
critical strains are 26.5% and 17.5%, respectively. For biaxial strain, the
ideal tensile strength is 50.6 GPa with a critical strain of 19.5%. Compared
with monolayer MoS2, monolayer MoSi2N4 possesses much higher elastic moduli and
ideal tensile strengths for both uniaxial and biaxial strains. Interestingly,
the critical strain and failure mechanism of zigzag direction in MoSi2N4 are
almost the same as those of armchair direction in MoS2, while the critical
strain and failure mechanism of armchair direction for MoSi2N4 are similar to
the ones of zigzag direction for MoS2. Our work reveals the remarkable
mechanical characteristics of monolayer MoSi2N4.
|
We present a new learning-based method for identifying safe and navigable
regions in off-road terrains and unstructured environments from RGB images. Our
approach consists of classifying groups of terrains based on their navigability
levels using coarse-grained semantic segmentation. We propose a bottleneck
transformer-based deep neural network architecture that uses a novel group-wise
attention mechanism to distinguish between navigability levels of different
terrains. Our group-wise attention heads enable the network to explicitly focus
on the different groups and improve the accuracy. We show through extensive
evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm
improves visual perception accuracy in off-road terrains for navigation. We
compare our approach with prior work on these datasets and achieve an
improvement over the state-of-the-art mIoU by 6.74-39.1% on RUGD and
3.82-10.64% on RELLIS-3D. In addition, we deploy our method on a Clearpath
Jackal robot. Our approach improves the performance of the navigation
algorithm in terms of average progress towards the goal by 54.73% and reduces
false positives in terms of entering forbidden regions by 29.96%.
|
This paper introduces the \emph{Simultaneous Assignment Problem}. Here, we
are given an assignment problem on some of the subgraphs of a given graph, and
we are looking for a heaviest assignment which is feasible when restricted to
any of the assignment problems. More precisely, we are given a graph with a
weight- and a capacity function on its edges and a set of its subgraphs
$H_1,\dots,H_k$ along with a degree upper bound function for each of them. In
addition, we are also given a laminar system on the node set with an upper
bound on the degree-sum of the nodes in each set in the system. We want to
assign each edge a non-negative integer below its capacity such that the total
weight is maximized, the degrees in each subgraph are below the degree upper
bound associated with the subgraph, and the degree-sum bound is respected in
each set of the laminar system.
The problem is shown to be APX-hard in the unweighted case even if the graph
is a forest and $k=2$. This also implies that the Distance matching problem is
APX-hard in the weighted case and that the Cyclic distance matching problem is
APX-hard in the unweighted case. We identify multiple special cases when the
problem can be solved in strongly polynomial time. One of these cases, the
so-called locally laminar case, is a common generalization of the Hierarchical
b-matching problem and the Laminar matchoid problem, and it implies that both
of these problems can be solved efficiently in the weighted, capacitated case
-- improving upon the most general polynomial-time algorithms for these
problems. The problem admits a constant-factor approximation when $k$ is a constant, and
we show that the approximation factor matches the integrality gap of a
strengthened LP-relaxation for small $k$. We give improved approximation
algorithms for special cases, for example, when the degree bounds are uniform
or the graph is sparse.
|
Let $V$ be an $n$-dimensional vector space over a finite field
$\mathbb{F}_q$, where $q$ is a prime power. Define the \emph{generalized
$q$-Kneser graph} $K_q(n,k,t)$ to be the graph whose vertices are the
$k$-dimensional subspaces of $V$ and two vertices $F_1$ and $F_2$ are adjacent
if $\dim(F_1\cap F_2)<t$. Then $K_q(n,k,1)$ is the well-known $q$-Kneser graph.
In this paper, we determine the treewidth of $K_q(n,k,t)$ for $n\geq
2t(k-t+1)+k+1$ and $t\ge 1$ exactly. Note that $K_q(n,k,k-1)$ is the complement
of the Grassmann graph $G_q(n,k)$. We give a more precise result for the
treewidth of $\overline{G_q(n,k)}$ for any possible $n$, $k$ and $q$.
|
The creation of an electron-positron pair in the collision of two real
photons, namely the linear Breit-Wheeler process, has never been detected
directly in the laboratory since its prediction in 1934 despite its fundamental
importance in quantum electrodynamics and astrophysics. In the last few years,
several experimental setups have been proposed to observe this process in the
laboratory, relying either on thermal radiation, Bremsstrahlung, or linear or
multiphoton inverse Compton scattering photon sources created by lasers, or on
a lepton collider coupled with lasers. In these proposals, the
influence of the photons' energy distribution on the total number of produced
pairs has been taken into account with an analytical model only for two of
these cases. We hereafter develop a general and original semi-analytical model
to estimate the influence of the photon energy distribution on the total
number of pairs produced by the collision of two such photon beams, and give
optimum energy parameters for some of the proposed experimental configurations.
Our results show that optimum Bremsstrahlung and linear inverse Compton
sources are, from energy distribution considerations alone, already reachable
in today's facilities. Despite its less favorable energy
distribution features for the LBW pair production, the photon sources generated
via multiphoton inverse Compton scattering by the propagation of a laser in a
micro-channel can also be interesting, thanks to the high collision luminosity
that could eventually be reached by such configurations. These results give
important insights for the design of experiments intended to detect
linear Breit-Wheeler-produced positrons in the laboratory for the first time.
|
The quasiparticle formalism invented by Lev Landau for description of
conventional Fermi liquids is generalized to exotic superconductivity
attributed to Cooper pairing, whose measured properties defy explanation within
the standard BCS-Fermi Liquid description. We demonstrate that in such systems
the quasiparticle number remains equal to particle number, just as in common
Fermi liquids. We are then able to explain the puzzling relationship between
the variation with doping $x$ of two key properties of the family
La$_{2-x}$Sr$_x$CuO$_4$ of exotic superconductors, namely the $T=0$ superfluid
density $\rho_{s0}(x)$ and the coefficient $A_1(x)$ in the linear-in-$T$
component of the normal-state low-$T$ resistivity $\rho(T)=\rho_0+A_1T+A_2T^2$,
in terms of the presence of interaction-induced flat bands in the ground states
of these metals.
|
CTB 80 (G69.0+2.7) is a relatively old (50--80 kyr) supernova remnant (SNR)
with a complex radio morphology showing three extended radio arms and a radio
and X-ray nebula near the location of the pulsar PSR B1951+32. We report on a
study of the GeV emission in the region of CTB 80 with \emph{Fermi}-LAT data.
An extended source with a size of 1.3$^\circ$, matching the size of the
infrared shell associated with the SNR, was discovered. The GeV emission,
detected up to an energy of $\sim 20$ GeV, is more significant at the location
of the northern radio arm where previous observations imply that the SNR shock
is interacting with ambient material. Both hadronic and leptonic scenarios can
reproduce the multiwavelength data reasonably well. The hadronic cosmic ray
energy density required is considerably larger than the local Galactic value
and the gamma-ray leptonic emission is mainly due to bremsstrahlung
interactions. We conclude that GeV particles are still trapped or accelerated
by the SNR producing the observed high-energy emission when interacting with
ambient material.
|
We prove unique weak solvability and Feller property for stochastic
differential equations with drift in a large class of time-dependent vector
fields. This class contains, in particular, the critical
Ladyzhenskaya-Prodi-Serrin class, the weak $L^d$ class as well as some vector
fields that are not even in $L^{2+\varepsilon}_{\rm loc}$, $\varepsilon>0$.
|
For a digraph $G$ and $v \in V(G)$, let $\delta^+(v)$ be the number of
out-neighbors of $v$ in $G$. The Caccetta-H\"{a}ggkvist conjecture states that
for all $k \ge 1$, if $G$ is a digraph with $n = |V(G)|$ such that $\delta^+(v)
\ge k$ for all $v \in V(G)$, then $G$ contains a directed cycle of length at
most $\lceil n/k \rceil$. Aharoni proposed a generalization of this conjecture,
that a simple edge-coloured graph on $n$ vertices with $n$ colour classes, each
of size $k$, has a rainbow cycle of length at most $\lceil n/k \rceil$. With
Pelik\'anov\'a and Pokorn\'a, we showed that this conjecture is true if each
colour class has size ${\Omega}(k\log k)$. In this paper, we present a proof of
the conjecture if each colour class has size ${\Omega}(k)$, which improves the
previous result and is only a constant factor away from Aharoni's conjecture.
We also consider what happens when the condition on the number of colours is
relaxed.
|
In this work we present a dual-mode mid-infrared workflow [6] for detecting
sub-superficial mural damage in fresco artworks. Due to the large size of
frescoes, multiple thermal images are recorded. Thus, the experimental setup
may introduce measurement errors, seen as inter-frame changes in the image
contrast, after mosaicking. An approach to lowering errors is to post-process
the mosaic [10] via the osmosis partial differential equation (PDE) [12, 13],
which preserves details and mass and balances the lights; an efficient
numerical scheme for osmosis on large images is proposed [2, 11], based on
operator splitting [8]. Our range of Cultural Heritage applications includes
the detection of
sub-superficial voids in Monocromo (L. Da Vinci, Castello Sforzesco, Milan)
[5], the light-balance for multi-spectral imaging and the data integration on
the Archimedes Palimpsest [10].
|
Machine learning technologies using deep neural networks (DNNs), especially
convolutional neural networks (CNNs), have made automated, accurate, and fast
medical image analysis a reality for many applications, and some DNN-based
medical image analysis systems have even been FDA-cleared. Despite the
progress, challenges remain to build DNNs as reliable as human expert doctors.
It is known that DNN classifiers may not be robust to noise: by adding a small
amount of noise to an input image, a DNN classifier may make a wrong
classification of the noisy image (i.e., in-distribution adversarial sample),
whereas it makes the right classification of the clean image. Another issue is
caused by out-of-distribution samples that are not similar to any sample in the
training set. Given such a sample as input, the output of a DNN will become
meaningless. In this study, we investigated the in-distribution (IND) and
out-of-distribution (OOD) adversarial robustness of a representative CNN for
lumbar disk shape reconstruction from spine MR images. To study the
relationship between dataset size and robustness to IND adversarial attacks, we
used a data augmentation method to create training sets with different levels
of shape variations. We utilized the PGD-based algorithm for IND adversarial
attacks and extended it for OOD adversarial attacks to generate OOD adversarial
samples for model testing. The results show that IND adversarial training can
improve the CNN robustness to IND adversarial attacks, and larger training
datasets may lead to higher IND robustness. However, it is still a challenge to
defend against OOD adversarial attacks.
|
The Posner-Robinson Theorem states that for any reals $Z$ and $A$ such that
$Z \oplus 0' \leq_\mathrm{T} A$ and $0 <_\mathrm{T} Z$, there exists $B$ such
that $A \equiv_\mathrm{T} B' \equiv_\mathrm{T} B \oplus Z \equiv_\mathrm{T} B
\oplus 0'$. Consequently, any nonzero Turing degree
$\operatorname{deg}_\mathrm{T}(Z)$ is a Turing jump relative to some $B$. Here
we prove the hyperarithmetical analog, based on an unpublished proof of Slaman,
namely that for any reals $Z$ and $A$ such that $Z \oplus \mathcal{O}
\leq_\mathrm{T} A$ and $0 <_\mathrm{HYP} Z$, there exists $B$ such that $A
\equiv_\mathrm{T} \mathcal{O}^B \equiv_\mathrm{T} B \oplus Z \equiv_\mathrm{T}
B \oplus \mathcal{O}$. As an analogous consequence, any nonhyperarithmetical
Turing degree $\operatorname{deg}_\mathrm{T}(Z)$ is a hyperjump relative to
some $B$.
|
We provide a sharp lower bound on the $p$-norm of a sum of independent
uniform random variables in terms of its variance when $0 < p < 1$. We address
an analogous question for $p$-R\'enyi entropy for $p$ in the same range.
|
Understanding turbulence is the key to our comprehension of many natural and
technological flow processes. At the heart of this phenomenon lies its
intricate multi-scale nature, describing the coupling between different-sized
eddies in space and time. Here we introduce a new paradigm for analyzing the
structure of turbulent flows by quantifying correlations between different
length scales using methods inspired from quantum many-body physics. We present
results for interscale correlations of two paradigmatic flow examples, and use
these insights along with tensor network theory to design a structure-resolving
algorithm for simulating turbulent flows. With this algorithm, we find that the
incompressible Navier-Stokes equations can be accurately solved within a
computational space reduced by over an order of magnitude compared to direct
numerical simulation. Our quantum-inspired approach provides a pathway towards
conducting computational fluid dynamics on quantum computers.
|
We investigate one-dimensional three-body systems composed of two identical
bosons and one imbalanced atom (impurity) with two-body and three-body
zero-range interactions. In the absence of the three-body interaction,
we give a complete phase diagram of the number of three-body bound states in
the whole region of mass ratio via the direct calculation of the
Skornyakov-Ter-Martirosyan equations. We demonstrate that other low-lying
three-body bound states emerge when the mass of the impurity particle is not
equal to that of the two identical particles. We can obtain not only the
binding energies but also the corresponding wave functions. When the mass of
the impurity atom is very large, there are at most three three-body bound
states. We then study the effect of the three-body zero-range interaction and
unveil that it can induce one more three-body bound state in a certain region
of coupling strength ratio at a fixed mass ratio.
|
We consider an input-to-response (ItR) system characterized by (1)
parameterized input with a known probability distribution and (2) stochastic
ItR function with heteroscedastic randomness. Our purpose is to efficiently
quantify the extreme response probability when the ItR function is expensive to
evaluate. The problem setup arises often in physics and engineering problems,
with randomness in ItR coming from either intrinsic uncertainties (say, as a
solution to a stochastic equation) or additional (critical) uncertainties that
are not incorporated in a low-dimensional input parameter space (as a result of
dimension reduction applied to the original high-dimensional input space). To
reduce the required sampling numbers, we develop a sequential Bayesian
experimental design method leveraging the variational heteroscedastic Gaussian
process regression (VHGPR) to account for the stochastic ItR, along with a new
criterion to select the next-best samples sequentially. The validity of our new
method is first tested in two synthetic problems with the stochastic ItR
functions defined artificially. Finally, we demonstrate the application of our
method to an engineering problem of estimating the extreme ship motion
probability in irregular waves, where the uncertainty in ItR naturally
originates from standard wave group parameterization, which reduces the
original high-dimensional wave field into a two-dimensional parameter space.
|
It is a long-standing conjecture that any CFT with a large central charge and
a large gap $\Delta_{\text{gap}}$ in the spectrum of higher-spin single-trace
operators must be dual to a local effective field theory in AdS. We prove a
sharp form of this conjecture by deriving numerical bounds on bulk Wilson
coefficients in terms of $\Delta_{\text{gap}}$ using the conformal bootstrap.
Our bounds exhibit the scaling in $\Delta_{\text{gap}}$ expected from
dimensional analysis in the bulk. Our main tools are dispersive sum rules that
provide a dictionary between CFT dispersion relations and S-matrix dispersion
relations in appropriate limits. This dictionary allows us to apply
recently-developed flat-space methods to construct positive CFT functionals. We
show how AdS$_{4}$ naturally resolves the infrared divergences present in 4D
flat-space bounds. Our results imply the validity of twice-subtracted
dispersion relations for any S-matrix arising from the flat-space limit of
AdS/CFT.
|
We describe a new addition to the WebVectors toolkit which is used to serve
word embedding models over the Web. The new ELMoViz module adds support for
contextualized embedding architectures, in particular for ELMo models. The
provided visualizations follow the metaphor of `two-dimensional text' by
showing lexical substitutes: words which are most semantically similar in
context to the words of the input sentence. The system allows the user to
change the ELMo layers from which token embeddings are inferred. It also
conveys corpus information about the query words and their lexical substitutes
(namely their frequency tiers and parts of speech). The module is well
integrated into the rest of the WebVectors toolkit, providing lexical
hyperlinks to word representations in static embedding models. Two web services
have already implemented the new functionality with pre-trained ELMo models for
Russian, Norwegian and English.
|
We present a novel binding mechanism where a neutral Rydberg atom and an
atomic ion form a molecular bound state at large internuclear distance. The
binding mechanism is based on Stark shifts and level crossings which are
induced in the Rydberg atom due to the electric field of the ion. At particular
internuclear distances between Rydberg atom and ion, potential wells occur
which can hold atom-ion molecular bound states. Apart from the binding
mechanism we describe important properties of the long-range atom-ion Rydberg
molecule, such as its lifetime and decay paths, its vibrational and rotational
structure, and its large dipole moment. Furthermore, we discuss methods to
produce and detect it. The unusual properties of the long-range atom-ion
Rydberg molecule give rise to interesting prospects for studies of wave packet
dynamics in engineered potential energy landscapes.
|
In a previous study, we presented VT-Lane, a three-step framework for
real-time vehicle detection, tracking, and turn movement classification at
urban intersections. In this study, we present a case study incorporating the
highly accurate trajectories and movement classification obtained via VT-Lane
for the purpose of speed estimation and driver behavior calibration for traffic
at urban intersections. First, we use a highly instrumented vehicle to verify
the estimated speeds obtained from video inference. The results of the speed
validation show that our method can estimate the average travel speed of
detected vehicles in real time with an error of 0.19 m/sec, equivalent to 2%
of the average observed travel speeds at the intersection under study.
Instantaneous speeds (at a resolution of 30 Hz) were estimated with average
errors of 0.21 m/sec and 0.86 m/sec for free-flowing and congested traffic
conditions, respectively. We then use the estimated speeds
to calibrate the parameters of a driver behavior model for the vehicles in the
area of study. The results show that the calibrated model replicates the
driving behavior with an average error of 0.45 m/sec, indicating the high
potential for using this framework for automated, large-scale calibration of
car-following models from roadside traffic video data, which can lead to
substantial improvements in traffic modeling via microscopic simulation.
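As an illustration of what such a calibration might look like, here is a
sketch that fits a car-following model to speed observations by minimizing the
speed RMSE. The abstract does not name the model used; the Intelligent Driver
Model (IDM) below is a common stand-in, and all data are synthetic.
```python
# Sketch of driver-behavior calibration from estimated speeds, assuming the
# Intelligent Driver Model (IDM) as the car-following model (an assumption,
# not the paper's stated choice). Synthetic observations throughout.
import numpy as np
from scipy.optimize import minimize

def idm_accel(v, gap, dv, p):
    """Simplified IDM acceleration; p = (v0, T, a, b, s0)."""
    v0, T, a, b, s0 = p
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

def simulate_speeds(p, v_init, gaps, dvs, dt=1.0 / 30.0):
    """Roll the model forward and return the follower speed trajectory."""
    v, out = v_init, []
    for gap, dv in zip(gaps, dvs):
        v = max(v + dt * idm_accel(v, gap, dv, p), 0.0)
        out.append(v)
    return np.array(out)

# Hypothetical observed quantities (in practice: VT-Lane speed estimates).
rng = np.random.default_rng(0)
gaps = 20.0 + 5.0 * rng.random(300)
dvs = rng.normal(0.0, 0.5, 300)
true_p = (15.0, 1.5, 1.0, 1.5, 2.0)
v_obs = simulate_speeds(true_p, 10.0, gaps, dvs) + rng.normal(0, 0.2, 300)

def rmse(p):
    if np.any(np.asarray(p) <= 0):          # keep parameters physical
        return 1e6
    return np.sqrt(np.mean((simulate_speeds(p, 10.0, gaps, dvs) - v_obs) ** 2))

res = minimize(rmse, x0=(12.0, 1.0, 0.8, 1.2, 1.5), method="Nelder-Mead")
print("calibrated (v0, T, a, b, s0):", res.x, " speed RMSE:", rmse(res.x))
```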
|
Visual attention mechanisms are a key component of neural network models for
computer vision. By focusing on a discrete set of objects or image regions,
these mechanisms identify the most relevant features and use them to build more
powerful representations. Recently, continuous-domain alternatives to discrete
attention models have been proposed, which exploit the continuity of images.
These approaches model attention as simple unimodal densities (e.g. a
Gaussian), making them less suitable to deal with images whose region of
interest has a complex shape or is composed of multiple non-contiguous patches.
In this paper, we introduce a new continuous attention mechanism that produces
multimodal densities, in the form of mixtures of Gaussians. We use the EM
algorithm to obtain a clustering of relevant regions in the image, and a
description length penalty to select the number of components in the mixture.
Our densities decompose as a linear combination of unimodal attention
mechanisms, enabling closed-form Jacobians for the backpropagation step.
Experiments on visual question answering in the VQA-v2 dataset show competitive
accuracies and a selection of regions that mimics human attention more closely
in VQA-HAT. We present several examples suggesting that multimodal attention
maps are naturally more interpretable than their unimodal counterparts,
showing the ability of our model to automatically segregate objects from the
background in complex scenes.
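A minimal sketch of the core fitting step, under the assumption that the
attention map is treated as a weight over pixel locations: weighted EM for a
2D Gaussian mixture. The description-length selection of the number of
components and the closed-form Jacobians are omitted here.
```python
# Weighted EM for a 2D Gaussian mixture over pixel locations, with the
# attention (saliency) map as point weights. Plain EM with fixed K; the paper
# additionally selects K with a description-length penalty (omitted).
import numpy as np

def weighted_em(points, weights, K=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = points.shape
    mu = points[rng.choice(n, K, replace=False)]
    cov = np.stack([50.0 * np.eye(d) for _ in range(K)])  # wide init
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities under each component.
        resp = np.empty((n, K))
        for k in range(K):
            diff = points - mu[k]
            inv = np.linalg.inv(cov[k])
            quad = np.einsum("ni,ij,nj->n", diff, inv, diff)
            norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov[k]))
            resp[:, k] = pi[k] * np.exp(-0.5 * quad) / norm
        resp /= np.maximum(resp.sum(axis=1, keepdims=True), 1e-300)
        resp *= weights[:, None]              # weight by attention mass
        # M-step: weighted updates of the mixture parameters.
        Nk = resp.sum(axis=0)
        pi = Nk / Nk.sum()
        for k in range(K):
            mu[k] = (resp[:, k:k + 1] * points).sum(axis=0) / Nk[k]
            diff = points - mu[k]
            cov[k] = (resp[:, k] * diff.T) @ diff / Nk[k] + 1e-6 * np.eye(d)
    return pi, mu, cov

# Hypothetical saliency map with two blobs over a 32x32 image grid.
ys, xs = np.mgrid[0:32, 0:32]
pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
sal = np.exp(-((xs - 8) ** 2 + (ys - 8) ** 2) / 20.0) \
    + np.exp(-((xs - 24) ** 2 + (ys - 20) ** 2) / 30.0)
pi, mu, cov = weighted_em(pts, sal.ravel() / sal.sum())
print("mixture weights:", pi, "\ncomponent means:\n", mu)
```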
|
Given a dynamic network, where edges appear and disappear over time, we are
interested in finding sets of edges that have similar temporal behavior and
form a dense subgraph. Formally, we define the problem as the enumeration of
the maximal subgraphs that satisfy specific density and similarity thresholds.
To measure the similarity of the temporal behavior, we use the correlation
between the binary time series that represent the activity of the edges. For
the density, we study two variants based on the average degree. For these
problem variants we enumerate the maximal subgraphs and compute a compact
subset of subgraphs that have limited overlap. We propose an approximate
algorithm that scales well with the size of the network, while achieving a high
accuracy. We evaluate our framework on both real and synthetic datasets. The
results on the synthetic data demonstrate the high accuracy of the
approximation and show the scalability of the framework.
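The two ingredients above can be made concrete with a small sketch: Pearson
correlation of binary edge-activity series as the similarity measure, and
average degree of the edge-induced subgraph as the density measure. The toy
graph and thresholds below are hypothetical.
```python
# Similarity of edge activity as Pearson correlation of binary time series,
# and density of an edge set as average degree of the induced subgraph.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)]
T = 50
activity = {e: rng.integers(0, 2, T) for e in edges}  # binary time series

def edge_similarity(a, b):
    """Pearson correlation of two binary activity series."""
    return float(np.corrcoef(a, b)[0, 1])

def average_degree(edge_set):
    """Average-degree density of the subgraph induced by a set of edges."""
    nodes = {u for e in edge_set for u in e}
    return 2.0 * len(edge_set) / len(nodes)

sim_thresh, dens_thresh = 0.2, 1.0
# Keep edge pairs whose temporal behavior is similar enough...
similar = [(e, f) for e, f in combinations(edges, 2)
           if edge_similarity(activity[e], activity[f]) >= sim_thresh]
# ...and report whether a candidate edge set is dense enough.
candidate = {(0, 1), (1, 2), (0, 3), (2, 3)}
print("similar pairs:", similar)
print("avg degree of candidate:", average_degree(candidate),
      ">= threshold:", average_degree(candidate) >= dens_thresh)
```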
|
The Fisher-KPP equation is proved to be the scaling limit of a system of
Brownian particles with local interaction. Particles proliferate and die
depending on the local concentration of other particles. In contrast to
discrete models, controlling the concentration of particles is a major
difficulty for interacting Brownian particles; local interactions, rather than
mean-field or moderate ones, make it harder to implement law-of-large-numbers
properties. The approach taken here to overcome these difficulties is largely
inspired by that of A. Hammond and F. Rezakhanlou [10], implemented there in
the mean-free-path case rather than the local-interaction regime.
|
We extend previous works by considering two additional radio frequencies (K
band and X/Ka band) with the aim of studying the frequency dependence of the
source positions and its potential connection with the physical properties of
the underlying AGN. We compared the absolute source positions measured at four
different wavelengths, that is, the optical position from the Gaia Early Data
Release 3 (EDR3) and the radio positions at the dual S/X, X/Ka combinations and
at K band, as available from the third realization of the International
Celestial Reference Frame (ICRF3), for 512 common sources. We first aligned
the three individual ICRF3 catalogs onto the Gaia EDR3 frame and compared the
optical-to-radio offsets before and after the alignment. Then we studied the
correlation of optical-to-radio offsets with the observing (radio) frequency,
source morphology, magnitude, redshift, and source type. The deviation among
optical-to-radio offsets determined in the different radio bands is less than
0.5 mas, but there is statistical evidence that the optical-to-radio offset is
smaller at K band compared to S/X band for sources showing extended structures.
The optical-to-radio offset was found to statistically correlate with the
structure index. Large optical-to-radio offsets appear to favor faint sources
but are well explained by positional uncertainty, which is also larger for
these sources. We did not detect any statistically significant correlation
between the optical-to-radio offset and the redshift. The radio source
structure might also be a major cause of the radio-to-optical offset. For
alignment with the Gaia celestial reference frame, the S/X band frame remains
the preferred choice at present.
|
Robot manipulation of unknown objects in unstructured environments is a
challenging problem due to the variety of shapes, materials, arrangements and
lighting conditions. Even with large-scale real-world data collection, robust
perception and manipulation of transparent and reflective objects across
various lighting conditions remain challenging. To address these challenges we
propose an approach to performing sim-to-real transfer of robotic perception.
The underlying model, SimNet, is trained as a single multi-headed neural
network using simulated stereo data as input and simulated object segmentation
masks, 3D oriented bounding boxes (OBBs), object keypoints, and disparity as
output. A key component of SimNet is the incorporation of a learned stereo
sub-network that predicts disparity. SimNet is evaluated on 2D car detection,
unknown object detection, and deformable object keypoint detection and
significantly outperforms a baseline that uses a structured light RGB-D sensor.
By inferring grasp positions using the OBB and keypoint predictions, SimNet can
be used to perform end-to-end manipulation of unknown objects in both easy and
hard scenarios using our fleet of Toyota HSR robots in four home environments.
In unknown object grasping experiments, the predictions from the baseline RGB-D
network and SimNet enable successful grasps of most of the easy objects.
However, the RGB-D baseline only grasps 35% of the hard (e.g., transparent)
objects, while SimNet grasps 95%, suggesting that SimNet can enable robust
manipulation of unknown objects, including transparent objects, in unknown
environments.
|
Internet of Things (IoT) is being considered as the growth engine for
industrial revolution 4.0. The combination of IoT, cloud computing and
healthcare can contribute to ensuring the well-being of people. One important
challenge for IoT networks is maintaining privacy and overcoming security
threats. This paper provides a systematic review of the security aspects of
IoT. Firstly, the application of IoT in industrial and medical service
scenarios is described, and the security threats are discussed for the
different layers of IoT healthcare architecture. Secondly, different types of
existing malware including spyware, viruses, worms, keyloggers, and trojan
horses are described in the context of IoT. Thirdly, some of the recent malware
attacks such as Mirai, Echobot, and Reaper are discussed. Next, a comparative
discussion is presented on the effectiveness of different machine learning
algorithms in mitigating the security threats. It is found that the k-nearest
neighbor (kNN) machine learning algorithm exhibits excellent accuracy in
detecting malware. This paper also reviews different tools for ransomware
detection, classification and analysis. Finally, a discussion is presented on
the existing security issues, open challenges and possible future scopes in
ensuring IoT security.
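For concreteness, a minimal sketch of the kNN detection setup referenced
above, with synthetic feature vectors standing in for real IoT traffic or
binary features:
```python
# Minimal kNN malware-detection sketch using scikit-learn. The features and
# labels are synthetic stand-ins (e.g., for packet statistics or API-call
# counts extracted from IoT devices).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(500, 10))
X_malware = rng.normal(1.5, 1.0, size=(500, 10))   # shifted cluster
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)                # 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```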
|
Let $\pi$ be a set of primes such that $|\pi|\geqslant 2$ and $\pi$ differs
from the set of all primes. Denote by $r$ the smallest prime which does not
belong to $\pi$ and set $m=r$ if $r=2,3$ and $m=r-1$ if $r\geqslant 5$. We
study the following conjecture: a conjugacy class $D$ of a finite group $G$ is
contained in $O_\pi(G)$ if and only if every $m$ elements of $D$ generate a
$\pi$-subgroup. We confirm this conjecture for each group $G$ whose nonabelian
composition factors are isomorphic to alternating, linear and unitary simple
groups.
|
Isolated mechanical systems -- e.g., those floating in space, in free-fall,
or on a frictionless surface -- are able to achieve net rotation by cyclically
changing their shape, even if they have no net angular momentum. Similarly,
swimmers immersed in "perfect fluids" are able to use cyclic shape changes to
both translate and rotate even if the swimmer-fluid system has no net linear or
angular momentum. Finally, systems fully constrained by direct nonholonomic
constraints (e.g., passive wheels) can push against these constraints to move
through the world. Previous work has demonstrated that the net displacement
induced by these shape changes corresponds to the amount of *constraint
curvature* that the gaits enclose.
To properly assess or optimize the utility of a gait, however, we must also
consider the time or resources required to execute it: A gait that produces a
small displacement per cycle, but that can be executed in a short time, may
produce a faster average velocity than a gait that produces a large
displacement per cycle, but takes much longer to complete a cycle at the same
average instantaneous effort.
In this paper, we consider two effort-based cost functions for assessing the
costs associated with executing these cycles. For each of these cost functions,
we demonstrate that fixing the average instantaneous cost to a unit value
allows us to transform the effort costs into time-to-execute costs for any
given gait cycle. We then illustrate how the interaction between the constraint
curvature and these costs leads to characteristic geometries for optimal
cycles, in which the gait trajectories resemble elastic hoops distended from
within by internal pressures.
|
Snapshot hyperspectral imaging can capture the 3D hyperspectral image (HSI)
with a single 2D measurement and has attracted increasing attention recently.
Recovering the underlying HSI from the compressive measurement is an ill-posed
problem and exploiting the image prior is essential for solving this ill-posed
problem. However, existing reconstruction methods typically model the image
prior with 1D vectors or 2D matrices and cannot fully exploit the structured
spectral-spatial nature of 3D HSI, thus leading to poor fidelity. In this
paper, we propose an effective high-order tensor optimization based method to
boost the reconstruction fidelity for snapshot hyperspectral imaging. We first
build high-order tensors by exploiting the spatial-spectral correlation in
HSI. Then, we propose a weighted high-order singular value regularization
(WHOSVR) based low-rank tensor recovery model to characterize
the structure prior of HSI. By integrating the structure prior in WHOSVR with
the system imaging process, we develop an optimization framework for HSI
reconstruction, which is finally solved via the alternating minimization
algorithm. Extensive experiments implemented on two representative systems
demonstrate that our method outperforms state-of-the-art methods.
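As a rough illustration of one building block behind such models, the sketch
below applies weighted singular-value shrinkage to each mode unfolding of a
small 3D tensor. The actual WHOSVR model couples this regularizer with the
system imaging operator inside the alternating-minimization solver; the
weighting rule and parameters here are assumptions made for illustration.
```python
# Weighted singular-value shrinkage on the mode unfoldings of a 3D tensor,
# as one building block of low-rank tensor recovery. Illustrative only.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def weighted_svt(M, tau, eps=1e-3):
    """Shrink singular values with weights ~ 1/(sigma + eps): large (signal)
    singular values are penalized less than small (noise) ones."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = np.maximum(s - tau / (s + eps), 0.0)
    return (U * s_new) @ Vt

rng = np.random.default_rng(0)
shape = (16, 16, 8)                                  # (x, y, spectral band)
low_rank = np.einsum("i,j,k->ijk", *[rng.random(n) for n in shape])
noisy = low_rank + 0.05 * rng.standard_normal(shape)

# One pass of mode-wise weighted shrinkage (one inner step of a full solver).
est = noisy.copy()
for mode in range(3):
    est = fold(weighted_svt(unfold(est, mode), tau=0.5), mode, shape)
print("noisy err:", np.linalg.norm(noisy - low_rank),
      "-> est err:", np.linalg.norm(est - low_rank))
```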
|
Sparse matrix computations, more specifically SpGEMM kernels, are commonly
found in a wide range of applications, spanning graph-based path-finding to
machine learning algorithms (e.g., neural networks). A particular challenge in
implementing SpGEMM kernels has been the pressure placed on DRAM memory. One
approach to tackle this problem is to use an inner product method for the
SpGEMM kernel implementation. While the inner product produces fewer
intermediate results, it can end up saturating the memory bandwidth, given the
high number of redundant fetches of the input matrix elements. Using an outer
product-based SpGEMM kernel can reduce redundant fetches, but at the cost of
increased overhead due to extra computation and memory accesses for
producing/managing partial products.
In this thesis, we introduce a novel SpGEMM kernel implementation based on
the row-wise product approach. We leverage atomic instructions to merge
intermediate partial products as they are generated. The use of atomic
instructions eliminates the need to create partial product matrices.
To evaluate our row-wise product approach, we map an optimized SpGEMM kernel
to a custom accelerator designed to accelerate graph-based applications. The
targeted accelerator is an experimental system named PIUMA, being developed by
Intel. PIUMA provides several attractive features, including fast context
switching, user-configurable caches, globally addressable memory, non-coherent
caches, and asynchronous pipelines. We tailor our SpGEMM kernel to exploit many
of the features of the PIUMA fabric.
This thesis compares our SpGEMM implementation against prior solutions, all
mapped to the PIUMA framework. We briefly describe some of the PIUMA
architecture features and then delve into the details of our optimized SpGEMM
kernel. Our SpGEMM kernel can achieve 9.4x speedup as compared to competing
approaches.
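A language-agnostic sketch of the row-wise product approach is shown below:
output rows are formed one at a time, with partial products merged on the fly
in a per-row accumulator, so no partial-product matrices are materialized. On
PIUMA that merge is performed with atomic instructions; a hash map stands in
for it here.
```python
# Row-wise (Gustavson-style) SpGEMM in CSR form. Partial products for output
# row i are merged immediately into an accumulator; a Python dict stands in
# for the atomic-instruction merge used on PIUMA.
import numpy as np
from scipy.sparse import random as sprand, csr_matrix

def rowwise_spgemm(A, B):
    """C = A @ B for CSR inputs, one output row at a time."""
    A, B = csr_matrix(A), csr_matrix(B)
    indptr, indices, data = [0], [], []
    for i in range(A.shape[0]):
        acc = {}                                  # column -> accumulated value
        for jj in range(A.indptr[i], A.indptr[i + 1]):
            j, a_ij = A.indices[jj], A.data[jj]
            for kk in range(B.indptr[j], B.indptr[j + 1]):
                k = B.indices[kk]
                acc[k] = acc.get(k, 0.0) + a_ij * B.data[kk]  # on-the-fly merge
        cols = sorted(acc)
        indices.extend(cols)
        data.extend(acc[c] for c in cols)
        indptr.append(len(indices))
    return csr_matrix((data, indices, indptr), shape=(A.shape[0], B.shape[1]))

A = sprand(50, 40, density=0.05, format="csr", random_state=0)
B = sprand(40, 60, density=0.05, format="csr", random_state=1)
C = rowwise_spgemm(A, B)
print("max abs error vs. scipy:", abs((C - A @ B).toarray()).max())
```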
|
The Gamma Factory (GF) is an ambitious proposal, currently explored within
the CERN Physics Beyond Colliders program, for a source of photons with
energies up to $\approx 400\,$MeV and photon fluxes (up to $\approx 10^{17}$
photons per second) exceeding those of the currently available gamma sources by
orders of magnitude. The high-energy (secondary) photons are produced via
resonant scattering of the primary laser photons by highly relativistic
partially-stripped ions circulating in the accelerator. The secondary photons
are emitted in a narrow cone and the energy of the beam can be monochromatized,
eventually down to the $\approx1$ ppm level, via collimation, at the expense of
the photon flux. This paper surveys the new opportunities that may be afforded
by the GF in nuclear physics and related fields.
|
The application of strain to 2D materials allows manipulating the electronic,
magnetic, and thermoelectric properties. These physical properties are
sensitive to slight variations induced by tensile and compressive strain and to
the uniaxial strain direction. Herein, we take advantage of the reversible
semiconductor-metal transition observed in certain monolayers to propose a
hetero-bilayer device. We propose to stack phosphorene (layered black
phosphorus) and carbon monosulfide monolayers. In the first, such a transition
appears under positive strain, while in the second it appears under negative
strain. Our first-principles calculations show that, depending on the
direction of the applied uniaxial strain, it is possible to achieve reversible
control over which layer behaves as an electronic conductor while the other
layer remains a thermal conductor. The described strain-controlled selectivity
could be used in the design of novel devices.
|
The recently proposed high-order TENO scheme [Fu et al., Journal of
Computational Physics, 305, pp.333-359] has shown great potential in predicting
complex fluids owing to the novel weighting strategy, which ensures the
high-order accuracy, the low numerical dissipation, and the sharp
shock-capturing capability. However, the applications are still restricted to
simple geometries with Cartesian or curvilinear meshes. In this work, a new
class of high-order shock-capturing TENO schemes for unstructured meshes is
proposed. Similar to the standard TENO schemes and some variants of WENO
schemes, the candidate stencils include one large stencil and several small
third-order stencils. Following a strong scale-separation procedure, a tailored
novel ENO-like stencil selection strategy is proposed such that the high-order
accuracy is restored in smooth regions by selecting the candidate
reconstruction on the large stencil while the ENO property is enforced near
discontinuities by adopting the candidate reconstruction from smooth small
stencils. The nonsmooth stencils containing genuine discontinuities are
explicitly excluded from the final reconstruction, leading to excellent
numerical stability. Different from the WENO concept, such unique sharp stencil
selection retains the low numerical dissipation without sacrificing the
shock-capturing capability. The newly proposed framework enables arbitrarily
high-order TENO reconstructions on unstructured meshes. For conceptual
verification, the TENO schemes with third- to sixth-order accuracy are
constructed. Without parameter tuning case by case, the performance of the
proposed TENO schemes is demonstrated by examining a set of benchmark cases
with broadband flow length scales.
|
Reading is a complex process which requires proper understanding of texts in
order to create coherent mental representations. However, comprehension
problems may arise due to hard-to-understand sections, which can prove
troublesome for readers, while accounting for their specific language skills.
As such, steps towards simplifying these sections can be performed, by
accurately identifying and evaluating difficult structures. In this paper, we
describe our approach for the SemEval-2021 Task 1: Lexical Complexity
Prediction competition that consists of a mixture of advanced NLP techniques,
namely Transformer-based language models, pre-trained word embeddings, Graph
Convolutional Networks, Capsule Networks, as well as a series of hand-crafted
textual complexity features. Our models are applicable to both subtasks and
achieve good performance, with a MAE below 0.07 and a Pearson correlation of
0.73 for single-word identification, as well as a MAE below 0.08 and a Pearson
correlation of 0.79 for multi-word targets. Our results are just 5.46% and
6.5% lower than the top scores obtained in the competition on the first and
second subtasks, respectively.
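For reference, a worked example of the two reported metrics on hypothetical
predictions in [0, 1]:
```python
# Mean absolute error (MAE) and Pearson correlation on toy data.
import numpy as np
from scipy.stats import pearsonr

y_true = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.15])
y_pred = np.array([0.12, 0.20, 0.45, 0.50, 0.66, 0.22])

mae = np.mean(np.abs(y_true - y_pred))
r, _ = pearsonr(y_true, y_pred)
print(f"MAE = {mae:.3f}, Pearson r = {r:.3f}")
```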
|
Negative viscosity seems an impossible parameter for any thermodynamic
system, yet under certain boundary conditions the effective viscosity of a
fluid can become negative, as in the secondary flow of a fluid or in a plasma
flow interacting with a dominant magnetic field. This work studies the effect
of negative viscosity on flow over a cylinder. Four viscosities are
considered: the positive viscosities of air and CO2 at a temperature of 300 K,
together with their negative counterparts. The results show a marked
difference in vortex formation and pattern. The general incompressible
Navier-Stokes equations are employed for the analysis. The thermodynamic
feasibility, vortex formation, variation of the X-direction velocity,
variation of the VA factor, and variation of the drag coefficient are studied
in turn. The SimFlow CFD software, which uses the OpenFOAM solver, is used
throughout.
|
We introduce operational quantum tasks based on betting with risk-aversion --
or quantum betting tasks for short -- inspired by standard quantum state
discrimination and classical horse betting with risk-aversion and side
information. In particular, we introduce the operational tasks of quantum state
betting (QSB), noisy quantum state betting (nQSB), and quantum channel betting
(QCB) played by gamblers with different risk tendencies. We prove that the
advantage that informative measurements (non-constant channels) provide in QSB
(nQSB) is exactly characterised by Arimoto's $\alpha$-mutual information, with
the order $\alpha$ determining the risk aversion of the gambler. More
generally, we show that Arimoto-type information-theoretic quantities
characterise the advantage that resourceful objects offer at playing quantum
betting tasks when compared to resourceless objects, for general quantum
resource theories (QRTs) of measurements, channels, states, and
state-measurement pairs, with arbitrary resources. In limiting cases, we show
that QSB (QCB) recovers the known tasks of quantum state (channel)
discrimination when $\alpha \rightarrow \infty$, and quantum state (channel)
exclusion when $\alpha \rightarrow -\infty$. Inspired by these connections, we
also introduce new quantum R\'enyi divergences for measurements, and derive a
new family of resource monotones for the QRT of measurement informativeness.
This family of resource monotones recovers in the same limiting cases as above,
the generalised robustness and the weight of informativeness. Altogether, these
results establish a broad and continuous family of four-way correspondences
between operational tasks, mutual information measures, quantum R\'enyi
divergences, and resource monotones, that can be seen to generalise two
limiting correspondences that were recently discovered for the QRT of
measurement informativeness.
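For reference, the classical quantity invoked above can be stated as follows
(standard definitions; the paper's measurement variants build on it).
Arimoto's $\alpha$-mutual information is \[
I_\alpha^{\mathrm{A}}(X;Y)=H_\alpha(X)-H_\alpha^{\mathrm{A}}(X\mid Y)\,, \]
where $H_\alpha(X)=\frac{1}{1-\alpha}\log\sum_x p_X(x)^\alpha$ is the R\'enyi
entropy and \[ H_\alpha^{\mathrm{A}}(X\mid
Y)=\frac{\alpha}{1-\alpha}\log\sum_y\Big(\sum_x
p_{XY}(x,y)^\alpha\Big)^{1/\alpha} \] is Arimoto's conditional entropy; the
order $\alpha$ plays the role of the gambler's risk aversion, consistent with
the discrimination ($\alpha\rightarrow\infty$) and exclusion
($\alpha\rightarrow-\infty$) limits mentioned above.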
|
We present a perturbative approach to solving the three-nucleon continuum
Faddeev equation. This approach is particularly well suited to dealing with
variable strengths of contact terms in a chiral three-nucleon force. We use
examples of observables in the elastic nucleon-deuteron scattering as well as
in the deuteron breakup reaction to demonstrate the high precision of the
proposed procedure and its capability to reproduce exact results. A
significant reduction of computer time achieved by the perturbative approach
in comparison to the exact treatment makes this approach valuable for
fine-tuning the three-nucleon Hamiltonian parameters.
|
We study the limit behaviour of singularly-perturbed elliptic functionals of
the form \[ \mathcal F_k(u,v)=\int_A v^2\,f_k(x,\nabla
u)\,dx+\frac{1}{\varepsilon_k}\int_A g_k(x,v,\varepsilon_k\nabla v)\,dx\,, \]
where $u$ is a vector-valued Sobolev function, $v \in [0,1]$ a phase-field
variable, and $\varepsilon_k>0$ a singular-perturbation parameter, i.e.,
$\varepsilon_k \to 0$, as $k\to +\infty$.
Under mild assumptions on the integrands $f_k$ and $g_k$, we show that if
$f_k$ grows superlinearly in the gradient-variable, then the functionals
$\mathcal F_k$ $\Gamma$-converge (up to subsequences) to a brittle
energy-functional, i.e., to a free-discontinuity functional whose surface
integrand does not depend on the jump-amplitude of $u$. This result is achieved
by providing explicit asymptotic formulas for the bulk and surface integrands
which show, in particular, that the volume and surface terms in $\mathcal F_k$
decouple in the limit.
The abstract $\Gamma$-convergence analysis is complemented by a stochastic
homogenisation result for stationary random integrands.
|
We characterize stable differential-algebraic equations (DAEs) using a
generalized Lyapunov inequality. The solution of this inequality is then used
to rewrite stable DAEs as dissipative Hamiltonian (dH) DAEs on the subspace
where the solutions evolve. Conversely, we give sufficient conditions
guaranteeing stability of dH DAEs. Further, for stabilizable descriptor systems
we construct solutions of generalized algebraic Bernoulli equations which can
then be used to rewrite these systems as pH descriptor systems. Furthermore, we
show how to describe the stable and stabilizable systems using Dirac and
Lagrange structures.
|
We consider the decay of the false vacuum, realised within a quantum quench
into an anti-confining regime of the Ising spin chain with a magnetic field
opposite to the initial magnetisation. Although the effective linear potential
between the domain walls is repulsive, the time evolution of correlations still
shows a suppression of the light cone and a reduction of vacuum decay. The
suppressed decay is a lattice effect, and can be assigned to emergent Bloch
oscillations.
|
For compact, isometrically embedded Riemannian manifolds $ N \hookrightarrow
\mathbb{R}^L$, we introduce a fourth-order version of the wave map equation. By
energy estimates, we prove an a priori estimate for smooth local solutions in
the energy-subcritical dimensions $n = 1,2$. The estimate excludes blow-up of a
Sobolev norm in finite existence times. In particular, combining this with
recent work of local well-posedness of the Cauchy problem, it follows that for
smooth initial data with compact support, there exists a (smooth) unique global
solution in dimension $n = 1,2$. We also give a proof of the uniqueness of
solutions that are bounded in these Sobolev norms.
|
Hjorth, assuming ${\sf{AD+ZF+DC}}$, showed that there is no sequence of
length $\omega_2$ consisting of distinct $\Sigma^1_2$-sets. We show that the
same theory implies that for $n\geq 0$, there is no sequence of length
$\delta^1_{2n+2}$ consisting of distinct $\Sigma^1_{2n+2}$ sets. The theorem
settles Question 30.21 of Kanamori, which was also conjectured by Kechris.
|
In this paper, we introduce a definition of Fenchel conjugate and Fenchel
biconjugate on Hadamard manifolds based on the tangent bundle. Our definition
overcomes the inconvenience that the conjugate depends on the choice of a
certain point on the manifold, as previous definitions required. On the other
hand, this new definition still possesses properties known to hold in the
Euclidean case. It even yields a broader interpretation of the Fenchel
conjugate in the Euclidean case itself. Most prominently, our definition of the
Fenchel conjugate provides a Fenchel-Moreau Theorem for geodesically convex,
proper, lower semicontinuous functions. In addition, this framework allows us
to develop a theory of separation of convex sets on Hadamard manifolds, and a
strict separation theorem is obtained.
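For orientation, the Euclidean objects being generalized are (standard
definitions) \[
f^*(\xi)=\sup_{x\in\mathbb{R}^n}\{\langle\xi,x\rangle-f(x)\}\,,\qquad
f^{**}(x)=\sup_{\xi\in\mathbb{R}^n}\{\langle\xi,x\rangle-f^*(\xi)\}\,, \] with
the Fenchel-Moreau theorem asserting $f^{**}=f$ exactly for proper, convex,
lower semicontinuous $f$. On a Hadamard manifold the pairing
$\langle\xi,x\rangle$ has no global meaning, which is the obstruction the
tangent-bundle construction above is designed to remove.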
|
Several areas have been improved with deep learning in recent years. For
non-safety-related products, adoption of AI and ML is not an issue, whereas in
safety-critical applications the robustness of such approaches remains a
concern. A common challenge for deep neural networks (DNNs) occurs when they
are exposed to previously unseen out-of-distribution samples, for which DNNs
can yield high-confidence predictions despite having no prior knowledge of the
input.
In this paper we analyse two supervisors on two well-known DNNs with varied
training setups and find that outlier detection performance improves with the
quality of the training procedure. We analyse the performance of the
supervisor after each epoch during the training cycle, to investigate
supervisor performance as the accuracy converges. Understanding the
relationship between training results and supervisor performance is valuable to
improve robustness of the model and indicates where more work has to be done to
create generalized models for safety critical applications.
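The abstract does not specify the supervisors analysed; as a point of
reference, here is a sketch of one common baseline supervisor, the
maximum-softmax-probability (MSP) detector, which rejects inputs whose top
softmax score falls below a threshold.
```python
# Sketch of a simple out-of-distribution supervisor (MSP baseline, an
# assumption rather than the paper's specific supervisors): inputs with a low
# top softmax score are rejected instead of classified.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_supervisor(logits, threshold=0.8):
    """Return (accepted_mask, predictions) for a batch of logits."""
    probs = softmax(logits)
    conf = probs.max(axis=1)
    return conf >= threshold, probs.argmax(axis=1)

rng = np.random.default_rng(0)
in_dist = rng.normal(0, 1, (5, 10)) + 4.0 * np.eye(10)[rng.integers(0, 10, 5)]
out_dist = rng.normal(0, 1, (5, 10))         # no confident class anywhere
accept_in, _ = msp_supervisor(in_dist)
accept_out, _ = msp_supervisor(out_dist)
print("accepted in-dist:", accept_in.sum(), "/ 5")
print("accepted out-of-dist:", accept_out.sum(), "/ 5")
```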
|
The temperature dependencies of the excess conductivity $\sigma'(T)$ and
possible pseudogap (PG), in a Dy$_{0.6}$Y$_{0.4}$Rh$_{3.85}$Ru$_{0.15}$B$_4$
polycrystal were studied for the first time. It was shown that $\sigma'(T)$
near T$_{c}$ is well described by the Aslamazov-Larkin fluctuation theory,
demonstrating a crossover with increasing temperature. Using the crossover
temperature $T_0$, the coherence length along the c axis $\xi_c(0)$, was
determined. Above the level of $T_{2D}>T_{0}$, an unusual dependence
$\sigma'(T)$ was found, which is not described by the fluctuation theories in
the range from $T_{0}$ to $T_{FM}$, at which a ferromagnetic transition occurs.
The range in which superconducting fluctuations exist is apparently quite
narrow, amounting to $\Delta T_{fl}=2.8$ K. The resulting temperature
dependence of the PG parameter $\Delta^*(T)$ has the form typical of magnetic
superconductors with features at $T_{max}=154 K$ and the temperature of a
possible structural transition at $T_{s}=95$ K. Below $T_{s}$, the dependence
$\Delta^*(T)$ has a shape typical of the PG in cuprates, which suggests that
the PG state can be realized in Dy$_{0.6}$Y$_{0.4}$Rh$_{3.85}$Ru$_{0.15}$B$_4$
in this temperature range. Comparison of $\Delta^*(T)$ with the Peters-Bauer
theory made it possible to estimate the density of local pairs, ~0.35 near
T$_{c}$, which is 1.17 times greater than in optimally doped
YBa$_{2}$Cu$_{3}$O$_{7-\delta}$ single crystals.
|
Tracking multiple objects in videos relies on modeling the spatial-temporal
interactions of the objects. In this paper, we propose a solution named
TransMOT, which leverages powerful graph transformers to efficiently model the
spatial and temporal interactions among the objects. TransMOT effectively
models the interactions of a large number of objects by arranging the
trajectories of the tracked objects as a set of sparse weighted graphs, and
constructing a spatial graph transformer encoder layer, a temporal transformer
encoder layer, and a spatial graph transformer decoder layer based on the
graphs. TransMOT is not only more computationally efficient than the
traditional Transformer, but it also achieves better tracking accuracy. To
further improve the tracking speed and accuracy, we propose a cascade
association framework to handle low-score detections and long-term occlusions
that require large computational resources to model in TransMOT. The proposed
method is evaluated on multiple benchmark datasets including MOT15, MOT16,
MOT17, and MOT20, and it achieves state-of-the-art performance on all the
datasets.
|
The orthant model is a directed percolation model on $\mathbb{Z}^d$, in which
all clusters are infinite. We prove a sharp threshold result for this model: if
$p$ is larger than the critical value above which the cluster of $0$ is
contained in a cone, then the shift from $0$ that is required to contain the
cluster of $0$ in that cone is exponentially small. As a consequence, above
this critical threshold, a shape theorem holds for the cluster of $0$, as well
as ballisticity of the random walk on this cluster.
|
Having engaging and informative conversations with users is the utmost goal
for open-domain conversational systems. Recent advances in transformer-based
language models and their applications to dialogue systems have succeeded in
generating fluent and human-like responses. However, they still lack control
over
the generation process towards producing contentful responses and achieving
engaging conversations. To achieve this goal, we present \textbf{DiSCoL}
(\textbf{Di}alogue \textbf{S}ystems through \textbf{Co}nversational
\textbf{L}ine guided response generation). DiSCoL is an open-domain dialogue
system that leverages conversational lines (briefly \textbf{convlines}) as
controllable and informative content-planning elements to guide the generation
model in producing engaging and informative responses. Two primary modules in
DiSCoL's pipeline are conditional generators trained for 1) predicting relevant
and informative convlines for dialogue contexts and 2) generating high-quality
responses conditioned on the predicted convlines. Users can also change the
returned convlines to \textit{control} the direction of the conversations
towards topics that are more interesting for them. Through automatic and human
evaluations, we demonstrate the efficiency of the convlines in producing
engaging conversations.
|
There are multiple mappings that can be used to generate what we call the
'edge geometry' of a regular N-gon, but they are all based on piecewise
isometries acting on the extended edges of N to form a 'singularity' set W.
This singularity set is also known as the 'web' because it is connected and
consists of rays or line segments, with possible accumulation points in the
limit. We will use three such maps here, all of which appear to share the same
local geometry of W. These mappings are the outer-billiards map Tau, the
digital-filter map Df and the 'dual-center' map Dc. In 'Outer-billiards,
digital filters and kicked Hamiltonians' (arXiv:1206.5223) we show that the Df
and Dc maps are equivalent to a 'shear and rotation' in a toral space and in
the complex plane respectively, and in 'First Families of Regular Polygons and
their Mutations' (arXiv:1612.09295) we show that the web for Tau can also be
reduced to a shear and rotation. This equivalence of maps supports the premise
that this web geometry is inherent in the N-gon. Here we describe the edge
geometry up to N = 25 and in Part 2 this will be extended to N = 50. In all
cases this geometry defines an invariant region local to N. Typically this
region contains multiple S[k] 'tiles' from the First Family of N, but our
emphasis is on the S[1] and S[2] tiles adjacent to N. Since the web evolves in
a multi-step fashion, it is possible to make predictions about the
'next-generation' tiles which will survive in the early web of S[1] and S[2].
The Edge Conjecture defines just 8 classes of N-gons based on this edge
geometry. Since the webs are recursive these predictions have long-term
implications.
|
When solving a complex task, humans spontaneously form teams to complete
different parts of the whole task, and cooperation between teammates improves
efficiency. However, in current cooperative MARL methods, the cooperation team
is constructed through either heuristics or end-to-end black-box optimization.
In order to improve the
efficiency of cooperation and exploration, we propose a structured
diversification emergence MARL framework named {\sc{Rochico}} based on
reinforced organization control and hierarchical consensus learning.
{\sc{Rochico}} first learns an adaptive grouping policy through the
organization control module, which is established by independent multi-agent
reinforcement learning. Further, the hierarchical consensus module based on the
hierarchical intentions with consensus constraint is introduced after team
formation. Simultaneously, utilizing the hierarchical consensus module and a
self-supervised intrinsic reward enhanced decision module, the proposed
cooperative MARL algorithm {\sc{Rochico}} can output the final diversified
multi-agent cooperative policy. All three modules are organically combined to
promote the structured diversification emergence. Comparative experiments on
four large-scale cooperation tasks show that {\sc{Rochico}} is significantly
better than the current SOTA algorithms in terms of exploration efficiency and
cooperation strength.
|
Reducing the complexity of the pipeline of instance segmentation is crucial
for real-world applications. This work addresses this issue by introducing an
anchor-box free and single-shot instance segmentation framework, termed
PolarMask, which reformulates the instance segmentation problem as predicting
the contours of objects in the polar coordinate, with several appealing
benefits. (1) The polar representation unifies instance segmentation (masks)
and object detection (bounding boxes) into a single framework, reducing the
design and computational complexity. (2) Two modules are carefully designed
(i.e. soft polar centerness and polar IoU loss) to sample high-quality center
examples and optimize polar contour regression, so that the performance of
PolarMask does not depend on the bounding box prediction results and training
becomes more efficient. (3) PolarMask is fully convolutional and
can be easily embedded into most off-the-shelf detection methods. To further
improve the accuracy of the framework, a Refined Feature Pyramid is introduced
to further improve the feature representation at different scales, termed
PolarMask++. Extensive experiments demonstrate the effectiveness of both
PolarMask and PolarMask++, which achieve competitive results on instance
segmentation in the challenging COCO dataset with single-model and single-scale
training and testing, as well as new state-of-the-art results on rotated text
detection and cell segmentation. We hope the proposed polar representation can
provide a new perspective for designing algorithms to solve single-shot
instance segmentation. The codes and models are available at:
github.com/xieenze/PolarMask.
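As a pointer to how the polar IoU loss works, the sketch below compares two
contours represented as ray lengths from a common center via the ratio of
summed minima to summed maxima, the simplified form reported in the paper; the
ray data are synthetic.
```python
# Polar IoU of two star-shaped contours given as n ray lengths from a shared
# center, and the corresponding -log(IoU) loss. Synthetic contours only.
import numpy as np

def polar_iou(d_pred, d_gt):
    """Approximate IoU of two contours from their ray-length vectors."""
    return np.sum(np.minimum(d_pred, d_gt)) / np.sum(np.maximum(d_pred, d_gt))

def polar_iou_loss(d_pred, d_gt):
    return -np.log(polar_iou(d_pred, d_gt) + 1e-9)

rng = np.random.default_rng(0)
n_rays = 36                                   # one ray every 10 degrees
d_gt = 10.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, n_rays, endpoint=False))
d_good = d_gt + rng.normal(0, 0.2, n_rays)    # near-perfect regression
d_bad = np.maximum(d_gt + rng.normal(0, 3.0, n_rays), 0.1)  # noisy regression

print("good IoU:", round(polar_iou(d_good, d_gt), 3),
      "loss:", round(polar_iou_loss(d_good, d_gt), 3))
print("bad  IoU:", round(polar_iou(d_bad, d_gt), 3),
      "loss:", round(polar_iou_loss(d_bad, d_gt), 3))
```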
|
Ultra intense lasers are a promising source of energetic ions for various
applications. An interesting approach described in Ferri et al. 2019 argues
from Particle-in-Cell simulations that using two laser pulses of half energy
(half intensity) arriving with close to 45 degrees angle of incidence is
significantly more effective at accelerating ions than one pulse at full energy
(full intensity). For a variety of reasons, at the time of this writing there
has not yet been a true experimental confirmation of this enhancement. In this
paper we perform 2D Particle-in-Cell simulations to examine if a milliJoule
class, 5x10^18 W cm^-2 peak intensity laser system could be used for such a
demonstration experiment. Laser systems in this class can operate at a kHz rate
which should be helpful for addressing some of the challenges of performing
this experiment. Despite investigating a 3.5 times lower intensity than Ferri
et al. 2019 did, we find that the double pulse approach enhances the peak
proton energy and the energy conversion to protons by a factor of about three
compared to a single laser pulse with the same total laser energy. We also
comment on the nature of the enhancement and describe simulations that examine
how the enhancement may depend on the spatial or temporal alignment of the two
pulses.
|
Ensuring performance robustness for a variety of situations that can occur in
real-world environments is one of the challenging tasks in sound event
classification. One of the unpredictable and detrimental factors in
performance, especially in indoor environments, is reverberation. To alleviate
this problem, we propose a conditioning method that provides room impulse
response (RIR) information to help the network become less sensitive to
environmental information and focus on classifying the desired sound.
Experimental results show that the proposed method successfully reduced
performance degradation caused by the reverberation of the room. In
particular, our proposed method works even with a similar RIR inferred from
the room type rather than the exact one, which has the advantage of
potentially being usable in real-world applications.
|
The aims of this paper are: 1) to identify "worst smells", i.e., bad smells
that never have a good reason to exist, 2) to determine the frequency,
change-proneness, and severity associated with worst smells, and 3) to identify
the "worst reasons", i.e., the reasons for introducing these worst smells in
the first place. To achieve these aims we ran a survey with 71 developers. We
learned that 80 out of 314 catalogued code smells are "worst"; that is,
developers agreed that these 80 smells should never exist in any code base. We
then checked the frequency and change-proneness of these worst smells on 27
large Apache open-source projects. Our results show insignificant differences,
in both frequency and change proneness, between worst and non-worst smells.
That is to say, these smells are just as damaging as other smells, but there is
never any justifiable reason to introduce them. Finally, in follow-up phone
interviews with five developers we confirmed that these smells are indeed
worst, and the interviewees proposed seven reasons for why they may be
introduced in the first place. By explicitly identifying these seven reasons,
project stakeholders can, through quality gates or reviews, ensure that such
smells are never accepted in a code base, thus improving quality without
compromising other goals such as agility or time to market.
|
Content-based image retrieval (CBIR) systems on pixel domain use low-level
features, such as colour, texture and shape, to retrieve images. In this
context, two types of image representations, i.e., local and global image
features, have been studied in the literature. Extracting these features from
pixel images and comparing them with images from the database is very
time-consuming. Therefore, in recent years, there has been some effort to
accomplish image analysis directly in the compressed domain with lesser
computations. Furthermore, most of the images in our daily transactions are
stored in the JPEG compressed format. Therefore, it would be ideal if we could
retrieve features directly from the partially decoded or compressed data and
use them for retrieval. Here, we propose a unified model for image retrieval
which takes DCT coefficients as input and efficiently extracts global and local
features directly in the JPEG compressed domain for accurate image retrieval.
The experimental findings indicate that our proposed model performs similarly
to the current DELG model, which takes RGB inputs, in terms of mean average
precision, while having faster training and retrieval speeds.
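To make the compressed-domain idea concrete, here is a sketch that pools 8x8
block DCT coefficient magnitudes (the kind of data a partial JPEG decode
exposes) into a descriptor compared by cosine similarity. The feature design
is illustrative, not the paper's model.
```python
# Crude compressed-domain features: magnitudes of the first few AC DCT
# coefficients of every 8x8 block, concatenated, compared by cosine similarity.
import numpy as np
from scipy.fftpack import dct

def block_dct_features(img, block=8, keep=9):
    """First `keep` AC DCT coefficient magnitudes per 8x8 block (row-major
    order within the block, not a true zigzag), concatenated."""
    h, w = (s - s % block for s in img.shape)
    img = img[:h, :w].astype(float)
    blocks = img.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    coeffs = dct(dct(blocks, axis=-1, norm="ortho"), axis=-2, norm="ortho")
    flat = np.abs(coeffs).reshape(-1, block * block)
    return flat[:, 1:1 + keep].ravel()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img_a = rng.random((64, 64))
img_b = img_a + 0.05 * rng.standard_normal((64, 64))   # near-duplicate
img_c = rng.random((64, 64))                            # unrelated image
fa, fb, fc = map(block_dct_features, (img_a, img_b, img_c))
print("sim(a, near-duplicate):", round(cosine(fa, fb), 4))
print("sim(a, unrelated):     ", round(cosine(fa, fc), 4))
```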
|
End-to-end (E2E) spoken language understanding (SLU) can infer semantics
directly from speech signal without cascading an automatic speech recognizer
(ASR) with a natural language understanding (NLU) module. However, paired
utterance recordings and corresponding semantics may not always be available or
sufficient to train an E2E SLU model in a real production environment. In this
paper, we propose to unify a well-optimized E2E ASR encoder (speech) and a
pre-trained language model encoder (language) into a transformer decoder. The
unified speech-language pre-trained model (SLP) is continually enhanced on
limited labeled data from a target domain by using a conditional masked
language model (MLM) objective, and thus can effectively generate a sequence of
intent, slot type, and slot value for given input speech in the inference. The
experimental results on two public corpora show that our approach to E2E SLU is
superior to the conventional cascaded method. It also outperforms the present
state-of-the-art approaches to E2E SLU with much less paired data.
|
The mission statement (MS) is the most used organizational strategic planning
tool worldwide. The relationship between an MS and an organization's
financial performance has been shown to be significantly positive, albeit
small. However, an MS's relationship to the macroeconomic environment and to
organizational innovation has not been investigated. We implemented a
Structural Equation Model using the SCImago Institutional Ranking (SIR) as a
global baseline
sample and assessment of organizational research and innovation (RandI), an
automated MS content analysis, and the Economic Complexity Index (ECI) as a
comprehensive macroeconomic environment measure. We found that the median
performance of organizations that do not report an MS is significantly higher
than that of reporting organizations, and that a path-dependence driven by the
State's long-term view and investment is a better explanatory variable for
organizational RandI performance than the MS construct or the intermediate-term
macroeconomic environment.
|
Neural architecture search (NAS) and hyperparameter optimization (HPO) make
deep learning accessible to non-experts by automatically finding the
architecture of the deep neural network to use and tuning the hyperparameters
of the used training pipeline. While both NAS and HPO have been studied
extensively in recent years, NAS methods typically assume fixed hyperparameters
and vice versa - there exists little work on joint NAS + HPO. Furthermore, NAS
has recently often been framed as a multi-objective optimization problem, in
order to take, e.g., resource requirements into account. In this paper, we
propose a set of methods that extend current approaches to jointly optimize
neural architectures and hyperparameters with respect to multiple objectives.
We hope that these methods will serve as simple baselines for future research
on multi-objective joint NAS + HPO. To facilitate this, all our code is
available at https://github.com/automl/multi-obj-baselines.
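A minimal sketch of the multi-objective view of joint NAS + HPO follows:
sample joint (architecture, hyperparameter) configurations, evaluate two
objectives, and keep the Pareto-optimal set. Random search stands in for the
proposed methods, and the search space and objectives are hypothetical.
```python
# Joint NAS + HPO as multi-objective search: random sampling over a joint
# configuration space, then Pareto filtering of (error, resource cost).
import random

def sample_config(rng):
    return {"layers": rng.choice([2, 4, 8]),          # architecture choices
            "width": rng.choice([64, 128, 256]),
            "lr": 10 ** rng.uniform(-4, -1),          # hyperparameters
            "dropout": rng.uniform(0.0, 0.5)}

def evaluate(cfg, rng):
    """Hypothetical objectives: (validation error, resource cost)."""
    cost = cfg["layers"] * cfg["width"]               # proxy for params/latency
    err = 1.0 / (1 + 0.001 * cost) + abs(cfg["lr"] - 0.01) + rng.uniform(0, .05)
    return err, cost

def pareto_front(points):
    """Keep entries not dominated in both objectives (both minimized)."""
    front = []
    for i, (_, fi) in enumerate(points):
        dominated = any(all(fj[k] <= fi[k] for k in range(2)) and fj != fi
                        for j, (_, fj) in enumerate(points) if j != i)
        if not dominated:
            front.append(points[i])
    return front

rng = random.Random(0)
pop = [(cfg, evaluate(cfg, rng)) for cfg in (sample_config(rng) for _ in range(50))]
for cfg, (err, cost) in sorted(pareto_front(pop), key=lambda p: p[1][1]):
    print(f"cost={cost:5d}  err={err:.3f}  {cfg}")
```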
|
The emission mechanism for hard $\gamma$-ray spectra from supernova remnants
(SNRs) is still a matter of debate. Recent multi-wavelength observations of TeV
source HESS J1912+101 show that it is associated with an SNR with an age of
$\sim 100$ kyrs, making it unlikely to produce the TeV $\gamma$-ray emission
via leptonic processes. We analyzed Fermi observations of it and found an
extended
source with a hard spectrum. HESS J1912+101 may represent a peculiar stage of
SNR evolution that dominates the acceleration of TeV cosmic rays. By fitting
the multi-wavelength spectra of 13 SNRs with hard GeV $\gamma$-ray spectra with
simple emission models with a density ratio of GeV electrons to protons of
$\sim 10^{-2}$, we obtain reasonable mean densities and magnetic fields with a
total energy of $\sim 10^{50}$ ergs for relativistic ions in each SNR. Among
these sources, only two of them, namely SN 1006 and RCW 86, favor a leptonic
origin for the $\gamma$-ray emission. The magnetic field energy is found to be
comparable to that of the accelerated relativistic ions, and their ratio tends
to increase with the age of the SNRs. These results suggest that TeV cosmic
rays mainly originate from SNRs with hard $\gamma$-ray spectra.
|
"No-till" and cover cropping are often identified as the leading simple, best
management practices for carbon sequestration in agriculture. However, the root
of the problem is more complex, with the potential benefits of these approaches
depending on numerous factors including a field's soil type(s), topography, and
management history. Instead of using computer vision approaches to simply
classify a field as till vs. no-till, we instead seek to identify the degree
of residue coverage across a field through a probabilistic deep learning
segmentation approach, enabling more accurate analysis of carbon holding
potential and realization. This approach will not only provide more precise
insights into currently implemented practices, but also enable a more accurate
identification process of fields with the greatest potential for adopting new
practices to significantly impact carbon sequestration in agriculture.
|
It was recently pointed out that so-called "superhydrides", hydrogen-rich
materials that appear to become superconducting at high temperatures and
pressures, exhibit physical properties that are different from both
conventional and unconventional standard type I and type II superconductors
[1,2]. Here we consider magnetic field expulsion in the first material in this
class discovered in 2015, sulfur hydride [3]. A nuclear resonant scattering
experiment has been interpreted as demonstration that the Meissner effect takes
place in this material [4,5]. Here we point out that the observed effect, under
the assumption that the system is in thermodynamic equilibrium, implies a
Meissner pressure [6] in this material that is {\it much larger} than that of
standard superconductors. This suggests that hydride superconductors are
qualitatively different from the known standard superconductors {\it if} they
are superconductors.
|
Scaling arguments provide valuable analysis tools across physics and complex
systems yet are often employed as one generic method, without explicit
reference to the various mathematical concepts underlying them. A careful
understanding of these concepts empowers us to unlock their full potential.
|
The number of units of a network dynamical system, its size, arguably
constitutes its most fundamental property. Many units of a network, however,
are typically experimentally inaccessible such that the network size is often
unknown. Here we introduce a \emph{detection matrix }that suitably arranges
multiple transient time series from the subset of accessible units to detect
network size via matching rank constraints. The proposed method is model-free,
applicable across system types and interaction topologies and applies to
non-stationary dynamics near fixed points, as well as periodic and chaotic
collective motion. Even if only a small minority of units is perceptible and
for systems simultaneously exhibiting nonlinearities, heterogeneities and
noise, \emph{exact} size detection is feasible. We illustrate applicability for
a paradigmatic class of biochemical reaction networks.
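A minimal linear toy illustration of the rank idea, under the assumption of
noise-free linear dynamics (the method itself is model-free and far more
general): stacking delayed copies of the time series from two observed units
yields a detection matrix whose numerical rank saturates at the network size.
```python
# Toy linear case of the detection-matrix idea: for x(t+1) = A x(t) with a
# generic A, delayed copies of time series from a few observed units span the
# full state space, so the numerical rank of the stacked (Hankel-like) matrix
# saturates at the network size N.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                          # true (unknown) network size
A = rng.normal(size=(N, N))
A /= 1.05 * np.max(np.abs(np.linalg.eigvals(A)))   # make dynamics stable

x = rng.normal(size=N)
traj = []
for _ in range(60):                            # one transient time series
    traj.append(x)
    x = A @ x
traj = np.array(traj)                          # shape (T, N)

observed = traj[:, :2]                         # only 2 of the N units visible
rows = 12                                      # number of delayed copies
H = np.hstack([observed[i:i + 40] for i in range(rows)])   # (40, 2*rows)

s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s > 1e-9 * s[0]))            # numerical rank
print("detected size:", rank, "  true size:", N)
```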
|