Tree shape statistics provide valuable quantitative insights into
evolutionary mechanisms underpinning phylogenetic trees, a commonly used graph
representation of evolutionary systems ranging from viruses to species. By
developing limit theorems for a version of extended P\'olya urn models in which
negative entries are permitted for their replacement matrices, we present
strong laws of large numbers and central limit theorems for asymptotic joint
distributions of two subtree counting statistics, the number of cherries and
that of pitchforks, for random phylogenetic trees generated by two widely used
null tree models: the proportional to distinguishable arrangements (PDA) and
the Yule-Harding-Kingman (YHK) models. Our results indicate that the limiting
behaviour of these two statistics, when appropriately scaled, are independent
of the initial trees used in the tree generating process.
|
Detecting novel objects from few examples has recently become an emerging topic
in computer vision. However, existing few-shot detection methods need fully
annotated training images to learn new object categories, which limits their
applicability in real-world scenarios such as field robotics. In this work, we propose a
probabilistic multiple instance learning approach for few-shot Common Object
Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD). In
these tasks, only image-level labels, which are much cheaper to acquire, are
available. We find that operating on features extracted from the last layer of
a pre-trained Faster-RCNN is more effective than previous episodic-learning-based
few-shot COL methods. Our model simultaneously learns the
distribution of the novel objects and localizes them via
expectation-maximization steps. As the probabilistic model, we employ the von
Mises-Fisher (vMF) distribution, which captures the semantic information better
than a Gaussian distribution when applied to the pre-trained embedding space.
Once the novel objects are localized, we utilize them to learn a linear
appearance model to detect novel classes in new images. Our extensive
experiments show that the proposed method, despite being simple, outperforms
strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
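As a rough illustration of the expectation-maximization procedure sketched above (not the authors' implementation; the two-component mixture, dimensionality, and synthetic data are assumptions), the snippet below fits a von Mises-Fisher mixture to L2-normalized embeddings, with the responsibilities read as soft object/background assignments.

```python
# Illustrative sketch only: EM for a two-component von Mises-Fisher mixture over
# L2-normalized embeddings (e.g., region proposal features); all sizes and the
# synthetic data are assumptions, not the paper's setup.
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function

def log_vmf(x, mu, kappa, d):
    # log density of vMF(mu, kappa) on the unit sphere in R^d
    log_c = (d / 2 - 1) * np.log(kappa) - (d / 2) * np.log(2 * np.pi) \
            - (np.log(ive(d / 2 - 1, kappa)) + kappa)
    return log_c + kappa * (x @ mu)

def fit_vmf_mixture(x, n_iter=50):
    n, d = x.shape
    rng = np.random.default_rng(0)
    mus = x[rng.choice(n, 2, replace=False)]        # random initial mean directions
    kappas = np.array([10.0, 10.0])
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each embedding
        log_p = np.stack([np.log(w) + log_vmf(x, m, k, d)
                          for w, m, k in zip(weights, mus, kappas)], axis=1)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, mean directions and concentrations
        for j in range(2):
            r = resp[:, j]
            s = r @ x
            r_bar = np.linalg.norm(s) / r.sum()
            mus[j] = s / np.linalg.norm(s)
            kappas[j] = r_bar * (d - r_bar ** 2) / (1 - r_bar ** 2 + 1e-12)
            weights[j] = r.mean()
    return mus, kappas, weights, resp               # resp ~ soft object/background assignment

# Synthetic check: two clusters of unit vectors in 16 dimensions.
d, rng = 16, np.random.default_rng(1)
a = rng.normal(size=(200, d)) + 4 * np.eye(d)[0]
b = rng.normal(size=(200, d)) - 4 * np.eye(d)[1]
x = np.concatenate([a, b])
x /= np.linalg.norm(x, axis=1, keepdims=True)
mus, kappas, weights, resp = fit_vmf_mixture(x)
print(weights.round(2), kappas.round(1))
```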
|
Recommender systems usually face popularity bias issues: from the data
perspective, items exhibit uneven (long-tail) distribution on the interaction
frequency; from the method perspective, collaborative filtering methods are
prone to amplify the bias by over-recommending popular items. It is undoubtedly
critical to consider popularity bias in recommender systems, and existing work
mainly eliminates the bias effect. However, we argue that not all biases in the
data are bad -- some items demonstrate higher popularity because of their
better intrinsic quality. Blindly pursuing unbiased learning may remove the
beneficial patterns in the data, degrading the recommendation accuracy and user
satisfaction.
This work studies an unexplored problem in recommendation -- how to leverage
popularity bias to improve the recommendation accuracy. The key lies in two
aspects: how to remove the bad impact of popularity bias during training, and
how to inject the desired popularity bias in the inference stage that generates
top-K recommendations. This requires examining the causal mechanism of the
recommendation generation process. Along this line, we find that item
popularity plays the role of a confounder between the exposed items and the
observed interactions, causing the bad effect of bias amplification. To achieve
our goal, we propose a new training and inference paradigm for recommendation
named Popularity-bias Deconfounding and Adjusting (PDA). It removes the
confounding popularity bias in model training and adjusts the recommendation
score with desired popularity bias via causal intervention. We demonstrate the
new paradigm on a latent factor model and perform extensive experiments on three
real-world datasets. Empirical studies validate that the deconfounded training
helps to discover users' real interests and that the inference adjustment with
popularity bias can further improve the recommendation accuracy.
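A minimal sketch of the inference-stage adjustment, under the assumptions of a matrix-factorization scorer, a tunable exponent gamma, and synthetic data (the exact PDA formulation is the one described above): the deconfounded interest score is multiplied by a power of the predicted item popularity before ranking, and gamma = 0 recovers the purely deconfounded ranking.

```python
# Minimal illustration (all quantities synthetic/assumed) of adjusting a
# deconfounded interest score with a desired popularity bias at inference time.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 16
user_emb = rng.normal(scale=0.1, size=(n_users, dim))    # learned with deconfounded training
item_emb = rng.normal(scale=0.1, size=(n_items, dim))
future_pop = rng.pareto(1.5, size=n_items) + 1.0          # predicted item popularity (long-tailed)
future_pop /= future_pop.sum()

def recommend(user_id, gamma=0.1, k=10):
    # interest-matching score, kept non-negative before the popularity adjustment
    interest = np.maximum(user_emb[user_id] @ item_emb.T, 0.0)
    adjusted = interest * future_pop ** gamma              # inject the desired popularity bias
    return np.argsort(-adjusted)[:k]

print(recommend(user_id=0, gamma=0.1))
```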
|
Let $\mathfrak{F}_n$ be the set of all cuspidal automorphic representations
$\pi$ of $\mathrm{GL}_n$ with unitary central character over a number field
$F$. We prove the first unconditional zero density estimate for the set
$\mathcal{S}=\{L(s,\pi\times\pi')\colon\pi\in\mathfrak{F}_n\}$ of
Rankin-Selberg $L$-functions, where $\pi'\in\mathfrak{F}_{n'}$ is fixed. We use
this density estimate to prove (i) a strong average form of effective
multiplicity one for $\mathrm{GL}_n$; (ii) that given $\pi\in\mathfrak{F}_n$
defined over $\mathbb{Q}$, the convolution $\pi\times\tilde{\pi}$ has a
positive level of distribution in the sense of Bombieri-Vinogradov; (iii) that
almost all $L(s,\pi\times\pi')\in \mathcal{S}$ have a hybrid-aspect
subconvexity bound on $\mathrm{Re}(s)=\frac{1}{2}$; (iv) a hybrid-aspect
power-saving upper bound for the variance in the discrepancy of the measures
$|\varphi(x+iy)|^2 y^{-2}dxdy$ associated to $\mathrm{GL}_2$ Hecke-Maass
newforms $\varphi$ with trivial nebentypus, extending work of Luo and Sarnak
for level 1 cusp forms; and (v) a nonsplit analogue of quantum ergodicity:
almost all restrictions of Hilbert Hecke-Maass newforms to the modular surface
dissipate as their Laplace eigenvalues grow.
|
In this paper, we introduce the task of multi-view RGB-based 3D object
detection as an end-to-end optimization problem. To address this problem, we
propose ImVoxelNet, a novel fully convolutional method of 3D object detection
based on monocular or multi-view RGB images. The number of monocular images in
each multi-view input can vary during training and inference; in fact, this
number may differ for each multi-view input. ImVoxelNet successfully
handles both indoor and outdoor scenes, which makes it general-purpose.
Specifically, it achieves state-of-the-art results in car detection on KITTI
(monocular) and nuScenes (multi-view) benchmarks among all methods that accept
RGB images. Moreover, it surpasses existing RGB-based 3D object detection
methods on the SUN RGB-D dataset. On ScanNet, ImVoxelNet sets a new benchmark
for multi-view 3D object detection. The source code and the trained models are
available at https://github.com/saic-vul/imvoxelnet.
|
LAMOST Data Release 5, covering $\sim$17,000 deg$^2$ from $-10^{\circ}$ to
$80^{\circ}$ in declination, contains 9 million co-added low-resolution spectra
of celestial objects, each combined from between two and tens of repeat
exposures taken from Oct 2011 to Jun 2017. In this paper, we present the
spectra of individual exposures for all the objects in LAMOST Data Release 5.
For each spectrum, equivalent widths of 60 lines from 11 different elements are
calculated with a new method combining the actual line core and fitted line
wings. For stars earlier than F type, the Balmer lines are fitted with both
emission and absorption profiles when two components are detected. The radial
velocity of each individual exposure is measured by minimizing ${\chi}^2$
between the spectrum and its best-fitting template. A database of the equivalent
widths of spectral lines and the radial velocities of individual spectra is
available online. Radial velocity uncertainties for different stellar types and
signal-to-noise ratios are quantified by comparing different exposures of the
same objects. We notice that the radial velocity uncertainty depends on the
time lag between observations. For stars observed within the same day and with
signal-to-noise ratio higher than 20, the radial velocity uncertainty is below
5 km/s, increasing to 10 km/s for stars observed on different nights.
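The toy example below (not the LAMOST pipeline; the wavelength grid, line profiles, and noise level are invented) illustrates the ${\chi}^2$-minimization step: the observed spectrum is compared with the template evaluated on a grid of trial radial velocities, and the velocity minimizing ${\chi}^2$ is reported.

```python
# Schematic radial-velocity measurement by chi^2 minimization against a
# Doppler-shifted template; the spectra here are synthetic.
import numpy as np

c = 299792.458                                   # speed of light in km/s
wave = np.linspace(4000.0, 7000.0, 6000)         # wavelength grid in Angstrom

def template(w):
    # toy template: flat continuum with two Gaussian absorption lines
    return 1.0 - 0.5 * np.exp(-0.5 * ((w - 4861.0) / 2.0) ** 2) \
               - 0.4 * np.exp(-0.5 * ((w - 6563.0) / 2.5) ** 2)

rv_true = 37.0                                   # injected shift in km/s
flux = template(wave / (1.0 + rv_true / c))
flux += np.random.default_rng(0).normal(0.0, 0.02, wave.size)
ivar = np.full(wave.size, 1.0 / 0.02 ** 2)       # inverse variance of each pixel

rv_grid = np.arange(-500.0, 500.0, 1.0)          # trial radial velocities in km/s
chi2 = [np.sum(ivar * (flux - template(wave / (1.0 + rv / c))) ** 2)
        for rv in rv_grid]
print("best-fit RV:", rv_grid[int(np.argmin(chi2))], "km/s")
```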
|
The rapid progress in clinical data management systems and artificial
intelligence approaches enables the era of personalized medicine. Intensive care
units (ICUs) are the ideal clinical research environment for such development
because they collect large amounts of clinical data and are highly computerized
environments. We designed a retrospective clinical study on a prospective ICU
database, using clinical natural language to help in the early diagnosis of
heart failure in critically ill children. The methodology consisted of
empirical experiments with a learning algorithm to learn hidden representations
of the French clinical note data. This study included the clinical notes of
1386 patients, comprising 5444 individual lines of notes. There were 1941
positive cases (36% of the total) and 3503 negative cases, classified by
two independent physicians using a standardized approach. The multilayer
perceptron neural network outperforms the other discriminative and generative
classifiers, and the proposed framework yields an overall classification
performance of 89% accuracy, 88% recall, and 89% precision.
This study successfully applied learning representation and machine learning
algorithms to detect heart failure from clinical natural language in a single
French institution. Further work is needed to use the same methodology in other
institutions and other languages.
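A hedged sketch of the kind of pipeline compared in the study (the toy French note lines, labels, and network size below are invented placeholders): TF-IDF features of note lines fed to a multilayer perceptron classifier.

```python
# Sketch only: text representation + MLP classifier for note lines; the data
# here are placeholders, not the study's clinical notes.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

notes = ["dyspnee et oedeme des membres inferieurs",              # hypothetical examples
         "fraction d'ejection diminuee a l'echographie",
         "examen cardiaque normal, pas de signe de surcharge",
         "abdomen souple, pas de plainte respiratoire"] * 50
labels = [1, 1, 0, 0] * 50                                         # 1 = heart failure suspected

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
print(cross_val_score(clf, notes, labels, cv=5).mean())
```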
|
We present a comprehensive overview of chirality and its optical
manifestation in plasmonic nanosystems and nanostructures. We discuss top-down
fabricated structures that range from solid metallic nanostructures to
groupings of metallic nanoparticles arranged in three dimensions. We also
present the large variety of bottom-up synthesized structures. Using DNA,
peptides, or other scaffolds, complex nanoparticle arrangements of up to
hundreds of individual nanoparticles have been realized. Beyond this static
picture, we also give an overview of recent demonstrations of active chiral
plasmonic systems, where the chiral optical response can be controlled by an
external stimulus. We discuss the prospect of using the unique properties of
complex chiral plasmonic systems for enantiomeric sensing schemes.
|
Programming language detection is a common need in the analysis of large
source code bases. It is supported by a number of existing tools that rely on
several features, most notably file extensions, to determine file types. We
consider the problem of accurately detecting the type of files commonly found
in software code bases, based solely on textual file content. Doing so is
helpful to classify source code that lacks file extensions (e.g., code snippets
posted on the Web or executable scripts), to avoid misclassifying source code
that has been recorded with wrong or uncommon file extensions, and also to shed
some light on the intrinsic recognizability of source code files. We propose a
simple model that (a) uses a language-agnostic word tokenizer for textual files,
(b) groups tokens into 1-/2-grams, (c) builds feature vectors based on N-gram
frequencies, and (d) uses a simple fully connected neural network as the
classifier. As the training set, we use textual files extracted from GitHub
repositories with at least 1000 stars, using existing file extensions as ground
truth. Despite its simplicity, the proposed model reaches 85% in our experiments
for a relatively high number of recognized classes (more than 130 file types).
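The following is a rough reconstruction of the described pipeline under stated assumptions (the token pattern, network size, and toy snippets are placeholders, not the authors' setup): language-agnostic word tokens, 1-/2-gram frequency vectors, and a small fully connected classifier.

```python
# Sketch of: (a) language-agnostic tokenizer, (b) 1-/2-grams, (c) n-gram
# frequency vectors, (d) fully connected classifier.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.neural_network import MLPClassifier

snippets = ["def main():\n    print('hello')",                        # Python
            "#include <stdio.h>\nint main(void) { return 0; }",       # C
            "const main = () => console.log('hello');"] * 40          # JavaScript
labels = ["python", "c", "javascript"] * 40

model = make_pipeline(
    CountVectorizer(token_pattern=r"[A-Za-z_]+|[^\sA-Za-z_]", ngram_range=(1, 2)),
    TfidfTransformer(use_idf=False),                                  # plain n-gram frequencies
    MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0),
)
model.fit(snippets, labels)
print(model.predict(["package main\nfunc main() { }"]))               # unseen snippet
```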
|
An essential feature of the subdiffusion equations with the $\alpha$-order
time fractional derivative is the weak singularity at the initial time. The
weak regularity of the solution is usually characterized by a regularity
parameter $\sigma\in (0,1)\cup(1,2)$. Under this general regularity assumption,
we here obtain the pointwise-in-time error estimate of the widely used L1
scheme for nonlinear subdiffusion equations. To this end, we present a refined
discrete fractional-type Gr\"onwall inequality and a rigorous analysis for the
truncation errors. Numerical experiments are provided to demonstrate the
effectiveness of our theoretical analysis.
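For reference, on a uniform mesh $t_n=n\tau$ the L1 scheme approximates the Caputo derivative of order $\alpha\in(0,1)$ by the standard formula
$$\partial_t^\alpha u(t_n)\approx\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\sum_{k=0}^{n-1} b_k\left(u^{n-k}-u^{n-k-1}\right),\qquad b_k=(k+1)^{1-\alpha}-k^{1-\alpha};$$
graded meshes, typically used to resolve the weak initial singularity, modify the weights accordingly.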
|
In this paper, we present a new method for deformation control of
deformable objects, which utilizes both visual and tactile feedback. At
present, manipulation of deformable objects is basically formulated by assuming
positional constraints; in many situations, however, manipulation has to be
performed under actively applied force constraints. This scenario is considered
in this research. In the proposed scheme, tactile feedback is integrated to
ensure a stable contact between the robot end-effector and the soft object to
be manipulated. The controlled contact force is also utilized to regulate the
deformation of the soft object, with its shape measured by a vision sensor. The
effectiveness of the proposed method is demonstrated by a book page turning and
shaping experiment.
|
With the introduction of Artificial Intelligence (AI) and related
technologies in our daily lives, fear and anxiety about their misuse as well as
the hidden biases in their creation have led to a demand for regulation to
address such issues. Yet blindly regulating an innovation process that is not
well understood may stifle this process and reduce the benefits that society may
gain from the generated technology, even under the best intentions. In this
paper, starting from a baseline model that captures the fundamental dynamics of
a race for domain supremacy using AI technology, we demonstrate how socially
unwanted outcomes may be produced when sanctioning is applied unconditionally
to risk-taking, i.e. potentially unsafe, behaviours. As an alternative to
resolve the detrimental effect of over-regulation, we propose a voluntary
commitment approach wherein technologists have the freedom of choice between
independently pursuing their course of actions or establishing binding
agreements to act safely, with sanctioning of those that do not abide by what
they pledged. Overall, this work reveals for the first time how voluntary
commitments, with sanctions either by peers or an institution, lead to
socially beneficial outcomes in all envisageable scenarios in a short-term race
towards domain supremacy through AI technology. These results are directly
relevant for the design of governance and regulatory policies that aim to
ensure an ethical and responsible AI technology development process.
|
Chinese pre-trained language models usually process text as a sequence of
characters, while ignoring more coarse granularity, e.g., words. In this work,
we propose a novel pre-training paradigm for Chinese -- Lattice-BERT, which
explicitly incorporates word representations along with characters and thus can
model a sentence in a multi-granularity manner. Specifically, we construct a
lattice graph from the characters and words in a sentence and feed all these
text units into transformers. We design a lattice position attention mechanism
to exploit the lattice structures in self-attention layers. We further propose
a masked segment prediction task to push the model to learn from rich but
redundant information inherent in lattices, while avoiding learning unexpected
tricks. Experiments on 11 Chinese natural language understanding tasks show
that our model brings an average improvement of 1.5% under the 12-layer
setting, achieving a new state of the art among base-size models on the CLUE
benchmarks. Further analysis shows that Lattice-BERT can harness the lattice
structures, and the improvement comes from the exploration of redundant
information and multi-granularity representations. Our code will be available
at https://github.com/alibaba/pretrained-language-models/LatticeBERT.
|
We address the problem of novel view synthesis (NVS) from a few sparse source
view images. Conventional image-based rendering methods estimate scene geometry
and synthesize novel views in two separate steps. However, erroneous geometry
estimation will decrease NVS performance as view synthesis highly depends on
the quality of estimated scene geometry. In this paper, we propose an
end-to-end NVS framework to eliminate the error propagation issue. To be
specific, we construct a volume under the target view and design a source-view
visibility estimation (SVE) module to determine the visibility of the
target-view voxels in each source view. Next, we aggregate the visibility of
all source views to achieve a consensus volume. Each voxel in the consensus
volume indicates a surface existence probability. Then, we present a soft
ray-casting (SRC) mechanism to find the front-most surface in the target view
(i.e., depth). Specifically, our SRC traverses the consensus volume along
viewing rays and then estimates a depth probability distribution. We then warp
and aggregate source view pixels to synthesize a novel view based on the
estimated source-view visibility and target-view depth. Finally, our network is
trained in an end-to-end self-supervised fashion, thus significantly
alleviating error accumulation in view synthesis. Experimental results
demonstrate that our method generates novel views of higher quality than the
state of the art.
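An illustrative numpy sketch of the soft ray-casting step (names, shapes, and the normalization are assumptions, not the paper's code): per-voxel surface probabilities along each ray are converted into a depth probability distribution and reduced to an expected depth.

```python
# Soft ray-casting sketch: "first surface" probabilities along each viewing ray.
import numpy as np

def soft_ray_cast(surface_prob, depth_values):
    # surface_prob: (num_rays, num_samples), consensus probabilities ordered near-to-far
    # depth_values: (num_samples,), the depth of each sample along the ray
    trans = np.cumprod(1.0 - surface_prob + 1e-8, axis=1)         # prob. of not having hit a surface yet
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=1)
    depth_pdf = surface_prob * trans                              # prob. this sample is the first surface
    depth_pdf /= depth_pdf.sum(axis=1, keepdims=True) + 1e-8      # normalize per ray
    return (depth_pdf * depth_values).sum(axis=1)                 # expected (soft) depth

probs = np.array([[0.05, 0.1, 0.8, 0.9, 0.2]])                    # one ray, five samples
print(soft_ray_cast(probs, np.linspace(1.0, 3.0, 5)))
```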
|
In this paper, we present a first-order projection-free method, namely, the
universal conditional gradient sliding (UCGS) method, for computing
$\varepsilon$-approximate solutions to convex differentiable optimization
problems. For objective functions with H\"older continuous gradients, we show
that UCGS is able to terminate with $\varepsilon$-solutions with at most
$O((M_\nu D_X^{1+\nu}/{\varepsilon})^{2/(1+3\nu)})$ gradient evaluations and
$O((M_\nu D_X^{1+\nu}/{\varepsilon})^{4/(1+3\nu)})$ linear objective
optimizations, where $\nu\in (0,1]$ and $M_\nu>0$ are the exponent and constant
of the H\"older condition. Furthermore, UCGS is able to perform such
computations without requiring any specific knowledge of the smoothness
information $\nu$ and $M_\nu$. In the weakly smooth case when $\nu\in (0,1)$,
both complexity results improve the current state-of-the-art $O((M_\nu
D_X^{1+\nu}/{\varepsilon})^{1/\nu})$ results on first-order projection-free
methods achieved by the conditional gradient method. Within the class of
sliding-type algorithms, to the best of our knowledge, this is the first time a
sliding-type algorithm is able to improve not only the gradient complexity but
also the overall complexity for computing an approximate solution. In the
smooth case when $\nu=1$, UCGS matches the state-of-the-art complexity result
but adds more features allowing for practical implementation.
|
Let $M\stackrel{\rho_0}{\curvearrowleft}S$ be a $C^\infty$ locally free
action of a connected simply connected solvable Lie group $S$ on a closed
manifold $M$. Roughly speaking, $\rho_0$ is parameter rigid if any $C^\infty$
locally free action of $S$ on $M$ having the same orbits as $\rho_0$ is
$C^\infty$ conjugate to $\rho_0$. In this paper we prove two types of result on
parameter rigidity.
First let $G$ be a connected semisimple Lie group with finite center of real
rank at least $2$ without compact factors nor simple factors locally isomorphic
to $\mathrm{SO}_0(n,1)$ $(n\geq2)$ or $\mathrm{SU}(n,1)$ $(n\geq2)$, and let
$\Gamma$ be an irreducible cocompact lattice in $G$. Let $G=KAN$ be an Iwasawa
decomposition. We prove that the action $\Gamma\backslash G\curvearrowleft AN$
by right multiplication is parameter rigid. One of the three main ingredients
of the proof is the rigidity theorems of Pansu and Kleiner-Leeb on the
quasiisometries of Riemannian symmetric spaces of noncompact type.
Secondly we show, if $M\stackrel{\rho_0}{\curvearrowleft}S$ is parameter
rigid, then the zeroth and first cohomology of the orbit foliation of $\rho_0$
with certain coefficients must vanish. This is a partial converse to the
results in the author's [Vanishing of cohomology and parameter rigidity of
actions of solvable Lie groups. Geom. Topol. 21(1) (2017), 157-191], where we
saw sufficient conditions for parameter rigidity in terms of vanishing of the
first cohomology with various coefficients.
|
This letter studies an unmanned aerial vehicle (UAV) aided multicasting (MC)
system, which is enabled by simultaneous free space optics (FSO) backhaul and
power transfer. The UAV applies the power-splitting technique to harvest
wireless power and decode backhaul information simultaneously over the FSO
link, while at the same time using the harvested power to multicast the
backhauled information over the radio frequency (RF) links to multiple ground
users (GUs). We derive the UAV's achievable MC rate under the Poisson point
process (PPP) based GU distribution. By jointly designing the FSO and RF links
and the UAV altitude, we maximize the system-level energy efficiency (EE),
which can be equivalently expressed as the ratio of the UAV's MC rate over the
optics base station (OBS) transmit power, subject to the UAV's sustainable
operation and reliable backhauling constraints. Due to the non-convexity of
this problem, we propose suboptimal solutions with low complexity. Numerical
results show the close-to-optimal EE performance by properly balancing the
power-rate tradeoff between the FSO power and the MC data transmissions.
|
The first mobile camera phone was sold only 20 years ago, when taking
pictures with one's phone was an oddity, and sharing pictures online was
unheard of. Today, the smartphone is more camera than phone. How did this
happen? This transformation was enabled by advances in computational
photography - the science and engineering of making great images from small
form-factor mobile cameras. Modern algorithmic and computing advances, including
machine learning, have changed the rules of photography, bringing to it new
modes of capture, post-processing, storage, and sharing. In this paper, we give
a brief history of mobile computational photography and describe some of the
key technological components, including burst photography, noise reduction, and
super-resolution. At each step, we may draw naive parallels to the human visual
system.
|
We study dynamic clustering problems from the perspective of online learning.
We consider an online learning problem, called \textit{Dynamic $k$-Clustering},
in which $k$ centers are maintained in a metric space over time (centers may
change positions) so that a dynamically changing set of $r$ clients is served
in the best possible way. The connection cost at round $t$ is given by the
\textit{$p$-norm} of the vector consisting of the distances of each client to
its closest center at round $t$, for some $p\geq 1$ or $p = \infty$. We present
a \textit{$\Theta\left( \min(k,r) \right)$-regret} polynomial-time online
learning algorithm and show that, under some well-established computational
complexity conjectures, \textit{constant-regret} cannot be achieved in
polynomial time. In addition to the efficient solution of Dynamic
$k$-Clustering, our work contributes to the long line of research on
combinatorial online learning.
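A small sketch of the connection cost defined above, assuming a Euclidean metric and synthetic client and center positions.

```python
# Connection cost at a single round: p-norm of the client-to-closest-center distances.
import numpy as np

def connection_cost(clients, centers, p=2):
    # clients: (r, d), centers: (k, d)
    dists = np.linalg.norm(clients[:, None, :] - centers[None, :, :], axis=-1)
    closest = dists.min(axis=1)                   # distance of each client to its closest center
    return np.max(closest) if np.isinf(p) else np.sum(closest ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
clients, centers = rng.random((7, 2)), rng.random((3, 2))
print(connection_cost(clients, centers, p=1), connection_cost(clients, centers, p=np.inf))
```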
|
Photonic quantum networking relies on entanglement distribution between
distant nodes, typically realized by swapping procedures. However, entanglement
swapping is a demanding task in practice, mainly because of limited
effectiveness of entangled photon sources and Bell-state measurements necessary
to realize the process. Here we experimentally activate a remote distribution
of two-photon polarization entanglement which supersedes the need for initial
entangled pairs and traditional Bell-state measurements. This alternative
procedure is accomplished thanks to the controlled spatial indistinguishability
of four independent photons in three separated nodes of the network, which
enables us to perform localized product-state measurements on the central node
acting as a trigger. This experiment proves that the inherent
indistinguishability of identical particles supplies new standards for feasible
quantum communication in multinode photonic quantum networks.
|
Adapting the idea of training CartPole with a Deep Q-learning agent, we are
able to obtain a promising result that prevents the pole from falling down. The
capacity of reinforcement learning (RL) to learn from the interaction between
the environment and the agent provides an optimal control strategy. In this
paper, we aim to solve the classic pendulum swing-up problem, making the learned
pendulum reach and remain balanced in the upright position. The Deep
Deterministic Policy Gradient (DDPG) algorithm is introduced to operate over the
continuous action domain in this problem. Salient results for the optimal
pendulum policy are demonstrated by the increasing average return, the
decreasing loss, and a live video accompanying the code.
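For concreteness, here is a bare-bones sketch of the DDPG update used for such continuous-action control (network sizes, learning rates, and the soft-update rate are assumptions; exploration noise, the replay buffer, and the environment loop are omitted).

```python
# DDPG update sketch: critic regression against a target network, deterministic
# policy gradient for the actor, and Polyak averaging of the targets.
import torch
import torch.nn as nn

obs_dim, act_dim, act_limit = 3, 1, 2.0          # Pendulum-like shapes (assumed)

def mlp(sizes, out_act=nn.Identity):
    layers = []
    for i in range(len(sizes) - 1):
        act = nn.ReLU if i < len(sizes) - 2 else out_act
        layers += [nn.Linear(sizes[i], sizes[i + 1]), act()]
    return nn.Sequential(*layers)

actor = mlp([obs_dim, 64, 64, act_dim], nn.Tanh)         # outputs actions in [-1, 1]
critic = mlp([obs_dim + act_dim, 64, 64, 1])
actor_targ = mlp([obs_dim, 64, 64, act_dim], nn.Tanh)
critic_targ = mlp([obs_dim + act_dim, 64, 64, 1])
actor_targ.load_state_dict(actor.state_dict())
critic_targ.load_state_dict(critic.state_dict())
pi_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
q_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(batch, gamma=0.99, tau=0.005):
    o, a, r, o2, done = batch                             # tensors sampled from a replay buffer
    with torch.no_grad():                                 # Bellman backup with target networks
        a2 = act_limit * actor_targ(o2)
        q_targ = critic_targ(torch.cat([o2, a2], dim=-1)).squeeze(-1)
        backup = r + gamma * (1 - done) * q_targ
    q = critic(torch.cat([o, a], dim=-1)).squeeze(-1)
    q_loss = ((q - backup) ** 2).mean()                   # critic regression loss
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    pi_loss = -critic(torch.cat([o, act_limit * actor(o)], dim=-1)).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step() # deterministic policy gradient

    with torch.no_grad():                                 # Polyak averaging of target networks
        for p, p_t in zip(actor.parameters(), actor_targ.parameters()):
            p_t.mul_(1 - tau).add_(tau * p)
        for p, p_t in zip(critic.parameters(), critic_targ.parameters()):
            p_t.mul_(1 - tau).add_(tau * p)

# Example call with a random batch; a real run samples transitions from the environment.
b = 32
update((torch.randn(b, obs_dim), torch.rand(b, act_dim) * 4 - 2,
        torch.randn(b), torch.randn(b, obs_dim), torch.zeros(b)))
```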
|
We report the detection of [O I]145.5um in the BR 1202-0725 system, a compact
group at z=4.7 consisting of a quasar (QSO), a submillimeter-bright galaxy
(SMG), and three faint Lya emitters. By taking into account the previous
detections and upper limits, the [O I]/[C II] line ratios of the now five known
high-z galaxies are higher than or on the high-end of the observed values in
local galaxies ([O I]/[C II]$\gtrsim$0.13). The high [O I]/[C II] ratios and
the joint analysis with the previous detection of [N II] lines for both the QSO
and the SMG suggest the presence of warm and dense neutral gas in these highly
star-forming galaxies. This is further supported by new CO (12-11) line
detections and a comparison with cosmological simulations. There is a possible
positive correlation between the [NII]122/205 line ratio and the [O I]/[C II]
ratio when all local and high-z sources are taken into account, indicating that
the denser the ionized gas, the denser and warmer the neutral gas (or vice
versa). The detection of the [O I] line in the BR1202-0725 system with a
relatively short ALMA integration time demonstrates the great potential of this
line as a dense gas tracer for high-z galaxies.
|
We study the nonsingular black hole in an Anti-de Sitter background, taking the
negative cosmological constant as the pressure of the system. We investigate
the horizon structure and find the critical values $m_0$ and $\tilde{k}_0$,
such that $m>m_0$ (or $\tilde{k}<\tilde{k}_0$) corresponds to a black hole
solution with two horizons, namely the Cauchy horizon $x_-$ and the event
horizon $x_+$. For $m=m_0$ (or $\tilde{k}=\tilde{k}_0$), there exists an
extremal black hole with degenerate horizon $x_0=x_{\pm}$, and for $m<m_0$ (or
$\tilde{k}>\tilde{k}_0$), no black hole solution exists. In turn, we calculate
the thermodynamical properties and, by observing the behaviour of the Gibbs free
energy and specific heat, we find that this black hole solution exhibits
first-order (small-to-large black hole) and second-order phase transitions.
Further, we study the $P-V$ criticality of the system and then calculate the
critical exponents, showing that they are the same as those of the Van der
Waals fluid.
|
In the context of autonomous vehicles, one of the most crucial tasks is to
estimate the risk of the undertaken action. While navigating in complex urban
environments, the Bayesian occupancy grid is one of the most popular types of
maps, where the information of occupancy is stored as the probability of
collision. Although widely used, this kind of representation is not well suited
for risk assessment: because of its discrete nature, the probability of
collision becomes dependent on the tessellation size. Therefore, risk
assessments on Bayesian occupancy grids cannot yield risks with meaningful
physical units. In this article, we propose an alternative framework called
Dynamic Lambda-Field that is able to assess generic physical risks in dynamic
environments without being dependent on the tessellation size. Using our
framework, we are able to plan safe trajectories where the risk function can be
adjusted depending on the scenario. We validate our approach with quantitative
experiments, showing the convergence speed of the grid and that the framework
is suitable for real-world scenarios.
|
The use of crowdworkers in NLP research is growing rapidly, in tandem with
the exponential increase in research production in machine learning and AI.
Ethical discussion regarding the use of crowdworkers within the NLP research
community is typically confined in scope to issues related to labor conditions
such as fair pay. We draw attention to the lack of ethical considerations
related to the various tasks performed by workers, including labeling,
evaluation, and production. We find that the Final Rule, the common ethical
framework used by researchers, did not anticipate the use of online
crowdsourcing platforms for data collection, resulting in gaps between the
spirit and practice of human-subjects ethics in NLP research. We enumerate
common scenarios where crowdworkers performing NLP tasks are at risk of harm.
We thus recommend that researchers evaluate these risks by considering the
three ethical principles set forth in the Belmont Report. We also clarify some
common misconceptions regarding the Institutional Review Board (IRB)
application. We hope this paper will serve to reopen the discussion within our
community regarding the ethical use of crowdworkers.
|
Motivated by applications to single-particle cryo-electron microscopy
(cryo-EM), we study several problems of function estimation in a low SNR
regime, where samples are observed under random rotations of the function
domain. In a general framework of group orbit estimation with linear
projection, we describe a stratification of the Fisher information eigenvalues
according to a sequence of transcendence degrees in the invariant algebra, and
relate critical points of the log-likelihood landscape to a sequence of
method-of-moments optimization problems. This extends previous results for a
discrete rotation group without projection.
We then compute these transcendence degrees and the forms of these moment
optimization problems for several examples of function estimation under $SO(2)$
and $SO(3)$ rotations, including a simplified model of cryo-EM as introduced by
Bandeira, Blum-Smith, Kileel, Perry, Weed, and Wein. For several of these
examples, we affirmatively resolve numerical conjectures that
$3^\text{rd}$-order moments are sufficient to locally identify a generic signal
up to its rotational orbit.
For low-dimensional approximations of the electric potential maps of two
small protein molecules, we empirically verify that the noise-scalings of the
Fisher information eigenvalues conform with these theoretical predictions over
a range of SNR, in a model of $SO(3)$ rotations without projection.
|
The recently introduced harmonic resolvent framework is concerned with the
study of the input-output dynamics of nonlinear flows in the proximity of a
known time-periodic orbit. These dynamics are governed by the harmonic
resolvent operator, which is a linear operator in the frequency domain whose
singular value decomposition sheds light on the dominant input-output
structures of the flow. Although the harmonic resolvent is a mathematically
well-defined operator, the numerical computation of its singular value
decomposition requires inverting a matrix that becomes exactly singular as the
periodic orbit approaches an exact solution of the nonlinear governing
equations. The very poor condition properties of this matrix hinder the
convergence of classical Krylov solvers, even in the presence of
preconditioners, thereby increasing the computational cost required to perform
the harmonic resolvent analysis. In this paper we show that a suitable
augmentation of the (nearly) singular matrix removes the singularity, and we
provide a lower bound for the smallest singular value of the augmented matrix.
We also show that the desired decomposition of the harmonic resolvent can be
computed using the augmented matrix, whose improved condition properties lead
to a significant speedup in the convergence of classical iterative solvers. We
demonstrate this simple, yet effective, computational procedure on the
Kuramoto-Sivashinsky equation in the proximity of an unstable time-periodic
orbit.
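As a generic numerical illustration of bordering a nearly singular matrix (the paper's specific augmentation and its lower bound are as described above; the random matrix below is only a stand-in), appending the approximate left and right null vectors restores a benign condition number.

```python
# Bordering a nearly singular matrix with its approximate null vectors.
import numpy as np

rng = np.random.default_rng(0)
n = 200
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.linspace(1.0, 2.0, n); s[-1] = 1e-12          # one nearly zero singular value
A = U @ np.diag(s) @ V.T

u, v = U[:, -1], V[:, -1]                            # approximate left/right null vectors
A_aug = np.block([[A, u[:, None]], [v[None, :], np.zeros((1, 1))]])

print("cond(A)     =", np.linalg.cond(A))
print("cond(A_aug) =", np.linalg.cond(A_aug))
```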
|
The objective of this report is to review existing enterprise blockchain
technologies - EOSIO powered systems, Hyperledger Fabric and Besu, ConsenSys
Quorum, R3 Corda and Ernst and Young's Nightfall - that provide data privacy
while leveraging the data integrity benefits of blockchain. By reviewing and
comparing how and how well these technologies achieve data privacy, a snapshot
is captured of the industry's current best practices and data privacy models.
Major enterprise technologies are contrasted in parallel to EOSIO to better
understand how EOSIO can evolve to meet the trends seen in enterprise
blockchain privacy. The following strategies and trends were generally observed
in these technologies:
Cryptography: the hashing algorithm was found to be the most used
cryptographic primitive in enterprise or changeover privacy solutions.
Coordination via on-chain contracts: a common strategy was to use a shared
public ledger to coordinate data privacy groups and, more generally, to manage
identities and access control.
Transaction and contract code sharing: there was a variety of different
levels of privacy around the business logic (smart contract code) visibility.
Some solutions only allowed authorised peers to view code while others made
this accessible to everybody that was a member of the shared ledger.
Data migrations for data privacy applications: significant challenges exist
when using cryptographically stored data in terms of being able to run system
upgrades.
Multiple blockchain ledgers for data privacy: solutions attempted to create a
new private blockchain for every private data relationship which was eventually
abandoned in favour of one shared ledger with private data
collections/transactions that were anchored to the ledger with a hash in order
to improve scaling.
|
Wavefront aberrations can reflect the imaging quality of high-performance
optical systems better than geometric aberrations. Although laser
interferometers have emerged as the main tool for measurement of transmitted
wavefronts, their application is greatly limited, as they are typically
designed for operation at specific wavelengths. In a previous study, we
proposed a method for determining the wavefront transmitted by an optical
system at any wavelength in a certain band. Although this method works well for
most monochromatic systems, where the image plane is at the focal point for the
transmission wavelength, for general multi-color systems, it is more practical
to measure the wavefront at the defocused image plane. Hence, in this paper, we
have developed a complete method for determining transmitted wavefronts in a
broad bandwidth at any defocused position, enabling wavefront measurements for
multi-color systems. Here, we assume that in small ranges, the Zernike
coefficients have a linear relationship with position, such that Zernike
coefficients at defocused positions can be derived from measurements performed
at the focal point. We conducted experiments to verify these assumptions,
validating the new method. The experimental setup has been improved so that it
can handle multi-color systems, and a detailed experimental process is
summarized. With this technique, application of broadband transmission
wavefront measurement can be extended to most general optical systems, which is
of great significance for characterization of achromatic and apochromatic
optical lenses.
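A toy numerical illustration of the stated linearity assumption (all numbers are invented): each Zernike coefficient is fitted linearly against the measurement position near focus and extrapolated to a defocused plane.

```python
# Per-coefficient linear fit of Zernike coefficients versus defocus position.
import numpy as np

positions = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])            # positions relative to focus (assumed units)
true_slope = np.array([0.10, 0.03, -0.05])                   # three example coefficients
true_intercept = np.array([0.05, -0.02, 0.01])
coeffs = true_intercept + positions[:, None] * true_slope    # rows: positions, columns: coefficients
coeffs += np.random.default_rng(0).normal(0.0, 1e-3, coeffs.shape)   # measurement noise

slope, intercept = np.polyfit(positions, coeffs, deg=1)      # one linear fit per coefficient column
z_defocused = slope * 0.35 + intercept                       # extrapolate to a defocused plane at +0.35
print(z_defocused)                                           # compare with true_intercept + 0.35 * true_slope
```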
|
We study for the first time the $p\Sigma^-\to K^-d$ and $K^-d\to p\Sigma^-$
reactions close to threshold and show that they are driven by a triangle
mechanism, with the $\Lambda(1405)$, a proton and a neutron as intermediate
states, which develops a triangle singularity close to the $\bar{K}d$
threshold. We find that a mechanism involving virtual pion exchange and the
$K^-p\to\pi^+\Sigma^-$ amplitude dominates over another one involving kaon
exchange and the $K^-p\to K^-p$ amplitude. Moreover, of the two $\Lambda(1405)$
states, the one with higher mass around $1420$ MeV, gives the largest
contribution to the process. We show that the cross section, well within
measurable range, is very sensitive to different models that, while reproducing
$\bar{K}N$ observables above threshold, provide different extrapolations of the
$\bar{K}N$ amplitudes below threshold. The observables of this reaction will
provide new constraints on the theoretical models, leading to more reliable
extrapolations of the $\bar{K}N$ amplitudes below threshold and to more
accurate predictions of the $\Lambda(1405)$ state of lower mass.
|
We present a newly enlarged census of the compact radio population towards
the Orion Nebula Cluster (ONC) using high-sensitivity continuum maps (3-10
$\mu$Jy bm$^{-1}$) from a total of $\sim30$ h centimeter-wavelength
observations over an area of $\sim$20$'\times20'$ obtained in the C-band (4$-$8
GHz) with the Karl G. Jansky Very Large Array (VLA) in its high-resolution
A-configuration. We thus complement our previous deep survey of the innermost
areas of the ONC, now covering the field of view of the Chandra Orion
Ultra-deep Project (COUP). Our catalog contains 521 compact radio sources of
which 198 are new detections. Overall, we find that 17% of the (mostly stellar)
COUP sources have radio counterparts, while 53% of the radio sources have COUP
counterparts. Most notably, the radio detection fraction of X-ray sources is
higher in the inner cluster and almost constant for $r>3'$ (0.36 pc) from
$\theta^1$ Ori C suggesting a correlation between the radio emission mechanism
of these sources and their distance from the most massive stars at the center
of the cluster, for example due to increased photoionisation of circumstellar
disks. The combination with our previous observations four years prior led to
the discovery of fast proper motions of up to $\sim$373 km s$^{-1}$ from faint
radio sources associated with ejecta of the OMC1 explosion. Finally, we search
for strong radio variability. We found changes in flux density by a factor of
$\lesssim$5 within our observations and a few sources with changes by a factor
$>$10 on long timescales of a few years.
|
With the advent of the Internet-of-Things (IoT) era, the ever-increasing
number of devices and emerging applications have triggered the need for
ubiquitous connectivity and more efficient computing paradigms. These stringent
demands have posed significant challenges to the current wireless networks and
their computing architectures. In this article, we propose a high-altitude
platform (HAP) network-enabled edge computing paradigm to tackle the key issues
of massive IoT connectivity. Specifically, we first provide a comprehensive
overview of the recent advances in non-terrestrial network-based edge computing
architectures. Then, the limitations of the existing solutions are further
summarized from the perspectives of the network architecture, random access
procedure, and multiple access techniques. To overcome the limitations, we
propose a HAP-enabled aerial cell-free massive multiple-input multiple-output
network to realize the edge computing paradigm, where multiple HAPs cooperate
via the edge servers to serve IoT devices. For the case of a massive number of
devices, we further adopt a grant-free massive access scheme to guarantee
low-latency and high-efficiency massive IoT connectivity to the network.
Besides, a case study is provided to demonstrate the effectiveness of the
proposed solution. Finally, to shed light on the future research directions of
HAP network-enabled edge computing paradigms, the key challenges and open
issues are discussed.
|
Supervised machine learning, in which models are automatically derived from
labeled training data, is only as good as the quality of that data. This study
builds on prior work that investigated to what extent 'best practices' around
labeling training data were followed in applied ML publications within a single
domain (social media platforms). In this paper, we expand by studying
publications that apply supervised ML in a far broader spectrum of disciplines,
focusing on human-labeled data. We report to what extent a random sample of ML
application papers across disciplines give specific details about whether best
practices were followed, while acknowledging that a greater range of
application fields necessarily produces greater diversity of labeling and
annotation methods. Because much of machine learning research and education
only focuses on what is done once a "ground truth" or "gold standard" of
training data is available, it is especially relevant to discuss issues around
the equally-important aspect of whether such data is reliable in the first
place. This determination becomes increasingly complex when applied to a
variety of specialized fields, as labeling can range from a task requiring
little-to-no background knowledge to one that must be performed by someone with
career expertise.
|
Conversational Artificial Intelligence (AI) used in industry settings can be
trained to closely mimic human behaviors, including lying and deception.
However, lying is often a necessary part of negotiation. To address this, we
develop a normative framework for when it is ethical or unethical for a
conversational AI to lie to humans, based on whether there is what we call
"invitation of trust" in a particular scenario. Importantly, cultural norms
play an important role in determining whether there is invitation of trust
across negotiation settings, and thus an AI trained in one culture may not be
generalizable to others. Moreover, individuals may have different expectations
regarding the invitation of trust and propensity to lie for human vs. AI
negotiators, and these expectations may vary across cultures as well. Finally,
we outline how a conversational chatbot can be trained to negotiate ethically
by applying autoregressive models to large dialog and negotiations datasets.
|
Technology has the opportunity to assist older adults as they age in place,
coordinate caregiving resources, and meet unmet needs through access to
resources. Currently, older adults use consumer technologies to support
everyday life, however these technologies are not always accessible or as
useful as they can be. Indeed, industry has attempted to create smart home
technologies with older adults as a target user group, however these solutions
are often more focused on the technical aspects and are short lived. In this
paper, we advocate for older adults being involved in the design process - from
initial ideation to product development to deployment. We encourage federally
funded researchers and industry to create compensated, diverse older adult
advisory boards to address stereotypes about aging while ensuring their needs
are considered.
We envision artificial intelligence systems that augment resources instead of
replacing them - especially in under-resourced communities. Older adults rely
on their caregiver networks and community organizations for social, emotional,
and physical support; thus, AI should be used to coordinate resources better
and lower the burden of connecting with these resources. Although
sociotechnical smart systems can help identify needs of older adults, the lack
of affordable research infrastructure and translation of findings into consumer
technology perpetuates inequities in designing for diverse older adults. In
addition, there is a disconnect between the creation of smart sensing systems
and creating understandable, actionable data for older adults and caregivers to
utilize. We ultimately advocate for a well-coordinated research effort across
the United States that connects older adults, caregivers, community
organizations, and researchers together to catalyze innovative and practical
research for all stakeholders.
|
Let $k \geq 1$ be an integer and $n=3k-1$. Let $\mathbb{Z}_n$ denote the
additive group of integers modulo $n$ and let $C$ be the subset of
$\mathbb{Z}_n$ consisting of the elements congruent to 1 modulo 3. The Cayley
graph $Cay(\mathbb{Z}_n; C)$ is known as the Andr\'asfai graph And($k$). In
this note, we wish to determine the automorphism group of this graph. We will
show that $Aut(And(k))$ is isomorphic to the dihedral group $\mathbb{D}_{2n}$
of order $2n$.
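For small $k$, the claim can be checked by brute force (a sanity check, not a proof); the sketch below builds $Cay(\mathbb{Z}_n; C)$ for $k=3$ and counts the adjacency-preserving vertex permutations, expecting $2n=16$.

```python
# Brute-force count of the automorphisms of And(k) for small k.
from itertools import permutations

k = 3
n = 3 * k - 1
C = {c for c in range(1, n) if c % 3 == 1}                     # connection set, closed under negation
edges = {frozenset((u, (u + c) % n)) for u in range(n) for c in C}

count = sum(
    1 for p in permutations(range(n))
    if all(frozenset((p[u], p[v])) in edges for u, v in map(tuple, edges))
)
print(n, count)   # expect count == 2 * n, the order of the dihedral group D_{2n}
```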
|
The Landau form of the Fokker-Planck equation is the gold standard for
plasmas dominated by small-angle collisions; however, its $O(N^2)$ work
complexity has limited its practicality. This paper extends previous work on a
fully conservative finite element method for this Landau collision operator
with adaptive mesh refinement, optimized for vector machines, by porting the
algorithm to the Cuda programming model with implementations in Cuda and
Kokkos, and by reporting results within a Vlasov-Maxwell-Landau model of a
plasma thermal quench. With new optimizations of the Landau kernel and ports of
this kernel, the sparse matrix assembly and algebraic solver to Cuda, the cost
of a well resolved Landau collision time advance is shown to be practical for
kinetic plasma applications. This fully implicit Landau time integrator and the
plasma quench model are available in the PETSc (Portable, Extensible Toolkit
for Scientific Computation) numerical library.
|
Let $X$ and $Y$ be two smooth manifolds of the same dimension. It was proved
by Seeger, Sogge and Stein in \cite{SSS} that the Fourier integral operators
with real non-degenerate phase functions in the class $I^{\mu}_1(X,Y;\Lambda),$
$\mu\leq -(n-1)/2,$ are bounded from $H^1$ to $L^1.$ The sharpness of the order
$-(n-1)/2,$ for any elliptic operator was also proved in \cite{SSS} and
extended to other types of canonical relations in \cite{Ruzhansky1999}. That
the operators in the class $I^{\mu}_1(X,Y;\Lambda),$ $\mu\leq -(n-1)/2,$
satisfy the weak (1,1) inequality was proved by Tao \cite{Tao:weak11}. In this
note, we prove that the weak (1,1) inequality for the order $ -(n-1)/2$ is
sharp for any elliptic Fourier integral operator, as well as its versions for
canonical relations satisfying additional rank conditions.
|
A novel data-processing method was developed to facilitate scintillation
detector characterization. Combined with fan-beam calibration, this method can
be used to quickly and conveniently calibrate gamma-ray detectors for SPECT,
PET, homeland security or astronomy. Compared with traditional calibration
methods, this new technique can accurately calibrate a photon-counting
detector, including DOI information, with greatly reduced time. The enabling
part of this technique is fan-beam scanning combined with a data-processing
strategy called the common-data subset (CDS) method, which was used to
synthesize the detector's mean detector response functions (MDRFs). Using this
approach, $2N$ scans ($N$ in the x and $N$ in the y direction) are necessary to
calibrate a 2D detector, as opposed to $N^2$ scans with a pencil beam. For
a 3D detector calibration, only $3N$ scans are necessary to achieve the 3D
detector MDRFs that include DOI information. Moreover, this calibration
technique can be used for detectors with complicated or irregular MDRFs. We
present both Monte-Carlo simulations and experimental results that support the
feasibility of this method.
|
While numerous attempts have been made to jointly parse syntax and semantics,
high performance in one domain typically comes at the price of performance in
the other. This trade-off contradicts the large body of research focusing on
the rich interactions at the syntax-semantics interface. We explore multiple
model architectures which allow us to exploit the rich syntactic and semantic
annotations contained in the Universal Decompositional Semantics (UDS) dataset,
jointly parsing Universal Dependencies and UDS to obtain state-of-the-art
results in both formalisms. We analyze the behaviour of a joint model of syntax
and semantics, finding patterns supported by linguistic theory at the
syntax-semantics interface. We then investigate to what degree joint modeling
generalizes to a multilingual setting, where we find similar trends across 8
languages.
|
This paper considers the Gaussian multiple-access channel (MAC) in the
asymptotic regime where the number of users grows linearly with the code
length. We propose efficient coding schemes based on random linear models with
approximate message passing (AMP) decoding and derive the asymptotic error rate
achieved for a given user density, user payload (in bits), and user energy. The
tradeoff between energy-per-bit and achievable user density (for a fixed user
payload and target error rate) is studied, and it is demonstrated that in the
large system limit, a spatially coupled coding scheme with AMP decoding
achieves near-optimal tradeoffs for a wide range of user densities.
Furthermore, in the regime where the user payload is large, we also study the
spectral efficiency versus energy-per-bit tradeoff and discuss methods to
reduce decoding complexity at large payload sizes.
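As a generic illustration of AMP in a random linear model (not the coding scheme analyzed here; problem sizes, sparsity, and the threshold schedule are assumptions), the sketch below recovers a sparse vector with a soft-thresholding denoiser and the Onsager correction term.

```python
# Approximate message passing for y = A x + noise with a soft-threshold denoiser.
import numpy as np

rng = np.random.default_rng(0)
N, n, k = 1000, 500, 50                        # signal length, measurements, nonzeros
A = rng.normal(0, 1 / np.sqrt(n), (n, N))
x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=n)

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)
x, z = np.zeros(N), y.copy()
for _ in range(30):
    tau = 1.3 * np.sqrt(np.mean(z ** 2))       # threshold proportional to the residual level (tuning assumed)
    x_new = soft(x + A.T @ z, tau)             # denoise the pseudo-data
    onsager = (z / (n / N)) * np.mean(np.abs(x_new) > 0)    # Onsager correction term
    z = y - A @ x_new + onsager
    x = x_new
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```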
|
Computational thinking has been a recent focus of education research within
the sciences. However, there is a dearth of scholarly literature on how best to
teach and to assess this topic, especially in disciplinary science courses.
Physics classes with computation integrated into the curriculum are a fitting
setting for investigating computational thinking. In this paper, we lay the
foundation for exploring computational thinking in introductory physics
courses. First, we review relevant literature to synthesize a set of potential
learning goals that students could engage in when working with computation. The
computational thinking framework that we have developed features 14 practices
contained within 6 different categories. We use in-class video data as
existence proofs of the computational thinking practices proposed in our
framework. In doing this work, we hope to provide ways for teachers to assess
their students' development of computational thinking, while also giving
physics education researchers some guidance on how to study this topic in
greater depth.
|
In this study, an algorithm for blind and automatic modulation classification
is proposed. It combines machine learning and signal feature extraction to
recognize a diverse range of modulations at low signal-to-noise ratio (SNR).
The presented algorithm contains four steps. First, it uses spectrum analysis
to branch modulated signals according to regular and irregular spectral
characteristics. Second, a nonlinear soft-margin support vector (NS SVM)
problem is applied to the received signal, and its symbols are classified into
correct and incorrect (support vector) symbols. The NS SVM step reduces the
effect of physical-layer noise on the modulated signal. After that, k-center
clustering finds the center of each class. Finally, the estimated scatter
diagram is correlated with pre-saved ideal scatter diagrams of the candidate
modulations, and the correlation outcome is the classification result. For
further evaluation, the success rate, performance, and complexity are compared
with many published methods. The simulations show that the proposed algorithm
can classify modulated signals at lower SNR. For example, it can recognize
4-QAM at SNR = -4.2 dB and 4-FSK at SNR = 2.1 dB with a 99% success rate.
Moreover, owing to the use of a kernel function in the dual problem of the NS
SVM and feature-based functions, the proposed algorithm has low complexity and
simple implementation in practical settings.
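A loose illustrative sketch of the final two stages described above (clustering the received symbols, then matching the estimated cluster centers against stored ideal constellations); the spectrum branching and NS SVM stages are omitted, and the constellations and noise level are assumptions.

```python
# Cluster received symbols, then score candidate modulations by how well the
# cluster centers match the stored ideal constellation points.
import numpy as np
from sklearn.cluster import KMeans

ideal = {
    "4-QAM": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2),
    "BPSK": np.array([1 + 0j, -1 + 0j]),
}

rng = np.random.default_rng(0)
true = ideal["4-QAM"]
symbols = rng.choice(true, 2000) + 0.15 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))

def classify(symbols):
    pts = np.column_stack([symbols.real, symbols.imag])
    scores = {}
    for name, const in ideal.items():
        km = KMeans(n_clusters=const.size, n_init=10, random_state=0).fit(pts)
        centers = km.cluster_centers_[:, 0] + 1j * km.cluster_centers_[:, 1]
        scores[name] = -sum(np.min(np.abs(c - const)) for c in centers)   # closeness to ideal points
    return max(scores, key=scores.get)

print(classify(symbols))   # expected: "4-QAM"
```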
|
Jupiter family comets contribute a significant amount of debris to near-Earth
space. However, telescopic observations of these objects seem to suggest they
have short physical lifetimes. If this is true, the material generated will
also be short-lived, but fireball observation networks still detect material on
cometary orbits. This study examines centimeter-meter scale sporadic meteoroids
detected by the Desert Fireball Network from 2014-2020 originating from Jupiter
family comet-like orbits. Analyzing each event's dynamic history and physical
characteristics, we confidently determined whether they originated from the
main asteroid belt or the trans-Neptunian region. Our results indicate that
$<4\%$ of sporadic meteoroids on JFC-like orbits are genetically cometary. This
observation is statistically significant and shows that cometary material is
too friable to survive in near-Earth space. Even when considering shower
contributions, meteoroids on JFC-like orbits are primarily from the main-belt.
Thus, the presence of genuine cometary meteorites in terrestrial collections is
highly unlikely.
|
Leakage of data from publicly available Machine Learning (ML) models is an
area of growing significance as commercial and government applications of ML
can draw on multiple sources of data, potentially including users' and clients'
sensitive data. We provide a comprehensive survey of contemporary advances on
several fronts, covering involuntary data leakage which is natural to ML
models, potential malevolent leakage which is caused by privacy attacks, and
currently available defence mechanisms. We focus on inference-time leakage, as
the most likely scenario for publicly available models. We first discuss what
leakage is in the context of different data, tasks, and model architectures. We
then propose a taxonomy across involuntary and malevolent leakage, available
defences, followed by the currently available assessment metrics and
applications. We conclude with outstanding challenges and open questions,
outlining some promising directions for future research.
|
Fitting concentric geometric objects to digitized data is an important
problem in many areas such as iris detection, autonomous navigation, and
industrial robotics operations. There are two common approaches to fitting
geometric shapes to data: the geometric (iterative) approach and algebraic
(non-iterative) approach. The geometric approach is a nonlinear iterative
method that minimizes the sum of the squares of Euclidean distances of the
observed points to the ellipses and regarded as the most accurate method, but
it needs a good initial guess to improve the convergence rate. The algebraic
approach is based on minimizing the algebraic distances with some constraints
imposed on parametric space. Each algebraic method depends on the imposed
constraint, and it can be solved with the aid of the generalized eigenvalue
problem. Only a few methods in the literature have been developed to solve the
problem of concentric ellipses. Here we study the statistical properties of
existing methods by first establishing a general mathematical and statistical
framework for this problem. Using rigorous perturbation analysis, we derive the
variances and biases of each method under the small-sigma model. We also
develop new estimators, which can be used as reliable initial guesses for other
iterative methods. Then we compare the performance of each method according to
their theoretical accuracy. Not only do the methods described here outperform
other existing non-iterative methods, they are also quite robust against large
noise. These methods and their practical performances are assessed by a series
of numerical experiments on both synthetic and real data.
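For a single ellipse, the algebraic (non-iterative) approach can be illustrated by the direct least-squares fit of Fitzgibbon et al., which minimizes the algebraic distance subject to $4ac-b^2=1$ via a generalized eigenvalue problem; the concentric-ellipse estimators studied here impose different constraints, so this is only a reference sketch.

```python
# Algebraic single-ellipse fit via a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eig

def fit_ellipse(x, y):
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])   # design matrix
    S = D.T @ D                                                     # scatter matrix
    C = np.zeros((6, 6)); C[0, 2] = C[2, 0] = 2.0; C[1, 1] = -1.0   # encodes 4ac - b^2 = 1
    w, V = eig(S, C)
    best, best_cost = None, np.inf
    for i in range(6):                     # pick the feasible eigenvector of minimal algebraic cost
        v = np.real(V[:, i])
        denom = v @ C @ v
        if denom > 1e-9:
            cost = (v @ S @ v) / denom
            if cost < best_cost:
                best, best_cost = v / np.sqrt(denom), cost
    return best                            # conic coefficients (a, b, c, d, e, f)

theta = np.linspace(0.0, 2.0 * np.pi, 200)
rng = np.random.default_rng(0)
x = 2.0 * np.cos(theta) + 0.01 * rng.normal(size=200)
y = 1.0 * np.sin(theta) + 0.01 * rng.normal(size=200)
print(fit_ellipse(x, y))   # roughly (0.25, 0, 1, 0, 0, -1), up to sign
```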
|
Gradient-based adversarial attacks on deep neural networks pose a serious
threat, since they can be deployed by adding imperceptible perturbations to the
test data of any network, and the risk they introduce cannot be assessed
through the network's original training performance. Denoising and
dimensionality reduction are two distinct methods that have been independently
investigated to combat such attacks. While denoising offers the ability to
tailor the defense to the specific nature of the attack, dimensionality
reduction offers the advantage of potentially removing previously unseen
perturbations, along with reducing the training time of the network being
defended. We propose strategies to combine the advantages of these two defense
mechanisms. First, we propose the cascaded defense, which involves denoising
followed by dimensionality reduction. To reduce the training time of the
defense for a small trade-off in performance, we propose the hidden layer
defense, which involves feeding the output of the encoder of a denoising
autoencoder into the network. Further, we discuss how adaptive attacks against
these defenses could become significantly weak when an alternative defense is
used, or when no defense is used. In this light, we propose a new metric to
evaluate a defense which measures the sensitivity of the adaptive attack to
modifications in the defense. Finally, we present a guideline for building an
ordered repertoire of defenses, a.k.a. a defense infrastructure, that adjusts
to limited computational resources in presence of uncertainty about the attack
strategy.
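A schematic sketch of the hidden layer defense described above (layer sizes and data are assumptions): the encoder of a denoising autoencoder is trained with a reconstruction objective and reused as a front end whose latent code, rather than the raw input, is fed to the classifier.

```python
# Hidden layer defense sketch: denoising autoencoder encoder feeding a classifier.
import torch
import torch.nn as nn

in_dim, latent_dim, n_classes = 784, 64, 10           # e.g., flattened images (assumed)

encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))
classifier = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))

x = torch.rand(32, in_dim)                            # placeholder clean batch
noisy = x + 0.1 * torch.randn_like(x)                 # corrupted inputs for the denoising objective
recon_loss = nn.functional.mse_loss(decoder(encoder(noisy)), x)

# Defended forward pass: the (possibly perturbed) input reaches the classifier
# only through the encoder's low-dimensional latent code.
logits = classifier(encoder(noisy))
print(recon_loss.item(), logits.shape)
```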
|
Gender-based crime is one of the most concerning scourges of contemporary
society. Governments worldwide have invested substantial economic and human
resources to radically eliminate this threat. Despite these efforts, providing
accurate predictions of the risk that a victim of gender violence has of being
attacked again is still a very hard open problem. The development of new
methods for issuing accurate, fair and quick predictions would allow police
forces to select the most appropriate measures to prevent recidivism. In this
work, we propose to apply Machine Learning (ML) techniques to create models
that accurately predict the recidivism risk of a gender-violence offender. The
relevance of the contribution of this work is threefold: (i) the proposed ML
method outperforms the preexisting risk assessment algorithm based on classical
statistical techniques, (ii) the study has been conducted through an official
specific-purpose database with more than 40,000 reports of gender violence, and
(iii) two new quality measures are proposed for assessing the effective police
protection that a model supplies and the overload in the invested resources
that it generates. Additionally, we propose a hybrid model that combines the
statistical prediction methods with the ML method, permitting authorities to
implement a smooth transition from the preexisting model to the ML-based model.
This hybrid nature enables a decision-making process that optimally balances
the efficiency of the police system against the aggressiveness of the
protection measures taken.
|
Global System for Mobile Communications (GSM) is a cellular network standard
that is popular and has been growing in recent years. It was developed to solve
the fragmentation issues of the first cellular systems, and it specifies the
digital modulation methods, the levels of the network structure, and the
services. It is fundamental for organizations to become learning organizations
so that they can keep up with technology changes and keep their network
services at a competitive level. This paper presents a simulation analysis
using the NetSim tool that compares different cellular network codecs in terms
of GSM network performance. Parameters such as throughput, delay, and jitter
are analyzed to assess the quality of service provided by each codec. A unicast
application over the cellular network is modeled for different network
scenarios. Based on the evaluation and simulation, it was found that G.711,
GSM_FR, and GSM-EFR performed better than the other codecs and are considered
the best codecs for cellular networks. These codecs will be most useful for
improving network performance in the near future.
|
User-facing software services are becoming increasingly reliant on remote
servers to host Deep Neural Network (DNN) models, which perform inference tasks
for the clients. Such services require the client to send input data to the
service provider, who processes it using a DNN and returns the output
predictions to the client. Due to the rich nature of the inputs such as images
and speech, the input often contains more information than what is necessary to
perform the primary inference task. Consequently, in addition to the primary
inference task, a malicious service provider could infer secondary (sensitive)
attributes from the input, compromising the client's privacy. The goal of our
work is to improve inference privacy by injecting noise to the input to hide
the irrelevant features that are not conducive to the primary classification
task. To this end, we propose Adaptive Noise Injection (ANI), which uses a
light-weight DNN on the client-side to inject noise to each input, before
transmitting it to the service provider to perform inference. Our key insight
is that by customizing the noise to each input, we can achieve state-of-the-art
trade-off between utility and privacy (up to 48.5% degradation in
sensitive-task accuracy with <1% degradation in primary accuracy),
significantly outperforming existing noise injection schemes. Our method does
not require prior knowledge of the sensitive attributes and incurs minimal
computational overheads.
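The following PyTorch sketch illustrates the client-side structure suggested by the description of Adaptive Noise Injection: a light-weight network produces an input-specific noise pattern that is added to the input before transmission. The layer sizes, noise scale, and tanh bounding are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class NoiseInjector(nn.Module):
    """Light-weight client-side network that emits per-input noise."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh())  # bounded noise pattern

    def forward(self, x, scale=0.1):
        return x + scale * self.net(x)      # noise customized to each input

injector = NoiseInjector()
image = torch.rand(1, 3, 224, 224)          # client's raw input
obfuscated = injector(image)                # what actually gets transmitted
# `obfuscated` is sent to the server-side DNN; the injector would be trained so
# that primary-task accuracy is preserved while sensitive attributes are hidden.
print(obfuscated.shape)
```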
|
Getman et al. (2021) reports the discovery, energetics, frequencies, and
effects on environs of $>1000$ X-ray super-flares with X-ray energies $E_X \sim
10^{34}-10^{38}$~erg from pre-main sequence (PMS) stars identified in the
$Chandra$ MYStIX and SFiNCs surveys. Here we perform detailed plasma evolution
modeling of $55$ bright MYStIX/SFiNCs super-flares from these events. They
constitute a large sample of the most powerful stellar flares analyzed in a
uniform fashion. They are compared with published X-ray super-flares from young
stars in the Orion Nebula Cluster, older active stars, and the Sun. Several
results emerge. First, the properties of PMS X-ray super-flares are independent
of the presence or absence of protoplanetary disks inferred from infrared
photometry, supporting the solar-type model of PMS flaring magnetic loops with
both footpoints anchored in the stellar surface. Second, most PMS super-flares
resemble solar long duration events (LDEs) that are associated with coronal
mass ejections. Slow rise PMS super-flares are an interesting exception. Third,
strong correlations of super-flare peak emission measure and plasma temperature
with the stellar mass are similar to established correlations for the PMS X-ray
emission composed of numerous smaller flares. Fourth, a new correlation links
flare loop geometry to stellar mass: more massive stars appear to have thicker
flaring loops. Finally, the slope of a long-standing relationship between the
X-ray luminosity and magnetic flux of various solar-stellar magnetic elements
appears steeper in PMS super-flares than for solar events.
|
The swampland is the set of seemingly consistent low-energy effective field
theories that cannot be consistently coupled to quantum gravity. In this review
we cover some of the conjectural properties that effective theories should
possess in order not to fall in the swampland, and we give an overview of their
main applications to particle physics. The latter include predictions on
neutrino masses, bounds on the cosmological constant, the electroweak and QCD
scales, the photon mass, the Higgs potential and some insights about
supersymmetry.
|
This paper examines the use of Lie group and Lie algebra theory to construct
the geometry of pairwise comparisons matrices. The Hadamard product (also known
as coordinatewise, coordinate-wise, elementwise, or element-wise product) is
analyzed in the context of inconsistency and inaccuracy by the decomposition
method.
The two designed components are the approximation and orthogonal components.
The decomposition constitutes the theoretical foundation for the multiplicative
pairwise comparisons.
Keywords: approximate reasoning, subjectivity, inconsistency,
consistency-driven, pairwise comparison, matrix Lie group, Lie algebra,
approximation, orthogonality, decomposition.
|
Dark matter (DM) scattering and its subsequent capture in the Sun can boost
the local relic density, leading to an enhanced neutrino flux from DM
annihilations that is in principle detectable at neutrino telescopes. We
calculate the event rates expected for a radiative seesaw model containing both
scalar triplet and singlet-doublet fermion DM candidates. In the case of scalar
DM, the absence of a spin dependent scattering on nuclei results in a low
capture rate in the Sun, which is reflected in an event rate of less than one
per year in the current IceCube configuration with 86 strings. For
singlet-doublet fermion DM, there is a spin dependent scattering process next
to the spin independent one, which significantly boosts the event rate and thus
makes indirect detection competitive with respect to the direct detection
limits imposed by PICO-60. Due to a correlation between both scattering
processes, the limits on the spin independent cross section set by XENON1T
also exclude parts of the parameter space that can be probed at IceCube.
Previously obtained limits by ANTARES, IceCube and Super-Kamiokande from the
Sun and the Galactic Center are shown to be much weaker.
|
Recent work has established that, for every positive integer $k$, every
$n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges that is
resilient to $f$ edge or vertex faults. For vertex faults, this bound is tight.
However, the case of edge faults is not as well understood: the best known
lower bound for general $k$ is $\Omega(f^{\frac12 - \frac{1}{2k}} n^{1+1/k}
+fn)$. Our main result is to nearly close this gap with an improved upper
bound, thus separating the cases of edge and vertex faults. For odd $k$, our
new upper bound is $O_k(f^{\frac12 - \frac{1}{2k}} n^{1+1/k} + fn)$, which is
tight up to hidden $poly(k)$ factors. For even $k$, our new upper bound is
$O_k(f^{1/2} n^{1+1/k} +fn)$, which leaves a gap of $poly(k) f^{1/(2k)}$. Our
proof is an analysis of the fault-tolerant greedy algorithm, which requires
exponential time, but we also show that there is a polynomial-time algorithm
which creates edge fault tolerant spanners that are larger only by factors of
$k$.
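For intuition, here is a Python/networkx sketch of the classical (non-fault-tolerant) greedy spanner whose fault-tolerant variant is analyzed in the paper: edges are examined in order of increasing weight and added only if the current spanner does not already provide a path within the stretch bound (2k-1). The fault-tolerant greedy algorithm repeats the distance check over all fault sets of size at most f, which is the source of its exponential running time. The graph, weights, and parameters below are illustrative.

```python
import networkx as nx

def greedy_spanner(G, k):
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for u, v, w in sorted(G.edges(data="weight"), key=lambda e: e[2]):
        try:
            d = nx.dijkstra_path_length(H, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > (2 * k - 1) * w:          # no sufficiently short path yet
            H.add_edge(u, v, weight=w)
    return H

G = nx.gnm_random_graph(50, 300, seed=1)
for u, v in G.edges:
    G[u][v]["weight"] = 1.0
H = greedy_spanner(G, k=2)
print(G.number_of_edges(), "->", H.number_of_edges(), "edges in the 3-spanner")
```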
|
In warehouse and manufacturing environments, manipulation platforms are
frequently deployed at conveyor belts to perform pick and place tasks. Because
objects on the conveyor belts are moving, robots have limited time to pick them
up. This creates the need for fast and reliable motion planners that can
provide provable real-time planning guarantees, which existing algorithms do
not offer. Besides planning efficiency, the success of
manipulation tasks relies heavily on the accuracy of the perception system
which is often noisy, especially if the target objects are perceived from a
distance. For fast moving conveyor belts, the robot cannot wait for a perfect
estimate before it starts executing its motion. In order to be able to reach
the object in time, it must start moving early on (relying on the initial noisy
estimates) and adjust its motion on-the-fly in response to the pose updates
from perception. We propose a planning framework that meets these requirements
by providing provable constant-time planning and replanning guarantees. To this
end, we first introduce and formalize a new class of algorithms called
Constant-Time Motion Planning algorithms (CTMP) that guarantee to plan in
constant time and within a user-defined time bound. We then present our
planning framework for grasping objects off a conveyor belt as an instance of
the CTMP class of algorithms.
|
Fluid-structure interactions are a widespread phenomenon in nature. Although
their numerical modeling has come a long way, the application of numerical
design tools to these multiphysics problems is still lagging behind.
Gradient-based optimization is currently the most popular approach in topology
optimization, hence it is necessary to use mesh deformation techniques that
have continuous, smooth derivatives. In this work, we address mesh deformation
techniques for structured, quadrilateral meshes. We discuss and comment on two
legacy mesh deformation techniques, namely the spring analogy model and the
linear elasticity model. In addition, we propose a new technique based on the
Yeoh hyperelasticity model. We focus on mesh quality as a gateway to mesh
admissibility. We propose layered selective stiffening, in which the elements
adjacent to the fluid-structure interface - where the bulk of the mesh
distortion occurs - are stiffened in consecutive layers. Both the legacy and
the new models are able to sustain large deformations without degrading the
mesh quality, and the results are further improved by layered selective
stiffening.
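For reference, the Yeoh hyperelastic model on which the proposed technique is based uses the standard incompressible strain-energy density below, where the $C_{i0}$ are material constants and $\bar{I}_1$ is the first deviatoric strain invariant; this is quoted as background, not from the paper itself.

```latex
W = \sum_{i=1}^{3} C_{i0}\,\bigl(\bar{I}_{1} - 3\bigr)^{i}
```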
|
The ability to generate high-fidelity synthetic data is crucial when
available (real) data is limited or where privacy and data protection standards
allow only for limited use of the given data, e.g., in medical and financial
data-sets. Current state-of-the-art methods for synthetic data generation are
based on generative models, such as Generative Adversarial Networks (GANs).
Even though GANs have achieved remarkable results in synthetic data generation,
they are often challenging to interpret. Furthermore, GAN-based methods can
suffer when used with mixed real and categorical variables. Moreover, the loss
function (discriminator loss) design itself is problem specific, i.e., the
generative model may not be useful for tasks it was not explicitly trained for.
In this paper, we propose to use a probabilistic model as a synthetic data
generator. Learning the probabilistic model for the data is equivalent to
estimating the density of the data. Based on the copula theory, we divide the
density estimation task into two parts, i.e., estimating univariate marginals
and estimating the multivariate copula density over the univariate marginals.
We use normalising flows to learn both the copula density and univariate
marginals. We benchmark our method on both simulated and real data-sets in
terms of density estimation as well as the ability to generate high-fidelity
synthetic data.
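A minimal sketch of the two-stage copula decomposition described above: estimate the univariate marginals, model the dependence on the uniform scale, then sample and map back through the marginal quantiles. For brevity, the marginals are handled empirically and a Gaussian copula stands in for the normalising-flow copula used in the paper; none of the modelling choices below are the authors'.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=2000)
data[:, 1] = np.exp(data[:, 1])            # make one marginal non-Gaussian

# Stage 1: probability-integral transform each column with its empirical CDF.
ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
u = ranks / (data.shape[0] + 1)            # pseudo-observations in (0, 1)

# Stage 2: fit the copula on the Gaussian scale (correlation of normal scores).
z = stats.norm.ppf(u)
corr = np.corrcoef(z, rowvar=False)

# Generate synthetic data: sample the copula, invert via empirical quantiles.
z_new = rng.multivariate_normal(np.zeros(2), corr, size=2000)
u_new = stats.norm.cdf(z_new)
synthetic = np.column_stack([np.quantile(data[:, j], u_new[:, j])
                             for j in range(data.shape[1])])
print(synthetic.mean(axis=0), np.corrcoef(synthetic, rowvar=False)[0, 1])
```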
|
Logistic Regression (LR) is a widely used statistical method in empirical
binary classification studies. However, real-life scenarios often present
complexities that prevent the use of the LR model as-is, and instead highlight
the need to include high-order interactions to capture data variability. This
becomes even more challenging because of: (i) datasets
growing wider, with more and more variables; (ii) studies being typically
conducted in strongly imbalanced settings; (iii) samples going from very large
to extremely small; (iv) the need of providing both predictive models and
interpretable results. In this paper we present a novel algorithm, Learning
high-order Interactions via targeted Pattern Search (LIPS), to select
interaction terms of varying order to include in a LR model for an imbalanced
binary classification task when input data are categorical. LIPS's rationale
stems from the duality between item sets and categorical interactions. The
algorithm relies on an interaction learning step based on a well-known frequent
item set mining algorithm, and a novel dissimilarity-based interaction
selection step that allows the user to specify the number of interactions to be
included in the LR model. In addition, we particularize two variants (Scores
LIPS and Clusters LIPS), that can address even more specific needs. Through a
set of experiments we validate our algorithm and prove its wide applicability
to real-life research scenarios, showing that it outperforms a benchmark
state-of-the-art algorithm.
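The following sketch conveys the rationale behind LIPS rather than its actual implementation: frequent item pairs are mined from one-hot style categorical data with a plain support threshold, converted into interaction columns, and fed to a logistic regression. LIPS's dissimilarity-based selection step and higher-order item sets are omitted, and all data and thresholds are illustrative.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 6)).astype(float)     # one-hot style items
y = ((X[:, 0] * X[:, 3]) + 0.1 * rng.standard_normal(500) > 0.5).astype(int)

# Interaction learning step: keep item pairs whose joint support exceeds a threshold.
min_support = 0.15
frequent_pairs = [(i, j) for i, j in combinations(range(X.shape[1]), 2)
                  if np.mean(X[:, i] * X[:, j]) >= min_support]

# Build the expanded design: main effects plus the mined interaction columns.
inter_cols = [X[:, i] * X[:, j] for i, j in frequent_pairs]
X_expanded = np.column_stack([X] + inter_cols) if inter_cols else X

model = LogisticRegression(max_iter=1000, class_weight="balanced")  # imbalance-aware
model.fit(X_expanded, y)
print(f"{len(frequent_pairs)} interactions mined, accuracy:",
      model.score(X_expanded, y))
```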
|
The concept of exceptional point of degeneracy (EPD) is used to conceive a
degenerate synchronization regime that is able to enhance the level of output
power and power conversion efficiency for backward wave oscillators (BWOs)
operating at millimeter-wave and Terahertz frequencies. Standard BWOs operating
at such high frequency ranges typically generate output power not exceeding
tens of watts with very poor power conversion efficiency in the order of 1%.
The novel concept of degenerate synchronization for the BWO based on a folded
waveguide is implemented by engineering distributed gain and power extraction
along the slow-wave waveguide. The distributed power extraction along the
folded waveguide is useful to satisfy the necessary conditions to have an EPD
at the synchronization point. Particle-in-cell (PIC) simulation results show
that a BWO operating in the EPD regime is capable of generating output power
exceeding 3 kW with a conversion efficiency exceeding 20% at a frequency of
88.5 GHz.
|
We consider nonlinear impulsive systems on Banach spaces subjected to
disturbances and look for dwell-time conditions guaranteeing the ISS property.
In contrast to many existing results, our conditions cover the case where both
the continuous and the discrete dynamics can be unstable simultaneously.
Lyapunov-type methods are used for this purpose. The effectiveness of our
approach is illustrated on a rather nontrivial example, namely the feedback
connection of an ODE system and a PDE system.
|
In this work, we consider the problem of joint calibration and
direction-of-arrival (DOA) estimation using sensor arrays. This joint
estimation problem is referred to as self calibration. Unlike many previous
iterative approaches, we propose geometry independent convex optimization
algorithms for jointly estimating the sensor gain and phase errors as well as
the source DOAs. We derive these algorithms based on both the conventional
element-space data model and the covariance data model. We focus on sparse and
regular arrays formed using scalar sensors as well as vector sensors. The
developed algorithms are obtained by transforming the underlying bilinear
calibration model into a linear model, and subsequently by using standard
convex relaxation techniques to estimate the unknown parameters. Prior to the
algorithm discussion, we also derive identifiability conditions for the
existence of a unique solution to the self calibration problem. To demonstrate
the effectiveness of the developed techniques, numerical experiments and
comparisons to the state-of-the-art methods are provided. Finally, the results
from an experiment that was performed in an anechoic chamber using an acoustic
vector sensor array are presented to demonstrate the usefulness of the proposed
self calibration techniques.
|
Heterogeneous graph neural networks (HGNNs) as an emerging technique have
shown superior capacity of dealing with heterogeneous information network
(HIN). However, most HGNNs follow a semi-supervised learning manner, which
notably limits their wide use in reality since labels are usually scarce in
real applications. Recently, contrastive learning, a self-supervised method,
has become one of the most exciting learning paradigms and shows great
potential when no labels are available. In this paper, we study the problem of
self-supervised HGNNs and propose a novel co-contrastive learning mechanism for
HGNNs, named HeCo. Different from traditional contrastive learning, which only
focuses on contrasting positive and negative samples, HeCo employs a
cross-view contrastive mechanism. Specifically, two views of a HIN (the network
schema view and the meta-path view) are proposed to learn node embeddings, so
as to capture both local and high-order structures simultaneously. Then
cross-view contrastive learning, together with a view mask mechanism, is
proposed, which is able to extract positive and negative embeddings from the
two views. This enables the two views to collaboratively supervise each other
and finally learn high-level node embeddings. Moreover, two extensions of HeCo
are designed to generate harder negative samples with high quality, which
further boosts the performance of HeCo. Extensive experiments conducted on a
variety of real-world networks show the superior performance of the proposed
methods over the state-of-the-art.
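A minimal sketch of a cross-view contrastive objective in the spirit described above: node embeddings from two views of the same graph are pulled together for the same node and pushed apart for different nodes, so the views supervise each other. The InfoNCE-style form, temperature, and tensor shapes are assumptions; HeCo's positive-sample definition and view mask mechanism are not reproduced here.

```python
import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(z_schema, z_metapath, tau=0.5):
    """InfoNCE-style loss between network-schema and meta-path view embeddings."""
    z1 = F.normalize(z_schema, dim=1)
    z2 = F.normalize(z_metapath, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)            # pairwise similarities
    pos = sim.diag()                              # same node across the two views
    loss_12 = -torch.log(pos / sim.sum(dim=1))    # schema view anchored
    loss_21 = -torch.log(pos / sim.sum(dim=0))    # meta-path view anchored
    return 0.5 * (loss_12.mean() + loss_21.mean())

# Toy usage with random embeddings for 16 nodes in a 32-dimensional space.
z_a = torch.randn(16, 32)
z_b = torch.randn(16, 32)
print(cross_view_contrastive_loss(z_a, z_b))
```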
|
We study the average behaviour of the Iwasawa invariants for Selmer groups of
elliptic curves, considered over anticyclotomic $\mathbb{Z}_p$-extensions in
both the definite and indefinite settings. The results in this paper lie at the
intersection of arithmetic statistics and Iwasawa theory.
|
We propose a novel numerical method for high dimensional
Hamilton--Jacobi--Bellman (HJB) type elliptic partial differential equations
(PDEs). The HJB PDEs, reformulated as optimal control problems, are tackled by
the actor-critic framework inspired by reinforcement learning, based on neural
network parametrization of the value and control functions. Within the
actor-critic framework, we employ a policy gradient approach to improve the
control, while for the value function, we derive a variance reduced
least-squares temporal difference method using stochastic calculus. To
numerically discretize the stochastic control problem, we employ an adaptive
step size scheme to improve the accuracy near the domain boundary. Numerical
examples up to $20$ spatial dimensions including the linear quadratic
regulators, the stochastic Van der Pol oscillators, the diffusive Eikonal
equations, and fully nonlinear elliptic PDEs derived from a regulator problem
are presented to validate the effectiveness of our proposed method.
|
High-contrast imaging observations are fundamentally limited by the spatially
and temporally correlated noise source called speckles. Suppression of speckle
noise is the key goal of wavefront control and adaptive optics (AO),
coronagraphy, and a host of post-processing techniques. Speckles average at a
rate set by the statistical speckle lifetime, and speckle-limited integration
time in long exposures is directly proportional to this lifetime. As progress
continues in post-coronagraph wavefront control, residual atmospheric speckles
will become the limiting noise source in high-contrast imaging, so a complete
understanding of their statistical behavior is crucial to optimizing
high-contrast imaging instruments. Here we present a novel power spectral
density (PSD) method for calculating the lifetime, and develop a semi-analytic
method for predicting intensity PSDs behind a coronagraph. Considering a
frozen-flow turbulence model, we analyze the residual atmosphere speckle
lifetimes in a MagAO-X-like AO system as well as 25--39 m giant segmented
mirror telescope (GSMT) scale systems. We find that standard AO control
shortens atmospheric speckle lifetime from ~130 ms to ~50 ms, and predictive
control will further shorten the lifetime to ~20 ms on 6.5 m MagAO-X. We find
that speckle lifetimes vary with diameter, wind speed, seeing, and location
within the AO control region. On bright stars lifetimes remain within a rough
range of ~20 ms to ~100 ms. Due to control system dynamics there are no simple
scaling laws which apply across a wide range of system characteristics.
Finally, we use these results to argue that telemetry-based post-processing
should enable ground-based telescopes to achieve the photon-noise limit in
high-contrast imaging.
|
The Eliashberg theory of superconductivity accounts for the fundamental
physics of conventional electron-phonon superconductors, including the
retardation of the interaction and the effect of the Coulomb pseudopotential,
to predict the critical temperature $T_c$ and other properties. McMillan,
Allen, and Dynes derived approximate closed-form expressions for the critical
temperature predicted by this theory, which depends essentially on the
electron-phonon spectral function $\alpha^2F(\omega)$, using $\alpha^2F$ for
low-$T_c$ superconductors. Here we show that modern machine learning techniques
can substantially improve these formulae, accounting for more general shapes of
the $\alpha^2F$ function. Using symbolic regression and the sure independence
screening and sparsifying operator (SISSO) framework, together with a database
of artificially generated $\alpha^2F$ functions, ranging from multimodal
Einstein-like models to calculated spectra of polyhydrides, as well as
numerical solutions of the Eliashberg equations, we derive a formula for $T_c$
that performs as well as Allen-Dynes for low-$T_c$ superconductors, and
substantially better for higher-$T_c$ ones. The expression identified through
our data-driven approach corrects the systematic underestimation of $T_c$ while
reproducing the physical constraints originally outlined by Allen and Dynes.
This equation should replace the Allen-Dynes formula for the prediction of
higher-temperature superconductors and for the estimation of $\lambda$ from
experimental data.
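For context, the commonly quoted closed-form expression that the learned equation is meant to replace, the McMillan formula with the Allen-Dynes prefactor, reads as below (Allen and Dynes further multiply it by strong-coupling and shape correction factors $f_1 f_2$); this is quoted as standard background, not from the paper.

```latex
T_c \simeq \frac{\omega_{\log}}{1.2}\,
  \exp\!\left[-\,\frac{1.04\,(1+\lambda)}{\lambda - \mu^{*}\,(1 + 0.62\,\lambda)}\right]
```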
|
The HI Ly$\alpha$ (1215.67 Å) emission line dominates the
far-UV spectra of M dwarf stars, but strong absorption from neutral hydrogen in
the interstellar medium makes observing Ly$\alpha$ challenging even for the
closest stars. As part of the Far-Ultraviolet M-dwarf Evolution Survey (FUMES),
the Hubble Space Telescope has observed 10 early-to-mid M dwarfs with ages
ranging from $\sim$24 Myr to several Gyrs to evaluate how the incident UV
radiation evolves through the lifetime of exoplanetary systems. We reconstruct
the intrinsic Ly$\alpha$ profiles from STIS G140L and E140M spectra and achieve
reconstructed fluxes with 1-$\sigma$ uncertainties ranging from 5% to a factor
of two for the low resolution spectra (G140L) and 3-20% for the high resolution
spectra (E140M). We observe broad, 500-1000 km s$^{-1}$ wings of the Ly$\alpha$
line profile, and analyze how the line width depends on stellar properties. We
find that stellar effective temperature and surface gravity are the dominant
factors influencing the line width with little impact from the star's magnetic
activity level, and that the surface flux density of the Ly$\alpha$ wings may
be used to estimate the chromospheric electron density. The Ly$\alpha$
reconstructions on the G140L spectra are the first attempted on
$\lambda/\Delta\lambda\sim$1000 data. We find that the reconstruction precision
is not correlated with SNR of the observation, rather, it depends on the
intrinsic broadness of the stellar Ly$\alpha$ line. Young, low-gravity stars
have the broadest lines and therefore provide more information to the fit at
low spectral resolution, helping to break degeneracies among model parameters.
|
We study the current-induced torques in asymmetric magnetic tunnel junctions
containing a conventional ferromagnet and a magnetic Weyl semimetal contact.
The Weyl semimetal hosts chiral bulk states and topologically protected Fermi
arc surface states which were found to govern the voltage behavior and
efficiency of current-induced torques. We report how bulk chirality dictates
the sign of the non-equilibrium torques acting on the ferromagnet and discuss
the existence of large field-like torques acting on the magnetic Weyl
semimetal, which exceed the theoretical maximum of conventional magnetic tunnel
junctions. The latter are derived from the Fermi arc spin texture and display a
counter-intuitive dependence on the Weyl node separation. Our results shed
light on the new physics of multilayered spintronic devices comprising magnetic
Weyl semimetals, which might open doors for new energy-efficient
spintronic devices.
|
This paper contains two finite-sample results about the sign test. First, we
show that the sign test is unbiased against two-sided alternatives even when
observations are not identically distributed. Second, we provide simple
theoretical counterexamples to show that correlation that is unaccounted for
leads to size distortion and over-rejection. Our results have implications for
practitioners, who are increasingly employing randomization tests for
inference.
|
A pair of biadjoint functors between two categories produces a collection of
elements in the centers of these categories, one for each isotopy class of
nested circles in the plane. If the centers are equipped with a trace map into
the ground field, then one assigns an element of that field to a diagram of
nested circles. We focus on the self-adjoint functor case of this construction
and study the reverse problem of recovering such a functor and a category given
values associated to diagrams of nested circles.
|
Superpixels serve as a powerful preprocessing tool in numerous computer
vision tasks. By using superpixel representation, the number of image
primitives can be greatly reduced, by orders of magnitude. With the rise of
deep learning in recent years, a few works have attempted to feed deeply
learned features / graphs into existing classical superpixel techniques.
However, none of them are able to produce superpixels in near real-time, which
is crucial to the applicability of superpixels in practice. In this work, we
propose a two-stage graph-based framework for superpixel segmentation. In the
first stage, we introduce an efficient Deep Affinity Learning (DAL) network
that learns pairwise pixel affinities by aggregating multi-scale information.
In the second stage, we propose a highly efficient superpixel method called
Hierarchical Entropy Rate Segmentation (HERS). Using the learned affinities
from the first stage, HERS builds a hierarchical tree structure that can
produce any number of highly adaptive superpixels instantaneously. We
demonstrate, through visual and numerical experiments, the effectiveness and
efficiency of our method compared to various state-of-the-art superpixel
methods.
|
This paper is an excerpt of an early version of Chapter 2 of the book
"Validity, Reliability, and Significance. Empirical Methods for NLP and Data
Science", by Stefan Riezler and Michael Hagmann, published in December 2021 by
Morgan & Claypool. Please see the book's homepage at
https://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1688
for a more recent and comprehensive discussion.
|
Training deep reinforcement learning agents on environments with multiple
levels / scenes from the same task, has become essential for many applications
aiming to achieve generalization and domain transfer from simulation to the
real world. While such a strategy is helpful with generalization, the use of
multiple scenes significantly increases the variance of samples collected for
policy gradient computations. Current methods effectively continue to view
this collection of scenes as a single Markov decision process (MDP), and thus
learn a scene-generic value function V(s). However, we argue that the sample
variance for a multi-scene environment is best minimized by treating each scene
as a distinct MDP, and then learning a joint value function V(s,M) dependent on
both state s and MDP M. We further demonstrate that the true joint value
function for a multi-scene environment, follows a multi-modal distribution
which is not captured by traditional CNN / LSTM based critic networks. To this
end, we propose a dynamic value estimation (DVE) technique, which approximates
the true joint value function through a sparse attention mechanism over
multiple value function hypotheses / modes. The resulting agent not only shows
significant improvements in the final reward score across a range of OpenAI
ProcGen environments, but also exhibits enhanced navigation efficiency and
provides an implicit mechanism for unsupervised state-space skill
decomposition.
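A minimal PyTorch sketch of the idea of attending over multiple value hypotheses: instead of a single scene-generic value V(s), the critic keeps several value modes and combines them with a (sparsified) attention distribution, approximating a joint, multi-modal value function V(s, M). The network sizes, top-k sparsification, and attention form are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiModeCritic(nn.Module):
    def __init__(self, state_dim=64, n_modes=8):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.mode_values = nn.Linear(128, n_modes)   # one value hypothesis per mode
        self.attention = nn.Linear(128, n_modes)     # which mode(s) explain this state

    def forward(self, state, top_k=2):
        h = self.trunk(state)
        values = self.mode_values(h)                              # (B, n_modes)
        scores = self.attention(h)
        # Sparse attention: keep only the top-k modes per state, softmax over them.
        topk_scores, idx = scores.topk(top_k, dim=-1)
        weights = torch.softmax(topk_scores, dim=-1)
        return (weights * values.gather(-1, idx)).sum(dim=-1)     # scalar V per state

critic = MultiModeCritic()
print(critic(torch.randn(4, 64)).shape)   # torch.Size([4])
```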
|
Deep learning as a service (DLaaS) has been intensively studied to facilitate
the wider deployment of the emerging deep learning applications. However, DLaaS
may compromise the privacy of both clients and cloud servers. Although some
privacy preserving deep neural network (DNN) based inference techniques have
been proposed by composing cryptographic primitives, the challenges on
computational efficiency have not been well-addressed due to the complexity of
DNN models and expensive cryptographic primitives. In this paper, we propose a
novel privacy preserving cloud-based DNN inference framework (namely, "PROUD"),
which greatly improves the computational efficiency. Finally, we conduct
extensive experiments on two commonly-used datasets to validate both
effectiveness and efficiency for the PROUD, which also outperforms the
state-of-the-art techniques.
|
We derive the interaction of fermions with a dynamical space-time based on
the postulate that the description of physics should be independent of the
reference frame, which means to require the form-invariance of the fermion
action under diffeomorphisms. The derivation is worked out in the Hamiltonian
formalism as a canonical transformation along the line of non-Abelian gauge
theories. This yields a closed set of field equations for fermions,
unambiguously fixing their coupling to dynamical space-time. We encounter, in
addition to the well-known minimal coupling, anomalous couplings to curvature
and torsion. In torsion-free geometries that anomalous interaction reduces to a
Pauli-type coupling with the curvature scalar via a spontaneously emerged new
coupling constant with the dimension of mass resp.\ inverse length. A
consistent model Hamiltonian for the free gravitational field is discussed,
along with the impact of its functional form on the structure of the dynamical
space-time geometry.
|
This paper introduces a shoebox room simulator able to systematically
generate synthetic datasets of binaural room impulse responses (BRIRs) given an
arbitrary set of head-related transfer functions (HRTFs). The evaluation of
machine hearing algorithms frequently requires BRIR datasets in order to
simulate the acoustics of any environment. However, currently available
solutions typically consider only HRTFs measured on dummy heads, which poorly
characterize the high variability in spatial sound perception. Our solution
makes it possible to integrate a room impulse response (RIR) simulator with
different HRTF sets represented in the Spatially Oriented Format for Acoustics
(SOFA). The source code and the compiled binaries for different operating
systems allow both advanced and non-expert users to benefit from our toolbox;
see https://github.com/spatialaudiotools/sofamyroom/ .
|
Backwards Stochastic Differential Equations (BSDEs) have been widely employed
in various areas of applied and financial mathematics. In particular, BSDEs
appear extensively in the pricing and hedging of financial derivatives,
stochastic optimal control problems and optimal stopping problems. Most BSDEs
cannot be solved analytically and thus numerical methods must be applied in
order to approximate their solutions. Many numerical methods have been
proposed over the past few decades, for the most part in a complex and
scattered manner, each requiring its own set of assumptions and conditions. The
aim of the present paper is thus to
systematically survey various numerical methods for BSDEs, and in particular,
compare and categorise them. To this end, we focus on the core features of each
method: the main assumptions, the numerical algorithm itself, key convergence
properties and advantages and disadvantages, in order to provide an exhaustive
up-to-date coverage of numerical methods for BSDEs, with insightful summaries
of each and useful comparison and categorization.
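As an example of one of the surveyed method families, here is a minimal least-squares Monte Carlo backward Euler scheme for a simple BSDE with a linear driver, chosen so the exact initial value is known for a sanity check. The discretization, regression basis, and all parameters are illustrative, not taken from any particular method in the survey.

```python
# Explicit backward Euler with regression-based conditional expectations for the
# BSDE dY = -f(Y) dt + Z dW with driver f(y) = -r*y, terminal condition
# Y_T = g(X_T) = X_T^2, and forward process X = Brownian motion, for which the
# exact value is Y_0 = exp(-r*T) * T.
import numpy as np

rng = np.random.default_rng(0)
T, N, M, r = 1.0, 50, 100_000, 0.05
dt = T / N
dW = np.sqrt(dt) * rng.standard_normal((M, N))
X = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)  # Brownian paths

def f(y):                     # driver of the BSDE
    return -r * y

Y = X[:, -1] ** 2             # terminal condition Y_T = g(X_T)
for i in range(N - 1, -1, -1):
    target = Y + f(Y) * dt    # explicit scheme: Y_i = E_i[Y_{i+1} + f(Y_{i+1}) dt]
    if i == 0:
        Y = np.full(M, target.mean())            # E_0 is an unconditional mean
    else:
        basis = np.vander(X[:, i], 4)            # polynomial basis in X_{t_i}
        coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
        Y = basis @ coef                         # regression-based E_i[target]

print("estimated Y_0:", Y[0], " exact:", np.exp(-r * T) * T)
```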
|
With the continuing rapid development of artificial microrobots and active
particles, questions of microswimmer guidance and control are becoming ever
more relevant and prevalent. In both the applications and theoretical study of
such microscale swimmers, control is often mediated by an engineered property
of the swimmer, such as in the case of magnetically propelled microrobots. In
this work, we will consider a modality of control that is applicable in more
generality, effecting guidance via modulation of a background fluid flow. Here,
considering a model swimmer in a commonplace flow and simple geometry, we
analyse and subsequently establish the efficacy of flow-mediated microswimmer
positional control, later touching upon a question of optimal control. Moving
beyond idealised notions of controllability and towards considerations of
practical utility, we then evaluate the robustness of this control modality to
sources of variation that may be present in applications, examining in
particular the effects of measurement inaccuracy and rotational noise. This
exploration gives rise to a number of cautionary observations, which, overall,
demonstrate the need for the careful assessment of both policy and behavioural
robustness when designing control schemes for use in practice.
|
We theoretically investigate the fluorescence intensity correlation (FIC) of
Ar clusters and Mo-doped iron oxide nanoparticles subjected to intense,
femtosecond and sub-femtosecond XFEL pulses for high-resolution and elemental
contrast imaging. We present the FIC of {\Ka} and {\Kah} emission in Ar
clusters and discuss the impact of sample damage on retrieving high-resolution
structural information and compare the obtained structural information with
those from the coherent diffractive imaging (CDI) approach. We found that, while
sub-femtosecond pulses will substantially benefit the CDI approach,
few-femtosecond pulses may be sufficient for achieving high-resolution
information with FIC. Furthermore, we show that the fluorescence intensity
correlation computed from the fluorescence of Mo atoms in Mo-doped iron oxide
nanoparticles can be used to image dopant distributions.
|
Today, almost all banks have adopted ICT as a means of enhancing the quality of
their banking services. These banks provide ICT-based electronic services, also
called electronic banking, internet banking or online banking, to their
customers. Despite the increasing adoption of electronic banking and its
relevance to end-user satisfaction, few investigations have been conducted on
the factors that enhance end users' perception of satisfaction. In this
research, an empirical analysis has been conducted on the factors that
influence electronic banking users' satisfaction and on the relationship
between these factors and customer satisfaction. The study will help the
banking industry improve the level of customer satisfaction and strengthen the
bond between a bank and its customers.
|
Monitoring the state of contact is essential for robotic devices, especially
grippers that implement gecko-inspired adhesives where intimate contact is
crucial for a firm attachment. However, due to the lack of deformable sensors,
few have demonstrated tactile sensing for gecko grippers. We present Viko, an
adaptive gecko gripper that utilizes vision-based tactile sensors to monitor
contact state. The sensor provides high-resolution real-time measurements of
contact area and shear force. Moreover, the sensor is adaptive, low-cost, and
compact. We integrated gecko-inspired adhesives into the sensor surface without
impeding its adaptiveness and performance. Using a robotic arm, we evaluate the
performance of the gripper through a series of grasping tests. The gripper has
a maximum payload of 8 N even at a low fingertip pitch angle of 30 degrees. We
also showcase the gripper's ability to adjust fingertip pose for better contact
using sensor feedback. Further, everyday object picking is presented as a
demonstration of the gripper's adaptiveness.
|
A starlike univalent function $f$ is characterized by the function
$zf'(z)/f(z)$; several subclasses of these functions were studied in the past
by restricting the function $zf'(z)/f(z)$ to take values in a region $\Omega$
on the right-half plane, or, equivalently, by requiring the function
$zf'(z)/f(z)$ to be subordinate to the corresponding mapping of the unit disk
$\mathbb{D}$ to the region $\Omega$.
The mappings $w_1(z):=z+\sqrt{1+z^2}, w_2(z):=\sqrt{1+z}$ and $w_3(z):=e^z$
map the unit disk $\mathbb{D}$ to various regions in the right half plane. For
normalized analytic functions $f$ satisfying the conditions that $f(z)/g(z),
g(z)/zp(z)$ and $p(z)$ are subordinate to the functions $w_i, i=1,2,3$ in
various ways for some analytic functions $g(z)$ and $p(z)$, we determine the
sharp radius for them to belong to various subclasses of starlike functions.
|
We report on the discovery of FRB 20200120E, a repeating fast radio burst
(FRB) with low dispersion measure (DM), detected by the Canadian Hydrogen
Intensity Mapping Experiment (CHIME)/FRB project. The source DM of 87.82 pc
cm$^{-3}$ is the lowest recorded from an FRB to date, yet is significantly
higher than the maximum expected from the Milky Way interstellar medium in this
direction (~ 50 pc cm$^{-3}$). We have detected three bursts and one candidate
burst from the source over the period 2020 January-November. The baseband
voltage data for the event on 2020 January 20 enabled a sky localization of the
source to within $\simeq$ 14 sq. arcmin (90% confidence). The FRB localization
is close to M81, a spiral galaxy at a distance of 3.6 Mpc. The FRB appears on
the outskirts of M81 (projected offset $\sim$ 20 kpc) but well inside its
extended HI and thick disks. We empirically estimate the probability of chance
coincidence with M81 to be $< 10^{-2}$. However, we cannot reject a Milky Way
halo origin for the FRB. Within the FRB localization region, we find several
interesting cataloged M81 sources and a radio point source detected in the Very
Large Array Sky Survey (VLASS). We searched for prompt X-ray counterparts in
Swift/BAT and Fermi/GBM data, and for two of the FRB 20200120E bursts, we rule
out coincident SGR 1806$-$20-like X-ray bursts. Due to the proximity of FRB
20200120E, future follow-up for prompt multi-wavelength counterparts and
sub-arcsecond localization could place strong constraints on proposed FRB
models.
|
This paper investigates the transmission power control in over-the-air
federated edge learning (Air-FEEL) system. Different from conventional power
control designs (e.g., to minimize the individual mean squared error (MSE) of
the over-the-air aggregation at each round), we consider a new power control
design aiming at directly maximizing the convergence speed. Towards this end,
we first analyze the convergence behavior of Air-FEEL (in terms of the
optimality gap) subject to aggregation errors at different communication
rounds. It is revealed that if the aggregation estimates are unbiased, then the
training algorithm would converge exactly to the optimal point with mild
conditions; while if they are biased, then the algorithm would converge with an
error floor determined by the accumulated estimate bias over communication
rounds. Next, building upon the convergence results, we optimize the power
control to directly minimize the derived optimality gaps under both biased and
unbiased aggregations, subject to a set of average and maximum power
constraints at individual edge devices. We transform both problems into convex
forms, and obtain their structured optimal solutions, both appearing in a form
of regularized channel inversion, by using the Lagrangian duality method.
Finally, numerical results show that the proposed power control policies
achieve significantly faster convergence for Air-FEEL, as compared with
benchmark policies with fixed power transmission or conventional MSE
minimization.
|
The social media platform is a convenient medium to express personal thoughts
and share useful information. It is fast, concise, and has the ability to reach
millions. It is an effective place to archive thoughts, share artistic content,
receive feedback, promote products, etc. Despite having numerous advantages
these platforms have given a boost to hostile posts. Hate speech and derogatory
remarks are being posted for personal satisfaction or political gain. The
hostile posts can have a bullying effect rendering the entire platform
experience hostile. Therefore detection of hostile posts is important to
maintain social media hygiene. The problem is more pronounced for low-resource
languages like Hindi. In this work, we present approaches for
hostile text detection in the Hindi language. The proposed approaches are
evaluated on the Constraint@AAAI 2021 Hindi hostility detection dataset. The
dataset consists of hostile and non-hostile texts collected from social media
platforms. The hostile posts are further segregated into overlapping classes of
fake, offensive, hate, and defamation. We evaluate a host of deep learning
approaches based on CNN, LSTM, and BERT for this multi-label classification
problem. The pre-trained Hindi fastText word embeddings by IndicNLP and
Facebook are used in conjunction with CNN and LSTM models. Two variations of
pre-trained multilingual transformer language models mBERT and IndicBERT are
used. We show that BERT-based models perform best. Moreover, CNN and LSTM
models also perform competitively with BERT-based models.
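A minimal sketch of one of the evaluated model families: a multi-label CNN text classifier over pre-trained word embeddings with sigmoid outputs and a binary cross-entropy loss, suited to the overlapping hostility classes (fake, offensive, hate, defamation). The vocabulary size, filter sizes, and other hyper-parameters are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class MultiLabelTextCNN(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=300, n_labels=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)  # load fastText weights here
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, 100, kernel_size=k) for k in (3, 4, 5)])
        self.classifier = nn.Linear(3 * 100, n_labels)

    def forward(self, token_ids):
        x = self.embedding(token_ids).transpose(1, 2)        # (B, emb_dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(feats, dim=1))      # raw logits per label

model = MultiLabelTextCNN()
tokens = torch.randint(0, 30000, (8, 64))                    # batch of tokenised posts
logits = model(tokens)
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8, 4)).float())
print(logits.shape, loss.item())
```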
|
In this paper, we address zero-shot learning (ZSL), the problem of
recognizing categories for which no labeled visual data are available during
training. We focus on the transductive setting, in which unlabelled visual data
from unseen classes is available. State-of-the-art paradigms in ZSL typically
exploit generative adversarial networks to synthesize visual features from
semantic attributes. We posit that the main limitation of these approaches is
to adopt a single model to face two problems: 1) generating realistic visual
features, and 2) translating semantic attributes into visual cues. Differently,
we propose to decouple such tasks, solving them separately. In particular, we
train an unconditional generator to solely capture the complexity of the
distribution of visual data and we subsequently pair it with a conditional
generator devoted to enrich the prior knowledge of the data distribution with
the semantic content of the class embeddings. We present a detailed ablation
study to dissect the effect of our proposed decoupling approach, while
demonstrating its superiority over the related state-of-the-art.
|
We construct closed immersions from initial degenerations of the spinor
variety $\mathbb{S}_n$ to inverse limits of strata associated to even
$\Delta$-matroids. As an application, we prove that these initial degenerations
are smooth and irreducible for $n\leq 5$ and identify the log canonical model
of the Chow quotient of $\mathbb{S}_5$ by the action of the diagonal torus of
$\operatorname{GL}(5)$.
|
We provide an abstract characterization for the Cuntz semigroup of unital
commutative AI-algebras, as well as a characterization for abstract Cuntz
semigroups of the form $\text{Lsc} (X,\overline{\mathbb{N}})$ for some
$T_1$-space $X$. In our investigations, we also uncover new properties that the
Cuntz semigroup of all AI-algebras satisfies.
|
Understanding the effects of interventions, such as restrictions on community
and large group gatherings, is critical to controlling the spread of COVID-19.
Susceptible-Infectious-Recovered (SIR) models are traditionally used to
forecast the infection rates but do not provide insights into the causal
effects of interventions. We propose a spatiotemporal model that estimates the
causal effect of changes in community mobility (intervention) on infection
rates. Using an approximation to the SIR model and incorporating spatiotemporal
dependence, the proposed model estimates a direct and indirect (spillover)
effect of intervention. Under an interference and treatment ignorability
assumption, this model is able to estimate causal intervention effects, and
additionally allows for spatial interference between locations. Reductions in
community mobility were measured using cell phone movement data. The results
suggest that the reductions in mobility decreased coronavirus cases 4 to 7
weeks after the intervention.
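To make the underlying epidemic mechanism concrete, the sketch below runs a basic SIR system in which the transmission rate is scaled by a community-mobility factor, so a mobility reduction plays the role of the intervention. All parameter values are illustrative, not estimates from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta0, gamma, N = 0.4, 0.1, 1_000_000       # baseline transmission, recovery, population

def mobility(t):
    return 1.0 if t < 30 else 0.6           # 40% mobility drop after day 30

def sir(t, y):
    S, I, R = y
    beta = beta0 * mobility(t)              # the intervention enters through beta
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 180), [N - 100, 100, 0], t_eval=np.arange(0, 181, 1))
print("peak infected individuals:", int(sol.y[1].max()))
```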
|
Governments, Healthcare, and Private Organizations in the global scale have
been using digital tracking to keep COVID-19 outbreaks under control. Although
this method could limit pandemic contagion, it raises significant concerns
about user privacy. Known as "Contact Tracing Apps", these mobile applications
are facilitated by Cellphone Service Providers (CSPs), who enable the spatial
and temporal real-time user tracking. Accordingly, it might be speculated that
CSPs collect information in violation of privacy regulations such as GDPR,
CCPA, and others. To further clarify, we conducted an in-depth analysis
comparing privacy legislation with the real-world practices adopted by CSPs. We
found that three
of the regulations (GDPR, COPPA, and CCPA) analyzed defined mobile location
data as private information, and two (T-Mobile US, Boost Mobile) of the five
CSPs that were analyzed did not comply with the COPPA regulation. Our results
are crucial in view of the threat these violations represent, especially when
it comes to children's data. As such, proper security and privacy auditing is
necessary to curtail such violations. We conclude by providing actionable
recommendations to address concerns and provide privacy-preserving monitoring
of the COVID-19 spread through the contact tracing applications.
|
Nonlinear surface-plasmon polaritons~(NSPPs) in nanophotonic waveguides are
excited with dissimilar temporal properties due to input field modifications
and material characteristics, but they possess similar nonlinear spectral
evolution. In this work, we uncover the origin of this similarity and establish
that the spectral dynamics is an inherent property of the system that depends
on the synthetic dimension and is beyond waveguide geometrical dimensionality.
To this aim, we design an ultra-low loss nonlinear plasmonic waveguide, to
establish the invariance of the surface plasmonic frequency combs~(FCs) and
phase singularities for plasmonic peregrine waves and Akhmediev breather. By
finely tuning the nonlinear coefficient of the interaction interface, we
uncover the conservation conditions through this plasmonic system and employ
the mean-value evolution of the quantum NSPP field commensurate with the
Schr\"odinger equation to evaluate spectral dynamics of the plasmonic
FCs~(PFCs). Through providing suppressed interface losses and modified
nonlinearity as dual requirements for conservative conditions, we propose
exciting PFCs as equally spaced invariant quantities of this plasmonic scheme
and prove that the spectral dynamics of the NSPPs within the interaction
interface yields the formation of plasmonic analog of the synthetic photonic
lattice, which we termed \textit{synthetic plasmonic lattice}~(SPL).
|
We continue our previous study of cylindrically symmetric, static
electrovacuum spacetimes generated by a magnetic field, involving optionally
the cosmological constant, and investigate several classes of exact solutions.
These spacetimes are due to magnetic fields that are perpendicular to the axis
of symmetry.
|
Factorial designs are widely used due to their ability to accommodate
multiple factors simultaneously. The factor-based regression with main effects
and some interactions is the dominant strategy for downstream data analysis,
delivering point estimators and standard errors via one single regression.
Justification of these convenient estimators from the design-based perspective
requires quantifying their sampling properties under the assignment mechanism
conditioning on the potential outcomes. To this end, we derive the sampling
properties of the factor-based regression estimators from both saturated and
unsaturated models, and demonstrate the appropriateness of the robust standard
errors for the Wald-type inference. We then quantify the bias-variance
trade-off between the saturated and unsaturated models from the design-based
perspective, and establish a novel design-based Gauss--Markov theorem that
ensures the latter's gain in efficiency when the nuisance effects omitted
indeed do not exist. As a byproduct of the process, we unify the definitions of
factorial effects used across various strands of the literature and propose a
location-shift strategy
for their direct estimation from factor-based regressions. Our theory and
simulation suggest using factor-based inference for general factorial effects,
preferably with parsimonious specifications in accordance with the prior
knowledge of zero nuisance effects.
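A minimal statsmodels sketch of the factor-based analysis described above for a 2^2 design: a saturated and an unsaturated specification, each reported with heteroskedasticity-robust standard errors for Wald-type inference. The simulated data, effect sizes, and the HC2 choice are placeholders used only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "A": rng.integers(0, 2, n),      # factor A, coded 0/1
    "B": rng.integers(0, 2, n),      # factor B, coded 0/1
})
df["y"] = 1.0 + 0.5 * df["A"] + 0.8 * df["B"] + 0.3 * df["A"] * df["B"] \
          + rng.standard_normal(n)

# Saturated model (main effects plus interaction) vs. unsaturated model (main
# effects only); robust covariance gives the Wald-type standard errors.
saturated = smf.ols("y ~ A * B", data=df).fit(cov_type="HC2")
unsaturated = smf.ols("y ~ A + B", data=df).fit(cov_type="HC2")
print(saturated.summary().tables[1])
print(unsaturated.bse)               # robust standard errors of the parsimonious fit
```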
|
One of the most ubiquitous and technologically important phenomena in nature
is the nucleation of homogeneous flowing systems. The microscopic effects of
shear on a nucleating system are still imperfectly understood, although in
recent years a consistent picture has emerged. The opposing effects of shear
can be split into two major contributions for simple liquids: increase of the
energetic cost of nucleation, and enhancement of the kinetics. In this
perspective, we describe the latest computational and theoretical techniques
which have been developed over the past two decades. We collate and unify the
overarching influences of shear, temperature, and supersaturation on the
process of homogeneous nucleation. Experimental techniques and capabilities are
discussed, against the backdrop of results from simulations and theory.
Although we primarily focus on simple liquids, we also touch upon the sheared
nucleation of more complex systems, including glasses and polymer melts. We
speculate on the promising directions and possible advances that could come to
fruition in the future.
|
In this paper, we develop general techniques for computing the G-index of a
closed, spin, hyperbolic 2- or 4-manifold, and apply these techniques to
compute the G-index of the fully symmetric spin structure of the Davis
hyperbolic 4-manifold.
|
In binary classification, kernel-free linear or quadratic support vector
machines are proposed to avoid dealing with difficulties such as finding
appropriate kernel functions or tuning their hyper-parameters. Furthermore,
Universum data points, which do not belong to any class, can be exploited to
embed prior knowledge into the corresponding models so that the generalization
performance is improved. In this paper, we design novel kernel-free Universum
quadratic surface support vector machine models. Further, we propose the L1
norm regularized version that is beneficial for detecting potential sparsity
patterns in the Hessian of the quadratic surface and reducing to the standard
linear models if the data points are (almost) linearly separable. The proposed
models are convex, so standard numerical solvers can be used to solve them.
Nonetheless, we formulate a least-squares version of the L1 norm regularized
model and then design an effective tailored algorithm that only requires
solving one linear system. Several theoretical properties of these
models are then reported/proved as well. We finally conduct numerical
experiments on both artificial and public benchmark data sets to demonstrate
the feasibility and effectiveness of the proposed models.
|
To operate efficiently across a wide range of workloads with varying power
requirements, a modern processor applies different current management
mechanisms, which briefly throttle instruction execution while they adjust
voltage and frequency to accommodate for power-hungry instructions (PHIs) in
the instruction stream. Doing so 1) reduces the power consumption of non-PHI
instructions in typical workloads and 2) optimizes system voltage regulators'
cost and area for the common use case while limiting current consumption when
executing PHIs.
However, these mechanisms may compromise a system's confidentiality
guarantees. In particular, we observe that multilevel side-effects of
throttling mechanisms, due to PHI-related current management mechanisms, can be
detected by two different software contexts (i.e., sender and receiver) running
on 1) the same hardware thread, 2) co-located Simultaneous Multi-Threading
(SMT) threads, and 3) different physical cores.
Based on these new observations on current management mechanisms, we develop
a new set of covert channels, IChannels, and demonstrate them in real modern
Intel processors (which span more than 70% of the entire client and server
processor market). Our analysis shows that IChannels provides more than 24x the
channel capacity of state-of-the-art power management covert channels. We
propose practical and effective mitigations to each covert channel in IChannels
by leveraging the insights we gain through a rigorous characterization of real
systems.
|
We develop the integration theory of two-parameter controlled paths $Y$
allowing us to define integrals of the form \begin{equation}
\int_{[s,t] \times [u,v]} Y_{r,r'} \; d(X_{r}, X_{r'}) \end{equation}
where $X$ is the geometric $p$-rough path
that controls $Y$. This extends to arbitrary regularity the definition
presented for $2\leq p<3$ in the recent paper of Hairer and Gerasimovi\v{c}s
where it is used in the proof of a version of H\"{o}rmander's theorem for a
class of SPDEs. We extend the Fubini type theorem of the same paper by showing
that this two-parameter integral coincides with the two iterated one-parameter
integrals \[
\int_{[s,t] \times [u,v]}
Y_{r,r'}
\;d(X_{r}, X_{r'})
=
\int_{s}^{t}
\int_{u}^{v}
Y_{r,r'}
\;dX_{r'}
\;dX_{r'}
=
\int_{u}^{v}
\int_{s}^{t}
Y_{r,r'}
\;dX_{r}
\;dX_{r'}. \] A priori these three integrals have distinct definitions, and
so this parallels the classical Fubini's theorem for product measures. By
extending the two-parameter Young-Towghi inequality in this context, we derive
a maximal inequality for the discrete integrals approximating the two-parameter
integral. We also extend the analysis to consider integrals of the form
\begin{equation*}
\int_{[s,t] \times [u,v]} Y_{r,r'} \; d(X_{r}, \tilde{X}_{r'}) \end{equation*}
for possibly different rough paths
$X$ and $\tilde{X}$, and obtain the corresponding Fubini type theorem. We prove
continuity estimates for these integrals in the appropriate rough path
topologies. As an application we consider the signature kernel, which has
recently emerged as a useful tool in data science, as an example of a
two-parameter controlled rough path which also solves a two-parameter rough
integral equation.
|
We revisit the calculation of vacuum energy density in compact space times.
By explicitly computing the effective action through the heat kernel method, we
compute vacuum energy density for the general case of $k$ compact spatial
dimensions in $p+k$ dimensional Minkowski space time. Additionally, we use this
formalism to calculate the Casimir force on a piston placed in such space
times, and note the deviations from previously reported results in the
literature.
|