This report is part of the DataflowOpt project on optimization of modern
dataflows and aims to introduce a data quality-aware cost model that covers the
following aspects in combination: (1) heterogeneity in compute nodes, (2)
geo-distribution, (3) massive parallelism, (4) complex DAGs and (5) streaming
applications. Such a cost model can then be leveraged to devise cost-based
optimization solutions that address task placement and operator configuration.
|
This work evaluates the applicability of super-resolution generative
adversarial networks (SRGANs) as a methodology for the reconstruction of
turbulent-flow quantities from coarse wall measurements. The method is applied
both for the resolution enhancement of wall fields and the estimation of
wall-parallel velocity fields from coarse wall measurements of shear stress and
pressure. The analysis has been carried out with a database of a turbulent
open-channel flow with friction Reynolds number $Re_{\tau}=180$ generated
through direct numerical simulation. Coarse wall measurements have been
generated with three different downsampling factors $f_d=[4,8,16]$ from the
high-resolution fields, and wall-parallel velocity fields have been
reconstructed at four inner-scaled wall-normal distances $y^+=[15,30,50,100]$.
We first show that SRGAN can be used to enhance the resolution of coarse wall
measurements. Compared with direct reconstruction from the coarse wall
measurements alone, SRGAN provides better instantaneous reconstructions, both
in terms of mean-squared error and spectral-fractional error. Even though lower
resolutions in the input wall data make it more challenging to achieve highly
accurate predictions, the proposed SRGAN-based network yields very good
reconstruction results. Furthermore, it is shown that even for the most
challenging cases the SRGAN is capable of capturing the large-scale structures
that populate the flow. The proposed novel methodology has great potential for
closed-loop control applications relying on non-intrusive sensing.
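As a side note, the coarse-graining and error metric described above can be sketched in a few lines (assuming block averaging as the downsampling operator and naive nearest-neighbour upsampling as a non-SRGAN baseline; the SRGAN itself is not reproduced here):

```python
import numpy as np

def downsample(field, fd):
    """Coarsen a 2-D wall field by averaging non-overlapping fd x fd blocks."""
    h, w = field.shape
    return field.reshape(h // fd, fd, w // fd, fd).mean(axis=(1, 3))

def upsample_nearest(coarse, fd):
    """Naive baseline reconstruction: repeat each coarse value fd times per axis."""
    return np.repeat(np.repeat(coarse, fd, axis=0), fd, axis=1)

def mse(a, b):
    """Mean-squared error between two fields."""
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
hi_res = rng.standard_normal((64, 64))   # stand-in for a DNS wall field
for fd in (4, 8, 16):                    # downsampling factors from the text
    coarse = downsample(hi_res, fd)
    print(fd, coarse.shape, round(mse(hi_res, upsample_nearest(coarse, fd)), 3))
```

Larger `fd` discards more small-scale content, so any reconstruction must infer the missing scales, which is where the adversarial prior helps.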
|
We present a novel deep neural model for text detection in document images.
For robust text detection in noisy scanned documents, the advantages of
multi-task learning are adopted by adding an auxiliary task of text
enhancement. Namely, our proposed model is designed to perform noise reduction
and text region enhancement as well as text detection. Moreover, we enrich the
training data for the model with synthesized document images that are fully
labeled for text detection and enhancement, thus overcoming the scarcity of
labeled document image data. For the effective exploitation of the synthetic
and real data, the training process is separated into two phases. In the first
phase, the model is trained only on synthetic data in a fully supervised
manner. Then real data with only detection labels are added in the second
phase. The enhancement task
for the real data is weakly-supervised with information from their detection
labels. Our method is demonstrated on a real document dataset, with performance
exceeding that of other text detection methods. Moreover, ablations are
conducted, and the results confirm the effectiveness of the synthetic data, the
auxiliary task, and the weak supervision. Whereas existing text detection
studies mostly focus on scene text, our proposed method is optimized for text
in scanned documents.
|
The outbreak of the COVID-19 pandemic triggered an infodemic on online social
networks. It is thus important for governments to ensure their official
messages outpace misinformation and efficiently reach the public. Some of the
countries and regions currently worst affected by the virus, including Europe,
South America and India, face an additional difficulty: multilingualism.
Understanding the specific role of multilingual users in the process of
information diffusion is critical for the governments of such countries and
regions to adjust their publishing strategies. In this paper, we
investigate the role of multilingual users in diffusing information during the
COVID-19 pandemic on popular social networks. We collect a large-scale dataset
of Twitter from a populated multilingual region from the beginning of the
pandemic. With this dataset, we successfully show that multilingual users act
as bridges in diffusing COVID-19 related information. We further study the
mental health of multilingual users and show that, acting as bridges, they
tend to be more negative. This is consistent with a recent psychological study
stating that excessive exposure to social media may result in a negative mood.
|
Cell-free (CF) massive multiple-input multiple-output (MIMO) is a promising
solution to provide uniformly good performance for unmanned aerial vehicle (UAV)
communications. In this paper, we propose the UAV communication with wireless
power transfer (WPT) aided CF massive MIMO systems, where the harvested energy
(HE) from the downlink WPT is used to support both uplink data and pilot
transmission. We derive novel closed-form downlink HE and uplink spectral
efficiency (SE) expressions that take hardware impairments of UAV into account.
UAV communications with current small cell (SC) and cellular massive MIMO
enabled WPT systems are also considered for comparison. Notably, CF massive
MIMO achieves two and five times higher 95\%-likely uplink SE than SC and
cellular massive MIMO, respectively. Besides, the
large-scale fading decoding receiver cooperation can reduce the interference of
the terrestrial user. Moreover, the maximum SE can be achieved by changing the
time-splitting fraction. We prove that the optimal time-splitting fraction for
maximum SE is determined by the number of antennas, altitude and hardware
quality factor of UAVs. Furthermore, we propose three UAV trajectory design
schemes to improve the SE. Interestingly, the angle search scheme performs
better than both the AP search and line path schemes. Finally, simulation
results are presented to validate the accuracy of our expressions.
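The trade-off behind the optimal time-splitting fraction can be illustrated with a toy model (an assumed uplink SNR of the form `gain * tau / (1 - tau)`; the paper's actual closed-form HE and SE expressions are not reproduced here):

```python
import numpy as np

def uplink_se(tau, gain=10.0):
    """Toy uplink SE: the UAV harvests energy for a fraction tau of each block
    and transmits for (1 - tau). Harvested energy, hence SNR, grows with tau,
    while the pre-log factor (1 - tau) shrinks, yielding an interior optimum."""
    return (1.0 - tau) * np.log2(1.0 + gain * tau / (1.0 - tau))

taus = np.linspace(0.01, 0.99, 99)
best = float(taus[np.argmax(uplink_se(taus))])
print(round(best, 2))
```

In the paper the optimum additionally depends on the number of antennas, UAV altitude and hardware quality factor; here those effects are collapsed into the single assumed `gain` parameter.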
|
This paper is concerned with the Richards equation in a heterogeneous domain,
each subdomain of which is homogeneous and represents a rocktype. Our first
contribution is to rigorously prove convergence toward a weak solution of
cell-centered finite-volume schemes with upstream mobility and without
Kirchhoff's transform. Our second contribution is to numerically demonstrate
the relevance of locally refining the grid at the interface between subregions,
where discontinuities occur, in order to preserve an acceptable accuracy for
the results computed with the schemes under consideration.
|
Following the pandemic outbreak, several works have proposed to diagnose
COVID-19 with deep learning in computed tomography (CT), reporting performance
on par with experts. However, models trained/tested on the same in-distribution
data may rely on the inherent data biases for successful prediction, failing to
generalize on out-of-distribution samples or CT with different scanning
protocols. Early attempts have partly addressed bias-mitigation and
generalization through augmentation or re-sampling, but are still limited by
collection costs and the difficulty of quantifying bias in medical images. In
this work, we propose Mixing-AdaSIN; a bias mitigation method that uses a
generative model to generate de-biased images by mixing texture information
between different labeled CT scans with semantically similar features. Here, we
use Adaptive Structural Instance Normalization (AdaSIN) to enhance de-biasing
generation quality and guarantee structural consistency. Subsequently, a
classifier trained with the generated images learns to correctly predict the
label without bias and generalizes better. To demonstrate the efficacy of our
method, we construct a biased COVID-19 vs. bacterial pneumonia dataset based on
CT protocols and compare with existing state-of-the-art de-biasing methods. Our
experiments show that classifiers trained with de-biased generated images
report improved in-distribution performance and generalization on an external
COVID-19 dataset.
|
We propose a solution to the longstanding permalloy problem: why the
particular composition of permalloy, Fe$_{21.5}$Ni$_{78.5}$, achieves a
dramatic drop in hysteresis, while its material constants show no obvious
signal of this behavior. We use our recently developed coercivity tool to show
that a delicate balance between local instabilities and magnetic material
constants is necessary to explain the dramatic drop of hysteresis at 78.5% Ni.
Our findings are in agreement with the permalloy experiments and, more broadly,
provide theoretical guidance for the discovery of novel low hysteresis magnetic
alloys.
|
In this letter, we present a novel method for automatic extrinsic calibration
of high-resolution LiDARs and RGB cameras in targetless environments. Our
approach does not require checkerboards but can achieve pixel-level accuracy by
aligning natural edge features in the two sensors. On the theory level, we
analyze the constraints imposed by edge features and the sensitivity of
calibration accuracy with respect to edge distribution in the scene. On the
implementation level, we carefully investigate the physical measuring
principles of LiDARs and propose an efficient and accurate LiDAR edge
extraction method based on point cloud voxel cutting and plane fitting. Due to
the edges' richness in natural scenes, we have carried out experiments in many
indoor and outdoor scenes. The results show that this method achieves high
robustness, accuracy, and consistency, and can promote research on and
applications of LiDAR-camera fusion. We have open-sourced our code on GitHub
to benefit the community.
|
We systematically investigated the heating of coronal loops on metal-free
stars with various stellar masses and magnetic fields by magnetohydrodynamic
simulations. We find that the coronal properties depend on the coronal
magnetic field strength $B_{\rm c}$ because it determines the nonlinearity of
the Alfv\'{e}nic waves. Weaker $B_{\rm c}$ leads to cooler and
less dense coronae because most of the input waves dissipate in the lower
atmosphere on account of the larger nonlinearity. Accordingly, the EUV and
X-ray luminosities also correlate with $B_{\rm c}$, although they are emitted
over a wide range of field strengths. Finally, we extend our results to
evaluate the contribution of low-mass Population III coronae to the cosmic
reionization. Within the limited range of our parameters on magnetic fields
and loop lengths, the EUV and X-ray radiation has only a weak impact on the
ionization and heating of the gas at high redshifts. However, energetic flares
involving long magnetic loops may still contribute to the reionization.
|
This paper introduces a new benchmark for large-scale image similarity
detection. This benchmark is used for the Image Similarity Challenge at
NeurIPS'21 (ISC2021). The goal is to determine whether a query image is a
modified copy of any image in a reference corpus of 1 million images. The
benchmark features a variety of image transformations such as automated
transformations, hand-crafted image edits and machine-learning based
manipulations. This mimics real-life cases appearing in social media, for
example for integrity-related problems dealing with misinformation and
objectionable content. The strength of the image manipulations, and therefore
the difficulty of the benchmark, is calibrated according to the performance of
a set of baseline approaches. Both the query and reference set contain a
majority of "distractor" images that do not match, which corresponds to a
real-life needle-in-haystack setting, and the evaluation metric reflects that.
We expect the DISC21 benchmark to promote image copy detection as an important
and challenging computer vision task and refresh the state of the art.
|
We address the problem of learning continuous exponential-family
distributions with unbounded support. While a lot of progress has been made on
learning of Gaussian graphical models, we are still lacking scalable algorithms
for reconstructing general continuous exponential families modeling
higher-order moments of the data beyond the mean and the covariance. Here, we
introduce a computationally efficient method for learning continuous graphical
models based on the Interaction Screening approach. Through a series of
numerical experiments, we show that our estimator maintains similar
requirements in terms of accuracy and sample complexity compared to alternative
approaches such as maximization of conditional likelihood, while considerably
improving upon the algorithm's run-time.
|
We examine rotational transitions of HCl in collisions with H$_2$ by carrying
out quantum mechanical close-coupling and quasi-classical trajectory
calculations on a recently developed globally accurate full-dimensional ab
initio potential energy surface for the H$_3$Cl system. Signatures of rainbow
scattering in rotationally inelastic collisions are found in the state resolved
integral and differential cross sections as functions of the impact parameter
(initial orbital angular momentum) and final rotational quantum number. We show
the coexistence of distinct dynamical regimes for the HCl rotational
transitions, driven by the short-range repulsive and long-range attractive
forces, whose relative importance depends on the collision energy and final
rotational state. This suggests that classifying rainbow scattering into
rotational and $l$-type rainbows is effective for H$_2$+HCl collisions. While the
quasi-classical trajectory method satisfactorily predicts the overall behavior
of the rotationally inelastic cross sections, its capability to accurately
describe signatures of rainbow scattering appears to be limited for the present
system.
|
The ongoing shift of cloud services from monolithic designs to microservices
creates high demand for efficient and high performance datacenter networking
stacks, optimized for fine-grained workloads. Commodity networking systems
based on software stacks and peripheral NICs introduce high overheads when it
comes to delivering small messages.
We present Dagger, a hardware acceleration fabric for cloud RPCs based on
FPGAs, where the accelerator is closely-coupled with the host processor over a
configurable memory interconnect. The three key design principles of Dagger are:
(1) offloading the entire RPC stack to an FPGA-based NIC, (2) leveraging memory
interconnects instead of PCIe buses as the interface with the host CPU, and (3)
making the acceleration fabric reconfigurable, so it can accommodate the
diverse needs of microservices. We show that the combination of these
principles significantly improves the efficiency and performance of cloud RPC
systems while preserving their generality. Dagger achieves 1.3-3.8x higher
per-core RPC throughput compared to both highly-optimized software stacks, and
systems using specialized RDMA adapters. It also scales up to 84 Mrps with 8
threads on 4 CPU cores, while maintaining state-of-the-art μs-scale tail
latency. We also demonstrate that large third-party applications, like
memcached and MICA KVS, can be easily ported on Dagger with minimal changes to
their codebase, bringing their median and tail KVS access latency down to
2.8-3.5 μs and 5.4-7.8 μs, respectively. Finally, we show that Dagger is beneficial
for multi-tier end-to-end microservices with different threading models by
evaluating it using an 8-tier application implementing a flight check-in
service.
|
Numerous electronic cash schemes have been proposed over the years; however,
none has been embraced by financial institutions as an alternative to fiat
currency. David Chaum's ecash scheme came closest to mimicking a modern-day
currency system, with the important property that it provided anonymity for
users when purchasing coins from a bank and subsequently spending them at a
merchant's premises. However, it lacked a crucial element present in current
fiat-based systems: the ability to continuously spend or transfer coins.
Bitcoin reignited interest in cryptocurrencies in the last decade, but it is
now seen more as an asset store than as a financial instrument. One notable
outcome of the Bitcoin system is the blockchain and its associated distributed
consensus protocols. In this paper
we propose a transferable electronic cash scheme using blockchain technology
which allows users to continuously reuse coins within the system.
|
We present a bottom-up differentiable relaxation of the process of drawing
points, lines and curves into a pixel raster. Our approach arises from the
observation that rasterising a pixel in an image given parameters of a
primitive can be reformulated in terms of the primitive's distance transform,
and then relaxed to allow the primitive's parameters to be learned. This
relaxation allows end-to-end differentiable programs and deep networks to be
learned and optimised and provides several building blocks that allow control
over how a compositional drawing process is modelled. We emphasise the
bottom-up nature of our proposed approach, which allows for drawing operations
to be composed in ways that can mimic the physical reality of drawing rather
than being tied to, for example, approaches in modern computer graphics. With
the proposed approach we demonstrate how sketches can be generated by directly
optimising against photographs and how auto-encoders can be built to transform
rasterised handwritten digits into vectors without supervision. Extensive
experimental results highlight the power of this approach under different
modelling assumptions for drawing tasks.
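A minimal sketch of the distance-transform relaxation for a single point primitive (the Gaussian relaxation, grid size and σ are illustrative choices, not necessarily the paper's):

```python
import numpy as np

def raster_point(px, py, size=16, sigma=1.0):
    """Soft-rasterise a point: pixel value = exp(-d^2 / (2 sigma^2)), where d
    is the distance from the pixel centre to the primitive. Because this map
    is smooth in (px, py), gradients with respect to the point's parameters
    exist, unlike in hard rasterisation."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    d2 = (xs - px) ** 2 + (ys - py) ** 2   # squared distance transform
    return np.exp(-d2 / (2.0 * sigma ** 2))

img = raster_point(5.0, 8.0)
print(img.shape, round(img[8, 5], 3))   # brightest exactly at the point
```

Lines and curves follow the same recipe with their respective distance transforms, which is what allows drawing operations to be composed.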
|
When the density of the fluid surrounding suspended Brownian particles is
appreciable, forces beyond those in the traditional Ornstein-Uhlenbeck theory
of Brownian motion come into play: the displaced fluid in the vicinity of the
randomly moving Brownian particle acts back on the particle, giving rise to
long-range force correlations that manifest as a ``long-time tail'' in the
decay of the velocity autocorrelation function, a phenomenon known as
hydrodynamic memory. In this paper, after recognizing that
for Brownian particles immersed in a Newtonian, viscous fluid, the hydrodynamic
memory term in the generalized Langevin equation is essentially the 1/2
fractional derivative of the velocity of the Brownian particle, we present a
rheological analogue for Brownian motion with hydrodynamic memory which
consists of a linear dashpot of a fractional Scott-Blair element and an
inerter. The synthesis of the proposed mechanical network that is suggested
from the structure of the generalized Langevin equation simplifies appreciably
the calculations of the mean-square displacement and its time-derivatives which
can also be expressed in terms of the two-parameter Mittag--Leffler function.
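To make the "1/2 fractional derivative" concrete, here is a minimal Grünwald-Letnikov approximation (a generic numerical scheme, not the paper's rheological construction), checked against the known result D^{1/2} t = 2*sqrt(t/pi):

```python
import numpy as np
from math import pi, sqrt

def gl_fractional_derivative(f, t, alpha=0.5, n=20000):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t,
    using the recurrence w_k = w_{k-1} * (k - 1 - alpha) / k for the signed
    binomial weights (-1)^k * C(alpha, k)."""
    h = t / n
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return h ** (-alpha) * np.dot(w, f(t - h * np.arange(n + 1)))

approx = gl_fractional_derivative(lambda s: s, 1.0)   # D^{1/2} of f(t)=t at t=1
exact = 2.0 / sqrt(pi)
print(round(float(approx), 3), round(exact, 3))
```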
|
The discrimination of quantum processes, including quantum states, channels,
and superchannels, is a fundamental topic in quantum information theory. It is
often of interest to analyze the optimal performance that can be achieved when
discrimination strategies are restricted to a given subset of all strategies
allowed by quantum mechanics. In this paper, we present a general formulation
of the task of finding the maximum success probability for discriminating
quantum processes as a convex optimization problem whose Lagrange dual problem
exhibits zero duality gap. The proposed formulation can be applied to any
restricted strategy. We also derive necessary and sufficient conditions for an
optimal restricted strategy to be optimal within the set of all strategies. We
provide a simple example in which the dual problem given by our formulation can
be much easier to solve than the original problem. We also show that the
optimal performance of each restricted process discrimination problem can be
written in terms of a certain robustness measure. This finding has the
potential to provide a deeper insight into the discrimination performance of
various restricted strategies.
|
The spinel-structure CuIr$_{2}$S$_{4}$ compound displays a rather unusual
orbitally-driven three-dimensional Peierls-like insulator-metal transition. The
low-T symmetry-broken insulating state is especially interesting due to the
existence of a metastable irradiation-induced disordered weakly conducting
state. Here we study intense femtosecond optical pulse irradiation effects by
means of the all-optical ultrafast multi-pulse time-resolved spectroscopy. We
show that the structural coherence of the low-T broken symmetry state is
strongly suppressed on a sub-picosecond timescale above a threshold excitation
fluence resulting in a structurally inhomogeneous transient state which
persists for several-tens of picoseconds before reverting to the low-T
disordered weakly conducting state. The electronic order shows a transient gap
filling at a significantly lower fluence threshold. The data suggest that the
photoinduced-transition dynamics to the high-T metallic phase is governed by
first-order-transition nucleation kinetics that prevents the complete ultrafast
structural transition even when the absorbed energy significantly exceeds the
equilibrium enthalpy difference to the high-T metallic phase. In contrast, the
dynamically-decoupled electronic order is transiently suppressed on a
sub-picosecond timescale rather independently due to a photoinduced Mott
transition.
|
The Compressed Baryonic Matter~(CBM) experiment in the upcoming Facility for
Antiproton and Ion Research~(FAIR), designed to take data in nuclear collisions
at very high interaction rates of up to 10 MHz, will employ a free-streaming
data acquisition with self-triggered readout electronics, without any hardware
trigger. A simulation framework with a realistic digitization of the detectors
in the muon chamber (MuCh) subsystem in CBM has been developed to provide a
realistic simulation of the time-stamped data stream. In this article, we
describe the implementation of the free-streaming detector simulation and the
basic data-related effects on the detector with respect to the interaction
rate.
|
Under the last-in, first-out (LIFO) discipline, jobs arriving later at a
class always receive priority of service over earlier arrivals at any class
belonging to the same station. Subcritical LIFO queueing networks with Poisson
external arrivals are known to be stable, but an open problem has been whether
this is also the case when external arrivals are given by renewal processes.
Here, we show that this weaker assumption is not sufficient for stability by
constructing a family of examples where the number of jobs in the network
increases to infinity over time.
This behavior contrasts with that for the other classical disciplines:
processor sharing (PS), infinite server (IS), and first-in, first-out (FIFO),
which are stable under general conditions on the renewals of external arrivals.
Together with LIFO, PS and IS constitute the classical symmetric disciplines;
with the inclusion of FIFO, these disciplines constitute the classical
homogeneous disciplines. Our examples show that a general theory for stability
of either family is doubtful.
|
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a highly
contagious virus responsible for coronavirus disease 2019 (CoViD-19). The
symptoms of CoViD-19 are essentially reflected in the respiratory system,
although other organs are also affected. More than 2.4 million people have died
worldwide due to this disease. Although CoViD-19 vaccines are already being
administered, alternative treatments that address the immunopathology of the
infection are still needed. To this end, deeper knowledge of how our immune
system responds to SARS-CoV-2 is required. In this study, we propose a
non-integer order model to understand the dynamics of cytotoxic T lymphocytes
in the presence of SARS-CoV-2. We calculated the basic reproduction number and
analysed the values of the model parameters in order to understand which ones
inhibit or enhance the progression of the infection. Numerical simulations were
performed for different values of the order of the fractional derivative and
for different proliferation functions of cytotoxic T lymphocytes.
|
Recently, nearly complete intersection ideals were defined by Boocher and
Seiner to establish lower bounds on Betti numbers for monomial ideals
(arXiv:1706.09866). Stone and Miller then characterized nearly complete
intersections using the theory of edge ideals (arXiv:2101.07901). We extend
their work to fully characterize nearly complete intersections of arbitrary
generating degrees and use this characterization to compute minimal free
resolutions of nearly complete intersections from their degree 2 part.
|
We propose to reinterpret Einstein's field equations as a nonlinear
eigenvalue problem, where the cosmological constant $\Lambda$ plays the role of
the (smallest) eigenvalue. This interpretation is fully worked out for a simple
model of scalar gravity. The essential ingredient for the feasibility of this
approach is that the classical field equations be nonlinear, i.e., that the
gravitational field is itself a source of gravity. The cosmological
consequences and implications of this approach are developed and discussed.
|
Given a real function $f$, the rate function for the large deviations of the
diffusion process of drift $\nabla f$ given by the Freidlin-Wentzell theorem
coincides with the time integral of the energy dissipation for the gradient
flow associated with $f$. This paper is concerned with the stability in the
hilbertian framework of this common action functional when $f$ varies. More
precisely, we show that if $(f_h)_h$ is uniformly $\lambda$-convex for some
$\lambda \in \mathbb{R}$ and converges towards $f$ in the sense of Mosco
convergence, then the related functionals $\Gamma$-converge in the strong
topology of curves.
|
Adding noise to artificial neural networks (ANNs) has been shown to improve
robustness in previous work. In this work, we propose a new technique
to compute the pathwise stochastic gradient estimate with respect to the
standard deviation of the Gaussian noise added to each neuron of the ANN. By
our proposed technique, the gradient estimate with respect to noise levels is a
byproduct of the backpropagation algorithm for estimating gradient with respect
to synaptic weights in the ANN. Thus, the noise level for each neuron can be
optimized simultaneously in the process of training the synaptic weights, at
nearly no extra computational cost. In numerical experiments, our proposed
method can achieve significant performance improvement on robustness of several
popular ANN structures under both black box and white box attacks tested in
various computer vision datasets.
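The "free byproduct" claim can be checked on a single neuron (illustrative variable names and a squared loss assumed; not the paper's full network):

```python
import numpy as np

# Single noisy neuron: y = w * x + sigma * eps, loss L = (y - target)^2.
# The pathwise estimate reuses the backpropagated error dL/dy for both the
# weight gradient (times x) and the noise-level gradient (times eps), so
# learning sigma costs almost nothing on top of ordinary backprop.
rng = np.random.default_rng(1)
w, sigma, x, target = 0.7, 0.3, 2.0, 1.0
eps = rng.standard_normal()
y = w * x + sigma * eps
dL_dy = 2.0 * (y - target)        # standard backprop delta
grad_w = dL_dy * x                # usual weight gradient
grad_sigma = dL_dy * eps          # noise-level gradient: a free byproduct

# Finite-difference check of grad_sigma with the same noise draw (pathwise).
h = 1e-6
L = lambda s: (w * x + s * eps - target) ** 2
fd = (L(sigma + h) - L(sigma - h)) / (2 * h)
print(round(grad_sigma, 6), round(fd, 6))
```

The same delta-times-input pattern extends layer by layer, which is why the noise levels can be trained alongside the weights in one backward pass.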
|
Mobile edge computing (MEC) is regarded as a promising wireless access
architecture to alleviate the intensive computation burden at resource limited
mobile terminals (MTs). Allowing the MTs to offload partial tasks to MEC
servers could significantly decrease task processing delay. In this study, to
minimize the processing delay for a multi-user MEC system, we jointly optimize
the local content splitting ratio, the transmission/computation power
allocation, and the MEC server selection under a dynamic environment with
time-varying task arrivals and wireless channels. The reinforcement learning
(RL) technique is utilized to deal with the considered problem. Two deep RL
strategies, that is, deep Q-learning network (DQN) and deep deterministic
policy gradient (DDPG), are proposed to efficiently learn the offloading
policies adaptively. The proposed DQN strategy takes the MEC selection as a
unique action while using a convex optimization approach to obtain the
remaining variables, whereas the DDPG strategy takes all dynamic variables as
actions. Numerical results demonstrate that both proposed strategies perform
better than existing schemes, and that the DDPG strategy is superior to the
DQN strategy, as it can learn all variables online, although it requires
relatively high complexity.
|
We present counterfactual planning as a design approach for creating a range
of safety mechanisms that can be applied in hypothetical future AI systems
which have Artificial General Intelligence.
The key step in counterfactual planning is to use an AGI machine learning
system to construct a counterfactual world model, designed to be different from
the real world the system is in. A counterfactual planning agent determines the
action that best maximizes expected utility in this counterfactual planning
world, and then performs the same action in the real world.
We use counterfactual planning to construct an AGI agent emergency stop
button, and a safety interlock that will automatically stop the agent before it
undergoes an intelligence explosion. We also construct an agent with an input
terminal that can be used by humans to iteratively improve the agent's reward
function, where the incentive for the agent to manipulate this improvement
process is suppressed. As an example of counterfactual planning in a non-agent
AGI system, we construct a counterfactual oracle.
As a design approach, counterfactual planning is built around the use of a
graphical notation for defining mathematical counterfactuals. This two-diagram
notation also provides a compact and readable language for reasoning about the
complex types of self-referencing and indirect representation which are
typically present inside machine learning agents.
|
We reassess an alternative CPT-odd electrodynamics obtained from a
Palatini-like procedure. Starting from a more general situation, we analyze the
physical consistency of the model for different values of the parameter
introduced in the mass tensor. We show that there is a residual gauge
invariance in the model if the local transformation is taken to vary only in
the direction of the Lorentz-breaking vector.
|
Atomic-level van der Waals (vdW) heterostructures are among the most
interesting quantum material systems, owing to their exotic physical
properties. The interlayer coupling in these systems plays a critical role in
realizing novel physical phenomena and enriching interface functionality.
However, the interlayer coupling has rarely been tuned in a quantitative way.
A prospective strategy to tune the interlayer coupling is to
change the electronic structure and interlayer distance by high pressure, which
is a well-established method to tune the physical properties. Here, we
construct a high-quality WS2/MoSe2 heterostructure in a diamond anvil cell
(DAC) and successfully tune the interlayer coupling through hydrostatic
pressure. Typical photoluminescence spectra of monolayer MoSe2 (ML-MoSe2),
monolayer WS2 (ML-WS2) and the WS2/MoSe2 heterostructure have been observed
and, intriguingly, their photoluminescence peaks shift with applied pressure
in quite different ways. The intralayer excitons of ML-MoSe2 and ML-WS2
blueshift under high pressure with coefficients of 19.8 meV/GPa and 9.3
meV/GPa, respectively, while the interlayer exciton shows a relatively weak
pressure dependence with a coefficient of 3.4 meV/GPa. Meanwhile, external pressure
helps to drive stronger interlayer interaction and results in a higher ratio of
interlayer/intralayer exciton intensity, indicating the enhanced interlayer
exciton behavior. First-principles calculations confirm the stronger
interlayer interaction that leads to the enhanced interlayer-exciton behavior
in the WS2/MoSe2 heterostructure under external pressure and reveal the
robustness of the interlayer-exciton peak. This work provides an effective
strategy to study the
interlayer interaction in vdW heterostructures, which could be of great
importance for the material and device design in various similar quantum
systems.
|
It is well-known that each statistic in the family of power divergence
statistics, across $n$ trials and $r$ classifications with index parameter
$\lambda\in\mathbb{R}$ (the Pearson, likelihood ratio and Freeman-Tukey
statistics correspond to $\lambda=1,0,-1/2$, respectively) is asymptotically
chi-square distributed as the sample size tends to infinity. In this paper, we
obtain explicit bounds on this distributional approximation, measured using
smooth test functions, that hold for a given finite sample $n$, and all index
parameters ($\lambda>-1$) for which such finite sample bounds are meaningful.
We obtain bounds that are of the optimal order $n^{-1}$. The dependence of our
bounds on the index parameter $\lambda$ and the cell classification
probabilities is also optimal, and the dependence on the number of cells is
also respectable. Our bounds generalise, complement and improve on recent
results from the literature.
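The power-divergence family discussed above is implemented in SciPy, where the index parameter is exposed as `lambda_`. The following is a small illustrative sketch with invented counts (a uniform null over four classes); it is not tied to the paper's bounds, only to the family of statistics itself.

```python
import numpy as np
from scipy.stats import power_divergence

# Observed counts over r = 4 classes from n = 100 trials, uniform null.
f_obs = np.array([30, 24, 26, 20])
f_exp = np.array([25, 25, 25, 25])

# The family is indexed by lambda: Pearson (lambda = 1), likelihood ratio
# (lambda = 0), Freeman-Tukey (lambda = -1/2).  Each statistic is
# asymptotically chi-square with r - 1 = 3 degrees of freedom.
for lam, name in [(1, "Pearson"), (0, "likelihood ratio"), (-0.5, "Freeman-Tukey")]:
    stat, p = power_divergence(f_obs, f_exp=f_exp, lambda_=lam)
    print(f"{name}: T = {stat:.3f}, p = {p:.3f}")
```

For these counts the Pearson statistic (lambda = 1) is 52/25 = 2.08; the three statistics are close to each other, as the asymptotic theory suggests for moderate n.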
|
We present the first results of the Fermilab Muon g-2 Experiment for the
positive muon magnetic anomaly $a_\mu \equiv (g_\mu-2)/2$. The anomaly is
determined from the precision measurements of two angular frequencies.
Intensity variation of high-energy positrons from muon decays directly encodes
the difference frequency $\omega_a$ between the spin-precession and cyclotron
frequencies for polarized muons in a magnetic storage ring. The storage ring
magnetic field is measured using nuclear magnetic resonance probes calibrated
in terms of the equivalent proton spin precession frequency
${\tilde{\omega}'^{}_p}$ in a spherical water sample at 34.7$^{\circ}$C. The
ratio $\omega_a / {\tilde{\omega}'^{}_p}$, together with known fundamental
constants, determines $a_\mu({\rm FNAL}) = 116\,592\,040(54)\times 10^{-11}$
(0.46\,ppm). The result is 3.3 standard deviations greater than the standard
model prediction and is in excellent agreement with the previous Brookhaven
National Laboratory (BNL) E821 measurement. After combination with previous
measurements of both $\mu^+$ and $\mu^-$, the new experimental average of
$a_\mu({\rm Exp}) = 116\,592\,061(41)\times 10^{-11}$ (0.35\,ppm) increases the
tension between experiment and theory to 4.2 standard deviations.
|
Recently, antiferromagnets have received revived interest due to their
significant potential for developing next-generation ultrafast magnetic
storage. Here we report dc spin pumping by the acoustic resonant mode in a
canted easy-plane antiferromagnet {\alpha}-Fe2O3 enabled by the
Dzyaloshinskii-Moriya interaction. Systematic angle and frequency dependent
measurements demonstrate that the observed spin pumping signals arise from
resonance-induced spin injection and inverse spin Hall effect in
{\alpha}-Fe2O3/metal heterostructures, mimicking the behavior of spin pumping
in conventional ferromagnet/nonmagnet systems. The pure spin current nature is
further corroborated by reversal of the polarity of spin pumping signals when
the spin detector is switched from platinum to tungsten which has an opposite
sign of the spin Hall angle. Our results highlight the potential opportunities
offered by the low-frequency acoustic resonant mode in canted easy-plane
antiferromagnets for developing next-generation, functional spintronic devices.
|
The worldsheet string theory dual to free 4d ${\cal N}=4$ super Yang-Mills
theory was recently proposed in arXiv:2104.08263. It is described by a free
field sigma model on the twistor space of ${\rm AdS}_5\times {\rm S}^5$, and is
a direct generalisation of the corresponding model for tensionless string
theory on ${\rm AdS}_3\times {\rm S}^3$. As in the case of ${\rm AdS}_3$, the
worldsheet theory contains spectrally flowed representations. We proposed in
arXiv:2104.08263 that in each such sector only a finite set of generalised zero
modes (`wedge modes') are physical. Here we show that after imposing the
appropriate residual gauge conditions, this worldsheet description reproduces
precisely the spectrum of the planar gauge theory. More specifically, the
states in the sector with $w$ units of spectral flow match with single trace
operators built out of $w$ super Yang-Mills fields (`letters'). The resulting
physical picture is a covariant version of the BMN light-cone string, now with
a finite number of twistorial string bit constituents of an essentially
topological worldsheet.
|
Questions concerning quantitative and asymptotic properties of the elliptic
measure corresponding to a uniformly elliptic divergence form operator have
been the focus of recent studies. In this setting we show that the elliptic
measure of an operator with coefficients satisfying a vanishing Carleson
condition in the upper half space is an asymptotically optimal $A_\infty$
weight. In particular, for such operators the logarithm of the elliptic kernel
is in the space of (locally) vanishing mean oscillation.
To achieve this, we prove local, quantitative estimates on a quantity
(introduced by Fefferman, Kenig and Pipher) that controls the $A_\infty$
constant. Our work uses recent results obtained by David, Li and Mayboroda.
These quantitative estimates may offer a new framework to approach similar
problems.
|
Previous gesture elicitation studies have found that user proposals are
influenced by legacy bias which may inhibit users from proposing gestures that
are most appropriate for an interaction. Increasing production during
elicitation studies has shown promise in moving users beyond legacy gestures.
However, variety decreases as more symbols are produced. While several studies
have used increased production since its introduction, little research has
focused on understanding the effect on the proposed gesture quality, on why
variety decreases, and on whether increased production should be limited. In
this paper, we present a gesture elicitation study aimed at understanding the
impact of increased production. We show that users refine the most promising
gestures and that how long it takes to find promising gestures varies by
participant. We also show that gestural refinements provide insight into the
gestural features that matter for users to assign semantic meaning and discuss
implications for training gesture classifiers.
|
This paper proposes a learning and scheduling algorithm to minimize the
expected cumulative holding cost incurred by jobs, where statistical parameters
defining their individual holding costs are unknown a priori. In each time
slot, the server can process a job while receiving the realized random holding
costs of the jobs remaining in the system. Our algorithm is a learning-based
variant of the $c\mu$ rule for scheduling: it starts with a preemption period
of fixed length which serves as a learning phase, and after accumulating enough
data about individual jobs, it switches to nonpreemptive scheduling mode. The
algorithm is designed to handle instances with large or small gaps in jobs'
parameters and achieves near-optimal performance guarantees. The performance of
our algorithm is captured by its regret, where the benchmark is the minimum
possible cost attained when the statistical parameters of jobs are fully known.
We prove upper bounds on the regret of our algorithm, and we derive a regret
lower bound that is almost matching the proposed upper bounds. Our numerical
results demonstrate the effectiveness of our algorithm and show that our
theoretical regret analysis is nearly tight.
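The learn-then-commit structure described above can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' algorithm: the noise model, the fixed learning-phase length, and all numerical values are invented; it only mimics the two phases (a preemptive learning period observing realized holding costs, then a nonpreemptive ordering by the estimated index c_i * mu_i).

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_then_schedule(cost_means, service_rates, learn_slots=50):
    """Toy learn-then-commit variant of the c-mu rule.

    `cost_means` are the true mean holding costs per slot (unknown to the
    scheduler; used here only to simulate observations), and `service_rates`
    are the known service rates mu_i.  During the learning phase the scheduler
    observes noisy realized holding costs of the waiting jobs; it then commits
    to a nonpreemptive service order by decreasing estimated index
    c_hat_i * mu_i.
    """
    n = len(cost_means)
    # Learning phase: noisy holding-cost observations for each waiting job.
    samples = rng.normal(loc=cost_means, scale=1.0, size=(learn_slots, n))
    c_hat = samples.mean(axis=0)
    # Commit phase: nonpreemptive c-mu ordering based on the estimates.
    return np.argsort(-c_hat * np.asarray(service_rates))

order = learn_then_schedule([5.0, 1.0, 3.0], [1.0, 1.0, 1.0])
```

With equal service rates, the learned order simply ranks jobs by estimated holding-cost rate; the regret analysis in the paper quantifies how the length of the learning phase trades off against commitment errors.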
|
In this paper we show how to compute port behaviour of multiports which have
port equations of the general form $B v_P - Q i_P = s,$ which cannot even be
put into the hybrid form, and indeed may have a number of equations ranging from $0$
to $2n,$ where $n$ is the number of ports.
We do this through repeatedly solving with different source inputs, a larger
network obtained by terminating the multiport by its adjoint through a gyrator.
The method works for linear multiports which are consistent for arbitrary
internal source values and further have the property that the port conditions
uniquely determine internal conditions.
We also present the most general possible version of the maximum power
transfer theorem. This version of the theorem states that `stationarity' (derivative
zero condition) of power transfer occurs when the multiport is terminated by
its adjoint, provided the resulting network has a solution.
If this network does not have a solution there is no port condition for which
stationarity holds. This theorem does not require that the multiport has a
hybrid immittance matrix.
|
We characterise the sensitivity of several additive tensor decompositions
with respect to perturbations of the original tensor. These decompositions
include canonical polyadic decompositions, block term decompositions, and sums
of tree tensor networks. Our main result shows that the condition number of all
these decompositions is invariant under Tucker compression. This result can
dramatically speed up the computation of the condition number in practical
applications. We give the example of a $265\times 371\times 7$ tensor of rank
$3$ from a food science application whose condition number was computed in
$6.9$ milliseconds by exploiting our new theorem, representing a speedup of
four orders of magnitude over the previous state of the art.
|
Following Dieudonn\'e-Grothendieck, we give intrinsic definitions of \'etale,
lisse and non-ramifi\'e morphisms for general adic rings and general locally
convex rings. We then investigate the corresponding \'etale-like,
lisse-like and non-ramifi\'e-like morphisms for general $\infty$-Banach,
$\infty$-Born\'e and $\infty$-ind-Fr\'echet $\infty$-rings and
$\infty$-functors into $\infty$-groupoids (as in the work of
Bambozzi-Ben-Bassat-Kremnizer) in some intrinsic way by using the corresponding
infinitesimal stacks and crystalline stacks. The two directions of
generalization will intersect at Huber's book in the strongly noetherian
situation.
|
In this paper, we propose a novel graph learning framework for phrase
grounding in images. In moving from sequential to dense graph models,
existing works capture coarse-grained context but fail to distinguish
the diversity of context among phrases and image regions. In contrast, we pay
special attention to different motifs implied in the context of the scene graph
and devise the disentangled graph network to integrate the motif-aware
contextual information into representations. Besides, we adopt interventional
strategies at the feature and the structure levels to consolidate and
generalize representations. Finally, a cross-modal attention network is
utilized to fuse intra-modal features, computing the similarity between each
phrase and the candidate regions to select the best-grounded one. We validate the
efficiency of disentangled and interventional graph network (DIGN) through a
series of ablation studies, and our model achieves state-of-the-art performance
on Flickr30K Entities and ReferIt Game benchmarks.
|
I report a tentative ($\sim4\sigma$) emission line at $\nu=100.84\,$GHz from
"COS-3mm-1'", a 3mm-selected galaxy reported by Williams et al. 2019 that is
undetected at optical and near infrared wavelengths. The line was found in the
ALMA Science Archive after re-processing ALMA band 3 observations targeting a
different source. Assuming the line corresponds to the $\rm CO(6\to5)$
transition, this tentative detection implies a spectroscopic redshift of
$z=5.857$, in agreement with the galaxy's redshift constraints from
multi-wavelength photometry. This would make this object the highest redshift
3mm-selected galaxy and one of the highest redshift dusty star-forming galaxies
known to date. Here, I report the characteristics of this tentative detection
and the physical properties that can be inferred assuming the line is real.
Finally, I advocate for follow-up observations to corroborate this
identification and to confirm the high-redshift nature of this optically-dark
dusty star-forming galaxy.
|
We report monitoring of the BL Lac object 1ES 1218+304 in both the B- and
R-bands with the GWAC-F60A telescope over eight nights, after it was
triggered at its highest X-ray flux in history by the VERITAS Observatory and
Swift follow-ups. Both ANOVA and $\chi^2$-tests enable us to clearly reveal
intra-day variability at optical wavelengths in seven of the eight nights. A
bluer-when-brighter chromatic relationship has been clearly identified in
five of the eight nights, which can be well explained by the shock-in-jet
model. In addition, a quasi-periodic oscillation in both bands could be
tentatively identified in the first night. A positive delay between the two
bands has been revealed in three of the eight nights, and a negative one in
the other nights. The identified minimum time delay enables us to estimate a
black hole mass of $M_{\mathrm{BH}}=2.8\times10^7\,\rm M_{\odot}$.
|
Motivated by challenges in Earth mantle convection, we present a massively
parallel implementation of an Eulerian-Lagrangian method for the
advection-diffusion equation in the advection-dominated regime. The advection
term is treated by a particle-based characteristics method coupled to a
block-structured finite-element framework. Its numerical and computational
performance is evaluated in multiple two- and three-dimensional benchmarks,
including curved geometries, discontinuous solutions and pure advection, and it is
applied to a coupled non-linear system modeling buoyancy-driven convection in
Stokes flow. We demonstrate the parallel performance in a strong and weak
scaling experiment, with scalability to up to $147,456$ parallel processes,
solving for more than $5.2 \times 10^{10}$ (52 billion) degrees of freedom per
time-step.
|
We extend the new approach introduced in arXiv:1912.02064v2 [math.PR] and
arXiv:2102.10119v1 [math.PR] for dealing with stochastic Volterra equations
using the ideas of Rough Path theory and prove global existence and uniqueness
results. The main idea of this approach is simple: Instead of the iterated
integrals of a path comprising the data necessary to solve any equation driven
by that path, now iterated integral convolutions with the Volterra kernel
comprise said data. This leads to the corresponding abstract objects called
Volterra-type Rough Paths, as well as the notion of the convolution product, an
extension of the natural tensor product used in Rough Path Theory.
|
Handwritten document image binarization is challenging due to high
variability in the written content and complex background attributes such as
page style, paper quality, stains, shadow gradients, and non-uniform
illumination. While the traditional thresholding methods do not effectively
generalize on such challenging real-world scenarios, deep learning-based
methods have performed relatively well when provided with sufficient training
data. However, the existing datasets are limited in size and diversity. This
work proposes LS-HDIB - a large-scale handwritten document image binarization
dataset containing over a million document images that span numerous real-world
scenarios. Additionally, we introduce a novel technique that uses a combination
of adaptive thresholding and seamless cloning methods to create the dataset
with accurate ground truths. Through an extensive quantitative and qualitative
evaluation over eight different deep learning based models, we demonstrate the
enhancement in the performance of these models when trained on the LS-HDIB
dataset and tested on unseen images.
|
In the developing countries, most of the Manual Material Handling (MMH)
related tasks are labor-intensive. It is not possible for these countries to
assess injuries during lifting heavy weight, as multi-camera motion capture,
force plate and electromyography (EMG) systems are very expensive. In this
study, we proposed an easy to use, portable and low cost system, which will
help the developing countries to evaluate injuries for their workers. The
system consists of two hardware components and three software tools. The joint angle profiles
are collected using smartphone camera and Kinovea software. The vertical Ground
Reaction Force (GRF) is collected using Wii balance board and Brainblox
software. Finally, the musculoskeletal analysis is performed using OpenSim
Static Optimization tool to find the muscle force. The system will give us
access to a comprehensive biomechanical analysis, from collecting joint angle
profiles to generating muscle force profiles. This proposed framework has the
potential to assess and prevent MMH related injuries in developing countries.
|
When fitting statistical models, some predictors are often found to be
correlated with each other and to function together. Many group variable
selection methods are developed to select the groups of predictors that are
closely related to the continuous or categorical response. These existing
methods usually assume the group structures are well known: for example,
groups of variables with similar practical meaning, or dummy variables
created from categorical data. In practice, however, the exact group
structure is often unknown, especially when the variable dimension is large.
As a result, the group variable selection results may be unreliable. To
address this challenge, we propose a two-stage approach that combines a
variable clustering stage and a group variable selection stage.
variable clustering stage uses information from the data to find a group
structure, which improves the performance of the existing group variable
selection methods. For ultrahigh-dimensional data, where the number of
predictors is much larger than the number of observations, we incorporate a
variable screening method in the first stage and show the advantages of such
an approach. In this article,
we compared and discussed the performance of four existing group variable
selection methods under different simulation models, with and without the
variable clustering stage. The two-stage method shows better performance,
both in prediction accuracy and in the accuracy of selecting active
predictors. An athlete data set is also used to illustrate the advantages of
the proposed method.
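The two-stage pipeline can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the correlation-based hierarchical clustering, the proximal-gradient group lasso solver, the simulated factor design, and all tuning constants are assumptions made for the sketch.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(42)

# Synthetic design: 3 latent factors, 4 correlated proxy predictors each;
# only the first block of predictors drives the response.
n, gsize, k = 200, 4, 3
factors = rng.standard_normal((n, k))
X = np.repeat(factors, gsize, axis=1) + 0.3 * rng.standard_normal((n, k * gsize))
beta_true = np.zeros(k * gsize)
beta_true[:gsize] = [2.0, -1.5, 1.0, 0.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Stage 1: variable clustering on the correlation structure of the data.
dist = squareform(1 - np.abs(np.corrcoef(X, rowvar=False)), checks=False)
labels = fcluster(linkage(dist, method="average"), t=k, criterion="maxclust")
groups = [np.where(labels == g)[0] for g in range(1, k + 1)]

# Stage 2: group variable selection via proximal-gradient group lasso.
def group_lasso(X, y, groups, lam=0.05, n_iter=1000):
    """Minimize (1/2n)||y - Xb||^2 + lam * sum_g ||b_g||_2."""
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = beta - step * X.T @ (X @ beta - y) / n
        for g in groups:                          # group soft-thresholding
            nrm = np.linalg.norm(z[g])
            z[g] = 0.0 if nrm <= step * lam else (1 - step * lam / nrm) * z[g]
        beta = z
    return beta

beta_hat = group_lasso(X, y, groups)
```

Because the clustering stage recovers the latent blocks from the data, the group penalty acts on meaningful groups rather than an assumed partition, which is exactly the benefit the two-stage approach targets.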
|
Click-Through Rate (CTR) prediction plays an important role in many
industrial applications, and recently much attention has been paid to deep
interest models that use attention mechanisms to capture user interests from
historical behaviors. However, most current models are based on sequential
models which truncate the behavior sequences by a fixed length, thus have
difficulties in handling very long behavior sequences. Another big problem is
that sequences with the same length can be quite different in terms of time,
carrying completely different meanings. In this paper, we propose a
non-sequential approach to tackle the above problems. Specifically, we first
represent the behavior data in a sparse key-vector format, where the vector
contains rich behavior info such as time, count and category. Next, we enhance
the Deep Interest Network to take such rich information into account by a novel
attention network. The sparse representation makes it practical to handle large
scale long behavior sequences. Finally, we introduce a multidimensional
partition framework to mine behavior interactions. The framework can partition
data into custom designed time buckets to capture the interactions among
information aggregated in different time buckets. Similarly, it can also
partition the data into different categories and capture the interactions among
them. Experiments are conducted on two public datasets: one is an advertising
dataset and the other is a production recommender dataset. Our models
outperform other state-of-the-art models on both datasets.
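The sparse key-vector format with time-bucket partitioning can be illustrated with a toy example. The paper does not spell out its exact data layout, so everything below (the bucket edges, the per-bucket counts, the sample behavior log) is an invented stand-in that only conveys the idea: keys index behavior categories, and each value is a small vector aggregated per custom time bucket.

```python
from collections import defaultdict

# Hypothetical behavior log: (item_category, days_since_event).
behaviors = [("shoes", 1), ("shoes", 3), ("books", 10), ("shoes", 40), ("toys", 90)]

# Custom-designed time buckets (an assumption for illustration):
# last week / last month / last year.
buckets = [(0, 7), (7, 30), (30, 365)]

def to_sparse_key_vector(behaviors, buckets):
    """Aggregate a behavior log into a sparse key -> per-bucket count vector."""
    rep = defaultdict(lambda: [0] * len(buckets))
    for key, days_ago in behaviors:
        for i, (lo, hi) in enumerate(buckets):
            if lo <= days_ago < hi:
                rep[key][i] += 1
    return dict(rep)

rep = to_sparse_key_vector(behaviors, buckets)
# rep == {"shoes": [2, 0, 1], "books": [0, 1, 0], "toys": [0, 0, 1]}
```

Only categories that actually occur get an entry, so arbitrarily long histories stay compact, and the per-bucket vectors preserve the timing information that a fixed-length truncated sequence would lose.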
|
We present an idealized study of the baroclinic structure of a rotating,
stratified throughflow across a finite amplitude ridge in the f-plane that is
forced by a steady, uniform inflow and outflow of equal magnitude. The
resulting equilibrated circulation is characterized by unstable,
western-intensified boundary currents and flow along f/H contours atop the
ridge near the crest that closes the forced circulation. We find that bottom
Ekman dynamics localized to lateral boundary currents amplify the
stratification expected by simple stretching/squashing: a strongly stratified
bottom front (high PV anomaly) along the anticyclonic boundary current
associated with net upslope transport, and a bottom mixed layer front
(vanishing PV anomaly) localized to the cyclonic boundary current where there
is net downslope transport. PV anomalies associated with the two fronts are
advected by both the mean flow and eddies that result from baroclinic growth,
resulting in a spatial distribution where high PV is concentrated along the
ridge, and low PV is advected into the interior ocean downstream from the
ridge, at mid-depth. Using a framework of volume integrated PV conservation
which incorporates the net fluxes associated with bottom topography, we
confirm an approximate integral balance between the PV injection from the bottom
boundary layer on the ridge and net advection across the ridge. Implications of
these findings for understanding the interplay between large scale and bottom
boundary dynamics are discussed.
|
We elucidate the problem of estimating large-dimensional covariance matrices
in the presence of correlations between samples. To this end, we generalize the
Marcenko-Pastur equation and the Ledoit-Peche shrinkage estimator using methods
of random matrix theory and free probability. We develop an efficient algorithm
that implements the corresponding analytic formulas, based on the Ledoit-Wolf
kernel estimation technique. We also provide an associated open-source Python
library, called "shrinkage", with a user-friendly API to assist in practical
tasks of estimation of large covariance matrices. We present an example of its
usage for synthetic data generated according to exponentially-decaying
auto-correlations.
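The classical building block that the abstract generalizes is Ledoit-Wolf shrinkage for i.i.d. samples. The sketch below uses scikit-learn's implementation as a stand-in on synthetic data with exponentially-decaying auto-correlations (an AR(1) process per variable, mimicking the example setup); the "shrinkage" library's own API, which handles the sample correlations properly, is not shown here.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)

# Synthetic data: T samples of N variables with AR(1) auto-correlation
# across samples (decay parameter phi), unit marginal variance.
T, N, phi = 1000, 50, 0.3
eps = rng.standard_normal((T, N))
X = np.empty((T, N))
X[0] = eps[0]
for t in range(1, T):
    X[t] = phi * X[t - 1] + np.sqrt(1 - phi**2) * eps[t]

# Classical Ledoit-Wolf shrinkage assumes i.i.d. samples; with correlated
# samples its shrinkage intensity is generally miscalibrated, which is the
# gap the generalized estimator addresses.
lw = LedoitWolf().fit(X)
Sigma = lw.covariance_
print("shrinkage intensity:", lw.shrinkage_)
```

The fitted `shrinkage_` attribute is the estimated optimal convex weight between the sample covariance and a scaled identity target; comparing it across values of `phi` makes the effect of sample correlations visible.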
|
The chemical composition of the Sun is a fundamental yardstick in astronomy,
relative to which essentially all cosmic objects are referenced. We reassess
the solar abundances of all 83 long-lived elements, using highly realistic
solar modelling and state-of-the-art spectroscopic analysis techniques coupled
with the best available atomic data and observations. Our new improved analysis
confirms the relatively low solar abundances of C, N, and O obtained in our
previous 3D-based studies: $\log\epsilon_{\text{C}}=8.46\pm0.04$,
$\log\epsilon_{\text{N}}=7.83\pm0.07$, and
$\log\epsilon_{\text{O}}=8.69\pm0.04$. The revised solar abundances for the
other elements also typically agree well with our previously recommended values
with just Li, F, Ne, Mg, Cl, Kr, Rb, Rh, Ba, W, Ir, and Pb differing by more
than $0.05$ dex. The present-day photospheric metal mass fraction advocated
here is only slightly higher than our previous value, mainly due to the
revised Ne abundance from Genesis solar wind measurements: $X_{\rm
surface}=0.7438\pm0.0054$, $Y_{\rm surface}=0.2423\pm 0.0054$, $Z_{\rm
surface}=0.0139\pm 0.0006$, and $Z_{\rm surface}/X_{\rm surface}=0.0187\pm
0.0009$. Overall the solar abundances agree well with those of CI chondritic
meteorites but we identify a correlation with condensation temperature such
that moderately volatile elements are enhanced by $\approx 0.04$ dex in the CI
chondrites and refractory elements possibly depleted by $\approx 0.02$ dex,
conflicting with conventional wisdom of the past half-century. Instead the
solar chemical composition resembles more closely that of the fine-grained
matrix of CM chondrites. The so-called solar modelling problem remains intact
with our revised solar abundances, suggesting shortcomings with the computed
opacities and/or treatment of mixing below the convection zone in existing
standard solar models.
|
Causal inference methods are widely applied in various decision-making
domains such as precision medicine, optimal policy and economics. Central to
these applications is the treatment effect estimation of intervention
strategies. Current estimation methods are mostly restricted to
deterministic treatments and are unable to address stochastic treatment
policies. Moreover, previous methods can only make binary yes-or-no decisions
based on the treatment effect, lacking the capability of providing a
fine-grained degree of effect estimation to explain the process of decision
making. In our study, we therefore advance the causal inference
research to estimate stochastic intervention effect by devising a new
stochastic propensity score and stochastic intervention effect estimator (SIE).
Meanwhile, we design a customized genetic algorithm specific to stochastic
intervention effect (Ge-SIO) with the aim of providing causal evidence for
decision making. We provide the theoretical analysis and conduct an empirical
study to justify that our proposed measures and algorithms can achieve a
significant performance lift in comparison with state-of-the-art baselines.
|
Let $p\geq 5$ be a prime. We construct modular Galois representations for
which the $\mathbb{Z}_p$-corank of the $p$-primary Selmer group over the
cyclotomic $\mathbb{Z}_p$-extension is large. The method is based on a purely
Galois theoretic lifting construction.
|
In this work we prove the non-degeneracy of the critical points of the Robin
function for the Fractional Laplacian under symmetry and convexity assumptions
on the domain $\Omega$. This work extends to the fractional setting the results
of M. Grossi concerning the classical Laplace operator.
|
Despite their remarkable expressibility, convolution neural networks (CNNs)
still fall short of delivering satisfactory results on single image dehazing,
especially in terms of faithful recovery of fine texture details. In this
paper, we argue that the inadequacy of conventional CNN-based dehazing methods
can be attributed to the fact that the domain of hazy images is too far away
from that of clear images, rendering it difficult to train a CNN for learning
direct domain shift in an end-to-end manner and recovering texture details
simultaneously. To address this issue, we propose to add explicit constraints
inside a deep CNN model to guide the restoration process. In contrast to direct
learning, the proposed mechanism shifts and narrows the candidate region for
the estimation output via multiple confident neighborhoods. Therefore, it is
capable of consolidating the expressibility of different architectures,
resulting in a more accurate indirect domain shift (IDS) from the hazy images
to that of clear images. We also propose two different training schemes,
including hard IDS and soft IDS, which further reveal the effectiveness of the
proposed method. Our extensive experimental results indicate that the dehazing
method based on this mechanism outperforms the state of the art.
|
Noncoplanar radiation therapy treatment planning has the potential to improve
dosimetric quality as compared to traditional coplanar techniques. Likewise,
automated treatment planning algorithms can reduce a planner's active treatment
planning time and remove inter-planner variability. To address the limitations
of traditional treatment planning, we have been developing a suite of
algorithms called station parameter optimized radiation therapy (SPORT). Within
the SPORT suite of algorithms, we propose a method called NC-POPS to produce
noncoplanar (NC) plans using the fully automated Pareto Optimal Projection
Search (POPS) algorithm. Our NC-POPS algorithm extends the original POPS
algorithm to the noncoplanar setting with potential applications to both IMRT
and VMAT. The proposed algorithm consists of two main parts: 1) noncoplanar
beam angle optimization (BAO) and 2) fully automated inverse planning using the
POPS algorithm. We evaluate the performance of NC-POPS by comparing between
various noncoplanar and coplanar configurations. To evaluate plan quality, we
compute the homogeneity index (HI), conformity index (CI), and dose-volume
histogram (DVH) statistics for various organs-at-risk (OARs). As compared to
the evaluated coplanar baseline methods, the proposed NC-POPS method achieves
significantly better OAR sparing, comparable or better dose conformity, and
similar dose homogeneity. Our proposed NC-POPS algorithm provides a modular
approach for fully automated treatment planning of noncoplanar IMRT cases with
the potential to substantially improve treatment planning workflow and plan
quality.
|
Recently, there has been a significant interest in performing convolution
over irregularly sampled point clouds. Since point clouds are very different
from regular raster images, it is imperative to study the generalization of the
convolution networks more closely, especially their robustness under variations
in scale and rotations of the input data. This paper investigates different
variants of PointConv, a convolution network on point clouds, to examine their
robustness to input scale and rotation changes. Of the variants we explored,
two are novel and generated significant improvements. The first is replacing
the multilayer perceptron based weight function with much simpler third degree
polynomials, together with a Sobolev norm regularization. The second is, for
3D datasets, deriving a novel viewpoint-invariant descriptor by utilizing 3D
geometric properties as the input to PointConv, in addition to the regular 3D
coordinates. We have also explored choices of activation functions,
neighborhood, and subsampling methods. Experiments are conducted on the 2D
MNIST & CIFAR-10 datasets as well as the 3D SemanticKITTI & ScanNet datasets.
Results reveal that on 2D, using third degree polynomials greatly improves
PointConv's robustness to scale changes and rotations, even surpassing
traditional 2D CNNs for the MNIST dataset. On 3D datasets, the novel
viewpoint-invariant descriptor significantly improves the performance as well
as robustness of PointConv. We achieve the state-of-the-art semantic
segmentation performance on the SemanticKITTI dataset, as well as comparable
performance with the current best framework on the ScanNet dataset among
point-based approaches.
|
We study the fluctuations of eigenstate expectation values in a
microcanonical ensemble. Assuming the eigenstate thermalization hypothesis, an
analytical formula for the finite-size scaling of the fluctuations is derived.
The same problem was studied by Beugeling et al. [Phys. Rev. E 89, 042112
(2014)]. We compare our results with theirs.
|
We derive upper and lower bounds on the sum of distances of a spherical code
of size $N$ in $n$ dimensions when $N\sim n^\alpha, 0<\alpha\le 2.$ The bounds
are derived by specializing recent general, universal bounds on energy of
spherical sets. We discuss asymptotic behavior of our bounds along with several
examples of codes whose sum of distances closely follows the upper bound.
|
This article describes our approach to quantifying the characteristics of
meteors such as temperature, chemical composition, and others. We are using a
new approach based on colourimetry. We analyze an image of Leonid meteor-6230
obtained by Mike Hankey in 2012. Analysis of the temporal features of the
meteoroid trail is performed. For determining the meteor characteristics we use
the "tuning technique" in combination with a simulation model of intrusion. The
progenitor of the meteor was found to be an object weighing 900 kg moving at a speed of
36.5 km/s. The meteoroid reached a critical value of the pressure at an
altitude of about 29 km in a time of about 4.6 sec with a residual mass of
about 20 kg, and a residual speed of about 28 km/s. At this moment, the
meteoroid exploded and was destroyed. We use the meteor multicolour light
curves revealed
from a DSLR image in the RGB colour standard. We switch from the RGB colour
system to Johnson's BVR colour system by introducing colour corrections. This
allows one to determine the colour characteristics of the meteor radiation.
Colourimetry of BGR three-beam
light curves allows the identification of the brightest spectral lines. Our
approach based on colourimetry allows direct measurements of temperature in the
meteor trail. We find a part of the trajectory where the meteoroid radiates as
an absolutely black body. The R/G and B/G light curves ratio allow one to
identify the wavelengths of the emission lines using the transmission curves of
the RGB filters. At the end of the trajectory, the meteoroid radiates in the
lines Ca II H, K 393, 397 nm, Fe I 382, 405 nm, Mg I 517 nm, Na I 589 nm, as
well as atmospheric O I 779 nm.
|
In this paper, we have proposed a model of an accelerating Universe with a
binary mixture of bulk viscous fluid and dark energy, and probed the model
parameters: the present value of the Hubble constant $H_{0}$, the
equation-of-state parameter of dark energy $\omega_{de}$, and the density
parameter of dark energy $(\Omega_{de})_{0}$
with recent OHD as well as joint Pantheon compilation of SN Ia data and OHD.
Using the cosmic chronometric technique, we obtain $H_{0} = 69.80 \pm
1.64~km~s^{-1}Mpc^{-1}$ and $70.0258 \pm 1.72~km~s^{-1}Mpc^{-1}$ by
constraining our derived model with recent OHD and with the joint Pantheon
compilation of SN Ia data and OHD, respectively. The age of the Universe in the
derived model is estimated as $t_{0} = 13.82 \pm 0.33$ Gyr. Also, we observe
that the derived model represents a transitioning Universe with transition
redshift $z_{t} = 0.7286$. We
have constrained the present value of jerk parameter as $j_{0} = 0.969 \pm
0.0075$ with the joint OHD and Pantheon data. From this analysis, we observe
that the model of the Universe presented in this paper shows a marginal
departure from the $\Lambda$CDM model.
|
In 1848 Ch.~Hermite asked if there exists some way to write cubic
irrationalities periodically. A little later in order to approach the problem
C.G.J.~Jacobi and O.~Perron generalized the classical continued fraction
algorithm to the three-dimensional case; this algorithm is now called the
Jacobi-Perron algorithm. It is known to provide periodicity only
for some cubic irrationalities.
In this paper we introduce two new algorithms in the spirit of the
Jacobi-Perron algorithm: the heuristic algebraic periodicity detecting
algorithm and the $\sin^2$-algorithm. The heuristic algebraic periodicity
detecting algorithm is very fast and efficient; its output is periodic for
numerous examples of cubic irrationalities, although its periodicity for all
cubic irrationalities is not proven. The $\sin^2$-algorithm is limited to the
totally-real cubic case (all the roots of cubic polynomials are real numbers).
In the recent paper~\cite{Karpenkov2021} we proved the periodicity of the
$\sin^2$-algorithm for all totally-real cubic irrationalities. To the best of
our knowledge, this is the first Jacobi-Perron type algorithm for which cubic
periodicity is proven. The $\sin^2$-algorithm provides the answer to Hermite's
problem for the totally real case (let us mention that the case of cubic
algebraic numbers with complex conjugate roots remains open).
We conclude this paper with one important application of Jacobi-Perron type
algorithms: the computation of independent elements in the maximal groups of
commuting matrices of algebraic irrationalities.
|
We show how the Shannon entropy function $H(p,q)$ is expressible as a linear
combination of other Shannon entropy functions involving quotients of
polynomials in $p,q$ of degree $n$, for any given positive integer $n$. An
application to cryptographic keys is presented.
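The abstract does not state the paper's specific polynomial decomposition, but the flavour of such identities can be illustrated with the classical grouping (chain) rule, which likewise expresses one entropy as a combination of entropies whose arguments are quotients; the probabilities below are arbitrary.

```python
import math

def H(*probs):
    """Shannon entropy (in bits) of a finite distribution; zero terms are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Classical grouping identity (illustrative, not the paper's decomposition):
# H(p, q, r) = H(p + q, r) + (p + q) * H(p / (p + q), q / (p + q))
p, q, r = 0.2, 0.3, 0.5
lhs = H(p, q, r)
rhs = H(p + q, r) + (p + q) * H(p / (p + q), q / (p + q))
```

Here the inner entropy's arguments $p/(p+q)$ and $q/(p+q)$ are quotients of degree-one polynomials in $p,q$, the simplest instance of the pattern the abstract describes.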
|
Idioms are unlike other phrases in two important ways. First, the words in an
idiom have unconventional meanings. Second, the unconventional meanings of
words in an idiom are contingent on the presence of the other words in the
idiom.
Linguistic theories disagree about whether these two properties depend on one
another, as well as whether special theoretical machinery is needed to
accommodate idioms. We define two measures that correspond to these two
properties, and we show that idioms fall at the expected intersection of the
two dimensions, but that the dimensions themselves are not correlated. Our
results suggest that idioms are no more anomalous than other types of phrases,
and that introducing special machinery to handle idioms may not be warranted.
|
We study anticommutative algebras with the property that the commutator of any
two multiplications is a derivation.
|
Hidden Markov chain, or Markov field, models, with observations in a
Euclidean space, play a major role across signal and image processing. The
present work provides a statistical framework which can be used to extend these
models, along with related, popular algorithms (such as the Baum-Welch
algorithm), to the case where the observations lie in a Riemannian manifold. It
is motivated by the potential use of hidden Markov chains and fields, with
observations in Riemannian manifolds, as models for complex signals and images.
|
Video annotation and analysis is an important activity for teaching with and
about audiovisual media artifacts because it helps students to learn how to
identify textual and formal connections in media products. But school teachers
lack adequate tools for video annotation and analysis in media education that
are easy to use, integrate into established teaching organization, and support
quick collaborative work. To address these challenges, we followed a
design-based research approach and conducted qualitative interviews with
teachers to develop TRAVIS GO, a web application for simple and collaborative
video annotation. TRAVIS GO allows for quick and easy use within established
teaching settings. The web application provides basic analytical features in an
adaptable work space. Key didactic features include tagging and commenting on
posts, sharing and exporting projects, and working in live collaboration.
Teachers can create assignments according to grade level, learning subject, and
class size. Our work contributes further insights for the CSCW community about
how to implement user demands into developing educational tools.
|
Using a nominally symmetric annular combustor, we present experimental
evidence of a predicted spontaneous symmetry breaking and an unexpected
explicit symmetry breaking in the neighborhood of the Hopf bifurcation, which
separates linearly-stable azimuthal thermoacoustic modes from self-oscillating
modes. We derive and solve a multidimensional Fokker-Planck equation to unravel
a unified picture of the phase space topology. We demonstrate that symmetric
probability density functions of the thermoacoustic state vector are elusive,
because the effect of asymmetries, even imperceptible ones, is magnified close
to the bifurcation.
|
We propose the operation of \textbf{LEvEL}, the Low-Energy Neutrino
Experiment at the LHC, a neutrino detector near the Large Hadron Collider Beam
Dump. Such a detector is capable of exploring an intense, low-energy neutrino
flux and can measure neutrino cross sections that have previously never been
observed. These cross sections can inform other future neutrino experiments,
such as those aiming to observe neutrinos from supernovae, allowing such
measurements to accomplish their fundamental physics goals. We perform detailed
simulations to determine neutrino production at the LHC beam dump, as well as
neutron and muon backgrounds. Measurements at a few to ten percent precision of
neutrino-argon charged current and neutrino-nucleus coherent scattering cross
sections are attainable with 100~ton-year and 1~ton-year exposures at LEvEL,
respectively, concurrent with the operation of the High Luminosity LHC. We also
estimate signal and backgrounds for an experiment exploiting the forward
direction of the LHC beam dump, which could measure neutrinos above 100 GeV.
|
Deflection of light due to massive objects was predicted by Einstein in his
General Theory of Relativity. This deflection of light has been calculated by
many researchers in the past for spherically symmetric objects. In reality,
however, most of these gravitating objects are not spherical; instead they are
ellipsoidal (oblate) in shape. The objective of the present work is to study
theoretically the effect of this ellipticity on the trajectory of a light ray.
Here, we obtain a converging series expression for the deflection of a light
ray due to an ellipsoidal gravitating object, characterised by an ellipticity
parameter. As a limiting case, setting the ellipticity parameter equal to zero
recovers the expression for the deflection due to a Schwarzschild object. It is
also found that the additional contribution to the deflection angle due to this
ellipticity, though small, can typically be higher than the similar
contribution caused by the rotation of a celestial object. Therefore, for a
precise estimate of the deflection due to a celestial object, the calculations
presented here would be useful.
|
A new method is proposed to search for the double beauty tetraquark
$bb\bar{u}\bar{d}$ with a mass below the $BB$ threshold. The
$b\bar{d}-\bar{b}d$ mixing can result in evolution of the tetraquark content to
$b\bar{b}\bar{u}d$, with a subsequent strong decay into the
$\Upsilon$(1S)$\pi^-$ and $\Upsilon$(2S)$\pi^-$ final states in a secondary
vertex displaced from the primary $pp$ collision vertex. This experimental
signature is clean, has a high selection efficiency and can be well separated
from backgrounds. Experimental signatures for searches for other double heavy
tetraquarks are also discussed, suggesting the possibility of the $b\bar{d}$ or
$b\bar{s}$ mixing.
|
We show that splitting forcing does not have the weak Sacks property below
any condition, answering a question of Laguzzi, Mildenberger and
Stuber-Rousselle. We also show how some partition results for splitting trees
hold or fail and we determine the value of cardinal invariants after an
$\omega_2$-length countable support iteration of splitting forcing.
|
The recently discovered layered kagome metals AV$_3$Sb$_5$ (A = K, Rb, and
Cs) with vanadium kagome networks provide a novel platform to explore
correlated quantum states intertwined with topological band structures. Here we
report the prominent effect of hole doping on both superconductivity and charge
density wave (CDW) order, achieved by selective oxidation of exfoliated thin
flakes. A superconducting dome is revealed as a function of the effective
doping content. The superconducting transition temperature ($T_{\mathrm{c}}$)
and upper critical field in thin flakes are significantly enhanced compared
with the bulk, which is accompanied by the suppression of CDW. Our detailed
analyses establish the pivotal role of van Hove singularities (VHSs) in
promoting correlated quantum orders in these kagome metals. Our experiment not
only demonstrates the intriguing nature of superconducting and CDW orders, but
also provides a novel route to tune the carrier concentration through both
selective oxidation and electric gating. This establishes AV$_3$Sb$_5$ as a
tunable 2D platform for the further exploration of topology and correlation
among 3$d$ electrons in kagome lattices.
|
This article provides a self-contained pedagogical introduction to the
relativistic kinetic theory of a dilute gas propagating on a curved spacetime
manifold (M,g) of arbitrary dimension. Special emphasis is placed on geometric
aspects of the theory in order to achieve a formulation which is manifestly
covariant on the relativistic phase space. Whereas most previous work has
focussed on the tangent bundle formulation, here we work on the cotangent
bundle associated with (M,g) which is more naturally adapted to the Hamiltonian
framework of the theory. In the first part of this work we discuss the relevant
geometric structures of the cotangent bundle T*M, starting with the natural
symplectic form on T*M, the one-particle Hamiltonian and the Liouville vector
field, defined as the corresponding Hamiltonian vector field. Next, we discuss
the Sasaki metric on T*M and its most important properties, including the role
it plays for the physical interpretation of the one-particle distribution
function. In the second part of this work we describe the general relativistic
theory of a collisionless gas, starting with the derivation of the
collisionless Boltzmann equation for a neutral simple gas. Subsequently, the
description is generalized to a charged gas consisting of several species of
particles and the general relativistic Vlasov-Maxwell equations are derived for
this system. The last part of this work is devoted to a transparent derivation
of the collision term, leading to the general relativistic Boltzmann equation
on (M,g). The meaning of global and local equilibrium and the stringent
restrictions for the existence of the former on a curved spacetime are
discussed. We close this article with an application of our formalism to the
expansion of a homogeneous and isotropic universe filled with a collisional
simple gas and its behavior in the early and late epochs. [abbreviated version]
|
We investigate asymptotic behavior of polynomials $p^{\omega}_n(z)$
satisfying varying non-Hermitian orthogonality relations $$ \int_{-1}^{1}
x^kp^{\omega}_n(x)h(x) e^{\mathrm{i} \omega x}\mathrm{d} x =0, \quad
k\in\{0,\ldots,n-1\}, $$ where $h(x) = h^*(x) (1 - x)^{\alpha} (1 + x)^{\beta},
\ \omega = \lambda n, \ \lambda \geq 0 $ and $h(x)$ is holomorphic and
non-vanishing in a certain neighborhood in the plane. These polynomials are an
extension of the so-called kissing polynomials ($\alpha = \beta = 0$),
introduced in connection with complex Gaussian quadrature rules enjoying good
properties uniformly in $\omega$.
|
COVID-19 has resulted in over 100 million infections and caused worldwide
lockdowns due to its high transmission rate and limited testing options.
Current diagnostic tests can be expensive, limited in availability,
time-intensive and require risky in-person appointments. It has been
established that symptomatic COVID-19 seriously impairs normal functioning of
the respiratory system, thus affecting the coughing acoustics. The 2021 DiCOVA
Challenge @ INTERSPEECH was designed to find scientific and engineering
insights into this question by enabling participants to analyze an acoustic
dataset gathered from COVID-19-positive and non-COVID-19 individuals. In this
report we describe our participation in the Challenge (Track 1). We achieved
82.37% AUC ROC on the blind test outperforming the Challenge's baseline of
69.85%.
|
We establish unconditional sharp upper bounds of the $k$-th moments of the
family of quadratic Dirichlet $L$-functions at the central point for $0 \leq k
\leq 2$.
|
Vehicle trajectory optimization is essential to ensure vehicles travel
efficiently and safely. This paper presents an infrastructure-assisted
trajectory optimization method for constrained connected automated vehicles
(CAVs) on curved roads. The problem is formulated systematically in curvilinear
coordinates, which flexibly model complex road geometries.
Further, to deal with spatially varying road obstacles, traffic regulations,
and geometric characteristics, two-dimensional vehicle kinematics is given in a
spatial formulation with exact road information provided by the infrastructure.
Consequently, we apply a multi-objective model predictive control (MPC)
approach to optimize the trajectories in a rolling horizon while satisfying
collision-avoidance and vehicle-kinematics constraints. To verify the
efficiency of our method, a numerical simulation is conducted. As the results
suggest, the proposed method can provide smooth vehicular trajectories, avoid
road obstacles, and simultaneously follow traffic regulations, which is robust
to road geometries and disturbances.
|
WarpX is a general purpose electromagnetic particle-in-cell code that was
originally designed to run on many-core CPU architectures. We describe the
strategy followed to allow WarpX to use the GPU-accelerated nodes on OLCF's
Summit supercomputer, a strategy we believe will extend to the upcoming
machines Frontier and Aurora. We summarize the challenges encountered, lessons
learned, and give current performance results on a series of relevant benchmark
problems.
|
Quiver representations arise naturally in many areas across mathematics. Here
we describe an algorithm for calculating the vector space of sections, or
compatible assignments of vectors to vertices, of any finite-dimensional
representation of a finite quiver. Consequently, we are able to define and
compute principal components with respect to quiver representations. These
principal components are solutions to constrained optimisation problems defined
over the space of sections, and are eigenvectors of an associated matrix
pencil.
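The space of sections admits a simple linear-algebra sketch (hedged: our own illustration, not necessarily the authors' algorithm): a section assigns a vector to each vertex so that each arrow's map carries the source vector to the target vector, so sections form the null space of a block matrix with one block row per arrow. The three-vertex quiver and its maps below are hypothetical.

```python
import numpy as np

# Hypothetical quiver: vertices 0, 1, 2; arrows 0 -> 1 and 0 -> 2, each with a
# 2x2 linear map. A section x = (x_0, x_1, x_2) satisfies A_a x_s - x_t = 0.
dims = [2, 2, 2]                                   # vector space dim at each vertex
arrows = [(0, 1, np.eye(2)),
          (0, 2, np.array([[1.0, 1.0],
                           [0.0, 1.0]]))]
offs = np.concatenate(([0], np.cumsum(dims)))
total = offs[-1]

blocks = []
for s, t, A in arrows:
    row = np.zeros((dims[t], total))
    row[:, offs[s]:offs[s] + dims[s]] = A          # +A_a acting on x_s
    row[:, offs[t]:offs[t] + dims[t]] -= np.eye(dims[t])  # -x_t
    blocks.append(row)
M = np.vstack(blocks)

# space of sections = null space of M
dim_sections = total - np.linalg.matrix_rank(M)
```

Here both arrows emanate from vertex 0, so a section is freely determined by $x_0$ and the section space is 2-dimensional.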
|
We demonstrate that global observations of high-energy cosmic rays contribute
to understanding unique characteristics of a large-scale magnetic flux rope
causing a magnetic storm in August 2018. Following a weak interplanetary shock
on 25 August 2018, a magnetic flux rope caused an unexpectedly large
geomagnetic storm. It is likely that this event became geoeffective because the
flux rope was accompanied by a corotating interaction region and compressed by
high-speed solar wind following the flux rope. In fact, a Forbush decrease was
observed in cosmic-ray data inside the flux rope as expected, and a significant
cosmic-ray density increase exceeding the unmodulated level before the shock
was also observed near the trailing edge of the flux rope. The cosmic-ray
density increase can be interpreted in terms of the adiabatic heating of cosmic
rays near the trailing edge of the flux rope, as the corotating interaction
region prevents free expansion of the flux rope and results in the compression
near the trailing edge. A northeast-directed spatial gradient in the cosmic-ray
density was also derived during the cosmic-ray density increase, suggesting
that the center of the heating near the trailing edge is located northeast of
Earth. This is one of the best examples demonstrating that observations of
high-energy cosmic rays provide information, available only from cosmic-ray
measurements, that observationally constrains the three-dimensional macroscopic
picture of the interaction between coronal mass ejections and the ambient solar
wind, which is essential for the prediction of large magnetic storms.
|
The generalization of representations learned via contrastive learning
depends crucially on what features of the data are extracted. However, we
observe that the contrastive loss does not always sufficiently guide which
features are extracted, a behavior that can negatively impact the performance
on downstream tasks via "shortcuts", i.e., by inadvertently suppressing
important predictive features. We find that feature extraction is influenced by
the difficulty of the so-called instance discrimination task (i.e., the task of
discriminating pairs of similar points from pairs of dissimilar ones). Although
harder pairs improve the representation of some features, the improvement comes
at the cost of suppressing previously well represented features. In response,
we propose implicit feature modification (IFM), a method for altering positive
and negative samples in order to guide contrastive models towards capturing a
wider variety of predictive features. Empirically, we observe that IFM reduces
feature suppression, and as a result improves performance on vision and medical
imaging tasks. The code is available at: \url{https://github.com/joshr17/IFM}.
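The effect of such a modification on the contrastive objective can be sketched in a few lines (hedged: a toy numpy rendition of the idea with mock unit-norm embeddings and an illustrative budget `eps`, not the authors' exact implementation): lowering the positive similarity and raising the negative similarities makes instance discrimination harder, which strictly increases the InfoNCE loss.

```python
import numpy as np

rng = np.random.default_rng(1)
d, num_neg, tau, eps = 8, 5, 0.5, 0.1              # illustrative hyperparameters

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

anchor = unit(rng.normal(size=d))                  # mock encoder outputs
pos = unit(rng.normal(size=d))
negs = unit(rng.normal(size=(num_neg, d)))

def info_nce(pos_sim, neg_sims):
    # standard InfoNCE: -log softmax of the positive logit
    logits = np.concatenate(([pos_sim], neg_sims)) / tau
    return -logits[0] + np.log(np.sum(np.exp(logits)))

loss_plain = info_nce(anchor @ pos, negs @ anchor)
# feature-modification-style shift: harder instance discrimination task
loss_modified = info_nce(anchor @ pos - eps, negs @ anchor + eps)
```

Because the positive logit decreases and every negative logit increases, `loss_modified` exceeds `loss_plain` for any `eps > 0`.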
|
The generalised extreme value (GEV) distribution is a three parameter family
that describes the asymptotic behaviour of properly renormalised maxima of a
sequence of independent and identically distributed random variables. If the
shape parameter $\xi$ is zero, the GEV distribution has unbounded support,
whereas if $\xi$ is positive, the limiting distribution is heavy-tailed with
infinite upper endpoint but finite lower endpoint. In practical applications,
we assume that the GEV family is a reasonable approximation for the
distribution of maxima over blocks, and we fit it accordingly. This implies
that GEV properties, such as finite lower endpoint in the case $\xi>0$, are
inherited by the finite-sample maxima, which might not have bounded support.
This is particularly problematic when predicting extreme observations based on
multiple and interacting covariates. To tackle this usually overlooked issue,
we propose a blended GEV distribution, which smoothly combines the left tail of
a Gumbel distribution (GEV with $\xi=0$) with the right tail of a Fr\'echet
distribution (GEV with $\xi>0$) and, therefore, has unbounded support. Using a
Bayesian framework, we reparametrise the GEV distribution to offer a more
natural interpretation of the (possibly covariate-dependent) model parameters.
Independent priors over the new location and spread parameters induce a joint
prior distribution for the original location and scale parameters. We introduce
the concept of property-preserving penalised complexity (P$^3$C) priors and
apply it to the shape parameter to preserve first and second moments. We
illustrate our methods with an application to NO$_2$ pollution levels in
California, which reveals the robustness of the bGEV distribution, as well as
the suitability of the new parametrisation and the P$^3$C prior framework.
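The blending idea can be sketched numerically (hedged: the parameter values and the smoothstep weight below are illustrative assumptions, not the paper's exact construction, which uses a Beta-CDF weight): geometrically blend a Gumbel CDF with a Fréchet CDF through a weight that ramps from 0 to 1 on an interval $[a, b]$, so the left tail is Gumbel (unbounded support) and the right tail is Fréchet (heavy-tailed).

```python
import numpy as np

mu, sigma, xi = 0.0, 1.0, 0.2                      # illustrative GEV parameters
a, b = 0.0, 2.0                                    # illustrative blending interval

def F_gumbel(x):
    return np.exp(-np.exp(-(x - mu) / sigma))

def F_frechet(x):
    # GEV CDF with shape xi > 0; zero below its finite lower endpoint
    z = 1.0 + xi * (x - mu) / sigma
    return np.where(z > 0.0, np.exp(-np.maximum(z, 1e-12) ** (-1.0 / xi)), 0.0)

def weight(x):
    t = np.clip((x - a) / (b - a), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)                 # smoothstep ramp, 0 -> 1

def F_blended(x):
    p = weight(x)
    # geometric blend: pure Gumbel left of a, pure Frechet right of b
    return F_frechet(x) ** p * F_gumbel(x) ** (1.0 - p)

xs = np.linspace(-6.0, 8.0, 400)
Fx = F_blended(xs)
```

For these parameters the blend is a valid, non-decreasing CDF with no finite lower endpoint, which is the point of the construction.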
|
Long linear carbon-chains have been attracting intense interest arising from
the remarkable properties predicted and their potential applications in future
nanotechnology. Here we comprehensively interrogate the excitonic transitions
and the associated relaxation dynamics of nanotube confined long linear
carbon-chains by using steady state and time-resolved Raman spectroscopies. The
exciton relaxation dynamics on the confined carbon-chains occurs on a hundreds
of picoseconds timescale, in strong contrast to the host dynamics that occurs
on a few picosecond timescale. A prominent time-resolved Raman response is
observed over a broad energy range extending from 1.2 to 2.8 eV, which includes
the strong Raman resonance region around 2.2 eV. Evidence for a strong coupling
between the chain and the nanotube host is found from the dynamics at high
excitation energies, which provides clear evidence for an efficient energy
transfer from the host carbon nanotube to the chain. Our experimental study
presents the first characterization of the long linear carbon-chain
exciton dynamics, providing indispensable knowledge for the understanding of
the interactions between different carbon allotropes.
|
Recent work has shown that every 3D root system allows the construction of a
corresponding 4D root system via an `induction theorem'. In this paper, we look
at the icosahedral case of $H_3\rightarrow H_4$ in detail and perform the
calculations explicitly. Clifford algebra is used to perform group theoretic
calculations based on the versor theorem and the Cartan-Dieudonn\'e theorem,
giving a simple construction of the Pin and Spin covers. Using this connection
with $H_3$ via the induction theorem sheds light on geometric aspects of the
$H_4$ root system (the $600$-cell) as well as other related polytopes and their
symmetries, such as the famous Grand Antiprism and the snub 24-cell. The
uniform construction of root systems from 3D and the uniform procedure of
splitting root systems with respect to sub-root systems into separate invariant
sets allows further systematic insight into the underlying geometry. All
calculations are performed in the even subalgebra of Cl(3), including the
construction of the Coxeter plane, which is used for visualising the
complementary pairs of invariant polytopes, and are shared as supplementary
computational work sheets. This approach therefore constitutes a more
systematic and general way of performing calculations concerning groups, in
particular reflection groups and root systems, in a Clifford algebraic
framework.
|
The notion of hypothetical bias (HB) constitutes, arguably, the most
fundamental issue in relation to the use of hypothetical survey methods.
Whether or to what extent choices of survey participants and subsequent
inferred estimates translate to real-world settings continues to be debated.
While HB has been extensively studied in the broader context of contingent
valuation, it is much less understood in relation to choice experiments (CE).
This paper reviews the empirical evidence for HB in CE in various fields of
applied economics and presents an integrative framework for how HB relates to
external validity. Results suggest mixed evidence on the prevalence, extent and
direction of HB as well as considerable context and measurement dependency.
While HB is found to be an undeniable issue when conducting CEs, the empirical
evidence on HB does not render CEs unable to represent real-world preferences.
While health-related choice experiments often find negligible degrees of HB,
experiments in consumer behaviour and transport domains suggest that
significant degrees of HB are ubiquitous. Assessments of bias in environmental
valuation studies provide mixed evidence. Also, across these disciplines many
studies display HB in their total willingness to pay estimates and opt-in rates
but not in their hypothetical marginal rates of substitution (subject to scale
correction). Further, recent findings in psychology and brain imaging studies
suggest neurocognitive mechanisms underlying HB that may explain some of the
discrepancies and unexpected findings in the mainstream CE literature. The
review also observes how the variety of operational definitions of HB prohibits
consistent measurement of HB in CE. The paper further identifies major sources
of HB and possible moderating factors. Finally, it explains how HB represents
one component of the wider concept of external validity.
|
Federated learning (FL) has become a prevalent distributed machine learning
paradigm with improved privacy. After learning, the resulting federated model
should be further personalized to each different client. While several methods
have been proposed to achieve personalization, they are typically limited to a
single local device, which may incur bias or overfitting since data in a single
device is extremely limited. In this paper, we attempt to realize
personalization beyond a single client. The motivation is that during FL, there
may exist many clients with similar data distribution, and thus the
personalization performance could be significantly boosted if these similar
clients can cooperate with each other. Inspired by this, this paper introduces
a new concept called federated adaptation, which aims to adapt the trained
model in a federated manner to achieve better personalization results. However,
the key challenge of federated adaptation is that no raw data may leave the
client during adaptation, due to privacy concerns. In this
paper, we propose PFA, a framework to accomplish Privacy-preserving Federated
Adaptation. PFA leverages the sparsity property of neural networks to generate
privacy-preserving representations and uses them to efficiently identify
clients with similar data distributions. Based on the grouping results, PFA
conducts an FL process in a group-wise way on the federated model to accomplish
the adaptation. For evaluation, we manually construct several practical FL
datasets based on public datasets in order to simulate both the class-imbalance
and background-difference conditions. Extensive experiments on these datasets
and popular model architectures demonstrate the effectiveness of PFA,
outperforming other state-of-the-art methods by a large margin while ensuring
user privacy. We will release our code at: https://github.com/lebyni/PFA.
|
Micro-expression can reflect people's real emotions. Recognizing
micro-expressions is difficult because they are small motions and have a short
duration. As research into micro-expression recognition has deepened, many
effective features and methods have been proposed. To determine which direction
of movement feature is easier for distinguishing micro-expressions, this paper
selects 18 directions (including three types of horizontal, vertical and
oblique movements) and proposes a new low-dimensional feature called the
Histogram of Single Direction Gradient (HSDG) to study this topic. In this
paper, HSDG in every direction is concatenated with LBP-TOP to obtain the LBP
with Single Direction Gradient (LBP-SDG), allowing us to analyze which
direction of movement feature is more discriminative for micro-expression
recognition. As
with some existing work, Euler Video Magnification (EVM) is employed as a
preprocessing step. The experiments on the CASME II and SMIC-HS databases
summarize the effective and optimal directions and demonstrate that HSDG in an
optimal direction is discriminative, and the corresponding LBP-SDG achieves
state-of-the-art performance using EVM.
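The core of a single-direction gradient feature can be sketched as follows (hedged: an illustrative rendition in the spirit of HSDG, not the authors' exact descriptor; the frame, direction, and bin count are arbitrary): project the image gradient onto one chosen direction and histogram the signed responses into a low-dimensional feature.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((64, 64))                       # stand-in for a video frame

gy, gx = np.gradient(frame)                        # per-axis image gradients
theta = np.deg2rad(45.0)                           # one of the 18 directions
g_theta = gx * np.cos(theta) + gy * np.sin(theta)  # single-direction gradient

# low-dimensional feature: an 8-bin histogram of the directional responses
hist, edges = np.histogram(g_theta, bins=8, range=(-2.0, 2.0))
feature = hist / hist.sum()                        # normalise to sum to 1
```

Repeating this for each candidate direction and concatenating with LBP-TOP would give an LBP-SDG-style descriptor per direction, which can then be compared for discriminative power.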
|
This paper presents the advantages of alternative data from Super-Apps to
enhance users' income estimation models. It compares the performance of these
alternative data sources with that of industry-accepted bureau income
estimators, which take into account only financial system information,
successfully showing that the alternative data manage to capture information
that bureau income estimators do not. By implementing the TreeSHAP method for
that bureau income estimators do not. By implementing the TreeSHAP method for
Stochastic Gradient Boosting Interpretation, this paper highlights which of the
customer's behavioral and transactional patterns within a Super-App have a
stronger predictive power when estimating users' income. Ultimately, this
paper shows the incentive for financial institutions to seek to incorporate
alternative data into constructing their risk profiles.
|
State-of-the-art computer vision systems are trained to predict a fixed set
of predetermined object categories. This restricted form of supervision limits
their generality and usability since additional labeled data is needed to
specify any other visual concept. Learning directly from raw text about images
is a promising alternative which leverages a much broader source of
supervision. We demonstrate that the simple pre-training task of predicting
which caption goes with which image is an efficient and scalable way to learn
SOTA image representations from scratch on a dataset of 400 million (image,
text) pairs collected from the internet. After pre-training, natural language
is used to reference learned visual concepts (or describe new ones) enabling
zero-shot transfer of the model to downstream tasks. We study the performance
of this approach by benchmarking on over 30 different existing computer vision
datasets, spanning tasks such as OCR, action recognition in videos,
geo-localization, and many types of fine-grained object classification. The
model transfers non-trivially to most tasks and is often competitive with a
fully supervised baseline without the need for any dataset specific training.
For instance, we match the accuracy of the original ResNet-50 on ImageNet
zero-shot without needing to use any of the 1.28 million training examples it
was trained on. We release our code and pre-trained model weights at
https://github.com/OpenAI/CLIP.
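The zero-shot transfer scheme described above can be sketched with mock embeddings (hedged: stand-in random vectors in place of the trained image/text encoders, and an illustrative temperature): score an image embedding against text embeddings of candidate captions and take a softmax over scaled cosine similarities.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_classes, temperature = 512, 3, 0.01         # illustrative values

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# stand-ins for encoder outputs; a real system would embed the image and
# prompts such as "a photo of a {label}" with the trained encoders
image_emb = normalize(rng.normal(size=d))
text_embs = normalize(rng.normal(size=(num_classes, d)))

logits = (text_embs @ image_emb) / temperature     # cosine similarity / T
probs = np.exp(logits - logits.max())              # numerically stable softmax
probs /= probs.sum()
prediction = int(np.argmax(probs))                 # predicted class index
```

Because the class set is defined only by the text prompts, swapping in new captions re-targets the classifier with no retraining, which is what enables zero-shot transfer.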
|
We recast elliptic surfaces over the projective line in terms of the
non-commutative tori with real multiplication. The correspondence is used to
study the Picard numbers, the ranks and the minimal models of such surfaces. As
an example, we calculate the Picard numbers of elliptic surfaces with complex
multiplication.
|
In this paper, we perform a detailed analysis of the phase shift phenomenon
of the classical soliton cellular automaton known as the box-ball system,
ultimately resulting in a statement and proof of a formula describing this
phase shift. This phenomenon has been observed since the nineties, when the
system was first introduced by Takahashi and Satsuma, but no explicit global
description was made beyond its observation. By using the
Gessel-Viennot-Lindstr\"{o}m lemma and path-counting arguments, we present here
a novel proof of the classical phase shift formula for the continuous-time Toda
lattice, as discovered by Moser, and use this proof to derive a discrete-time
Toda lattice analogue of the phase shift phenomenon. By carefully analysing the
connection between the box-ball system and the discrete-time Toda lattice,
through the mechanism of tropicalisation/dequantisation, we translate this
discrete-time Toda lattice phase shift formula into our new formula for the
box-ball system phase shift.
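For concreteness, one time step of the Takahashi-Satsuma box-ball system can be sketched with the standard carrier algorithm: sweep the boxes left to right, picking up every ball encountered and dropping one into each empty box while the carrier is loaded. The function name and the right-padding convention (so every ball has room to land) are ours, not notation from the paper:

```python
def bbs_step(state):
    """One time step of the box-ball system via the carrier algorithm.
    state: list of 0/1 entries (1 = box contains a ball)."""
    # pad with enough empty boxes so every carried ball can be dropped
    state = list(state) + [0] * sum(state)
    carrier = 0
    out = []
    for box in state:
        if box == 1:
            carrier += 1      # pick up the ball
            out.append(0)
        elif carrier > 0:
            carrier -= 1      # drop one carried ball into the empty box
            out.append(1)
        else:
            out.append(0)
    return out
```

Iterating this map shows the solitonic behaviour underlying the phase shift: a block of k consecutive balls travels k boxes per step, and larger solitons overtake smaller ones with a shift in position after the collision.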
|
Existing computer vision research in categorization struggles with fine-grained attribute recognition due to the inherently high intra-class variances and low inter-class variances. SOTA methods tackle this challenge by
locating the most informative image regions and rely on them to classify the
complete image. The most recent work, Vision Transformer (ViT), shows its
strong performance in both traditional and fine-grained classification tasks.
In this work, we propose a multi-stage ViT framework for fine-grained image
classification tasks, which localizes the informative image regions without
requiring architectural changes using the inherent multi-head self-attention
mechanism. We also introduce attention-guided augmentations for improving the
model's capabilities. We demonstrate the value of our approach by experimenting
with four popular fine-grained benchmarks: CUB-200-2011, Stanford Cars,
Stanford Dogs, and FGVC7 Plant Pathology. We also demonstrate our model's interpretability via qualitative results.
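Localizing informative regions from the inherent multi-head self-attention is commonly done by inspecting how much the classification token attends to each patch. The sketch below shows one such heuristic on a single layer's attention tensor; it is a generic illustration (the helper name and the single-layer CLS-attention choice are our assumptions, and the paper's multi-stage procedure may differ):

```python
import numpy as np

def top_patches(attn, k=4):
    """attn: (heads, tokens, tokens) self-attention weights from one
    ViT layer, with token 0 being the [CLS] token. Average the
    CLS-to-patch attention over heads and return the indices of the
    k most-attended patches."""
    cls_attn = attn[:, 0, 1:].mean(axis=0)   # (num_patches,)
    return np.argsort(cls_attn)[::-1][:k]    # patch indices, most attended first
```

The returned patch indices can then be mapped back to image coordinates to crop or augment the informative regions, which is the spirit of attention-guided augmentation.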
|
Monitoring the electronic properties of 2D materials is an essential step towards applications such as electronic devices and sensors. From this
perspective, Bernal bilayer graphene (BLG) is a fairly simple system that
offers great possibilities for tuning electronic gap and charge carriers'
mobility by selective functionalization (adsorptions of atoms or molecules).
Here, we present a detailed numerical study of BLG electronic properties when
two types of adsorption site are present simultaneously. We focus on realistic
cases that could be realized experimentally with adsorbate concentration c
varying from 0.25% to 5%. For a given value of c, when the electronic doping is
lower than c we show that quantum effects, which are ignored in usual
semi-classical calculations, strongly affect the electronic structure and the
transport properties. A wide range of behaviors is indeed found, such as gap opening, metallic behavior, or anomalous conductivity, which depend on the adsorbate positions, the value of c, the doping, and possibly the coupling between midgap states, which can create a midgap band. These behaviors are
understood by simple arguments based on the fact that BLG lattice is bipartite.
We also analyze the conductivity at low temperature, where multiple scattering
effects cannot be ignored. Moreover, when the Fermi energy lies in the band of midgap states, the average velocity of the charge carriers vanishes, but conduction is still possible thanks to quantum fluctuations of the velocity.
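The bipartite-lattice argument invoked above can be illustrated with a toy tight-binding calculation: on a bipartite graph, a sublattice imbalance |n_A - n_B| (as produced by adsorbates that effectively remove sites from one sublattice) forces at least that many zero-energy midgap states. The sketch below verifies the counting on small graphs; the helper name and the toy graphs are ours, and this is not a BLG computation:

```python
import numpy as np

def count_zero_modes(adjacency, tol=1e-9):
    """Count zero-energy (midgap) eigenstates of a nearest-neighbour
    tight-binding Hamiltonian H = -A on a given lattice graph."""
    eigenvalues = np.linalg.eigvalsh(-np.asarray(adjacency, dtype=float))
    return int(np.sum(np.abs(eigenvalues) < tol))
```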
|
Spear phishing is a harmful cyber-attack facing businesses and individuals worldwide. Considerable research has been conducted recently into the use of Machine Learning (ML) techniques to detect spear-phishing emails. ML-based solutions may suffer from zero-day attacks: unseen attacks unaccounted for in the training data. As new attacks emerge, classifiers trained on older data are unable to detect these new varieties of attack, resulting in increasingly inaccurate predictions. Spear-phishing detection also faces scalability challenges because the number of required features grows in proportion to the number of senders within a receiver's mailbox; this differs from traditional phishing detection, which typically performs only a binary classification between phishing and benign emails. Therefore, we devise a possible solution to these problems, named RAIDER: Reinforcement AIded Spear Phishing DEtectoR, a reinforcement-learning-based feature evaluation system that can automatically find the optimum features for detecting different types of attacks. By leveraging a reward and penalty system, RAIDER allows for autonomous feature selection. RAIDER also keeps the number of features to a minimum by selecting only the significant features to represent phishing emails
and detect spear-phishing attacks. After extensive evaluation of RAIDER over
11,000 emails and across 3 attack scenarios, our results suggest that using
reinforcement learning to automatically identify the significant features could
reduce the dimensions of the required features by 55% in comparison to existing
ML-based systems. It also improves the accuracy of detecting spoofing attacks
by 4% from 90% to 94%. In addition, RAIDER demonstrates reasonable detection
accuracy even against a sophisticated attack named Known Sender in which
spear-phishing emails greatly resemble those of the impersonated sender.
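A reward-and-penalty feature evaluation loop of this kind can be sketched as an epsilon-greedy bandit: each candidate feature keeps a running value estimate, earning a reward when including it raises the classifier score and a penalty when it lowers it. This is a hypothetical illustration only; the function names, scoring interface, and epsilon-greedy policy are our assumptions, not RAIDER's published algorithm:

```python
import random

def select_features(features, evaluate, episodes=200, epsilon=0.2, seed=0):
    """Sketch of reward/penalty-driven feature selection as a bandit.
    evaluate(feature_set) -> classifier score on held-out emails
    (here a black box supplied by the caller)."""
    rng = random.Random(seed)
    baseline = evaluate(set())
    # initialise each feature's value estimate with one evaluation
    q = {f: evaluate({f}) - baseline for f in features}
    counts = {f: 1 for f in features}
    for _ in range(episodes):
        # epsilon-greedy: explore a random feature or exploit the best one
        if rng.random() < epsilon:
            f = rng.choice(features)
        else:
            f = max(q, key=q.get)
        reward = evaluate({f}) - baseline     # positive = reward, negative = penalty
        counts[f] += 1
        q[f] += (reward - q[f]) / counts[f]   # incremental mean update
    # keep only features whose estimated contribution is positive
    return [f for f in features if q[f] > 0]
```

Keeping only positively rewarded features is what bounds the feature count, mirroring the dimensionality reduction reported above.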
|
Radiative seesaw models have the attractive property of providing dark matter
candidates in addition to generation of neutrino masses. Here we present a
study of neutrino signals from the annihilation of dark matter particles which
have been gravitationally captured in the Sun, in the framework of the
scotogenic model. We compute expected event rates in the IceCube detector in
its 86-string configuration. As fermionic dark matter does not accumulate in
the Sun, we study the case of scalar dark matter, with a scan over the
parameter space. Due to a naturally small mass splitting between the two
neutral scalar components, inelastic scattering processes with nucleons can
occur. We find that for small mass splittings, the model yields very high event
rates. If a detailed analysis at IceCube can exclude these parameter points,
our findings can be translated into a lower limit on one of the scalar
couplings in the model. For larger mass splittings only the elastic case needs
to be considered. We find that in this scenario the XENON1T limits exclude all
points with sufficiently large event rates.
|
We consider a unitary circuit where the underlying gates are chosen to be
R-matrices satisfying the Yang-Baxter equation and correlation functions can be
expressed through a transfer matrix formalism. These transfer matrices are no
longer Hermitian and differ from the ones guaranteeing local conservation laws,
but remain mutually commuting at different values of the spectral parameter
defining the circuit. Exact eigenstates can still be constructed as a Bethe
ansatz, but while these transfer matrices are diagonalizable in the
inhomogeneous case, the homogeneous limit corresponds to an exceptional point
where multiple eigenstates coalesce and Jordan blocks appear. Remarkably, the
complete set of (generalized) eigenstates is only obtained when taking into
account a combinatorial number of nontrivial vacuum states. In all cases, the
Bethe equations reduce to those of the integrable spin-1 chain and exhibit a
global SU(2) symmetry, significantly reducing the total number of eigenstates
required in the calculation of correlation functions. A similar construction is
shown to hold for the calculation of out-of-time-order correlations.
|
This paper focuses on understanding how the generalization error scales with
the amount of the training data for deep neural networks (DNNs). Existing
techniques in statistical learning require computation of capacity measures,
such as VC dimension, to provably bound this error. It is however unclear how
to extend these measures to DNNs, and therefore the existing analyses are applicable only to simple neural networks that are not used in practice, e.g., linear or shallow ones, or multi-layer perceptrons. Moreover, many
theoretical error bounds are not empirically verifiable. We derive estimates of
the generalization error that hold for deep networks and do not rely on
unattainable capacity measures. The enabling technique in our approach hinges
on two major assumptions: i) the network achieves zero training error, ii) the
probability of making an error on a test point is proportional to the distance
between this point and its nearest training point in the feature space and at a
certain maximal distance (that we call radius) it saturates. Based on these
assumptions we estimate the generalization error of DNNs. The obtained estimate
scales as O(1/(\delta N^{1/d})), where N is the size of the training data; the estimate is parameterized by two quantities, the effective dimensionality of the data as perceived by the network (d) and the aforementioned radius (\delta), both of which we find empirically. We show that our estimates match the
experimentally obtained behavior of the error on multiple learning tasks using
benchmark datasets and realistic models. Estimating training data requirements is essential for the deployment of safety-critical applications such as autonomous driving. Furthermore, collecting and annotating training data requires a
huge amount of financial, computational and human resources. Our empirical
estimates will help to efficiently allocate resources.
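The scaling law above is simple enough to evaluate directly. The sketch below assumes a proportionality constant k (a hypothetical placeholder, since the abstract only states the asymptotic form):

```python
def generalization_error_estimate(N, d, delta, k=1.0):
    """Generalization-error estimate scaling as O(1/(delta * N^(1/d))).
    N: training-set size, d: effective dimensionality, delta: radius;
    k is an illustrative proportionality constant."""
    return k / (delta * N ** (1.0 / d))
```

One practical consequence of this form: halving the error requires 2^d times more training data, so the empirically measured effective dimensionality d directly governs data-collection budgets.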
|
We study a class of elliptic and parabolic equations in non-divergence form
with singular coefficients in an upper half space with the homogeneous
Dirichlet boundary condition. Intrinsic weighted Sobolev spaces are found in
which the existence and uniqueness of strong solutions are proved when the
partial oscillations of coefficients in small parabolic cylinders are small.
Our results are new even when the coefficients are constants.
|