We prove radial symmetry for bounded nonnegative solutions of a weighted
anisotropic problem.
Given the anisotropic setting that we deal with, the term "radial" is
understood in the Finsler framework.
In the whole space, J. Serra obtained the symmetry result in the isotropic
unweighted setting. Here we extend his result to the anisotropic setting,
thereby generalizing a celebrated result due to Gidas-Ni-Nirenberg; this
generalization is new even in the case of linear operators whenever the
dimension is greater than 2.
In proper cones, the results presented are new even in the isotropic and
unweighted setting for suitable nonlinear cases. Even for the previously known
case of unweighted isotropic setting, the present paper provides an approach to
the problem by exploiting integral (in)equalities which is new for $N>2$: this
complements the corresponding symmetry result obtained via the moving planes
method by Berestycki-Pacella.
|
In this paper, we introduce weighted fractional generalized cumulative past
entropy of a nonnegative absolutely continuous random variable with bounded
support. Various properties of the proposed weighted fractional measure are
studied. Bounds and stochastic orderings are derived. A connection between the
proposed measure and the left-sided Riemann-Liouville fractional integral is
established. Further, the proposed measure is studied for the proportional
reversed hazard rate models. Next, a nonparametric estimator of the weighted
fractional generalized cumulative past entropy is suggested based on the
empirical distribution function. Various examples, including a real-life data
set, are considered for illustration purposes. Finally, large-sample properties of
the proposed empirical estimator are studied.
|
The BERT model has shown significant success on various natural language
processing tasks. However, due to the heavy model size and high computational
cost, the model suffers from high latency, which hinders its deployment on
resource-limited devices. To tackle this problem, we propose a dynamic
inference method on BERT via trainable gate variables applied on input tokens
and a regularizer that has a bi-modal property. Our method shows reduced
computational cost on the GLUE dataset with a minimal performance drop.
Moreover, the model can adjust the trade-off between performance and
computational cost via a user-specified hyperparameter.
|
In introductory-level electromagnetism courses, computing the electrostatic
potential and electric field at an arbitrary point is a very common exercise.
One of the most frequently treated cases is the calculation of the
electrostatic potential and electric field on the symmetry axis of a centered
ring, and the potential off the axis of a charged ring centered at the
coordinate origin has also been widely studied. In this work, we calculate the
electrostatic potential and electric field on the $z$ axis of a non-centered
charged ring using elliptic integrals, as a pedagogical example of the
application of special functions in electromagnetism.
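As a concrete illustration of how elliptic integrals enter this problem, the following sketch (with hypothetical parameters: a ring of radius `a` centered at `(d, 0, 0)`, linear charge density `lam`, and Coulomb constant `k_e` set to 1) evaluates the potential on the $z$ axis in closed form via `scipy.special.ellipk` and checks it against direct quadrature. This is the generic reduction of the ring integral to $K(m)$, not necessarily the exact parameterization used in the work summarized above.

```python
import numpy as np
from scipy.special import ellipk          # complete elliptic integral K(m), m = k^2
from scipy.integrate import quad

def potential_on_z_axis(a, d, z, lam=1.0, k_e=1.0):
    """Potential at (0, 0, z) of a ring of radius a centered at (d, 0, 0).

    V = k_e * lam * a * \\int_0^{2pi} dtheta / sqrt(A + B cos(theta)),
    with A = a^2 + d^2 + z^2 and B = 2 a d, which reduces to
    (4 / sqrt(A + B)) * K(m) with m = 2B / (A + B).
    """
    A = a**2 + d**2 + z**2
    B = 2.0 * a * d
    m = 2.0 * B / (A + B)
    return k_e * lam * a * 4.0 / np.sqrt(A + B) * ellipk(m)

def potential_by_quadrature(a, d, z, lam=1.0, k_e=1.0):
    """Same potential by brute-force numerical integration over the ring."""
    integrand = lambda t: 1.0 / np.sqrt(a**2 + d**2 + z**2 + 2*a*d*np.cos(t))
    val, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return k_e * lam * a * val

# The closed form agrees with direct quadrature; for d = 0 it reduces to the
# familiar centered-ring result k_e * Q / sqrt(a^2 + z^2) with Q = 2*pi*a*lam.
print(potential_on_z_axis(1.0, 0.3, 0.5))
print(potential_by_quadrature(1.0, 0.3, 0.5))
```

Note the convention: SciPy's `ellipk` takes the parameter $m = k^2$, not the modulus $k$, which is a frequent source of factor errors when translating textbook formulas.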
|
Nova outbursts play an important role in the chemical evolution of galaxies;
in particular, they are a main source of newly synthesized $^{13}\rm C$,
$^{15}\rm N$, $^{17}\rm O$ and some radioactive isotopes such as $^{22}\rm Na$
and $^{26}\rm Al$. The enrichment of He in nova ejecta indicates that the accreted material
may mix with the He-shell (He-mixing). The purpose of this work is to
investigate how the He-mixing affects the nova outbursts in a systematic way.
We evolved a series of accreting WD models, and found that the mass fraction of
H and He in nova ejecta can be influenced by different He-mixing fractions
significantly. We also found that both the nova cycle duration and ejected mass
increase with the He-mixing fraction. Meanwhile, the nuclear energy production
from $p$-$p$ chains decreases with the He-mixing fraction during the nova
outbursts, whereas that from the CNO cycle increases. The present work can reproduce the
chemical abundances in the ejecta of some novae, such as GQ Mus, ASASSN-18fv,
HR Del, T Aur and V443 Sct. This implies that the He-mixing process cannot be
neglected when studying nova outbursts. This study also develops a He-mixing
meter (i.e. $\rm He/H$) that can be used to estimate the He-mixing fraction in
classical nova systems.
|
We study the stochastic gravitational waves from string gas cosmology. With
the help of the Lambert W function, we derive the exact energy density spectrum
of the stochastic gravitational waves in terms of the tensor-to-scalar ratio.
New features of the spectrum are found. First, the non-Hagedorn phase can be ruled out by
the current B-mode polarization in the cosmic microwave background. Second, the
exact spectrum from the Hagedorn phase with a logarithmic term is shown to be
unique in the measurable frequency range. Third, and most importantly, we find
that the string length can be constrained to be less than $\sim$7 orders of
magnitude above the Planck scale.
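For readers unfamiliar with the Lambert W function used in the derivation above, it is the inverse of $w \mapsto w e^w$. A minimal numerical illustration with `scipy.special.lambertw` (generic usage, unrelated to the specific spectrum derived in the abstract):

```python
import numpy as np
from scipy.special import lambertw

# The Lambert W function inverts f(w) = w * exp(w): for x >= -1/e, the
# principal branch W_0 satisfies W_0(x) * exp(W_0(x)) = x.
x = 2.5
w = lambertw(x).real      # branch index k=0 (principal); result is real here
print(w * np.exp(w))      # recovers x

# Typical use: solving a transcendental equation t * exp(t) = c for t,
# the kind of inversion that yields closed-form spectra.
c = 10.0
t = lambertw(c).real
assert np.isclose(t * np.exp(t), c)
```

`lambertw` returns a complex value in general, hence the `.real`; the second argument selects the branch when multiple real solutions exist.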
|
We argue against the use of generally weighted moving average (GWMA) control
charts. Our primary reasons are the following: 1) There is no recursive formula
for the GWMA control chart statistic, so all previous data must be stored and
used in the calculation of each chart statistic. 2) The Markovian property does
not apply to the GWMA statistics, so computer simulation must be used to
determine control limits and the statistical performance. 3) An appropriately
designed, and much simpler, exponentially weighted moving average (EWMA) chart
provides as good or better statistical performance. 4) In some cases the GWMA
chart gives more weight to past data values than to current values.
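The storage issue in point 1 can be made concrete. The sketch below contrasts the recursive EWMA update with a GWMA statistic computed under one common parameterization (weight $q^{(j-1)^\alpha} - q^{j^\alpha}$ on the $j$-th most recent observation, in-control target assumed 0); whether this matches any specific chart design in the abstract is an assumption. The GWMA must re-weight the entire history at every step, and reduces to the EWMA when $\alpha = 1$.

```python
import numpy as np

def ewma(x, lam):
    """EWMA chart statistic: z_t = lam * x_t + (1 - lam) * z_{t-1}.
    Only the previous statistic must be stored (recursive update)."""
    z = np.zeros(len(x))
    prev = 0.0                                   # in-control target, taken as 0
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev
        z[t] = prev
    return z

def gwma(x, q, alpha):
    """GWMA statistic: weight on the j-th most recent observation is
    q^{(j-1)^alpha} - q^{j^alpha} (residual weight q^{t^alpha} multiplies the
    target, which is 0 here). The whole history is re-weighted at every t --
    there is no recursion, illustrating the storage/computation drawback."""
    x = np.asarray(x, dtype=float)
    z = np.zeros(len(x))
    for t in range(len(x)):
        j = np.arange(1, t + 2)                  # j = 1 is the newest sample
        w = q ** ((j - 1) ** alpha) - q ** (j ** alpha)
        z[t] = np.dot(w, x[t::-1])               # newest-first ordering
    return z

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# With alpha = 1 the GWMA weights q^{j-1} - q^j = (1-q) q^{j-1} coincide
# with the EWMA weights for lam = 1 - q:
print(np.max(np.abs(gwma(x, q=0.9, alpha=1.0) - ewma(x, lam=0.1))))
```

The quadratic total cost of the full-history re-weighting, versus the constant per-step cost of the EWMA recursion, is exactly the practical objection raised in point 1.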
|
Modern online services rely on data stores that replicate their data across
geographically distributed data centers. Providing strong consistency in such
data stores results in high latencies and makes the system vulnerable to
network partitions. The alternative of relaxing consistency violates crucial
correctness properties. A compromise is to allow multiple consistency levels to
coexist in the data store. In this paper we present UniStore, the first
fault-tolerant and scalable data store that combines causal and strong
consistency. The key challenge we address in UniStore is to maintain liveness
despite data center failures: this could be compromised if a strong transaction
takes a dependency on a causal transaction that is later lost because of a
failure. UniStore ensures that such situations do not arise while paying the
cost of durability for causal transactions only when necessary. We evaluate
UniStore on Amazon EC2 using both microbenchmarks and a sample application. Our
results show that UniStore effectively and scalably combines causal and strong
consistency.
|
Sequences of events including infectious disease outbreaks, social network
activities, and crimes are ubiquitous and the data on such events carry
essential information about the underlying diffusion processes between
communities (e.g., regions, online user groups). Modeling diffusion processes
and predicting future events are crucial in many applications including
epidemic control, viral marketing, and predictive policing. Hawkes processes
offer a central tool for modeling the diffusion processes, in which the
influence from the past events is described by the triggering kernel. However,
the triggering kernel parameters, which govern how each community is influenced
by the past events, are assumed to be static over time. In the real world, the
diffusion processes depend not only on the influences from the past, but also
the current (time-evolving) states of the communities, e.g., people's awareness
of the disease and people's current interests. In this paper, we propose a
novel Hawkes process model that is able to capture the underlying dynamics of
community states behind the diffusion processes and predict the occurrences of
events based on the dynamics. Specifically, we model the latent dynamic
function that encodes these hidden dynamics by a mixture of neural networks.
Then we design the triggering kernel using the latent dynamic function and its
integral. The proposed method, termed DHP (Dynamic Hawkes Processes), offers a
flexible way to learn complex representations of the time-evolving communities'
states, while at the same time allowing the exact likelihood to be computed,
which makes parameter learning tractable. Extensive experiments on four
real-world event datasets show that DHP outperforms five widely adopted methods
for event prediction.
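For context on the triggering-kernel machinery discussed above, here is a minimal classical Hawkes likelihood with a fixed exponential kernel, not the neural latent-dynamics kernel the abstract proposes. It illustrates why a kernel with a closed-form integral makes the exact likelihood tractable.

```python
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """Exact log-likelihood of a univariate Hawkes process on [0, T] with
    exponential triggering kernel phi(s) = alpha * beta * exp(-beta * s):

        log L = sum_i log(lambda(t_i)) - mu*T
                - alpha * sum_i (1 - exp(-beta*(T - t_i)))

    The intensity at each event uses the standard O(n) recursion
    A_i = exp(-beta*(t_i - t_{i-1})) * (1 + A_{i-1}), A_1 = 0,
    so lambda(t_i) = mu + alpha * beta * A_i.
    """
    times = np.asarray(times, dtype=float)
    loglik = -mu * T                       # baseline part of the compensator
    A, prev = 0.0, None
    for t in times:
        if prev is not None:
            A = np.exp(-beta * (t - prev)) * (1.0 + A)
        loglik += np.log(mu + alpha * beta * A)
        prev = t
    # closed-form integral of the kernel over [t_i, T]:
    loglik -= alpha * np.sum(1.0 - np.exp(-beta * (T - times)))
    return loglik

# Sanity check: alpha = 0 reduces to a homogeneous Poisson(mu) likelihood,
# n * log(mu) - mu * T.
ts = [0.5, 1.2, 3.1, 4.0]
print(hawkes_loglik(ts, T=5.0, mu=0.8, alpha=0.0, beta=1.0))
```

The abstract's point is precisely that its learned latent dynamic function is designed so the kernel integral (the last term above) remains available in closed form, keeping this exact likelihood computable.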
|
Form understanding is a challenging problem which aims to recognize semantic
entities from the input document and their hierarchical relations. Previous
approaches face significant difficulty dealing with the complexity of the task
and thus treat these objectives separately. In contrast, we present a novel deep
neural network to jointly perform both entity detection and link prediction in
an end-to-end fashion. Our model extends the Multi-stage Attentional U-Net
architecture with the Part-Intensity Fields and Part-Association Fields for
link prediction, enriching the spatial information flow with the additional
supervision from entity linking. We demonstrate the effectiveness of the model
on the Form Understanding in Noisy Scanned Documents (FUNSD) dataset, where our
method substantially outperforms the original model and state-of-the-art
baselines in both the Entity Labeling and Entity Linking tasks.
|
IV-VI materials are some of the most efficient bulk thermoelectric materials
due to their proximity to soft-mode phase transitions, which leads to low
lattice thermal conductivity. It has been shown that the lattice thermal
conductivity of PbTe can be considerably reduced by bringing PbTe closer to the
phase transition e.g. via lattice expansion. However, the effect of soft phonon
modes on the electronic thermoelectric properties of such systems remains
unknown. Using first principles calculations, we show that the soft zone center
transverse optical phonons do not deteriorate the electronic thermoelectric
properties of PbTe driven closer to the phase transition via lattice expansion
due to external stress, and thus enhance the thermoelectric figure of merit. We
find that the optical deformation potentials change very weakly as the
proximity to the phase transition increases, but the population and scattering
phase space of soft phonon modes increase. Nevertheless, scattering between
electronic states near the band edge and soft optical phonons remains
relatively weak even very near the phase transition.
|
In micro- and nano-scale systems, particles can be moved by using an external
force like gravity or a magnetic field. In the presence of adhesive particles
that can attach to each other, the challenge is to decide whether a shape is
constructible. Previous work provides a class of shapes for which
constructibility can be decided efficiently, when particles move maximally into
the same direction induced by a global signal. In this paper we consider the
single step model, i.e., each particle moves one unit step into the given
direction. We prove that deciding constructibility is NP-complete for
three-dimensional shapes, and that a maximum constructible shape can be
approximated. The same approximation algorithm applies for 2D. We further
present linear-time algorithms to decide whether or not a tree-shape in 2D or
3D is constructible. Scaling a shape yields constructibility; in particular we
show that the $2$-scaled copy of every non-degenerate polyomino is
constructible. In the three-dimensional setting we show that the $3$-scaled
copy of every non-degenerate polycube is constructible.
|
Information transmission over a multiple-input-multiple-output (MIMO) fading
channel with imperfect channel state information (CSI) is investigated, under a
new receiver architecture which combines the recently proposed generalized
nearest neighbor decoding rule (GNNDR) and a successive procedure in the spirit
of successive interference cancellation (SIC). Recognizing that the channel
input-output relationship is a nonlinear mapping under imperfect CSI, the GNNDR
is capable of extracting the information embedded in the joint observation of
channel output and imperfect CSI more efficiently than the conventional linear
scheme, as revealed by our achievable rate analysis via generalized mutual
information (GMI). Numerical results indicate that the proposed scheme achieves
performance close to the channel capacity with perfect CSI, and significantly
outperforms the conventional pilot-assisted scheme, which first estimates the
CSI and then uses the estimated CSI as the true one for coherent decoding.
|
This paper presents our approach to address the EACL WANLP-2021 Shared Task
1: Nuanced Arabic Dialect Identification (NADI). The task aims at developing a
system that identifies the geographical location (country/province) from which
an Arabic tweet, written in Modern Standard Arabic or a dialect, originates. We
solve the task in two parts. The first part involves
pre-processing the provided dataset by cleaning, adding and segmenting various
parts of the text. This is followed by carrying out experiments with different
versions of two Transformer based models, AraBERT and AraELECTRA. Our final
approach achieved macro F1-scores of 0.216, 0.235, 0.054, and 0.043 in the four
subtasks, and we were ranked second in MSA identification subtasks and fourth
in DA identification subtasks.
|
The prominence of figurative language devices, such as sarcasm and irony,
poses serious challenges for Arabic Sentiment Analysis (SA). While previous
research works tackle SA and sarcasm detection separately, this paper
introduces an end-to-end deep Multi-Task Learning (MTL) model, allowing
knowledge interaction between the two tasks. Our MTL model's architecture
consists of a Bidirectional Encoder Representation from Transformers (BERT)
model, a multi-task attention interaction module, and two task classifiers. The
overall obtained results show that our proposed model outperforms its
single-task counterparts on both SA and sarcasm detection sub-tasks.
|
Atom probe tomography (APT), based on the work of Erwin Mueller, is able to
generate three-dimensional chemical maps at atomic resolution. Over the last
20 years, APT instrumentation has matured, turning the technique from an
experimental one into an established method of materials analysis. Here, we describe the realization
of a new instrument concept that allows the direct attachment of APT to a dual
beam SEM microscope with the main achievement of fast and direct sample
transfer. New operational modes are enabled regarding sample geometry and the
alignment of tips and the microelectrode. The instrument is optimized to handle
cryo-samples at all stages of preparation and storage. The instrument comes
with its own software for evaluation and reconstruction. The performance in
terms of mass resolution, aperture angle, and detection efficiency is
demonstrated with a few application examples.
|
We consider systems that require timely monitoring of sources over a
communication network, where the cost of delayed information is unknown,
time-varying and possibly adversarial. For the single source monitoring
problem, we design algorithms that achieve sublinear regret compared to the
best fixed policy in hindsight. For the multiple source scheduling problem, we
design a new online learning algorithm called
Follow-the-Perturbed-Whittle-Leader and show that it has low regret compared to
the best fixed scheduling policy in hindsight, while remaining computationally
feasible. The algorithm and its regret analysis are novel and of independent
interest to the study of online restless multi-armed bandit problems. We
further design algorithms that achieve sublinear regret compared to the best
dynamic policy when the environment is slowly varying. Finally, we apply our
algorithms to a mobility tracking problem. We consider non-stationary and
adversarial mobility models and illustrate the performance benefit of using our
online learning algorithms compared to an oblivious scheduling policy.
|
Population aging in Brazil and in the world occurs at the same time of
advances and evolutions in technology. Thus, opportunities for new solutions
arise for the elderly, such as innovations in Home Care. With the Internet of
Things, it is possible to improve the elderly autonomy, safety and quality of
life. However, the design of IoT solutions for elderly Home Care poses new
challenges. In this context, this technical report aims to detail activities
developed as a case study to evaluate the IoT-PMHCS Method, which was developed
in the context of the Master's program in Computer Science at UNIFACCAMP,
Brazil. This report includes the planning and results of interviews,
participatory workshops, validations, simulation of solutions, among other
activities. This document reports the practical experience of applying the
IoT-PMHCS Method.
|
The Stokes-Brinkman equations model fluid flow in highly heterogeneous porous
media. In this paper, we consider the numerical solution of the Stokes-Brinkman
equations with stochastic permeabilities, where the permeabilities in
subdomains are assumed to be independent and uniformly distributed within a
known interval. We employ a truncated anchored ANOVA decomposition alongside
stochastic collocation to estimate the moments of the velocity and pressure
solutions. Through an adaptive procedure selecting only the most important
ANOVA directions, we reduce the number of collocation points needed for
accurate estimation of the statistical moments. However, for even modest
stochastic dimensions, the number of collocation points remains too large to
perform high-fidelity solves at each point. We use reduced basis methods to
alleviate the computational burden by approximating the expensive high-fidelity
solves with inexpensive approximate solutions on a low-dimensional space. We
furthermore develop and analyze rigorous a posteriori error estimates for the
reduced basis approximation. We apply these methods to 2D problems considering
both isotropic and anisotropic permeabilities.
|
Semantic Segmentation is a crucial component in the perception systems of
many applications, such as robotics and autonomous driving that rely on
accurate environmental perception and understanding. In the literature, several
approaches have been introduced to tackle the LiDAR semantic segmentation task,
such as projection-based (range-view or bird's-eye-view) and voxel-based
approaches. However, they either abandon the valuable 3D topology and geometric
relations, suffering from the information loss introduced by the projection
process, or are computationally inefficient. Therefore, there is a need for accurate models capable of
processing the 3D driving-scene point cloud in 3D space. In this paper, we
propose S3Net, a novel convolutional neural network for LiDAR point cloud
semantic segmentation. It adopts an encoder-decoder backbone that consists of
Sparse Intra-channel Attention Module (SIntraAM), and Sparse Inter-channel
Attention Module (SInterAM) to emphasize fine details both within each
feature map and among nearby feature maps. To extract the global contexts in
deeper layers, we introduce Sparse Residual Tower based upon sparse convolution
that suits the varying sparsity of LiDAR point clouds. In addition, a geo-aware
anisotropic loss is leveraged to emphasize the semantic boundaries and
penalize the noise within each predicted region, leading to a robust
prediction. Our experimental results show that the proposed method leads to a
large improvement (12\%) compared to its baseline counterpart (MinkNet42
\cite{choy20194d}) on SemanticKITTI \cite{DBLP:conf/iccv/BehleyGMQBSG19} test
set and achieves state-of-the-art mIoU accuracy of semantic segmentation
approaches.
|
In this paper, we present a hybrid deep learning framework named CTNet which
combines convolutional neural network and transformer together for the
detection of COVID-19 via 3D chest CT images. It consists of a CNN feature
extractor module with SE attention to extract sufficient features from CT
scans, together with a transformer model to model the discriminative features
of the 3D CT scans. Compared to previous works, CTNet provides an effective and
efficient method to perform COVID-19 diagnosis via 3D CT scans with a data
resampling strategy. Strong results on a large public benchmark, the
COV19-CT-DB database, are achieved by the proposed CTNet, surpassing the
state-of-the-art baseline approach proposed together with the dataset.
|
We report analysis of sub-Alfv\'enic magnetohydrodynamic (MHD) perturbations
in the low-$\beta$ radial-field solar wind using the Parker Solar Probe
spacecraft data from 31 October to 12 November 2018. We calculate wave vectors
using the singular value decomposition method and separate the MHD
perturbations into three types of linear eigenmodes (Alfv\'en, fast, and slow
modes) to explore the properties of the sub-Alfv\'enic perturbations and the
role of compressible perturbations in solar wind heating. The MHD perturbations
there show a high degree of Alfv\'enicity in the radial-field solar wind, with
the energy fraction of Alfv\'en modes dominating (~45%-83%) over those of fast
modes (~16%-43%) and slow modes (~1%-19%). We present a detailed analysis of a
representative event on 10 November 2018. Observations show that fast modes
dominate magnetic compressibility, whereas slow modes dominate density
compressibility. The energy damping rate of compressible modes is comparable to
the heating rate, suggesting the collisionless damping of compressible modes
could be significant for solar wind heating. These results are valuable for
further studies of the imbalanced turbulence near the Sun and possible heating
effects of compressible modes at MHD scales in low-$\beta$ plasma.
|
We compute the partition function for 6d $\mathcal{N}=1$ $SO(2N)$ gauge
theories compactified on a circle with $\mathbb{Z}_2$ outer automorphism twist.
We perform the computation based on 5-brane webs with two O5-planes using
topological vertex with two O5-planes. As representative examples, we consider
6d $SO(8)$ and $SU(3)$ gauge theories with $\mathbb{Z}_2$ twist. We confirm
that these partition functions obtained from the topological vertex with
O5-planes indeed agree with the elliptic genus computations.
|
Due to the brittleness of carbon fiber reinforced plastic laminates,
mechanical multi-bolt joints within these composite components show an uneven
load distribution across bolts, which weakens the strength advantage of
composite laminates. To mitigate this defect and achieve even load
distribution in mechanical joints, we propose a machine learning-based
optimization framework. Since the friction effect has been shown to be a
significant factor in determining bolt load distribution, our framework aims
at providing optimal parameters, including bolt-hole clearances and tightening
torques, that minimize the unevenness of the bolt loads. A novel circuit
model is established to generate data samples for the training of artificial
networks at a relatively low computational cost. A database for all the
possible inputs in the design space is built through the machine learning
model. The optimal dataset of clearances and torques provided by the database
is validated by both the finite element method, circuit model, and an
experimental measurement based on the linear superposition principle, which
shows the effectiveness of this general framework for the optimization problem.
Then, our machine learning model is further compared and worked in
collaboration with commonly used optimization algorithms, which shows the
potential of greatly increasing computational efficiency for the inverse design
problem.
|
An ultra-light bosonic particle of mass around $10^{-22}\,\mathrm{eV}/c^2$ is
of special interest as a dark matter candidate, as it both has particle physics
motivations, and may give rise to notable differences in the structures on
highly non-linear scales due to the manifestation of quantum-physical wave
effects on macroscopic scales, which could address a number of contentious
small-scale tensions in the standard cosmological model, $\Lambda$CDM. Using a
spectral technique, we here discuss simulations of such fuzzy dark matter
(FDM), including the full non-linear wave dynamics, with a comparatively large
dynamic range and for larger box sizes than considered previously. While the
impact of suppressed small-scale power in the initial conditions associated
with FDM has been studied before, the characteristic FDM dynamics are often
neglected; in our simulations, we instead show the impact of the full
non-linear dynamics on physical observables. We focus on the evolution of the
matter power spectrum, give first results for the FDM halo mass function
directly based on full FDM simulations, and discuss the computational
challenges associated with the FDM equations. FDM shows a pronounced
suppression of power on small scales relative to cold dark matter (CDM), which
can be understood as a damping effect due to 'quantum pressure'. In certain
regimes, however, the FDM power can exceed that of CDM, which may be
interpreted as a reflection of order-unity density fluctuations occurring in
FDM. In the halo mass function, FDM shows a significant abundance reduction
below a characteristic mass scale only. This could in principle alleviate the
need to invoke very strong feedback processes in small galaxies to reconcile
$\Lambda$CDM with the observed galaxy luminosity function, but detailed studies
that also include baryons will be needed to ultimately judge the viability of
FDM.
|
We introduce a class of systems of Hamilton-Jacobi equations that
characterize critical points of functionals associated to centroidal
tessellations of domains, i.e. tessellations where generators and centroids
coincide,
such as centroidal Voronoi tessellations and centroidal power diagrams. An
appropriate version of the Lloyd algorithm, combined with a Fast Marching
method on unstructured grids for the Hamilton-Jacobi equation, allows computing
the solution of the system. We propose various numerical examples to illustrate
the features of the technique.
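As background for the abstract above, the classical Lloyd iteration it adapts can be sketched in a few lines using Monte Carlo sampling. This is only the plain geometric scheme on sample points (a discrete, approximate CVT), not the Hamilton-Jacobi/Fast Marching variant proposed there.

```python
import numpy as np

def lloyd_cvt(generators, samples, iters=200):
    """Plain Lloyd iteration for an approximate centroidal Voronoi
    tessellation of the region covered by `samples`: assign each sample to
    its nearest generator, then move every generator to the centroid of its
    cell. Fixed points are CVTs, where generators and centroids coincide."""
    g = np.array(generators, dtype=float)
    for _ in range(iters):
        # distances from every sample to every generator
        d = np.linalg.norm(samples[:, None, :] - g[None, :, :], axis=2)
        owner = np.argmin(d, axis=1)             # Voronoi cell of each sample
        for k in range(len(g)):
            cell = samples[owner == k]
            if len(cell):
                g[k] = cell.mean(axis=0)         # centroid update
    return g

rng = np.random.default_rng(1)
samples = rng.uniform(0, 1, size=(4000, 2))      # unit square, uniform density
g0 = rng.uniform(0, 1, size=(5, 2))
g = lloyd_cvt(g0, samples)
# After convergence, each generator sits (approximately) at the centroid
# of its own Voronoi cell.
```

Replacing the nearest-generator assignment with distances induced by a Hamilton-Jacobi solve is, roughly, where the abstract's method departs from this classical scheme.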
|
Neural dependency parsing has achieved remarkable performance for many
domains and languages. The bottleneck of massive labeled data limits the
effectiveness of these approaches for low resource languages. In this work, we
focus on dependency parsing for morphologically rich languages (MRLs) in a
low-resource setting. Although morphological information is essential for the
dependency parsing task, morphological disambiguation and the lack of powerful
analyzers pose challenges to obtaining this information for MRLs. To address these
challenges, we propose simple auxiliary tasks for pretraining. We perform
experiments on 10 MRLs in low-resource settings to measure the efficacy of our
proposed pretraining method and observe an average absolute gain of 2 points
(UAS) and 3.6 points (LAS). Code and data available at:
https://github.com/jivnesh/LCM
|
The band structure of bilayer graphene is tunable by introducing a relative
twist angle between the two layers, unlocking exotic phases, such as
superconductor and Mott insulator, and providing a fertile ground for new
physics. At intermediate twist angles around 10{\deg}, highly degenerate
electronic transitions hybridize to form excitonic states, a quite unusual
phenomenon in a metallic system. We probe the bright exciton mode using
resonant Raman scattering measurements to track the evolution of the intensity
of the graphene Raman G peak, corresponding to the E2g phonon. By cryogenically
cooling the sample, we are able to resolve both the incoming and outgoing
resonance in the G peak intensity evolution as a function of excitation energy,
a prominent manifestation of the bright exciton serving as the intermediate
state in the Raman process. For a sample with twist angle 8.6{\deg}, we report
a weakly temperature dependent resonance broadening ${\gamma}$ ${\approx}$ 0.07
eV. In the limit of small inhomogeneous broadening, the observed ${\gamma}$
places a lower bound for the bright exciton scattering lifetime at 10 fs in the
presence of charges and excitons excited by the light pulse for Raman
measurement, limited by the rapid exciton-exciton and exciton-charge scattering
in graphene.
|
We consider a dynamic assortment selection problem where a seller has a fixed
inventory of $N$ substitutable products and faces an unknown demand that
arrives sequentially over $T$ periods. In each period, the seller needs to
decide on the assortment of products (of cardinality at most $K$) to offer to
the customers. The customer's response follows an unknown multinomial logit
model (MNL) with parameters $v$. The goal of the seller is to maximize the
total expected revenue given the fixed initial inventory of $N$ products. We
give a policy that achieves a regret of $\tilde O\left(K \sqrt{K N T}\left(1 +
\frac{\sqrt{v_{\max}}}{q_{\min}}\text{OPT}\right) \right)$ under a mild
assumption on the model parameters. In particular, our policy achieves a
near-optimal $\tilde O(\sqrt{T})$ regret in the large inventory setting.
Our policy builds upon the UCB-based approach for MNL-bandit without
inventory constraints in [1] and addresses the inventory constraints through an
exponentially sized LP for which we present a tractable approximation while
keeping the $\tilde O(\sqrt{T})$ regret bound.
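To make the choice model above concrete, here is a minimal sketch of MNL choice probabilities and static assortment revenue, with hypothetical weights `v` and prices `r`. The policy in the abstract additionally learns $v$ online and handles the inventory constraint through an LP; none of that is shown here.

```python
from itertools import combinations

def mnl_choice_probs(v, S):
    """Multinomial logit (MNL) model: offered assortment S, the customer buys
    product i in S with probability v_i / (1 + sum_{j in S} v_j), and buys
    nothing with the remaining probability 1 / (1 + sum_{j in S} v_j)."""
    denom = 1.0 + sum(v[j] for j in S)
    return {j: v[j] / denom for j in S}, 1.0 / denom

def expected_revenue(v, r, S):
    """Expected single-period revenue of offering assortment S at prices r."""
    probs, _ = mnl_choice_probs(v, S)
    return sum(r[j] * p for j, p in probs.items())

v = {0: 1.0, 1: 0.5, 2: 0.25}   # hypothetical MNL preference weights
r = {0: 1.0, 1: 2.0, 2: 4.0}    # hypothetical prices

# Static problem: enumerate assortments of cardinality at most K = 2 and
# pick the revenue-maximizing one (ignoring inventory).
best = max((S for k in (1, 2) for S in combinations(v, k)),
           key=lambda S: expected_revenue(v, r, S))
print(best, expected_revenue(v, r, best))
```

Note the substitution effect that makes assortment optimization nontrivial: adding a cheap, popular product to an assortment can cannibalize demand for expensive ones and lower expected revenue.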
|
The extensive use of medical CT has raised a public concern over the
radiation dose to the patient. Reducing the radiation dose leads to increased
CT image noise and artifacts, which can adversely affect not only the
radiologists' judgement but also the performance of downstream medical image
analysis tasks. Various low-dose CT denoising methods, especially the recent
deep learning based approaches, have produced impressive results. However, the
existing denoising methods are all downstream-task-agnostic and neglect the
diverse needs of the downstream applications. In this paper, we introduce a
novel Task-Oriented Denoising Network (TOD-Net) with a task-oriented loss
leveraging knowledge from the downstream tasks. Comprehensive empirical
analysis shows that the task-oriented loss complements other task agnostic
losses by steering the denoiser to enhance the image quality in the task
related regions of interest. Such enhancement in turn brings general boosts on
the performance of various methods for the downstream task. The presented work
may shed light on the future development of context-aware image denoising
methods.
|
In the dynamics of open quantum systems, the backflow of information to the
reduced system under study has been suggested as the actual physical mechanism
inducing memory and thus leading to non-Markovian quantum dynamics. To this
end, the trace-distance or Bures-distance revivals between distinct evolved
system states have been shown to be subordinated to the establishment of
system-environment correlations or changes in the environmental state. We show
that this interpretation can also be substantiated for a class of entropic
quantifiers. We exploit a suitably regularized version of Umegaki's quantum
relative entropy, known as telescopic relative entropy, that is tightly
connected to the quantum Jensen-Shannon divergence. In particular, we derive
general upper bounds on the telescopic relative entropy revivals conditioned
and determined by the formation of correlations and changes in the environment.
We illustrate our findings by means of examples, considering the
Jaynes-Cummings model and a two-qubit dynamics.
|
Transmission eigenfunctions are certain interior resonant modes that are of
central importance to the wave scattering theory. In this paper, we present the
discovery of novel global rigidity properties of the transmission
eigenfunctions associated with the Maxwell system. It is shown that the
transmission eigenfunctions carry the geometrical and topological information
of the underlying domain. We present both analytical and numerical results of
these intriguing rigidity properties. As an interesting application, we propose
an illusion scheme of artificially generating a mirage image of any given
optical object.
|
We consider the ten confidently detected gravitational wave signals in the
GWTC-1 catalog [1] which are consistent with mergers of binary black hole
systems, and re-analyze them with waveform models that contain subdominant
spherical harmonic modes. This analysis is based on the current (fourth)
generation of the IMRPhenom family of phenomenological waveform models, which
consists of the IMRPhenomX frequency-domain models [2-5] and the IMRPhenomT
time-domain models [6-8]. We find overall consistent results, with all
Jensen-Shannon divergences between the previous results using IMRPhenomPv2 and
our default IMRPhenomXPHM posterior results below 0.045 bits. Effects of
subdominant harmonics are however visible for several events, and for GW170729
our new time domain IMRPhenomTPHM model provides the best fit and shifts the
posterior further toward more unequal masses and a higher primary mass of
$57.3^{+12.0}_{-10.9}$ solar masses at the lower end of the PISN mass gap.
|
The first detection of gravitational waves from the binary neutron star
merger GW170817 by the LIGO-Virgo Collaboration has provided fundamental new
insights into the astrophysical site for r-process nucleosynthesis and on the
nature of dense neutron-star matter. The detected gravitational wave signal
depends upon the tidal distortion of the neutron stars as they approach merger.
We report on relativistic numerical simulations of the approach to binary
merger in the conformally flat, quasi-circular orbit approximation. We show
that this event serves as a calibration to the quasi-circular approximation and
a confirmation of the validity of the conformally flat approximation to the
three-metric. We then examine how the detected chirp depends upon the adopted
equation of state. This establishes a new efficient means to constrain the
nuclear equation of state in binary neutron star mergers.
|
We propose a new class of robust and Fisher-consistent estimators for mixture
models. These estimators can be used to construct robust model-based clustering
procedures. We study in detail the case of multivariate normal mixtures and
propose a procedure that uses S estimators of multivariate location and
scatter. We develop an algorithm to compute the estimators and to build the
clusters which is quite similar to the EM algorithm. An extensive Monte Carlo
simulation study shows that our proposal compares favorably with other robust
and non-robust model-based clustering procedures. We apply our procedure and alternative
procedures to a real data set and again find that the best results are obtained
using our proposal.
|
The EXperiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM) is a
cryogenic balloon-borne instrument that will map carbon monoxide and
singly-ionized carbon emission lines across redshifts from 0 to 3.5, using an
intensity mapping approach. EXCLAIM will broaden our understanding of these
elemental and molecular gases and the role they play in star formation
processes across cosmic time scales. The focal plane of EXCLAIM's cryogenic
telescope features six {\mu}-Spec spectrometers. {\mu}-Spec is a compact,
integrated grating-analog spectrometer, which uses meandered superconducting
niobium microstrip transmission lines on a single-crystal silicon dielectric to
synthesize the grating. It features superconducting aluminum microwave kinetic
inductance detectors (MKIDs), also in a microstrip architecture. The
spectrometers for EXCLAIM couple to the telescope optics via a hybrid planar
antenna coupled to a silicon lenslet. The spectrometers operate from 420 to 540
GHz with a resolving power R={\lambda}/{\Delta}{\lambda}=512 and employ an
array of 355 MKIDs on each spectrometer. The spectrometer design targets a
noise equivalent power (NEP) of $2\times10^{-18}\,{\rm W}/\sqrt{\rm Hz}$ (defined at the input to the
main lobe of the spectrometer lenslet beam, within a 9-degree half width),
enabled by the cryogenic telescope environment, the sensitive MKID detectors,
and the low dielectric loss of single-crystal silicon. We report on these
spectrometers under development for EXCLAIM, providing an overview of the
spectrometer and component designs, the spectrometer fabrication process,
fabrication developments since previous prototype demonstrations, and the
current status of their development for the EXCLAIM mission.
|
The eigenstate thermalization hypothesis provides to date the most successful
description of thermalization in isolated quantum systems by conjecturing
statistical properties of matrix elements of typical operators in the
(quasi-)energy eigenbasis. Here we study the distribution of matrix elements
for a class of operators in dual-unitary quantum circuits as a function of the
frequency associated with the corresponding eigenstates. We provide an exact
asymptotic expression for the spectral function, i.e., the second moment of
this frequency resolved distribution. The latter is obtained from the decay of
dynamical correlations between local operators which can be computed exactly
from the elementary building blocks of the dual-unitary circuits. Comparing the
asymptotic expression with results obtained by exact diagonalization we find
excellent agreement. Small fluctuations at finite system size are explicitly
related to dynamical correlations at intermediate times and the deviations from
their asymptotic dynamics. Moreover, we confirm the expected Gaussian
distribution of the matrix elements by computing higher moments numerically.
|
The Boussinesq $abcd$ system arises in the modeling of long wave small
amplitude water waves in a channel, where the four parameters $(a,b,c,d)$
satisfy one constraint. In this paper we focus on the solitary wave solutions
to such a system. In particular we work in two parameter regimes where the
system does not admit a Hamiltonian structure (corresponding to $b \ne d$). We
prove via analytic global bifurcation techniques the existence of solitary
waves in such parameter regimes. Some qualitative properties of the solutions
are also derived, from which sharp results can be obtained for the global
solution curves. Specifically, we first construct solutions bifurcating from
the stationary waves, and obtain a global continuous curve of solutions that
exhibits a loss of ellipticity in the limit. The second family of solutions
bifurcates from the classical Boussinesq supercritical waves. We show that the
curve associated to the second class either undergoes a loss of ellipticity in
the limit or becomes arbitrarily close to having a stagnation point.
|
The deep theory of approximate subgroups establishes 3-step product growth
for subsets of finite simple groups $G$ of Lie type of bounded rank. In this
paper we obtain 2-step growth results for representations of such groups $G$
(including those of unbounded rank), where products of subsets are replaced by
tensor products of representations.
Let $G$ be a finite simple group of Lie type and $\chi$ a character of $G$.
Let $|\chi|$ denote the sum of the squares of the degrees of all (distinct)
irreducible characters of $G$ which are constituents of $\chi$. We show that
for all $\delta>0$ there exists $\epsilon>0$, independent of $G$, such that if
$\chi$ is an irreducible character of $G$ satisfying $|\chi| \le
|G|^{1-\delta}$, then $|\chi^2| \ge |\chi|^{1+\epsilon}$. We also obtain
results for reducible characters, and establish faster growth in the case where
$|\chi| \le |G|^{\delta}$.
In another direction, we explore covering phenomena, namely situations where
every irreducible character of $G$ occurs as a constituent of certain products
of characters. For example, we prove that if $|\chi_1| \cdots |\chi_m|$ is a
high enough power of $|G|$, then every irreducible character of $G$ appears in
$\chi_1\cdots\chi_m$. Finally, we obtain growth results for compact semisimple
Lie groups.
|
We study in this paper lower bounds for the generalization error of models
derived from multi-layer neural networks, in the regime where the size of the
layers is commensurate with the number of samples in the training data. We show
that unbiased estimators have unacceptable performance for such nonlinear
networks in this regime. We derive explicit generalization lower bounds for
general biased estimators, in the cases of linear regression and of two-layered
networks. In the linear case the bound is asymptotically tight. In the
nonlinear case, we provide a comparison of our bounds with an empirical study
of the stochastic gradient descent algorithm. The analysis uses elements from
the theory of large random matrices.
|
We construct and study a new class $\mathscr{M}=\{\mathscr{M}_n\}_{n\ge 4}$
of compact hyperbolic $3$-manifolds with totally geodesic boundary. The members
of $\mathscr{M}_n$ are defined via triples of pairwise compatible Eulerian
cycles in $4$-regular $n$-vertex graphs. We show that each $M$ in
$\mathscr{M}_n$ is of Matveev complexity $n$ and has a unique minimal ideal
triangulation, which consists of $n$ tetrahedra. We exploit these properties to
show that $n!\,4^n > |\mathscr{M}_n| > n!$ for each sufficiently large
$n\in\mathbb{N}$.
|
Accelerated degradation tests are used to provide accurate estimation of
lifetime properties of highly reliable products within a relatively short
testing time. In such tests, data obtained at high levels of stress (e.\,g.\
temperature, voltage, or vibration) are extrapolated, through a
physically meaningful model, to obtain estimates of lifetime quantiles under
normal use conditions. In this work, we consider repeated measures accelerated
degradation tests with multiple stress variables, where the degradation paths
are assumed to follow a linear mixed effects model which is quite common in
settings when repeated measures are made. We derive optimal experimental
designs for minimizing the asymptotic variance for estimating the median
failure time under normal use conditions when the time points for measurements
are either fixed in advance or are also to be optimized.
|
Classical turnpikes correspond to optimal steady states which are attractors
of optimal control problems. In this paper, motivated by mechanical systems
with symmetries, we generalize this concept to manifold turnpikes.
Specifically, the necessary optimality conditions on a symmetry-induced
manifold coincide with those of a reduced-order problem under certain
conditions. We also propose sufficient conditions for the existence of manifold
turnpikes based on a tailored notion of dissipativity with respect to
manifolds. We show how the classical Legendre transformation between
Euler-Lagrange and Hamilton formalisms can be extended to the adjoint
variables. Finally, we draw upon the Kepler problem to illustrate our findings.
|
For the Helmholtz equation posed in the exterior of a Dirichlet obstacle, we
prove that if there exists a family of quasimodes (as is the case when the
exterior of the obstacle has stable trapped rays), then there exist near-zero
eigenvalues of the standard variational formulation of the exterior Dirichlet
problem (recall that this formulation involves truncating the exterior domain
and applying the exterior Dirichlet-to-Neumann map on the truncation boundary).
Our motivation for proving this result is that a) the finite-element method
for computing approximations to solutions of the Helmholtz equation is based on
the standard variational formulation, and b) the location of eigenvalues, and
especially near-zero ones, plays a key role in understanding how iterative
solvers such as the generalised minimum residual method (GMRES) behave when
used to solve linear systems, in particular those arising from the
finite-element method. The result proved in this paper is thus the first step
towards rigorously understanding how GMRES behaves when applied to
discretisations of high-frequency Helmholtz problems under strong trapping (the
subject of the companion paper [Marchand, Galkowski, Spence, Spence, 2021]).
|
Let $X$ be a finite set, $Z \subseteq X$ and $y \notin X$. Marcel Ern\'{e}
showed in 1981 that the number of posets on $X$ containing $Z$ as an antichain
equals the number of posets $R$ on $X \cup \{ y \}$ in which the points of $Z
\cup \{ y \}$ are exactly the maximal points of $R$. We prove the following
generalization: For every poset $Q$ with carrier $Z$, the number of posets on
$X$ containing $Q$ as an induced sub-poset equals the number of posets $R$ on
$X \cup \{ y \}$ which contain $Q^d + A_y$ as an induced sub-poset and in which
the maximal points of $Q^d + A_y$ are exactly the maximal points of $R$. Here,
$Q^d$ is the dual of $Q$, $A_y$ is the singleton-poset on $y$, and $Q^d + A_y$
denotes the direct sum of $Q^d$ and $A_y$.
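The antichain special case (Erné's 1981 identity) is small enough to verify by brute force. The sketch below is an independent illustration, not taken from the paper: it enumerates all strict partial orders on $X = \{0,1,2\}$ and on $X \cup \{y\}$ with $y = 3$, and compares the two counts for $Z = \{0,1\}$.

```python
from itertools import product

def strict_posets(n):
    """Enumerate all strict partial orders (irreflexive, antisymmetric,
    transitive relations) on {0, ..., n-1} by brute force."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    posets = []
    for mask in product([0, 1], repeat=len(pairs)):
        R = {p for p, keep in zip(pairs, mask) if keep}
        if any((j, i) in R for (i, j) in R):
            continue  # violates antisymmetry
        if any((i, k) not in R for (i, j) in R for (j2, k) in R if j2 == j):
            continue  # violates transitivity
        posets.append(R)
    return posets

def maximal(R, n):
    """Elements with nothing above them in the strict order R."""
    return {v for v in range(n) if all((v, w) not in R for w in range(n))}

# Erné's identity with X = {0,1,2}, Z = {0,1}, y = 3:
# posets on X in which Z is an antichain ...
lhs = sum(1 for R in strict_posets(3)
          if (0, 1) not in R and (1, 0) not in R)
# ... versus posets on X ∪ {y} whose maximal points are exactly Z ∪ {y}.
rhs = sum(1 for R in strict_posets(4) if maximal(R, 4) == {0, 1, 3})
assert lhs == rhs
```

Both counts come out to 7 for this instance, matching the identity.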
|
We present a new analysis of the light curve of the young planet-hosting star
TOI 451 in the light of new observations from TESS Cycle 3. Our joint analysis
of the transits of all three planets, using all available TESS data, results in
an improved ephemeris for TOI 451 b and TOI 451 c, which will help to plan
follow-up observations. The updated mid-transit times are
$\textrm{BJD}-2,457\,000=$ $1410.9896_{ - 0.0029 }^{ + 0.0032 }$,
$1411.7982_{-0.0020}^{+0.0022}$, and $1416.63407_{-0.00100}^{+0.00096}$ for TOI
451 b, c, and d, respectively, and the periods are
$1.8587028_{-1.0\times10^{-6}}^{+0.8\times10^{-6}}$,
$9.192453_{-3.3\times10^{-5}}^{+4.1\times10^{-5}}$, and
$16.364932_{-3.5\times10^{-5}}^{+3.6\times10^{-5}}$ days. We also model the out-of-transit light
curve using a Gaussian Process with a quasi-periodic kernel and infer a change
in the properties of the active regions on the surface of TOI 451 between TESS
Cycles 1 and 3.
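A common parametrization of such a quasi-periodic kernel multiplies a squared-exponential decay by a periodic term; the exact form and hyperparameters used in the analysis may differ, so the following is only an illustrative sketch:

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, length, gamma, period):
    """Quasi-periodic covariance k(tau) = amp^2 * exp(-tau^2/(2*length^2)
    - gamma * sin^2(pi*tau/period)), often used to model starspot-driven
    stellar variability."""
    tau = np.subtract.outer(np.asarray(t1), np.asarray(t2))
    return amp**2 * np.exp(-tau**2 / (2.0 * length**2)
                           - gamma * np.sin(np.pi * tau / period)**2)

t = np.linspace(0.0, 27.0, 200)   # e.g. one TESS sector, in days
K = quasi_periodic_kernel(t, t, amp=1.0, length=10.0, gamma=2.0, period=5.1)
# K is a valid covariance matrix: symmetric, unit variance on the diagonal.
```

The periodic factor encodes the rotation period of the active regions, while the squared-exponential factor lets the spot pattern evolve over time; a change in the fitted hyperparameters between Cycles would reflect evolving active regions.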
|
Multi-relational graph is a ubiquitous and important data structure, allowing
flexible representation of multiple types of interactions and relations between
entities. Similar to other graph-structured data, link prediction is one of the
most important tasks on multi-relational graphs and is often used for knowledge
completion. When related graphs coexist, it is of great benefit to build a
larger graph via integrating the smaller ones. The integration requires
predicting hidden relational connections between entities belonging to different
graphs (inter-domain link prediction). However, this poses a real challenge to
existing methods that are exclusively designed for link prediction between
entities of the same graph only (intra-domain link prediction). In this study,
we propose a new approach to tackle the inter-domain link prediction problem by
softly aligning the entity distributions between different domains with optimal
transport and maximum mean discrepancy regularizers. Experiments on real-world
datasets show that optimal transport regularizer is beneficial and considerably
improves the performance of baseline methods.
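As a hedged illustration of one of the two regularizers, an empirical squared maximum mean discrepancy between two embedding samples can be computed as below; the kernel choice and weighting here are assumptions, not the paper's exact setup:

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    """Biased empirical estimate of squared MMD between samples X and Y
    (rows are embedding vectors) under a Gaussian kernel. Added to a
    link-prediction loss, this term penalizes mismatch between the entity
    distributions of the two domains."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(64, 8))
Y = rng.normal(0.0, 1.0, size=(64, 8))   # same distribution as X
Z = rng.normal(3.0, 1.0, size=(64, 8))   # shifted distribution
mmd_same = gaussian_mmd2(X, Y, sigma=2.0)
mmd_diff = gaussian_mmd2(X, Z, sigma=2.0)
# Aligned samples give a small penalty; misaligned samples a large one.
```

In a training loop this scalar would be added, suitably weighted, to the intra-domain link-prediction loss so that entity embeddings from the two graphs are softly aligned.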
|
For a single server system, Shortest Remaining Processing Time (SRPT) is an
optimal size-based policy. In this paper, we discuss scheduling a single-server
system when exact information about the jobs' processing times is not
available. When the SRPT policy uses estimated processing times, the
underestimation of large jobs can significantly degrade performance. We propose
a simple heuristic, Size Estimate Hedging (SEH), that only uses jobs' estimated
processing times for scheduling decisions. A job's priority is increased
dynamically according to an SRPT rule until it is determined that it is
underestimated, at which time the priority is frozen. Numerical results suggest
that SEH has desirable performance when estimation errors are not unreasonably
large.
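The hedging idea can be sketched in a toy single-server simulation. The exact SEH priority update may differ from this; in particular, the value at which an underestimated job's priority is frozen is an assumption here:

```python
def simulate(jobs, dt=1.0):
    """Serve jobs (all present at time 0) on one server in quanta of dt.

    Each job is a dict with an estimated size 'est' and a true size 'size'.
    Priority follows SRPT on the *estimated* remaining time; once a job's
    attained service reaches its estimate, it is deemed underestimated and
    its priority is frozen (here at 0 -- a hypothetical choice).
    Returns the completion time of each job.
    """
    attained = [0.0] * len(jobs)
    finish = [None] * len(jobs)
    t = 0.0
    active = set(range(len(jobs)))
    while active:
        def priority(i):  # lower value = served first
            return max(jobs[i]["est"] - attained[i], 0.0)
        i = min(active, key=priority)
        attained[i] += dt
        t += dt
        if attained[i] >= jobs[i]["size"]:
            finish[i] = t
            active.remove(i)
    return finish

# With accurate estimates this reduces to SRPT: the short job goes first.
print(simulate([{"est": 5, "size": 5}, {"est": 2, "size": 2}]))  # [7.0, 2.0]
```

With an underestimated job (e.g. `est=1`, `size=3`), the priority stops improving once the attained service passes the estimate, which is the hedge against estimation error that the heuristic relies on.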
|
We propose to use ultra-high intensity laser pulses with wavefront rotation
(WFR) to produce short, ultra-intense surface plasma waves (SPW) on grating
targets for electron acceleration. Combining a smart grating design with
optimal WFR conditions identified through simple analytical modeling and
particle-in-cell simulations makes it possible to decrease the SPW duration
(down to a few optical cycles) and increase its peak amplitude. In the
relativistic regime,
for $I\lambda_0^2=3.4 \times 10^{19}{\rm W/cm^2\mu m^2}$, such SPW are found to
accelerate high-charge (a few tens of pC), high-energy (up to 70 MeV) and
ultra-short (a few fs) electron bunches.
|
In this note we show how one specific proposed solution to the problem of
`closing' the well-motivated 331 model to the Standard Model actually implies a
lower bound for the otherwise theoretically free vacuum expectation value
$v_\chi$.
|
We show that while an orbital magnetic field and disorder, acting individually,
weaken superconductivity, acting together they produce an intriguing evolution
of a two-dimensional type-II s-wave superconductor. For weak disorder, the
critical field H_c at which the superfluid density collapses is coincident with
the field at which the superconducting energy gap gets suppressed. However,
with increasing disorder these two fields diverge from each other creating a
pseudogap region. The nature of vortices also transforms from Abrikosov vortices
with a metallic core for weak disorder to Josephson vortices with gapped and
insulating cores for higher disorder. Our results naturally explain two
outstanding puzzles: (1) the gigantic magnetoresistance peak observed as a
function of magnetic field in thin disordered superconducting films; and (2)
the disappearance of the celebrated zero-bias Caroli-de Gennes-Matricon peak in
disordered superconductors.
|
Applications running on the machines of a network such as the
Internet-of-Things require different bandwidths, so each machine may select one
of its multiple Radio Frequency (RF) interfaces for machine-to-machine or
machine-to-base-station communication according to the required bandwidth. We
propose a generalized framework for joint dynamic optimal RF interface setting
and next-hop selection, which is suitable for networks with multiple base
stations and source nodes that have the same bandwidth requests. Simulation
results show that the average data rate of the source nodes may be increased by
up to 117%.
|
Wavelength-sized microdisk resonators were fabricated on a single crystalline
4H-silicon-carbide-on-insulator platform (4H-SiCOI). By carrying out
microphotoluminescence measurements at room temperature, we show that the
microdisk resonators support whispering-gallery modes (WGMs) with quality
factors up to $5.25 \times 10^3$ and mode volumes down to $2.69 \times(\lambda
/n)^3$ at the visible and near-infrared wavelengths. Moreover, the demonstrated
wavelength-sized microdisk resonators exhibit WGMs whose resonant wavelengths
are compatible with the zero-phonon lines of spin defects in 4H-SiCOI, making them
a promising candidate for applications in cavity quantum electrodynamics and
integrated quantum photonic circuits.
|
Yield stress fluids (YSFs) display a dual nature highlighted by the existence
of a yield stress such that YSFs are solid below the yield stress, whereas they
flow like liquids above it. Under an applied shear rate $\dot\gamma$, the
solid-to-liquid transition is associated with a complex spatiotemporal
scenario. Still, the general phenomenology reported in the literature boils
down to a simple sequence that can be divided into a short-time response
characterized by the so-called "stress overshoot", followed by stress
relaxation towards a steady state. Such relaxation can be either long-lasting,
which usually involves the growth of a shear band that can be only transient or
that may persist at steady-state, or abrupt, in which case the solid-to-liquid
transition resembles the failure of a brittle material, involving avalanches.
Here we use a continuum model based on a spatially-resolved fluidity approach
to rationalize the complete scenario associated with the shear-induced yielding
of YSFs. Our model provides a scaling for the coordinates of the stress maximum
as a function of $\dot\gamma$, which shows excellent agreement with
experimental and numerical data extracted from the literature. Moreover, our
approach shows that such a scaling is intimately linked to the growth dynamics
of a fluidized boundary layer in the vicinity of the moving boundary. Yet, such
scaling is independent of the fate of that layer, and of the long-term behavior
of the YSF. Finally, when including the presence of "long-range" correlations,
we show that our model displays a ductile to brittle transition, i.e., the
stress overshoot reduces into a sharp stress drop associated with avalanches,
which impacts the scaling of the stress maximum with $\dot\gamma$. Our work
offers a unified picture of shear-induced yielding in YSFs, whose complex
spatiotemporal dynamics are deeply connected to non-local effects.
|
We propose a novel IaaS composition framework that selects an optimal set of
consumer requests according to the provider's qualitative preferences on
long-term service provisions. Decision variables are included in the temporal
conditional preference networks (TempCP-net) to represent qualitative
preferences for both short-term and long-term consumers. The global preference
ranking of a set of requests is computed using a \textit{k}-d tree indexing
based temporal similarity measure approach. We propose an extended
three-dimensional Q-learning approach to maximize the global preference
ranking. We design an on-policy sequential selection learning approach that
uses the request length to accept or reject requests in a composition. The
proposed on-policy learning method reuses historical experiences or policies of
sequential optimization using an agglomerative clustering approach.
Experimental results demonstrate the feasibility of the proposed framework.
|
Motivated by the theoretical interest in reconstructing long 3D trajectories
of individual birds in large flocks, we developed CoMo, a co-moving camera
system of two synchronized high speed cameras coupled with rotational stages,
which allow us to dynamically follow the motion of a target flock. With the
rotation of the cameras we overcome the limitations of standard static systems
that restrict the duration of the collected data to the short interval of time
in which targets are in the cameras' common field of view, but at the same time
we change in time the external parameters of the system, which have then to be
calibrated frame-by-frame. We address the calibration of the external
parameters measuring the position of the cameras and their three angles of yaw,
pitch and roll in the system "home" configuration (rotational stage at an angle
equal to 0 deg), and combining this static information with the time-dependent
rotation due to the stages. We evaluate the robustness and accuracy of the
system by comparing reconstructed and measured 3D distances in what we call 3D
tests, which show a relative error of the order of 1%. The novelty of the work
presented in this paper is not only on the system itself, but also on the
approach we use in the tests, which we show to be a very powerful tool in
detecting and fixing calibration inaccuracies and that, for this reason, may be
relevant for a broad audience.
|
We revisit an algorithm constructing elliptic tori, that was originally
designed for applications to planetary Hamiltonian systems. The scheme is
adapted to properly work with models of chains of $N+1$ particles interacting
via anharmonic potentials, thus covering also the case of FPU chains. After
having first recast the Hamiltonian in a suitable form, we perform a
sequence of canonical transformations removing the undesired perturbative terms
by an iterative procedure. This is done by using the Lie series approach, that
is explicitly implemented in code with the help of a software package
especially designed for computer algebra manipulations. In
the cases of FPU chains with $N=4,\, 8$, we successfully apply our new
algorithm to the construction of elliptic tori for wide sets of the parameter
ruling the size of the perturbation, i.e., the total energy of the system.
Moreover, we explore the stability regions surrounding 1D elliptic tori. We
compare our semi-analytical results with those provided by numerical
explorations of the FPU-model dynamics, where the latter ones are obtained by
using techniques based on the so-called frequency analysis. We find that our
procedure works up to values of the total energy that are of the same order of
magnitude as the maximal ones for which elliptic tori are detected by numerical
methods.
|
Finding an effective formula for describing a discriminant of a quadrinomial
(a formula which can be easily computed for high values of degrees of
quadrinomials) is a difficult problem. In 2018 Otake and Shaska using advanced
matrix operations found an explicit expression of $\Delta(x^n+t(x^2+ax+b))$. In
this paper we focus on deriving similar results, taking advantage of
an alternative elementary approach, for quadrinomials of the form $x^n+ax^k+bx+c$,
where $ k \in \{2,3,n-1\}$. Moreover, we make some notes about
$\Delta(x^{2n}+ax^n+bx^l+c)$ such that $n>2l$.
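Such discriminant formulas can be checked numerically via the resultant, $\Delta(f) = (-1)^{n(n-1)/2}\,\mathrm{Res}(f,f')/a_n$. The standalone sketch below (not the paper's elementary method) evaluates it exactly through a Sylvester-matrix determinant:

```python
from fractions import Fraction

def _det(M):
    """Exact determinant of a square matrix of Fractions via elimination."""
    S = [row[:] for row in M]
    n = len(S)
    det = Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if S[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            S[col], S[piv] = S[piv], S[col]
            det = -det
        det *= S[col][col]
        for r in range(col + 1, n):
            factor = S[r][col] / S[col][col]
            for c in range(col, n):
                S[r][c] -= factor * S[col][c]
    return det

def resultant(f, g):
    """Res(f, g) as the determinant of the Sylvester matrix; f and g are
    coefficient lists, leading coefficient first."""
    n, m = len(f) - 1, len(g) - 1
    S = [[Fraction(0)] * (n + m) for _ in range(n + m)]
    for i in range(m):            # m shifted copies of f
        for j, c in enumerate(f):
            S[i][i + j] = Fraction(c)
    for i in range(n):            # n shifted copies of g
        for j, c in enumerate(g):
            S[m + i][i + j] = Fraction(c)
    return _det(S)

def discriminant(f):
    """Disc(f) = (-1)^(n(n-1)/2) * Res(f, f') / lc(f)."""
    n = len(f) - 1
    deriv = [c * (n - i) for i, c in enumerate(f[:-1])]
    return (-1) ** (n * (n - 1) // 2) * resultant(f, deriv) / Fraction(f[0])
```

For example, `discriminant([1, 0, 2, 3])` returns `-275`, matching the classical $-4p^3-27q^2$ for $x^3+px+q$ with $p=2$, $q=3$; the same routine handles quadrinomials such as $x^4+2x^2+3x+1$.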
|
We analyse the near infrared colour magnitude diagram of a field including
the giant molecular cloud G0.253+0.016 (a.k.a. The Brick) observed at high
spatial resolution, with HAWK-I at the VLT. The distribution of red clump stars
in a line of sight crossing the cloud, compared with that in a direction just
beside it and not crossing it, allows us to measure the distance of the cloud
from the Sun to be 7.20 kpc, with a statistical uncertainty of +/-0.16 kpc and a
systematic error of +/-0.20 kpc. This is significantly closer than what is
generally assumed, i.e., that the cloud belongs to the near side of the central
molecular zone, at 60 pc from the Galactic center. This assumption was based on
dynamical models of the central molecular zone, observationally constrained
uniquely by the radial velocity of this and other clouds. Determining the true
position of the Brick cloud is relevant because this is the densest cloud of
the Galaxy not showing any ongoing star formation. This puts the cloud off by 1
order of magnitude from the Kennicutt-Schmidt relation between the density of
the dense gas and the star formation rate. Several explanations have been
proposed for this absence of star formation, most of them based on the
dynamical evolution of this and other clouds, within the Galactic center
region. Our result emphasizes the need to include constraints coming from
stellar observations in the interpretation of our Galaxy's central molecular
zone.
|
We present a compressive radar design that combines multitone linear
frequency modulated (LFM) waveforms in the transmitter with a classical stretch
processor and sub-Nyquist sampling in the receiver. The proposed compressive
illumination scheme has fewer random elements resulting in reduced storage and
complexity for implementation than previously proposed compressive radar
designs based on stochastic waveforms. We analyze this illumination scheme for
the task of a joint range-angle of arrival estimation in the multi-input and
multi-output (MIMO) radar system. We present recovery guarantees for the
proposed illumination technique. We show that for a sufficiently large number
of modulating tones, the system achieves high-resolution in range and
successfully recovers the range and angle-of-arrival of targets in a sparse
scene. Furthermore, we present an algorithm that estimates the target range,
angle of arrival, and scattering coefficient in the continuum. Finally, we
present simulation results to illustrate the recovery performance as a function
of system parameters.
|
Telehealth helps to facilitate access to medical professionals by enabling
remote medical services for patients. These services have gradually become
popular over the years with the advent of necessary technological
infrastructure. The benefits of telehealth have been even more apparent since
the beginning of the COVID-19 crisis, as people have become less inclined to
visit doctors in person during the pandemic. In this paper, we focus on
facilitating the chat sessions between a doctor and a patient. We note that the
quality and efficiency of the chat experience can be critical as the demand for
telehealth services increases. Accordingly, we develop a smart auto-response
generation mechanism for medical conversations that helps doctors respond to
consultation requests efficiently, particularly during busy sessions. We
explore over 900,000 anonymous, historical online messages between doctors and
patients collected over nine months. We implement clustering algorithms to
identify the most frequent responses by doctors and manually label the data
accordingly. We then train machine learning algorithms using this preprocessed
data to generate the responses. The considered algorithm has two steps: a
filtering (i.e., triggering) model to filter out infeasible patient messages
and a response generator to suggest the top-3 doctor responses for the ones
that successfully pass the triggering phase. The method provides an accuracy of
83.28\% for precision@3 and shows robustness to its parameters.
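The two-step structure (triggering filter, then top-3 suggestion) can be sketched with a toy bag-of-words similarity model. The canned responses, prototype messages, and threshold below are invented placeholders; the actual system is trained on the 900,000-message corpus with clustering and machine learning models:

```python
from collections import Counter
from math import sqrt

CANNED = {  # hypothetical frequent doctor responses found by clustering
    "greeting": "Hello, how can I help you today?",
    "dosage": "Please take the medication as prescribed on the label.",
    "followup": "I recommend scheduling a follow-up visit.",
    "rest": "Please rest and stay hydrated.",
}
PROTOTYPES = {  # hypothetical exemplar patient messages per response cluster
    "greeting": "hi doctor hello",
    "dosage": "how many pills should i take dose",
    "followup": "should i come back for another visit",
    "rest": "i feel tired and weak",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (sqrt(sum(v * v for v in a.values()))
           * sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def suggest(message, threshold=0.2, k=3):
    """Triggering filter rejects messages matching no cluster well enough;
    otherwise return the top-k suggested canned responses."""
    vec = Counter(message.lower().split())
    scores = {name: cosine(vec, Counter(p.split()))
              for name, p in PROTOTYPES.items()}
    best = sorted(scores, key=scores.get, reverse=True)
    if scores[best[0]] < threshold:
        return []  # filtered out: no auto-response suggested
    return [CANNED[name] for name in best[:k]]
```

For instance, `suggest("hello doctor")` passes the trigger and ranks the greeting response first, while an off-topic message returns an empty list.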
|
A high-order finite element method is proposed to solve the nonlinear
convection-diffusion equation on a time-varying domain whose boundary is
implicitly driven by the solution of the equation. The method is semi-implicit
in the sense that the boundary is traced explicitly with a high-order
surface-tracking algorithm, while the convection-diffusion equation is solved
implicitly with high-order backward differentiation formulas and
fictitious-domain finite element methods. By two numerical experiments for
severely deforming domains, we show that optimal convergence orders are
obtained in energy norm for third-order and fourth-order methods.
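The implicit time discretization can be illustrated on a scalar model problem. The sketch below applies BDF2, one member of the backward differentiation family, to $y' = \lambda y$; it is a didactic reduction, not the paper's fictitious-domain finite element scheme:

```python
import math

def bdf2_linear(lam, y0, T, n):
    """Integrate y' = lam * y on [0, T] with n uniform steps of BDF2:
        (3/2) y_{k+1} - 2 y_k + (1/2) y_{k-1} = h * lam * y_{k+1}.
    The first step is bootstrapped with backward Euler. For a linear
    right-hand side the implicit solve is a scalar division."""
    h = T / n
    y_prev = y0
    y_curr = y0 / (1.0 - h * lam)  # backward Euler bootstrap step
    for _ in range(n - 1):
        y_next = (2.0 * y_curr - 0.5 * y_prev) / (1.5 - h * lam)
        y_prev, y_curr = y_curr, y_next
    return y_curr

# Second-order convergence: halving h cuts the error by roughly 4x.
err_coarse = abs(bdf2_linear(-1.0, 1.0, 1.0, 50) - math.exp(-1.0))
err_fine = abs(bdf2_linear(-1.0, 1.0, 1.0, 100) - math.exp(-1.0))
```

In the full method, the scalar division is replaced by a finite element solve on the fictitious domain at each implicit step, with the boundary position supplied explicitly by the surface-tracking algorithm.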
|
The continual success of superconducting photon-detection technologies in
quantum photonics asserts cryogenic-compatible systems as a cornerstone of full
quantum photonic integration. Here, we present a way to reversibly fine-tune
the optical properties of individual waveguide structures through local changes
to their geometry using solidified xenon. Essentially, we remove the need for
additional on-chip calibration elements, effectively zeroing the power
consumption tied to reconfigurable elements, with virtually no detriment to
photonic device performance. We enable passive circuit tuning in
pressure-controlled environments, locally manipulating the cladding thickness
over portions of optical waveguides. We realize this in a cryogenic
environment, through controlled deposition of xenon gas and precise tuning of
its thickness using sublimation, triggered by on-chip resistive heaters. $\pi$
phase shifts occur over a calculated length of just $L_{\pi}$ = 12.3$\pm$0.3
$\mu$m. This work paves the way towards the integration of compact,
reconfigurable photonic circuits alongside superconducting detectors, devices,
or otherwise.
|
Recovering badly damaged face images is a useful yet challenging task,
especially in extreme cases where the masked or damaged region is very large.
One of the major challenges is the ability of the system to generalize to faces
outside the training dataset. We propose to tackle this extreme inpainting task
with a conditional Generative Adversarial Network (GAN) that utilizes
structural information, such as edges, as a prior condition. Edge information
can be obtained from the partially masked image and a structurally similar
image or a hand drawing. In our proposed conditional GAN, we pass the
conditional input in every layer of the encoder while maintaining consistency
in the distributions between the learned weights and the incoming conditional
input. We demonstrate the effectiveness of our method with badly damaged face
examples.
|
The SPAdes assembler for metagenome assembly is a long-running application
commonly used at the NERSC supercomputing site. However, NERSC, like many other
sites, has a 48-hour limit on resource allocations. The solution is to chain
together multiple resource allocations in a single run, using
checkpoint-restart. This case study provides insights into the "pain points" in
applying a well-known checkpointing package (DMTCP: Distributed MultiThreaded
CheckPointing) to long-running production workloads of SPAdes. This work has
exposed several bugs and limitations of DMTCP, which were fixed to support the
large memory and fragmented intermediate files of SPAdes. But perhaps more
interesting for other applications, this work reveals a tension between the
transparency goals of DMTCP and performance concerns due to an I/O bottleneck
during the checkpointing process when supporting large memory and many files.
Suggestions are made for overcoming this I/O bottleneck, which provides
important "lessons learned" for similar applications.
|
We report the results of our experimental studies on the magnetic, transport
and thermoelectric properties of the ferromagnetic metal CoMnSb. Sizable
anomalous Hall conductivity $\sigma_{yx}$ and transverse thermoelectric
conductivity $\alpha_{yx}$ are found experimentally, comparable in size to
the values estimated from density-functional theory. Our experiment further
reveals that CoMnSb exhibits $-T\ln T$ critical behavior in $\alpha_{yx}(T)$,
deviating from the Fermi liquid behavior $\alpha_{yx}\sim T$ over a decade of
temperature between 10 K and 400 K, similar to ferromagnetic Weyl and nodal-line
semimetals. Our theoretical calculation for CoMnSb also predicts the $-T\ln T$
behavior when the Fermi energy lies near the Weyl nodes in momentum space.
|
We propose a framework to use Nesterov's accelerated method for constrained
convex optimization problems. Our approach consists of first reformulating the
original problem as an unconstrained optimization problem using a continuously
differentiable exact penalty function. This reformulation is based on replacing
the Lagrange multipliers in the augmented Lagrangian of the original problem by
Lagrange multiplier functions. The expressions of these Lagrange multiplier
functions, which depend upon the gradients of the objective function and the
constraints, can make the unconstrained penalty function non-convex in general
even if the original problem is convex. We establish sufficient conditions on
the objective function and the constraints of the original problem under which
the unconstrained penalty function is convex. This enables us to use Nesterov's
accelerated gradient method for unconstrained convex optimization and achieve a
guaranteed rate of convergence which is better than the state-of-the-art
first-order algorithms for constrained convex optimization. Simulations
illustrate our results.
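As a concrete reference point for the unconstrained step, here is a minimal numpy sketch of Nesterov's accelerated gradient method on a smooth convex problem (the quadratic objective and all parameter choices are illustrative; this is not the paper's penalty reformulation):

```python
import numpy as np

def nesterov_agd(grad, x0, L, iters=5000):
    """Nesterov's accelerated gradient method for a smooth convex objective
    with L-Lipschitz gradient; achieves the O(1/k^2) rate in function value."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                          # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustrative convex problem: minimize f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)   # spectral norm = Lipschitz constant of the gradient
x_star = nesterov_agd(grad, np.zeros(2), L)
print(np.allclose(A @ x_star, b, atol=1e-3))  # True
```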
|
Despite the growing availability of high-quality public datasets, the lack of
training samples is still one of the main challenges of deep-learning for skin
lesion analysis. Generative Adversarial Networks (GANs) appear as an enticing
alternative to alleviate the issue, by synthesizing samples indistinguishable
from real images, with a plethora of works employing them for medical
applications. Nevertheless, carefully designed experiments for skin-lesion
diagnosis with GAN-based data augmentation show favorable results only on
out-of-distribution test sets. For GAN-based data anonymization $-$ where the
synthetic images replace the real ones $-$ favorable results also only appear
for out-of-distribution test sets. Because of the costs and risks associated
with GAN usage, those results suggest caution in their adoption for medical
applications.
|
Human motion recognition (HMR) based on wireless sensing is a low-cost
technique for scene understanding. Current HMR systems adopt support vector
machines (SVMs) and convolutional neural networks (CNNs) to classify radar
signals. However, whether a deeper learning model could improve the system
performance is currently not known. On the other hand, training a machine
learning model requires a large dataset, but gathering data from experiments is
expensive and time-consuming. Although wireless channel models can be
adopted for dataset generation, current channel models are mostly designed for
communication rather than sensing. To address the above problems, this paper
proposes a deep spectrogram network (DSN) by leveraging the residual mapping
technique to enhance the HMR performance. Furthermore, a primitive-based
autoregressive hybrid (PBAH) channel model is developed, which facilitates
efficient training and testing dataset generation for HMR in a virtual
environment. Experimental results demonstrate that the proposed PBAH channel
model matches the actual experimental data very well and the proposed DSN
achieves a significantly smaller recognition error than the CNN.
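The residual mapping leveraged by the DSN can be sketched in a few lines. This toy numpy block (our illustration, not the paper's architecture; weights and shapes are arbitrary) shows why residual blocks ease deepening: with zero weights the block reduces to the identity on nonnegative inputs, so extra depth cannot hurt the representation.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Residual mapping y = ReLU(F(x) + x): the block learns only the
    residual F(x), so a 'do-nothing' block is trivial to represent."""
    h = np.maximum(0.0, x @ w1)         # first layer + ReLU
    return np.maximum(0.0, h @ w2 + x)  # skip connection, then ReLU

x = np.array([0.5, 1.0, 2.0])
w_zero = np.zeros((3, 3))
print(np.allclose(residual_block(x, w_zero, w_zero), x))  # True: identity block
```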
|
Tropical toric varieties are partial compactifications of finite dimensional
real vector spaces associated with rational polyhedral fans. We introduce
plurisubharmonic functions and a Bedford--Taylor product for Lagerberg currents
on open subsets of a tropical toric variety. The resulting tropical toric
pluripotential theory provides the link to give a canonical correspondence
between complex and non-archimedean pluripotential theories of invariant
plurisubharmonic functions on toric varieties. We will apply this
correspondence to solve invariant non-archimedean Monge--Amp\`ere equations on
toric and abelian varieties over arbitrary non-archimedean fields.
|
We discuss our recent theoretical work on vibronic coupling mechanisms in a
model energy transfer system in the context of previous 2DEV experiments on a
natural light-harvesting system, light-harvesting complex II (LHCII), where
vibronic signatures were suggested to be involved in energy transfer. In this
comparison, we directly assign the vibronic coupling mechanism in LHCII as
arising from Herzberg-Teller activity and show how this coupling modulates the
energy transfer dynamics in this photosynthetic system.
|
We investigate universal estimate of finite Morse index solutions to
polyharmonic equation in a proper open subset of $\mathbb{R}^n$. Differently
from previous works \cite{DDF, fa, H1, WY}, we propose here a direct proof under
large superlinear and subcritical growth conditions on the source term, where we
show that the universal constant evolves as a polynomial function of the Morse
index. To do so, we introduce a new interpolation inequality and we make use of
Pohozaev's identity and a delicate bootstrap argument. Thanks to our
interpolation inequality, we improve previous nonexistence results \cite{H1,
FH} dealing with stable at infinity weak solutions to the $p$-polyharmonic
equation in the subcritical range.
|
In this paper, based on the classical K. Yano's formula, we first establish
an optimal integral inequality for compact Lagrangian submanifolds in the
complex space forms, which involves the Ricci curvature in the direction
$J\vec{H}$ and the norm of the covariant differentiation of the second
fundamental form $h$, where $J$ is the almost complex structure and $\vec{H}$
is the mean curvature vector field. Second, and analogously, for compact
Legendrian submanifolds in the Sasakian space forms with Sasakian structure
$(\varphi,\xi,\eta,g)$, we also establish an optimal integral inequality
involving the Ricci curvature in the direction $\varphi\vec{H}$ and the norm of
the modified covariant differentiation of the second fundamental form. The
integral inequality is optimal in the sense that all submanifolds attaining the
equality are completely classified. As direct consequences, we obtain new and
global characterizations for the Whitney spheres in complex space forms as well
as the contact Whitney spheres in Sasakian space forms. Finally, we show that,
just as the Whitney spheres in complex space forms, the contact Whitney spheres
in Sasakian space forms are locally conformally flat manifolds with
non-constant sectional curvatures.
|
In this paper, we investigate how the COVID-19 pandemic, and more precisely
the lockdown of a sector of the economy, may have changed our habits and,
therefore, altered the demand for some goods even after the re-opening. In a
two-sector infinite horizon economy, we show that the demand for the goods
produced by the sector closed during the lockdown could shrink or expand with
respect to their pre-pandemic level depending on the length of the lockdown and
the relative strength of the satiation effect and the substitutability effect.
We also provide conditions under which this sector could remain inactive even
after the lockdown as well as an insight on the policy which should be adopted
to avoid this outcome.
|
In order to create user-centric and personalized privacy management tools,
the underlying models must account for individual users' privacy expectations,
preferences, and their ability to control their information sharing activities.
Existing studies of users' privacy behavior modeling attempt to frame the
problem from a request's perspective, which lacks the crucial involvement of the
information owner and results in limited or no control over policy management.
Moreover, very few of them take into consideration the correctness,
explainability, usability, and acceptance of the methodologies for
each user of the system. In this paper, we present a methodology to formally
model, validate, and verify personalized privacy disclosure behavior based on
the analysis of the user's situational decision-making process. We use a model
checking tool named UPPAAL to represent users' self-reported privacy disclosure
behavior by an extended form of finite state automata (FSA), and perform
reachability analysis for the verification of privacy properties through
computation tree logic (CTL) formulas. We also describe practical use cases
of the methodology, depicting the potential of formal techniques for the
design and development of user-centric behavioral modeling. This paper, through
extensive amounts of experimental outcomes, contributes several insights to the
area of formal methods and user-tailored privacy behavior modeling.
|
Non-reciprocal plasmons in current-driven, isotropic, and homogeneous graphene
with proximal metallic gates are theoretically explored. Nearby metallic gates
screen the Coulomb interactions, leading to linearly dispersive acoustic
plasmons residing close to their particle-hole continuum counterpart. We show
that the applied bias leads to spectral broadband focused plasmons whose
resonance linewidth is dependent on the angular direction relative to the
current flow due to Landau damping. We predict that forward focused
non-reciprocal plasmons are possible with accessible experimental parameters
and setup.
|
We theoretically study the superconductivity in multiorbital superconductors
based on a three-orbital tight-binding model. With appropriate values of the
nearest-neighbour exchange $J_{1}^{\alpha \beta}$ and the
next-nearest-neighbour exchange $J_{2}^{\alpha \beta}$, we find a two-dome
structure in the $T_{c}-n$ phase diagram: one dome in the doping range $n<3.9$
where the superconducting (SC) state is mainly $s_{x^{2} y^{2}}$ component
contributed by inter-orbital pairing, the other dome in the doping range
$3.9<n<4.46$ where the SC state is mainly $s_{x^{2} y^{2}}+s_{x^{2}+y^{2}}$
components contributed by intra-orbital pairing. We find that the competition
between different orbital pairings leads to two-dome SC phase diagrams in
multiorbital superconductors, and that different matrix elements of $J_{1}$ and
$J_{2}$ considerably affect the boundary between the two SC domes.
|
We survey recent developments in the study of Hodge theoretic aspects of
Alexander-type invariants associated with smooth complex algebraic varieties.
|
For fixed integers $D \geq 0$ and $c \geq 3$, we demonstrate how to use
$2$-adic valuation trees of sequences to analyze Diophantine equations of the
form $x^2+D=2^cy$ and $x^3+D=2^cy$, for $y$ odd. Further, we show for which
values $D \in \mathbb{Z}^+$ the numbers $x^3+D$ generate infinite
valuation trees, which lead to infinitely many solutions of the above
Diophantine equations.
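The 2-adic valuation underlying these trees is simple to compute. The following sketch (our own illustration, not the authors' code) checks when $x^2+D=2^cy$ admits an odd $y$, which holds exactly when $\nu_2(x^2+D)=c$:

```python
def nu2(n):
    """2-adic valuation: the largest v such that 2^v divides n (n != 0)."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def has_odd_solution(x, D, c, power=2):
    """True iff x^power + D = 2^c * y for some odd y, i.e. nu2(x^power + D) == c."""
    return nu2(x**power + D) == c

# x = 1, D = 7: 1^2 + 7 = 8 = 2^3 * 1, so y = 1 is odd.
print(nu2(8), has_odd_solution(1, 7, 3))  # 3 True
```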
|
This paper introduces a new specification of the nonparametric production
frontier based on Data Envelopment Analysis (DEA), hereafter SpDEA, for
decision-making units whose economic performances are correlated with those of
their neighbors (spatial dependence). To illustrate the bias reduction that the
SpDEA provides with respect to standard DEA methods, an analysis of the
regional production frontiers for the NUTS-2 European regions during the period
2000-2014 was carried out. The estimated SpDEA scores show a bimodal
distribution not detected by the standard DEA estimates. The results confirm
the crucial role of space, offering important new insights on both the causes
of regional disparities in labour productivity and the observed polarization of
the European distribution of per capita income.
|
In this paper we consider the relationship between monomial-size and
bit-complexity in Sums-of-Squares (SOS) in Polynomial Calculus Resolution over
rationals (PCR/$\mathbb{Q}$). We show that there is a set of polynomial
constraints $Q_n$ over Boolean variables that has both SOS and PCR/$\mathbb{Q}$
refutations of degree 2 and thus with only polynomially many monomials, but for
which any SOS or PCR/$\mathbb{Q}$ refutation must have exponential
bit-complexity, when the rational coefficients are represented with their
reduced fractions written in binary.
|
Automatic speaker verification (ASV), one of the most important technologies
for biometric identification, has been widely adopted in security-critical
applications, including transaction authentication and access control. However,
previous work has shown that ASV is seriously vulnerable to recently emerged
adversarial attacks, yet effective countermeasures against them are limited. In
this paper, we adopt neural vocoders to spot adversarial samples for ASV. We
use the neural vocoder to re-synthesize audio and find that the difference
between the ASV scores for the original and re-synthesized audio is a good
indicator for discrimination between genuine and adversarial samples. This
effort is, to the best of our knowledge, among the first to pursue such a
technical direction for detecting adversarial samples for ASV, and hence there
is a lack of established baselines for comparison. Consequently, we implement
the Griffin-Lim algorithm as the detection baseline. The proposed approach
achieves effective detection performance that outperforms all the baselines in
all the settings. We also show that the neural vocoder adopted in the detection
framework is dataset-independent. Our code will be made open-source to
facilitate comparisons in future work.
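The detection criterion described above reduces to thresholding the ASV score shift caused by re-synthesis. A minimal sketch of that decision rule (the threshold value and function names are our illustration; in practice the scores come from an ASV system and a neural vocoder):

```python
def is_adversarial(score_original, score_resynthesized, threshold=0.5):
    """Flag a sample when the ASV score changes sharply after vocoder
    re-synthesis; genuine audio yields a nearly unchanged score."""
    return abs(score_original - score_resynthesized) > threshold

# Genuine sample: re-synthesis barely moves the score.
print(is_adversarial(0.91, 0.88))  # False
# Adversarial sample: the perturbation does not survive re-synthesis.
print(is_adversarial(0.90, 0.12))  # True
```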
|
Loop compilation for Tightly Coupled Processor Arrays (TCPAs), a class of
massively parallel loop accelerators, entails solving NP-hard problems, yet
depends on the loop bounds and number of available processing elements (PEs),
parameters known only at runtime because of dynamic resource management and
input sizes. Therefore, this article proposes a two-phase approach called
symbolic loop compilation: At compile time, the necessary NP-complete problems
are solved and the solutions compiled into a space-efficient symbolic
configuration. At runtime, a concrete configuration is generated from the
symbolic configuration according to the parameter values. We show that the
latter phase, called instantiation, runs in polynomial time with its most
complex step, program instantiation, not depending on the number of PEs. As
validation, we performed symbolic loop compilation on real-world loops and
measured time and space requirements. Our experiments confirm that a symbolic
configuration is space-efficient and suited for systems with little memory --
often, a symbolic configuration is smaller than a single concrete configuration
-- and that program instantiation scales well with the number of PEs -- for
example, when instantiating a symbolic configuration of a matrix-matrix
multiplication, the execution time is similar for $4\times 4$ and $32\times 32$
PEs.
|
Learning behavior is widely studied in managed settings structured by a formal
syllabus. However, the pursuit of learning stimuli during daily mobility
through urban transit is a novel subject in the learning sciences. The theory
of planned behavior (TPB), the technology acceptance model (TAM),
and service quality of transit are conceptualized to assess the learning
behavioral intention (LBI) of commuters in Greater Kuala Lumpur. An online
survey was conducted to understand the LBI of 117 travelers who use the
technology to engage in the informal learning process during daily commuting.
The results show that all the model variables, i.e., perceived ease of use,
perceived usefulness, service quality, and subjective norms, are significant
predictors of LBI. The perceived usefulness of learning during traveling and
the transit service quality have a strong impact on LBI. The research will
support the informal learning mechanism from the commuters' point of view. The study is a
novel contribution to transport and learning literature that will open the new
prospect of research in urban mobility and its connotation with personal
learning and development.
|
Diffuse interface models are widely used to describe evolution of multi-phase
systems of different nature. Dispersed "inclusions", described by the phase
field distribution, are usually three dimensional objects. When describing
elastic fracture evolution, elements of the dispersed phase are effectively 2d
objects. An example of a model that governs the evolution of effectively 1d
dispersed inclusions is the phase field model for electric breakdown in solids.
A phase field model is defined by an appropriate free energy functional, which
depends on the phase field and its derivatives. In this work we show that the
codimension of the dispersed "inclusion" significantly restricts the functional
dependence of the system energy on the derivatives of the problem state
variables. It is shown that the free energy of any phase field model suitable
to describe codimension-2 diffuse objects necessarily depends on higher-order
derivatives of the phase field or requires additional smoothness of the
solution: it should have first derivatives integrable with a power greater
than two. To support the theoretical discussion, some numerical experiments
are presented.
|
Giant radio pulses (GRPs) are sporadic bursts emitted by some pulsars,
lasting a few microseconds. GRPs are hundreds to thousands of times brighter
than regular pulses from these sources. The only GRP-associated emission
outside radio wavelengths is from the Crab Pulsar, where optical emission is
enhanced by a few percent during GRPs. We observed the Crab Pulsar
simultaneously at X-ray and radio wavelengths, finding enhancement of the X-ray
emission by $3.8\pm0.7\%$ (a 5.4$\sigma$ detection) coinciding with GRPs. This
implies that the total emitted energy from GRPs is tens to hundreds of times
higher than previously known. We discuss the implications for the pulsar
emission mechanism and extragalactic fast radio bursts.
|
In transition metal compounds, due to the interplay of charge, spin, lattice
and orbital degrees of freedom, many intertwined orders exist with close
energies. One of the commonly observed states is the so-called nematic electron
state, which breaks the in-plane rotational symmetry. This nematic state
appears in cuprates, iron-based superconductors, etc. Nematicity may coexist
with, affect, cooperate with, or compete with other orders. Here we show the anisotropic
in-plane electronic state and superconductivity in a recently discovered kagome
metal CsV$_3$Sb$_5$ by measuring $c$-axis resistivity with the in-plane
rotation of magnetic field. We observe a twofold symmetry of superconductivity
in the superconducting state and a unique in-plane nematic electronic state in
the normal state when rotating the in-plane magnetic field. Interestingly, these two
orders are orthogonal to each other in terms of the field direction of the
minimum resistivity. Our results shed new light on the non-trivial
physical properties of CsV$_3$Sb$_5$.
|
CSI (Channel State Information) of WiFi systems contains the environment
channel response between the transmitter and the receiver, so the
people/objects and their movement in between can be sensed. To get CSI, the
receiver performs channel estimation based on the pre-known training field of
the transmitted WiFi signal. CSI related technology is useful in many cases,
but it also raises privacy and security concerns. In this paper, we
open-source a CSI fuzzer to enhance the privacy and security of WiFi CSI
applications. It is built and embedded into the transmitter of openwifi, which
is an open source full-stack WiFi chip design, to prevent unauthorized sensing
without sacrificing the WiFi link performance. The CSI fuzzer imposes an
artificial channel response to the signal before it is transmitted, so the CSI
seen by the receiver will indicate the actual channel response combined with
the artificial response. Only the authorized receiver, that knows the
artificial response, can calculate the actual channel response and perform the
CSI sensing. Another potential application of the CSI fuzzer is covert channels
based on a set of pre-defined artificial response patterns. Our work resolves
the pain point of implementing the anti-sensing idea on commercial
off-the-shelf WiFi devices.
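In the frequency domain, the fuzzing idea can be sketched as a per-subcarrier multiplication. This toy numpy model (our illustration; openwifi applies the response in its transmit chain) shows that only a receiver knowing the artificial response recovers the actual channel:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64                                           # OFDM subcarriers

# Actual propagation channel and a unit-modulus artificial (fuzzing) response.
h_actual = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) / np.sqrt(2)
h_fuzz = np.exp(2j * np.pi * rng.random(n_sub))

# The receiver's channel estimate sees only the combined response.
h_estimated = h_actual * h_fuzz

# Authorized receiver: knows h_fuzz, so it can undo it; unauthorized CSI
# sensing sees a scrambled channel instead of h_actual.
h_recovered = h_estimated / h_fuzz
print(np.allclose(h_recovered, h_actual))   # True
print(np.allclose(h_estimated, h_actual))   # False: sensing is defeated
```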
|
Evidence that visual communication preceded written language and provided a
basis for it goes back to prehistory, in forms such as cave and rock paintings
depicting traces of our distant ancestors. Emergent communication research has
sought to explore how agents can learn to communicate in order to
collaboratively solve tasks. Existing research has focused on language, with a
learned communication channel transmitting sequences of discrete tokens between
the agents. In this work, we explore a visual communication channel between
agents that are allowed to draw with simple strokes. Our agents are
parameterised by deep neural networks, and the drawing procedure is
differentiable, allowing for end-to-end training. In the framework of a
referential communication game, we demonstrate that agents can not only
successfully learn to communicate by drawing, but with appropriate inductive
biases, can do so in a fashion that humans can interpret. We hope to encourage
future research to consider visual communication as a more flexible and
directly interpretable alternative for training collaborative agents.
|
Machine learning has been increasingly used as a first line of defense for
Windows malware detection. Recent work has however shown that learning-based
malware detectors can be evaded by carefully-perturbed input malware samples,
referred to as adversarial EXEmples, thus demanding tools that can ease and
automate the adversarial robustness evaluation of such detectors. To this end,
we present secml-malware, the first Python library for computing adversarial
attacks on Windows malware detectors. secml-malware implements state-of-the-art
white-box and black-box attacks on Windows malware classifiers, by leveraging a
set of feasible manipulations that can be applied to Windows programs while
preserving their functionality. The library can be used to perform
penetration testing and assessment of the adversarial robustness of Windows
malware detectors, and it can be easily extended to include novel attack
strategies. Our library is available at
https://github.com/pralab/secml_malware.
|
Motivated by the need for estimating the 3D pose of arbitrary objects, we
consider the challenging problem of class-agnostic object viewpoint estimation
from images only, without CAD model knowledge. The idea is to leverage features
learned on seen classes to estimate the pose for classes that are unseen, yet
that share similar geometries and canonical frames with seen classes. We train
a direct pose estimator in a class-agnostic way by sharing weights across all
object classes, and we introduce a contrastive learning method that has three
main ingredients: (i) the use of pre-trained, self-supervised, contrast-based
features; (ii) pose-aware data augmentations; (iii) a pose-aware contrastive
loss. We experimented on Pascal3D+, ObjectNet3D and Pix3D in a cross-dataset
fashion, with both seen and unseen classes. We report state-of-the-art results,
including against methods that additionally use CAD models as input.
|
This paper deals with the classification of groups G such that the power graphs
and proper power graphs of G are line graphs. In fact, we classify all finite
nilpotent groups whose power graphs are line graphs. We also categorize all
finite nilpotent groups (except non-abelian 2-groups) whose proper power graphs
are line graphs. Moreover, we show that the proper power graphs of generalized
quaternion groups are line graphs. Besides, we derive a condition on the order
of the dihedral groups for which their proper power graphs are line graphs.
|
With the development of IoT, the sensor usage has been elevated to a new
level, and it becomes more crucial to maintain reliable sensor networks. In
this paper, we show how to efficiently and reliably manage a sensor
monitoring system to secure fresh data at the data center (DC). A sensor
transmits its sensing information regularly to the DC, and the freshness of the
information at the DC is characterized by the age of information (AoI) that
quantifies the timeliness of information. By considering the effect of the AoI
and the spatial distance from the sensor on the information error at the DC, we
define an error-tolerable sensing (ETS) coverage as the area in which the
estimated information has an error smaller than the target value. We then
derive the average AoI and the AoI violation probability of the sensor
monitoring system, and finally present the {\eta}-coverage probability, which
is the probability that the ETS coverage is greater than {\eta} ratio of the
maximum sensor coverage. We also provide the optimal transmission power of the
sensor, which minimizes the average energy consumption while guaranteeing
a certain level of the {\eta}-coverage probability. Numerical results validate
the theoretical analysis and show the tendency of the optimal transmission
power according to the maximum number of retransmissions. This paper paves
the way toward the efficient design of AoI-sensitive sensor networks for IoT.
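The age of information central to this analysis has a simple sawtooth sample path: it grows linearly and resets to the delivery delay at each update. A small sketch computing the exact time-average AoI for a given delivery trace (our illustration under an in-order-delivery assumption, not the paper's analytical derivation):

```python
def average_aoi(gen, dlv, horizon):
    """Exact time-average AoI for in-order deliveries: between consecutive
    deliveries the age grows linearly, then resets to (delivery - generation)."""
    area, t_prev, age_prev = 0.0, dlv[0], dlv[0] - gen[0]
    for g, d in zip(gen[1:], dlv[1:]):
        span = d - t_prev
        area += span * (2.0 * age_prev + span) / 2.0  # trapezoid under the ramp
        t_prev, age_prev = d, d - g
    span = horizon - t_prev                           # tail up to the horizon
    area += span * (2.0 * age_prev + span) / 2.0
    return area / (horizon - dlv[0])

# Updates every T = 1 s, each delivered after a fixed 0.2 s delay: the AoI
# sawtooths between 0.2 and 1.2, so the long-run average approaches 0.7.
T, delay, n = 1.0, 0.2, 50
gen = [k * T for k in range(n)]
dlv = [g + delay for g in gen]
print(round(average_aoi(gen, dlv, horizon=n * T), 3))  # 0.698
```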
|
Music emotion recognition is an important task in MIR (Music Information
Retrieval) research. Owing to factors like the subjective nature of the task
and the variation of emotional cues between musical genres, there are still
significant challenges in developing reliable and generalizable models. One
important step towards better models would be to understand what a model is
actually learning from the data and how the prediction for a particular input
is made. In previous work, we have shown how to derive explanations of model
predictions in terms of spectrogram image segments that connect to the
high-level emotion prediction via a layer of easily interpretable perceptual
features. However, that scheme lacks intuitive musical comprehensibility at the
spectrogram level. In the present work, we bridge this gap by merging audioLIME
-- a source-separation based explainer -- with mid-level perceptual features,
thus forming an intuitive connection chain between the input audio and the
output emotion predictions. We demonstrate the usefulness of this method by
applying it to debug a biased emotion prediction model.
|
The fractional dark energy (FDE) model describes the accelerated expansion of
the Universe through a nonrelativistic gas of particles with a noncanonical
kinetic term. This term is proportional to the absolute value of the
three-momentum to the power of $3w$, where $w$ is simply the dark energy
equation of state parameter, and the corresponding energy leads to an energy
density that mimics the cosmological constant. In this paper we extend the
fractional dark energy model by considering a non-zero chemical potential, and
we show that it may thermodynamically describe a phantom regime. The Planck
constraints on the equation of state parameter put upper limits on the allowed
value of the ratio of the chemical potential to the temperature. In the second
part, we investigate the system of fractional dark energy particles with
negative absolute temperatures (NAT). NAT are possible in quantum systems and
in cosmology, if there exists an upper bound on the energy. This maximum energy
is one ingredient of the FDE model and indicates a connection between FDE and
NAT, if FDE is composed of fermions. In this scenario, the equation of state
parameter is equal to minus one and, using cosmological observations, we find
that the transition from positive to negative temperatures is allowed at any
redshift larger than one.
|
With the depletion of spectrum, wireless communication systems turn to large
antenna arrays to exploit the degrees of freedom in the space domain, as in
millimeter wave massive multiple-input multiple-output (MIMO), reconfigurable
intelligent surface assisted communications, and cell-free massive MIMO. In
these systems, acquiring accurate channel state information (CSI) is difficult
and becomes a bottleneck of the communication links. In this article,
we introduce the concept of channel extrapolation that relies on a small
portion of channel parameters to infer the remaining channel parameters. Since
the substance of channel extrapolation is a mapping from one parameter subspace
to another, we can resort to deep learning (DL), a powerful learning
architecture, to approximate such mapping function. Specifically, we first
analyze the requirements, conditions and challenges for channel extrapolation.
Then, we present three typical extrapolations over the antenna dimension, the
frequency dimension, and the physical terminal, respectively. We also
illustrate their respective principles, design challenges and DL strategies. It
will be seen that channel extrapolation could greatly reduce the transmission
overhead and subsequently enhance the performance gains compared with the
traditional strategies. In the end, we provide several potential research
directions on channel extrapolation for future intelligent communication
systems.
|
The signature of noncommutativity on various measures of entanglement has
been observed by considering the holographic dual of noncommutative super
Yang-Mills theory. We have followed a systematic analytical approach in order
to compute the holographic entanglement entropy corresponding to a strip like
subsystem of length $l$. The relationship between the subsystem size (in
dimensionless form) $\frac{l}{a}$ and the turning point (in dimensionless form)
introduces a critical length scale $\frac{l_c}{a}$ which leads to three domains
in the theory, namely, the deep UV domain ($l < l_c$; $a u_t \gg 1$, $a u_t \sim
a u_b$), the deep noncommutative domain ($l > l_c,~a u_b > a u_t \gg 1$) and the
deep IR domain ($l > l_c,~a u_t \ll 1$). This in turn means that the length scale $l_c$
distinctly points out the UV/IR mixing property of the non-local theory under
consideration. We have carried out the holographic study of entanglement
entropy for each of these domains by employing both analytical and numerical
techniques. The broken Lorentz symmetry induced by noncommutativity has
motivated us to redefine the entropic $c$-function. We have obtained the
noncommutative correction to the $c$-function up to leading order in the
noncommutative parameter. We then move on to compute the minimal cross-section
area of the entanglement wedge by considering two disjoint subsystems $A$ and
$B$. On the basis of $E_P = E_W$ duality, this leads to the holographic
computation of the entanglement of purification. The correlation between two
subsystems, namely, the holographic mutual information $I(A:B)$ has also been
computed. Moreover, the computations of $E_W$ and $I(A:B)$ have been done for
each of the domains in the theory. Finally, we consider a black hole geometry
with a noncommutative parameter and study the influence of both
noncommutativity and finite temperature on the various measures of quantum
entanglement.
|
We introduce Hausdorff operators over the unit disc and give conditions for the
boundedness of such operators in Bloch, Bergman, and Hardy spaces on the disc.
Identity approximation by Hausdorff operators is also considered.
|
Restarting a deterministic process always impedes its completion. However, it
is known that restarting a random process can also lead to an opposite outcome
-- expediting completion. Hence, the effect of restart is contingent on the
underlying statistical heterogeneity of the process' completion times. To
quantify this heterogeneity we bring a novel approach to restart: the
methodology of inequality indices, which is widely applied in economics and in
the social sciences to measure income and wealth disparity. Using this approach
we establish an `inequality roadmap' for the mean-performance of sharp restart:
a whole new set of universal inequality criteria that determine when restart
with sharp timers (i.e. with fixed deterministic timers) decreases/increases
mean completion. The criteria are based on a host of inequality indices
including Bonferroni, Gini, Pietra, and other Lorenz-curve indices; each index
captures a different angle of the restart-inequality interplay. Utilizing the
fact that sharp restart can match the mean-performance of any general restart
protocol, we prove -- with unprecedented precision and resolution -- the
validity of the following statement: restart impedes/expedites mean completion
when the underlying statistical heterogeneity is low/high.
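As an illustrative aside (a Monte Carlo sketch, not the paper's derivation), the statement can be probed numerically: compute the Gini inequality index of completion times, and the mean completion time under sharp restart. For the memoryless exponential case (Gini exactly 1/2) restart is neutral, while for a heavy-tailed, high-inequality lognormal case it expedites completion. All parameter choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gini(x):
    # Gini inequality index of a nonnegative sample (0 = perfect equality)
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

def mean_completion_sharp_restart(sample_T, timer, n_runs=20_000):
    # Monte Carlo: restart from scratch every `timer` time units until
    # the process completes; return the mean total completion time.
    total = 0.0
    for _ in range(n_runs):
        elapsed = 0.0
        while True:
            T = sample_T()
            if T <= timer:
                total += elapsed + T
                break
            elapsed += timer
    return total / n_runs

# memoryless case: exponential completion times (Gini = 1/2), restart is neutral
m_exp = mean_completion_sharp_restart(lambda: rng.exponential(1.0), timer=0.5)

# heavy-tailed (high-inequality) case: restart expedites completion,
# pulling the mean well below the restart-free mean exp(2) ~ 7.39
m_heavy = mean_completion_sharp_restart(lambda: rng.lognormal(0.0, 2.0),
                                        timer=1.0)
print(m_exp, m_heavy)
```

The exponential case recovers its restart-free mean of 1 regardless of the timer, matching the marginal Gini value of 1/2; the lognormal case illustrates the "high heterogeneity, restart expedites" side of the criterion.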
|
In the continual effort to improve product quality and decrease operations
costs, computational modeling is increasingly being deployed to determine
feasibility of product designs or configurations. Surrogate modeling of these
computer experiments via local models, which induce sparsity by only
considering short range interactions, can tackle huge analyses of complicated
input-output relationships. However, narrowing focus to local scale means that
global trends must be re-learned over and over again. In this article, we
propose a framework for incorporating information from a global sensitivity
analysis into the surrogate model as an input rotation and rescaling
preprocessing step. We discuss the relationship between several sensitivity
analysis methods based on kernel regression before describing how they give
rise to a transformation of the input variables. Specifically, we perform an
input warping such that the "warped simulator" is equally sensitive to all
input directions, freeing local models to focus on local dynamics. Numerical
experiments on observational data and benchmark test functions, including a
high-dimensional computer simulator from the automotive industry, provide
empirical validation.
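A minimal numerical sketch of the rotate-and-rescale idea follows (illustrative only: a gradient-based, active-subspace-style sensitivity matrix stands in for the kernel-regression-based indices discussed in the article, and the toy simulator is invented). Estimating $C = E[\nabla f \nabla f^\top]$ from samples and warping inputs by $z = \Lambda^{1/2} V^\top x$ makes the warped simulator equally sensitive in all input directions.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    # toy "simulator", far more sensitive to x[..., 0] than to x[..., 1]
    return np.sin(5.0 * x[..., 0]) + 0.1 * x[..., 1]

def grad_f(x, eps=1e-5):
    # central finite differences along each input direction
    d = x.shape[-1]
    g = np.empty_like(x)
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        g[..., i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

X = rng.uniform(-1.0, 1.0, size=(5000, 2))
G = grad_f(X)
C = G.T @ G / len(X)          # sensitivity (active-subspace-style) matrix
w, V = np.linalg.eigh(C)

# warp z = diag(sqrt(w)) @ V.T @ x; the warped simulator g(z) = f(x) then
# has gradient second-moment matrix equal to the identity, i.e. it is
# equally sensitive to every warped input direction.
G_warp = G @ V @ np.diag(1.0 / np.sqrt(w))
C_warp = G_warp.T @ G_warp / len(X)
print(np.round(C_warp, 6))    # ~ identity matrix
```

After this preprocessing, a local surrogate fit in the warped coordinates no longer has to re-learn the global anisotropy, which is the role the sensitivity-based transformation plays in the proposed framework.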
|