Stuttering is a complex speech disorder identified by repetitions,
prolongations of sounds, syllables or words, and blocks while speaking.
Specific stuttering behaviour differs strongly, thus needing personalized
therapy. Therapy sessions require a high level of concentration by the
therapist. We introduce STAN, a system to aid speech therapists in stuttering
therapy sessions. Such an automated feedback system can lower the cognitive
load on the therapist and thereby enable a more consistent therapy, as well as
allowing analysis of stuttering over the span of multiple therapy sessions.
|
We present a study of far and near-ultraviolet emission from the accretion
disk in the powerful Seyfert 1 galaxy IC4329A using observations performed with
the Ultraviolet Imaging Telescope (UVIT) onboard AstroSat. These data provide
the highest spatial resolution and deepest images of IC4329A in the far and
near UV bands acquired to date. The excellent spatial resolution of the UVIT
data has allowed us to accurately separate the extended emission from the host
galaxy and the AGN emission in the far and near UV bands. We derive the
intrinsic AGN flux after correcting for the Galactic and internal reddening, as
well as for the contribution of emission lines from the broad and narrow-line
regions. The intrinsic UV continuum emission shows a marked deficit compared to
that expected from the "standard" models of the accretion disk around an
estimated black hole mass of 1-2x10^8Msun when the disk extends to the
innermost stable circular orbit. We find that the intrinsic UV continuum is
fully consistent with the standard disk models, but only if the disk emits from
distances larger than 80-150 gravitational radii.
|
Modern programming follows the continuous integration (CI) and continuous
deployment (CD) approach rather than the traditional waterfall model. Even the
development of modern programming languages uses the CI/CD approach to swiftly
provide new language features and to adapt to new development environments.
Unlike in the conventional approach, in the modern CI/CD approach, a language
specification is no longer the oracle of the language semantics because both the
specification and its implementations can co-evolve. In this setting, both the
specification and implementations may have bugs, and guaranteeing their
correctness is non-trivial.
In this paper, we propose a novel N+1-version differential testing to resolve
the problem. Unlike traditional differential testing, our approach consists
of four steps: 1) automatically synthesize programs guided by the syntax
and semantics of a given language specification, 2) generate conformance
tests by injecting assertions into the synthesized programs to check their final
program states, 3) detect bugs in the specification and implementations by
executing the conformance tests on multiple implementations, and 4) localize
bugs in the specification using statistical information. We actualize our
approach for the JavaScript programming language via JEST, which performs
N+1-version differential testing for modern JavaScript engines and ECMAScript,
the language specification describing the syntax and semantics of JavaScript in
a natural language. We evaluated JEST with four JavaScript engines that support
all modern JavaScript language features and the latest version of ECMAScript
(ES11, 2020). JEST automatically synthesized 1,700 programs that covered 97.78%
of syntax and 87.70% of semantics from ES11. Using the assertion-injection, it
detected 44 engine bugs in four engines and 27 specification bugs in ES11.
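As an illustration of the differential-testing step, here is a minimal Python sketch of running one assertion-injected test on several engines and flagging disagreements; the engine commands are hypothetical placeholders, not JEST's actual tooling.

    import subprocess
    from collections import Counter

    # Hypothetical engine commands; substitute whatever binaries are installed.
    ENGINES = {"v8": ["d8"], "node": ["node"], "graaljs": ["js"]}

    def run_on_engines(test_file):
        """Run one assertion-injected JavaScript test on every engine."""
        results = {}
        for name, cmd in ENGINES.items():
            proc = subprocess.run(cmd + [test_file], capture_output=True,
                                  text=True, timeout=30)
            # A non-zero exit code signals a failed injected assertion.
            results[name] = proc.returncode
        return results

    def suspects(results):
        """Engines disagreeing with the majority outcome are bug suspects;
        if all engines fail the spec-derived assertions, the specification
        itself becomes the suspect."""
        majority, _ = Counter(results.values()).most_common(1)[0]
        return [name for name, rc in results.items() if rc != majority]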
|
Wilson's theorem for the factorial was generalized to the moduli $p^2$ in
1900 and $p^3$ in 2000 by J.W.L. Glaisher and Z.-H. Sun, respectively. This
paper, which more generally studies the multiple harmonic sums
$\mathcal{H}_{\lbrace s\rbrace^{2l}=1;p-1}, 2\leq 2l\leq p-1$, modulo $p^4$ in
association with the Stirling numbers
$\left[\begin{array}{c}p\\2s-1\end{array}\right], 2\leq 2s\leq p-1$, modulo
$p^4$, is concerned with establishing a generalization of
Wilson, Glaisher, and Sun's results to the modulus $p^4$. We also break
p-residues of convolutions of three divided Bernoulli numbers of respective
orders $p-1$, $p-3$ and $p-5$ into smaller pieces and generalize some results
of Sun for some of the generalized harmonic numbers of order $p-1$ modulo
$p^4$.
|
We address the two issues raised by Bayle, Vallisneri, Babak, and Petiteau
(in their gr-qc document arXiv:2106.03976) about our matrix formulation of
Time-Delay Interferometry (TDI) (arXiv:2105.02054) \cite{TDJ21}. In so doing we
explain and quantify our concerns about the results derived by Vallisneri,
Bayle, Babak and Petiteau \cite{Vallisneri2020} by applying their data
processing technique (named TDI-$\infty$) to the two heterodyne measurements
made by a two-arm space-based GW interferometer. First we show that the
solutions identified by the TDI-$\infty$ algorithm derived by Vallisneri,
Bayle, Babak and Petiteau \cite{Vallisneri2020} {\underbar {do}} depend on the
boundary-conditions selected for the two-way Doppler data. We prove this by
adopting the (non-physical) boundary conditions used by Vallisneri {\it et al.}
and deriving the corresponding analytic expression for a laser-noise-canceling
combination. We show that it is characterized by a number of Doppler measurement
terms that grows with the observation time, and that it works for any
time-dependent time delays. We then prove that, for a constant-arm-length interferometer whose
two-way light times are equal to twice and three-times the sampling time, the
solutions identified by TDI-$\infty$ are linear combinations of the TDI
variable $X$. In the second part of this document we address the concern
expressed by Bayle {\it et al.} regarding our matrix formulation of TDI when
the two-way light-times are constant but not equal to integer multiples of the
sampling time. We mathematically prove that the homomorphism between the delay
operators and their matrix representation \cite{TDJ21} holds in general. By
sequentially applying two order-$m$ Fractional-Delay (FD) Lagrange filters of
delays $l_1$ and $l_2$, we find the result to be equal to that of applying a
single order-$m$ FD Lagrange filter of delay $l_1 + l_2$.
|
We propose a semantic similarity metric for image registration. Existing
metrics like Euclidean Distance or Normalized Cross-Correlation focus on
aligning intensity values, which causes difficulties with low intensity contrast
or noise. Our approach learns dataset-specific features that drive the
optimization of a learning-based registration model. We train both an
unsupervised approach using an auto-encoder, and a semi-supervised approach
using supplemental segmentation data to extract semantic features for image
registration. Compared to existing methods across multiple image modalities
and applications, we achieve consistently high registration accuracy. A learned
invariance to noise gives smoother transformations on low-quality images.
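To make the idea concrete, here is a minimal PyTorch sketch of a feature-space similarity loss; the encoder architecture and names are illustrative stand-ins for the paper's auto-encoder, not its actual implementation.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Toy convolutional encoder standing in for a trained auto-encoder."""
        def __init__(self, channels=1, features=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
                nn.Conv2d(features, features, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    def semantic_similarity_loss(encoder, fixed, warped_moving):
        """Compare learned feature maps instead of raw intensity values."""
        with torch.no_grad():            # features act as a fixed metric
            f_fixed = encoder(fixed)
        f_moving = encoder(warped_moving)
        return nn.functional.mse_loss(f_moving, f_fixed)

Because the loss is computed on learned features rather than raw intensities, the gradients driving the registration model are less sensitive to noise and low-contrast regions.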
|
This paper surveys several recent abstractive summarization methods: T5,
Pegasus, and ProphetNet. We implement the systems in two languages: English and
Indonesian. We investigate the impact of pre-trained models (one T5, three
Pegasus, and three ProphetNet variants) on several Wikipedia datasets in English
and Indonesian and compare the results to the Wikipedia systems'
summaries. The T5-Large, the Pegasus-XSum, and the ProphetNet-CNNDM provide the
best summarization. The most significant factors that influence ROUGE
performance are coverage, density, and compression: the higher these scores, the
better the summary. Other factors that influence the ROUGE scores are the
pre-training goal, the dataset's characteristics, the dataset used for testing
the pre-trained model, and the cross-lingual function. Several suggestions to
address this paper's limitations are: 1) ensure that the dataset used for the
pre-training model is sufficiently large and contains adequate instances for
handling cross-lingual purposes; 2) the advanced process (fine-tuning) should be
reasonable. We recommend using a large dataset consisting of comprehensive
coverage of topics from many languages before applying advanced processes
such as the train-infer-train procedure to zero-shot translation in the
training stage of the pre-training model.
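For orientation, here is a minimal sketch of querying the three model families with the Hugging Face transformers library; the checkpoints named here are common public ones and only assumed to correspond to the variants studied.

    from transformers import pipeline

    article_text = "..."  # document to be summarized

    for checkpoint in ["t5-large", "google/pegasus-xsum",
                       "microsoft/prophetnet-large-uncased-cnndm"]:
        summarizer = pipeline("summarization", model=checkpoint)
        result = summarizer(article_text, max_length=128, min_length=32)
        print(checkpoint, "->", result[0]["summary_text"])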
|
We discuss $C^1$ regularity and developability of isometric immersions of
flat domains into $\mathbb R^3$ enjoying a local fractional Sobolev $W^{1+s,
\frac2s}$ regularity for $2/3 \le s< 1 $, generalizing the known results on
Sobolev and H\"older regimes. Ingredients of the proof include analysis of the
weak Codazzi-Mainardi equations of the isometric immersions and study of
$W^{2,\frac2s}$ planar deformations with symmetric Jacobian derivative and
vanishing distributional Jacobian determinant. On the way, we also show that
the distributional Jacobian determinant, conceived as an operator defined on
the Jacobian matrix, behaves like the determinant of gradient matrices under
products by scalar functions.
|
SPT-3G is the third survey receiver operating on the South Pole Telescope
dedicated to high-resolution observations of the cosmic microwave background
(CMB). Sensitive measurements of the temperature and polarization anisotropies
of the CMB provide a powerful dataset for constraining cosmology. Additionally,
CMB surveys with arcminute-scale resolution are capable of detecting galaxy
clusters, millimeter-wave bright galaxies, and a variety of transient
phenomena. The SPT-3G instrument provides a significant improvement in mapping
speed over its predecessors, SPT-SZ and SPTpol. The broadband optics design of
the instrument achieves a 430 mm diameter image plane across observing bands of
95 GHz, 150 GHz, and 220 GHz, with 1.2 arcmin FWHM beam response at 150 GHz. In
the receiver, this image plane is populated with 2690 dual-polarization,
tri-chroic pixels (~16000 detectors) read out using a 68X digital
frequency-domain multiplexing readout system. In 2018, SPT-3G began a multiyear
survey of 1500 deg$^{2}$ of the southern sky. We summarize the unique optical,
cryogenic, detector, and readout technologies employed in SPT-3G, and we report
on the integrated performance of the instrument.
|
We clarify and generalize the ant on a rubber rope paradox, which is a
mathematical puzzle with a solution that appears counterintuitive. In this
paper, we show that the ant can still reach the end of the rope even if we
consider the step length of the ant and stretching length of the rubber rope as
random variables.
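For context, the classical deterministic version of the puzzle (fixed ant speed $u$, rope of initial length $L_0$ stretched uniformly at speed $v$) already shows why the ant arrives in finite time:

    \[
      f(t) = \int_0^t \frac{u}{L_0 + v s}\,\mathrm{d}s
           = \frac{u}{v}\,\ln\!\Bigl(1 + \frac{v t}{L_0}\Bigr),
      \qquad
      f(T) = 1 \;\Longrightarrow\; T = \frac{L_0}{v}\,\bigl(e^{v/u} - 1\bigr),
    \]

where $f(t)$ is the fraction of the rope covered; the paper's contribution is to show that the conclusion survives when the step length and the stretching length are random variables.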
|
The young T Tauri star WW Cha was recently proposed to be a close binary
object with strong infrared and submillimeter excess associated with
circum-system emission. This makes WW Cha a very interesting source for
studying the influence of dynamical effects on circumstellar as well as
circumbinary material. We derive the relative astrometric positions and flux
ratios of the stellar companion in WW Cha from the interferometric model
fitting of observations made with the VLTI instruments AMBER, PIONIER, and
GRAVITY in the near-infrared from 2011 to 2020. For two epochs, the resulting
uv-coverage in spatial frequencies permits us to perform the first image
reconstruction of the system in the K band. The positions of nine epochs are
used to determine the orbital elements and the total mass of the system. We
find the secondary star orbiting the primary with a period of T=206.55 days, a
semimajor axis of a=1.01 au, and a relatively high eccentricity of e=0.45.
Combining the orbital solution with distance measurements from Gaia DR2 and the
analysis of evolutionary tracks, the dynamical mass of Mtot=3.20 Msol can be
explained by a mass ratio between ~0.5 and 1. The orbital angular momentum
vector is in close alignment with the angular momentum vector of the outer disk
as measured by ALMA and SPHERE. The analysis of the relative photometry
suggests the presence of infrared excess surviving in the system and likely
originating from truncated circumstellar disks. The flux ratio between the two
components appears variable, in particular in the K band, and may hint at
episodes of triggered higher and lower accretion or changes in the disks'
structures. The knowledge of the orbital parameters, combined with a relatively
short period, makes WW Cha an ideal target for studying the interaction of a
close young T Tauri binary with its surrounding material, such as
time-dependent accretion phenomena.
|
This paper describes a system by which Unmanned Aerial Vehicles (UAVs) can
gather high-quality face images that can be used in biometric identification
tasks. Success in face-based identification depends in large part on the image
quality, and a major factor is how frontal the view is. Face recognition
software pipelines can improve identification rates by synthesizing frontal
views from non-frontal views by a process call {\em frontalization}. Here we
exploit the high mobility of UAVs to actively gather frontal images using
components of a synthetic frontalization pipeline. We define a frontalization
error and show that it can be used to guide an UAVs to capture frontal views.
Further, we show that the resulting image stream improves matching quality of a
typical face recognition similarity metric. The system is implemented using an
off-the-shelf hardware and software components and can be easily transfered to
any ROS enabled UAVs.
|
The article presents a matrix differential operator and a pseudoinverse
matrix differential operator for finding a particular solution to
nonhomogeneous linear ordinary differential equations (ODEs) with constant
coefficients and special types of right-hand side. The calculation requires
the determination of an inverse or pseudoinverse matrix. If the matrix is
singular, the Moore-Penrose pseudoinverse matrix is used for the calculation,
which is simply computed as the inverse of a submatrix of the considered matrix.
It is shown that block matrices can be used effectively to calculate a
particular solution.
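A minimal numpy sketch of the idea for a polynomial right-hand side: represent the operator as a matrix on a monomial basis and apply the (pseudo)inverse; the function name is illustrative, not the article's notation.

    import numpy as np

    def particular_polynomial_solution(a, p):
        """Particular solution of a[n] y^(n) + ... + a[1] y' + a[0] y = p(t)
        with polynomial right-hand side p(t) = p[0] + p[1] t + ...
        np.linalg.pinv (Moore-Penrose) covers the singular case a[0] = 0."""
        n = len(a) - 1                    # order of the ODE
        m = len(p) - 1 + n                # basis degree, with headroom
        D = np.diag(np.arange(1.0, m + 1), k=1)   # d/dt on {1, t, ..., t^m}
        M = sum(ak * np.linalg.matrix_power(D, k) for k, ak in enumerate(a))
        b = np.zeros(m + 1)
        b[:len(p)] = p
        return np.linalg.pinv(M) @ b      # coefficients of y_p(t)

    # Example: y'' + y = t^2 has particular solution y_p = t^2 - 2.
    print(particular_polynomial_solution([1, 0, 1], [0, 0, 1]))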
|
The role of bipolar jets in the formation of stars, and in particular how
they are launched, is still not well understood. We probe the protostellar jet
launching mechanism, via high resolution observations of the near-IR [FeII]
1.53,1.64 micron lines. We consider the bipolar jet from the Classical T Tauri
star, DO Tau, and investigate jet morphology and kinematics close to the star,
using AO-assisted IFU observations from GEMINI/NIFS. The brighter, blue-shifted
jet is collimated quickly after launch. This early collimation requires the
presence of magnetic fields. We confirm velocity asymmetries between the two
jet lobes, and confirm no time variability in the asymmetry over a 20 year
interval. This sustained asymmetry is in accordance with recent simulations of
magnetised disk-winds. We examine the data for jet rotation. We report an upper
limit on differences in radial velocity of 6.3 and 8.7 km/s for the blue and
red-shifted jets, respectively. Interpreting this as an upper limit on jet
rotation implies that any steady, axisymmetric magneto-centrifugal model of jet
launching is constrained to a launch radius in the disk-plane of 0.5 and 0.3 au
for the blue and red-shifted jets, respectively. This supports an X-wind or
narrow disk-wind model. This pertains only to the observed high velocity [FeII]
emission, and does not rule out a wider flow launched from a wider radius. We
report detection of small amplitude jet axis wiggling in both lobes. We rule
out orbital motion of the jet source as the cause. Precession can better
account for the observations but requires double the precession angle, and a
different phase for the counter-jet. Such non-solid body precession could arise
from an inclined massive Jupiter companion, or a warping instability induced by
launching a magnetic disk-wind. Overall, our observations are consistent with
an origin of the DO Tau jets from the inner regions of the disk.
|
Infrared dark clouds (IRDCs) are potential hosts of the elusive early phases
of high-mass star formation (HMSF). Here we conduct an in-depth analysis of the
fragmentation properties of a sample of 10 IRDCs, which have been highlighted
as some of the best candidates to study HMSF within the Milky Way. To do so, we
have obtained a set of large mosaics covering these IRDCs with ALMA at band 3
(or 3mm). These observations have a high angular resolution (~3arcsec or
~0.05pc), and high continuum and spectral line sensitivity (~0.15mJy/beam and
~0.2K per 0.1km/s channel at the N2H+(1-0) transition). From the dust continuum
emission, we identify 96 cores ranging from low- to high-mass (M = 3.4 to
50.9Msun) that are gravitationally bound (alpha_vir = 0.3 to 1.3) and which
would require magnetic field strengths of B = 0.3 to 1.0mG to be in virial
equilibrium. We combine these results with a homogenised catalogue of
literature cores to recover the hierarchical structure within these clouds over
four orders of magnitude in spatial scale (0.01pc to 10pc). Using supplementary
observations at an even higher angular resolution, we find that the smallest
fragments (<0.02pc) within this hierarchy do not currently have the mass and/or
the density required to form high-mass stars. Nonetheless, the new ALMA
observations presented in this paper have facilitated the identification of 19
(6 quiescent and 13 star-forming) cores that retain >16Msun without further
fragmentation. These high-mass cores contain trans-sonic non-thermal motions,
are kinematically sub-virial, and require moderate magnetic field strengths for
support against collapse. The identification of these potential sites of
high-mass star formation represents a key step in allowing us to test the
predictions from high-mass star and cluster formation theories.
|
Recent observations of the HDO/H$_2$O ratio toward protostars in isolated and
clustered environments show an apparent dichotomy, where isolated sources show
higher D/H ratios than clustered counterparts. Establishing which physical and
chemical processes create this differentiation can provide insights into the
chemical evolution of water during star formation and the chemical diversity
during the star formation process and in young planetary systems. Methods: The
evolution of water is modeled using 3D physicochemical models of a dynamic
star-forming environment. The physical evolution during the protostellar
collapse is described by tracer particles from a 3D MHD simulation of a
molecular cloud region. Each particle trajectory is post-processed using
RADMC-3D to calculate the temperature and radiation field. The chemical
evolution is simulated using a three-phase grain-surface chemistry model and
the results are compared with interferometric observations of H$_2$O, HDO, and
D$_2$O in hot corinos toward low-mass protostars. Results: The physicochemical
model reproduces the observed HDO/H$_2$O and D$_2$O/HDO ratios in hot corinos,
but shows no correlation with cloud environment for similar identical
conditions. The observed dichotomy in water D/H ratios requires variation in
the initial conditions (e.g., the duration and temperature of the prestellar
phase). Reproducing the observed D/H ratios in hot corinos requires a
prestellar phase duration $t\sim$1-3 Myr and temperatures in the range $T \sim$
10-20 K prior to collapse. This work demonstrates that the observed
differentiation between clustered and isolated protostars stems from
differences in the molecular cloud or prestellar core conditions and does not
arise during the protostellar collapse itself.
|
In this paper we report on the detailed temperature and magnetic field
dependence of the magnetization of the IV-VI semiconductor PbTe doped with the
mixed-valence
transition metal Cr$^{2+/3+}$. The material is studied solely by an integral
superconducting quantum interference device magnetometer in order to
quantitatively determine the contribution of single substitutional Cr$^{3+}$ as
well as of various Cr-Te magnetic nanocrystals, including their identification.
The applied experimental procedure reveals the presence of about
$10^{19}$~cm$^{-3}$ paramagnetic Cr$^{3+}$ ions formed via self-ionization of
Cr$^{2+}$ resonant donors. These are known to improve the thermoelectric figure
of merit parameter zT of this semiconductor. The magnetic findings agree
excellently with previous Hall effect studies, thus providing new experimental
support for the proposed electronic structure model of the PbTe:Cr system with
resonant Cr$^{2+/3+}$ state located (at low temperatures) about 100 meV above
the bottom of the conduction band. Below room temperature a ferromagnetic-like
signal points to the presence of Cr-rich nanocrystalline precipitates. The two
most likely candidates, namely Cr$_2$Te$_3$ and Cr$_5$Te$_8$, are identified
upon dedicated temperature cycling of the sample in the remanent state. As an
ensemble, the nanocrystals exhibit (blocked) superparamagnetic properties. The
magnetic susceptibility of both n- and p-type PbTe in the temperature range
$100 < T < 400$~K has been established. These magnitudes are essential in
proper accounting for the high temperature magnetic susceptibility of PbTe:Cr.
|
The largest experiments in machine learning now require resources far beyond
the budget of all but a few institutions. Fortunately, it has recently been
shown that the results of these huge experiments can often be extrapolated from
the results of a sequence of far smaller, cheaper experiments. In this work, we
show that not only can the extrapolation be done based on the size of the
model, but on the size of the problem as well. By conducting a sequence of
experiments using AlphaZero and Hex, we show that the performance achievable
with a fixed amount of compute degrades predictably as the game gets larger and
harder. Along with our main result, we further show that the test-time and
train-time compute available to an agent can be traded off while maintaining
performance.
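A minimal sketch of the kind of extrapolation described, assuming a power-law relation between compute and performance; the numbers are made up for illustration.

    import numpy as np

    # Hypothetical (compute, performance-gap) pairs from small, cheap runs.
    compute = np.array([1e13, 1e14, 1e15, 1e16])      # training FLOPs
    elo_gap = np.array([800.0, 560.0, 390.0, 270.0])  # gap to a strong reference

    # Fit elo_gap ~ A * compute^k in log-log space, then extrapolate.
    k, logA = np.polyfit(np.log(compute), np.log(elo_gap), deg=1)
    print(np.exp(logA) * (1e19) ** k)   # predicted gap at a far larger budget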
|
Deep Reinforcement Learning (DRL) has shown outstanding performance on
inducing effective action policies that maximize expected long-term return on
many complex tasks. Much of DRL work has been focused on sequences of events
with discrete time steps and ignores the irregular time intervals between
consecutive events. However, in many real-world domains, data often consists
of temporal sequences with irregular time intervals, and it is important to
consider the time intervals between temporal events to capture latent
progressive patterns of states.
RL framework: Time-aware Q-Networks (TQN), which takes into account physical
time intervals within a deep RL framework. TQN deals with time irregularity
from two aspects: 1) elapsed time in the past and an expected next observation
time for time-aware state approximation, and 2) action time window for the
future for time-aware discounting of rewards. Experimental results show that by
capturing the underlying structures in the sequences with time irregularities
from both aspects, TQNs significantly outperform DQN in four types of contexts
with irregular time intervals. More specifically, our results show that in
classic RL tasks such as CartPole and MountainCar and Atari benchmark with
randomly segmented time intervals, time-aware discounting alone is more
important while in the real-world tasks such as nuclear reactor operation and
septic patient treatment with intrinsic time intervals, both time-aware state
and time-aware discounting are crucial. Moreover, to improve the agent's
learning capacity, we explored three boosting methods: Double networks, Dueling
networks, and Prioritized Experience Replay, and our results show that for the
two real-world tasks, combining all three boosting methods with TQN is
especially effective.
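A minimal sketch of the time-aware discounting component described above, with the discount factor raised to the elapsed time; this is an illustration, not the authors' full TQN implementation.

    import numpy as np

    def time_aware_td_target(reward, q_next, dt, gamma=0.99, terminal=False):
        """TD target with time-aware discounting: gamma is raised to the
        elapsed time until the next observation, so rewards after long
        gaps are discounted more heavily than after short ones."""
        if terminal:
            return reward
        return reward + gamma ** dt * np.max(q_next)

    q_next = np.array([1.0, 2.0, 0.5])
    print(time_aware_td_target(1.0, q_next, dt=1.0))  # 1 + 0.99^1 * 2
    print(time_aware_td_target(1.0, q_next, dt=5.0))  # 1 + 0.99^5 * 2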
|
Anti-unification in logic programming refers to the process of capturing
common syntactic structure among given goals, computing a single, more general
goal called a generalization of the given goals. Finding an
arbitrary common generalization for two goals is trivial, but looking for those
common generalizations that are either as large as possible (called largest
common generalizations) or as specific as possible (called most specific
generalizations) is a non-trivial optimization problem, in particular when
goals are considered to be \textit{unordered} sets of atoms. In this work we
provide an in-depth study of the problem by defining two different
generalization relations. We formulate a characterization of what constitutes a
most specific generalization in both settings. While these generalizations can
be computed in polynomial time, we show that when the number of variables in
the generalization needs to be minimized, the problem becomes NP-hard. We
subsequently revisit an abstraction of the largest common generalization when
anti-unification is based on injective variable renamings, and prove that it
can be computed in polynomially bounded time.
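For intuition, here is a minimal Python sketch of classical first-order anti-unification (Plotkin's least general generalization) on terms; the paper's setting, where goals are unordered sets of atoms, is precisely where the problem stops being this easy.

    import itertools

    _fresh = itertools.count()

    def lgg(s, t, table=None):
        """Least general generalization of two first-order terms, encoded
        as tuples ('functor', arg1, ...) or strings (constants/variables).
        Each distinct clashing pair maps to one shared fresh variable,
        which is what makes the result most specific."""
        if table is None:
            table = {}
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            return (s[0],) + tuple(lgg(a, b, table)
                                   for a, b in zip(s[1:], t[1:]))
        if s == t:
            return s
        if (s, t) not in table:           # same clash -> same variable
            table[(s, t)] = "X%d" % next(_fresh)
        return table[(s, t)]

    # lgg(f(a, g(a)), f(b, g(b))) = f(X0, g(X0))
    print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))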
|
Wavelet scattering networks, which are convolutional neural networks (CNNs)
with fixed filters and weights, are promising tools for image analysis.
Imposing symmetry on image statistics can improve human interpretability, aid
in generalization, and provide dimension reduction. In this work, we introduce
a fast-to-compute, translationally invariant and rotationally equivariant
wavelet scattering network (EqWS) and filter bank of wavelets (triglets). We
demonstrate the interpretability and quantify the invariance/equivariance of
the coefficients, briefly commenting on difficulties with implementing scale
equivariance. On MNIST, we show that training on a rotationally invariant
reduction of the coefficients maintains rotational invariance when generalized
to test data and visualize residual symmetry breaking terms. Rotation
equivariance is leveraged to estimate the rotation angle of digits and
reconstruct the full rotation dependence of each coefficient from a single
angle. We benchmark EqWS with linear classifiers on EMNIST and CIFAR-10/100,
introducing a new second-order, cross-color channel coupling for the color
images. We conclude by comparing the performance of an isotropic reduction of
the scattering coefficients and RWST, a previous coefficient reduction, on an
isotropic classification of magnetohydrodynamic simulations with astrophysical
relevance.
|
From spiking activity in neuronal networks to force chains in granular
materials, the behavior of many real-world systems depends on a network of both
strong and weak interactions. These interactions give rise to complex and
higher-order system behaviors, and are encoded using data as the network's
edges. However, distinguishing between true weak edges and low-weight edges
caused by noise remains a challenge. We address this problem by examining the
higher-order structure of noisy, weak edges added to model networks. We find
that the structure of low-weight, noisy edges varies according to the topology
of the model network to which it is added. By investigating this variation more
closely, we see that at least three qualitative classes of noise structure
emerge. Furthermore, we observe that the structure of noisy edges contains
enough model-specific information to classify the model networks with moderate
accuracy. Finally, we offer network generation rules that can drive different
types of structure in added noisy edges. Our results demonstrate that noise
does not present as a monolithic nuisance, but rather as a nuanced,
topology-dependent, and even useful entity in characterizing higher-order
network interactions. Hence, we provide an alternate approach to noise
management by embracing its role in such interactions.
|
We argue that long optical storage times are required to establish
entanglement at high rates over large distances using memory-based quantum
repeaters. Triggered by this conclusion, we investigate the $^3$H$_6$
$\leftrightarrow$ $^3$H$_4$ transition at 795.325 nm of Tm:Y$_3$Ga$_5$O$_{12}$
(Tm:YGG). Most importantly, we show that the optical coherence time can reach
1.1 ms, and, using laser pulses, we demonstrate optical storage based on the
atomic frequency comb protocol up to 100 $\mu$s as well as a memory decay time
T$_M$ of 13.1 $\mu$s. Possibilities of how to narrow the gap between the
measured value of T$_M$ and its maximum of 275 $\mu$s are discussed. In
addition, we demonstrate quantum state storage using members of non-classical
photon pairs. Our results show the potential of Tm:YGG for creating quantum
memories with long optical storage times, and open the path to building
extended quantum networks.
|
The asteroid exploration project "Hayabusa2" has successfully returned
samples from the asteroid (162173) Ryugu. In this study, we measured the linear
polarization degrees of Ryugu using four ground-based telescopes from 2020
September 27 to December 25, covering a wide phase angle (Sun-target-observer
angle) range from 28$^\circ$ to 104$^\circ$. We found that the polarization
degree of Ryugu reached 53$\%$ around a phase angle of 100$^\circ$, the highest
value among all asteroids and comets thus far reported. The high polarization
degree of Ryugu can be attributed to the scattering properties of its surface
layers, in particular the relatively small contribution of multiply-scattered
light. Our polarimetric results indicate that Ryugu's surface is covered with
large grains. On the basis of a comparison with polarimetric measurements of
pulverized meteorites, we can infer the presence of submillimeter-sized grains
on the surface layer of Ryugu. We also conjecture that this size boundary
represents the grains that compose the aggregate. It is likely that a very
brittle structure has been lost in the recovered samples, although they may
hold a record of its evolution. Our data will be invaluable for future
experiments aimed at reproducing the surface structure of Ryugu.
|
Pencil x-ray beam imaging provides superior spatial resolution compared to
other imaging geometries, such as sheet beam and cone beam geometries, due to
the illumination of a line instead of an area or volume. However, the pencil
beam geometry suffers from long scan times, and concerns over dose discourage
laboratory use of pencil beam x-ray sources. Molecular imaging techniques like
XLCT imaging benefit most from pencil beam imaging to accurately localize the
distribution of contrast agents embedded in a small animal object. To
investigate the dose deposited by pencil beam x-ray imaging in XLCT, dose
estimations from one angular projection scan by three different x-ray source
energies were performed on a small animal object composed of water, bone, and
blood with a Monte Carlo simulation platform, GATE (Geant4 Application for
Tomographic Emission). Our results indicate that, with an adequate x-ray
benchtop source with high brilliance and quasi-monochromatic properties like
the Sigray source, the dose concerns can be reduced. With the Sigray source,
the bone marrow was estimated to have a radiation dose of 30 mGy for a typical
XLCT imaging, in which we have 6 angular projections, 100 micrometer scan step
size, and 10^6 x-ray photons per linear scan.
|
As of today, the fifth generation (5G) mobile communication system has been
rolled out in many countries and the number of 5G subscribers already reaches a
very large scale. It is time for academia and industry to shift their attention
towards the next generation. At this crossroad, an overview of the current
state of the art and a vision of future communications are definitely of
interest. This article thus aims to provide a comprehensive survey to draw a
picture of the sixth generation (6G) system in terms of drivers, use cases,
usage scenarios, requirements, key performance indicators (KPIs), architecture,
and enabling technologies. First, we attempt to answer the question of "Is
there any need for 6G?" by shedding light on its key driving factors, in which
we predict the explosive growth of mobile traffic until 2030, and envision
potential use cases and usage scenarios. Second, the technical requirements of
6G are discussed and compared with those of 5G with respect to a set of KPIs in
a quantitative manner. Third, the state-of-the-art 6G research efforts and
activities from representative institutions and countries are summarized, and a
tentative roadmap of definition, specification, standardization, and regulation
is projected. Then, we identify a dozen potential technologies and introduce
their principles, advantages, challenges, and open research issues. Finally,
the conclusions are drawn to paint a picture of "What 6G may look like?". This
survey is intended to serve as an enlightening guideline to spur interests and
further investigations for subsequent research and development of 6G
communications systems.
|
Let $A$ and $B$ be $C^*$-algebras, $A$ separable and $I$ an ideal in $B$. We
show that for any completely positive contractive linear map $\psi\colon A\to
B/I$ there is a continuous family $\Theta_t\colon A\to B$, for $t\in
[1,\infty)$, of lifts of $\psi$ that are asymptotically linear, asymptotically
completely positive and asymptotically contractive. If $\psi$ is orthogonality
preserving, then $\Theta_t$ can be chosen to have this property asymptotically.
If $A$ and $B$ carry continuous actions of a second countable locally compact
group $G$ such that $I$ is $G$-invariant and $\psi$ is equivariant, we show
that the family $\Theta_t$ can be chosen to be asymptotically equivariant. If a
linear completely positive lift for $\psi$ exists, we can arrange that
$\Theta_t$ is linear and completely positive for all $t\in [1,\infty)$. In the
equivariant setting, if $A$, $B$ and $\psi$ are unital, we show that
asymptotically linear unital lifts are only guaranteed to exist if $G$ is
amenable. This leads to a new characterization of amenability in terms of the
existence of asymptotically equivariant unital sections for quotient maps.
|
Batch normalization is a key component of most image classification models,
but it has many undesirable properties stemming from its dependence on the
batch size and interactions between examples. Although recent work has
succeeded in training deep ResNets without normalization layers, these models
do not match the test accuracies of the best batch-normalized networks, and are
often unstable for large learning rates or strong data augmentations. In this
work, we develop an adaptive gradient clipping technique which overcomes these
instabilities, and design a significantly improved class of Normalizer-Free
ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on
ImageNet while being up to 8.7x faster to train, and our largest models attain
a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free
models attain significantly better performance than their batch-normalized
counterparts when finetuning on ImageNet after large-scale pre-training on a
dataset of 300 million labeled images, with our best models obtaining an
accuracy of 89.2%. Our code is available at
https://github.com/deepmind/deepmind-research/tree/master/nfnets
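A minimal numpy sketch of adaptive gradient clipping in the spirit described above, written per-tensor for brevity (the paper's version operates unit-wise):

    import numpy as np

    def adaptive_gradient_clip(grad, weight, clip=0.01, eps=1e-3):
        """Rescale the gradient whenever its norm exceeds `clip` times the
        norm of the corresponding weights, so the relative update size
        stays bounded even without batch normalization."""
        w_norm = max(np.linalg.norm(weight), eps)  # eps guards zero-init weights
        g_norm = np.linalg.norm(grad)
        if g_norm > clip * w_norm:
            grad = grad * (clip * w_norm / g_norm)
        return grad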
|
In this paper, we present a general numerical platform for designing
accurate, efficient, and stable numerical algorithms for incompressible
hydrodynamic models that obey the thermodynamical laws. The obtained numerical
schemes are automatically linear in time and decouple the hydrodynamic
variable from the other state variables, such that only small-size linear
problems need to be solved at each time-marching step. Furthermore, if the
classical velocity projection method is utilized, the velocity field and
pressure field can be decoupled. In the end, only a few elliptic-type equations
need to be solved in each time step. This strategy is made possible through a
sequence of
model reformulations by fully exploring the models' thermodynamic structures.
The generalized Onsager principle directly guides these reformulation
procedures. In the reformulated but equivalent models, the reversible and
irreversible components can be identified, guiding the numerical platform to
decouple the reversible and irreversible dynamics. This eventually leads to
decoupled numerical algorithms, given that the coupling terms only involve
irreversible dynamics. To further demonstrate the numerical platform's power,
we apply it to several specific incompressible hydrodynamic models. The energy
stability of the proposed numerical schemes is shown in detail. The
second-order accuracy in time is verified numerically through time step
refinement tests. Several benchmark numerical examples are presented to further
illustrate the proposed numerical framework's accuracy, stability, and
efficiency.
|
We place observational constraints on two models within a class of scenarios
featuring an elastic interaction between dark energy and dark matter that only
produces momentum exchange up to first order in cosmological perturbations. The
first one corresponds to a perfect-fluid model of the dark components with an
explicit interacting Lagrangian, where dark energy acts as a dark radiation at
early times and behaves as a cosmological constant at late times. The second
one is a dynamical dark energy model with a dark radiation component, where the
momentum exchange covariantly modifies the conservation equations in the dark
sector. Using Cosmic Microwave Background (CMB), Baryon Acoustic Oscillations
(BAO), and Supernovae type Ia (SnIa) data, we show that the Hubble tension can
be alleviated due to the additional radiation, while the $\sigma_8$ tension
present in the $\Lambda$-Cold-Dark-Matter model can be eased by the weaker
galaxy clustering that occurs in these interacting models. Furthermore, we show
that, while CMB+BAO+SnIa data put only upper bounds on the coupling strength,
adding low-redshift data in the form of a constraint on the parameter $S_8$
strongly favours nonvanishing values of the interaction parameters. Our
findings are in line with other results in the literature that could signal a
universal trend of the momentum exchange among the dark sector.
|
We study analytically how noninteracting weakly active particles, for which
passive Brownian diffusion cannot be neglected and activity can be treated
perturbatively, distribute and behave near boundaries in various geometries. In
particular, we develop a perturbative approach for the model of active
particles driven by an exponentially correlated random force (active
Ornstein-Uhlenbeck particles). This approach involves a relatively simple
expansion of the distribution in powers of the P\'{e}clet number and in terms
of Hermite polynomials. We use this approach to cleanly formulate boundary
conditions, which allows us to study weakly active particles in several
geometries: confinement by a single wall or between two walls in 1D,
confinement in a circular or wedge-shaped region in 2D, motion near a
corrugated boundary, and finally absorption onto a sphere. We consider how
quantities such as the density, pressure, and flow of the active particles
change as we gradually increase the activity away from a purely passive system.
These results for the limit of weak activity help us gain insight into how
active particles behave in the presence of various types of boundaries.
|
Quantum Darwinism proposes that the proliferation of redundant information
plays a major role in the emergence of objectivity out of the quantum world. Is
this kind of objectivity necessarily classical? We show that if one takes
Spekkens' notion of noncontextuality as the notion of classicality and the
approach of Brand\~{a}o, Piani and Horodecki to quantum Darwinism, the answer
to the above question is `yes', if the environment encodes sufficiently well
the proliferated information. Moreover, we propose a threshold on this
encoding, above which one can unambiguously say that classical objectivity has
emerged under quantum Darwinism.
|
Some audio declipping methods produce waveforms that do not fully respect the
physical process of clipping, which is why we refer to them as inconsistent.
This letter reports the perceptual effect of forcing the solutions of
inconsistent methods to be consistent by postprocessing. We first propose a
simple sample replacement method, then we identify its main weaknesses and
propose an improved variant. The experiments show that the vast majority of
inconsistent declipping methods significantly benefit from the proposed
approach in terms of objective perceptual metrics. In particular, we show that
the SS PEW method based on social sparsity combined with the proposed method
performs comparably to the top methods from the consistent class, but at a
computational cost one order of magnitude lower.
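A minimal numpy sketch of the sample-replacement idea: copy the reliable (unclipped) samples from the observation and push clipped samples back outside the clipping level, so the output respects the physical clipping model. This illustrates the concept only, not the improved variant proposed in the letter.

    import numpy as np

    def enforce_consistency(estimate, observed, clip_level):
        """Postprocess a declipped estimate to be consistent with clipping."""
        y = estimate.copy()
        reliable = np.abs(observed) < clip_level
        y[reliable] = observed[reliable]   # keep reliable samples as observed
        high = observed >= clip_level      # samples clipped from above...
        low = observed <= -clip_level      # ...and from below
        y[high] = np.maximum(y[high], clip_level)
        y[low] = np.minimum(y[low], -clip_level)
        return y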
|
The strong constraints from the Fermi-LAT data on the isotropic gamma-ray
background suggest that the neutrinos observed by IceCube might come
from sources that are hidden to gamma-ray observations. A possibility recently
discussed in the literature is that neutrinos may come from jets of collapsing
massive stars which fail to break out of the stellar envelope, and for this
reason they are known as choked jets, or choked Gamma-Ray Bursts (GRBs). In
this paper, we estimate the neutrino flux and spectrum expected from these
sources, focusing on Type II SNe. We perform detailed calculations of pγ
interactions, accounting for all the neutrino production channels and
scattering angles. We provide predictions of expected event rates for operating
neutrino telescopes, such as ANTARES and IceCube, as well as for the future
generation telescope KM3NeT. We find that for GRB energies channeled into
protons spanning between 10^51 - 10^53 erg, choked GRBs may substantially
contribute to the observed astrophysical neutrino flux, if their local rate is
80 - 1 Gpc^-3 yr^-1, respectively.
|
It is becoming increasingly common to study complex associations between
multiple phenotypes and high-dimensional genomic features in biomedicine.
However, this requires flexible and efficient joint statistical models when
there are correlations among the multiple response variables and among the
high-dimensional predictors. We propose a structured multivariate Bayesian
variable selection model to identify sparse predictors associated with multiple
correlated response variables. The approach makes use of known structure
information between the multiple response variables and high-dimensional
predictors via a Markov random field (MRF) prior for the latent indicator
variables of the coefficient matrix of a sparse seemingly unrelated regressions
(SSUR). The structure information included in the MRF prior can improve the
model performance (i.e., variable selection and response prediction) compared
to other common priors. In addition, we employ random effects to capture
heterogeneity of grouped samples. The proposed approach is validated by
simulation studies and applied to a pharmacogenomic study which includes
pharmacological profiling and multi-omics data (i.e., gene expression, copy
number variation and mutation) from in vitro anti-cancer drug sensitivity
screening.
|
This paper proposes a framework to investigate the value of sharing
privacy-protected smart meter data between domestic consumers and load serving
entities. The framework consists of a discounted differential privacy model to
ensure individuals cannot be identified from aggregated data, a ANN-based
short-term load forecasting to quantify the impact of data availability and
privacy protection on the forecasting error and an optimal procurement problem
in day-ahead and balancing markets to assess the market value of the
privacy-utility trade-off. The framework demonstrates that when the load
profile of a consumer group differs from the system average, which is
quantified using the Kullback-Leibler divergence, there is significant value in
sharing smart meter data while retaining individual consumer privacy.
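To fix ideas, here is a minimal numpy sketch of a Laplace-mechanism aggregate for meter readings; the paper's discounted differential privacy model is more elaborate, and the parameter names here are illustrative.

    import numpy as np

    def private_aggregate(loads, epsilon, max_load):
        """epsilon-differentially-private sum of individual meter readings.
        The sensitivity of the sum is one household's maximum load."""
        rng = np.random.default_rng()
        noise = rng.laplace(loc=0.0, scale=max_load / epsilon)
        return loads.sum() + noise

    loads = np.array([0.8, 1.3, 0.4, 2.1])   # kW, illustrative readings
    print(private_aggregate(loads, epsilon=1.0, max_load=3.0))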
|
Rare-earth (R) nickelates (such as perovskite RNiO3, trilayer R4Ni3O10, and
infinite layer RNiO2) have attracted tremendous interest very recently.
However, unlike widely studied RNiO3 and RNiO2 films, the synthesis of trilayer
nickelate R4Ni3O10 films is rarely reported. Here, single-crystalline
(Nd0.8Sr0.2)4Ni3O10 epitaxial films were coherently grown on SrTiO3 substrates
by high-pressure magnetron sputtering. The crystal and electronic structures of
(Nd0.8Sr0.2)4Ni3O10 films were characterized by high-resolution X-ray
diffraction and X-ray photoemission spectroscopy, respectively. The electrical
transport measurements reveal a metal-insulator transition near 82 K and
negative magnetoresistance in (Nd0.8Sr0.2)4Ni3O10 films. Our work provides a
novel route to synthesize high-quality trilayer nickelate R4Ni3O10 films.
|
This paper presents a new mathematical signal transform that is especially
suitable for decoding information related to non-rigid signal displacements. We
provide a measure theoretic framework to extend the existing Cumulative
Distribution Transform [ACHA 45 (2018), no. 3, 616-641] to arbitrary (signed)
signals on $\overline{\mathbb{R}}$. We present both forward (analysis) and
inverse (synthesis) formulas for the transform, and describe several of its
properties including translation, scaling, convexity, linear separability and
others. Finally, we describe a metric in transform space, and demonstrate the
application of the transform in classifying (detecting) signals under random
displacements.
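A minimal numpy sketch of the forward transform for a positive, unit-mass 1-D signal against a uniform reference, assuming the standard composition f = F_s^{-1}(F_r(x)); the paper's extension to signed signals and its normalization details are omitted.

    import numpy as np

    def cdt(signal, grid):
        """Cumulative Distribution Transform of a nonnegative signal on
        `grid`, with a uniform reference density on the same interval."""
        dF = 0.5 * (signal[1:] + signal[:-1]) * np.diff(grid)
        F = np.concatenate(([0.0], np.cumsum(dF)))
        F /= F[-1]                                 # CDF of the signal
        ref_cdf = np.linspace(0.0, 1.0, len(grid)) # uniform reference CDF
        return np.interp(ref_cdf, F, grid)         # f = F_s^{-1}(F_r(x))

    grid = np.linspace(-4.0, 4.0, 512)
    print(cdt(np.exp(-(grid - 1.0) ** 2), grid)[:5])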
|
This study is devoted to the implications of scale-dependent gravity in
Cosmology. Redshift-space distortion data indicate that there is a tension
between $\Lambda$CDM and available observations as far as the value of the rms
density fluctuation, $\sigma_8$, is concerned. It has been pointed out that
this tension may be alleviated in alternative theories in which gravity is
weaker at red-shift $z \sim 1$. We study the evolution of density perturbations
for non-relativistic matter on top of a spatially flat FLRW Universe, and we
compute the combination $A=f \sigma_8$ in the framework of scale-dependent
gravity, where both Newton's constant and the cosmological constant are allowed
to vary with time. Upon comparison between available observational data
(supernovae data as well as redshift-space distortion data) and theoretical
predictions of the model, we determine the numerical value of $\sigma_8$ that
best fits the data.
|
Active learning aims to achieve greater accuracy with less training data by
selecting the most useful data samples from which it learns. Single-criterion
based methods (i.e., informativeness and representativeness based methods) are
simple and efficient; however, they lack adaptability to different real-world
scenarios. In this paper, we introduce a multiple-criteria based active
learning algorithm, which incorporates three complementary criteria, i.e.,
informativeness, representativeness and diversity, to make appropriate
selections in the active learning rounds under different data types. We
consider the selection process as a Determinantal Point Process, which achieves
a good balance among these criteria. We refine the query selection strategy by
both selecting the hardest unlabeled data sample and biasing towards the
classifiers that are more suitable for the current data distribution. In
addition, we also consider the dependencies and relationships between these
data points in data selection by means of centroid-based clustering approaches.
Through evaluations
on synthetic and real-world datasets, we show that our method performs
significantly better and is more stable than other multiple-criteria based AL
algorithms.
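For illustration, here is a minimal greedy sketch of Determinantal Point Process selection: the log-determinant of the kernel submatrix grows with items that are individually strong yet mutually diverse. The kernel construction is a generic placeholder, not the paper's multi-criteria kernel.

    import numpy as np

    def greedy_dpp(L, k):
        """Greedily select k items maximizing log det L[S, S]."""
        selected = []
        for _ in range(k):
            best, best_gain = None, -np.inf
            for i in range(len(L)):
                if i in selected:
                    continue
                idx = selected + [i]
                _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
                if logdet > best_gain:
                    best, best_gain = i, logdet
            selected.append(best)
        return selected

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 8))          # item feature vectors
    L = X @ X.T + 1e-6 * np.eye(50)       # PSD similarity kernel
    print(greedy_dpp(L, k=5))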
|
Studies of the production of heavy-flavour baryons are of prominent
importance to investigate hadronization mechanisms at the LHC, in particular
through the study of the evolution of the baryon-over-meson production ratio.
Measurements performed in pp and p--Pb collisions at the LHC have revealed
unexpected features, qualitatively similar to what was observed in heavy-ion
collisions and, in the charm sector, not in line with the expectations based on
previous measurements from $\rm e^+e^-$ colliders and in ep collisions. These
results suggest that charmed baryon formation might not be universal and that
the baryon-over-meson ratio depends on the collision system or multiplicity.
A review of ALICE measurements of charmed baryons, including $\rm
\Lambda_c^+/D^0$ as a function of charged-particle multiplicity in pp, p--Pb
and Pb--Pb collisions, $\rm \Sigma_c^{0, +, ++}/D^0$ and $\rm \Xi_c^{0, +}/D^0$
as a function of $p_{\rm T}$ in pp collisions and $\rm
\Gamma(\Xi_c^0\rightarrow\Xi^-e^+\nu_e)/\Gamma(\Xi_c^0\rightarrow\Xi^-\pi^+)$,
will be presented. Comparisons to phenomenological models will also be
discussed. Emphasis will be given to the discussion of the impact of these
studies on the understanding of hadronization processes.
|
In this paper, the interaction between two immiscible fluids with a finite
mobility ratio is investigated numerically within a Hele-Shaw cell. Fingering
instabilities initiated at the interface between a low viscosity fluid and a
high viscosity fluid are analysed at varying capillary numbers and mobility
ratios using a finite mobility ratio model. The present work is motivated by
the possible development of interfacial instabilities that can occur in porous
media during the process of CO$_2$ sequestration, but does not pretend to
analyse this complex problem. Instead, we present a detailed study of the
analogous problem occurring in a Hele-Shaw cell, giving indications of possible
plume patterns that can develop during the CO$_2$ injection. The numerical
scheme utilises a boundary element method in which the normal velocity at the
interface of the two fluids is directly computed through the evaluation of a
hypersingular integral. The boundary integral equation is solved using a
Neumann convergent series with cubic B-Spline boundary discretisation,
exhibiting 6th order spatial convergence. The convergent series allows the long
term non-linear dynamics of growing viscous fingers to be explored accurately
and efficiently. Simulations in low mobility ratio regimes reveal large
differences in fingering patterns compared to those predicted by previous high
mobility ratio models. Most significantly, classical finger shielding between
competing fingers is inhibited. Secondary fingers can possess significant
velocity, allowing greater interaction with primary fingers compared to high
mobility ratio flows. Eventually, this interaction can lead to base thinning
and the breaking of fingers into separate bubbles.
|
HTCondor is a major workload management system used in distributed high
throughput computing (dHTC) environments, e.g., the Open Science Grid. One of
the distinguishing features of HTCondor is the native support for data
movement, allowing it to operate without a shared filesystem. Coupling data
handling and compute scheduling is both convenient for users and allows for
significant infrastructure flexibility but does introduce some limitations. The
default HTCondor data transfer mechanism routes both the input and output data
through the submission node, making it a potential bottleneck. In this document
we show that by using a node equipped with a 100 Gbps network interface (NIC)
HTCondor can serve data at up to 90 Gbps, which is sufficient for most current
use cases, as it would saturate the border network links of most research
universities at the time of writing.
|
A stochastic process with movement, return, and rest phases is considered in
this paper. For the movement phase, the particles move following the dynamics
of Gaussian process or ballistic type of L\'evy walk, and the time of each
movement is random. For the return phase, the particles will move back to the
origin with a constant velocity or acceleration or under the action of a
harmonic force after each movement, so that this phase can also be treated as a
non-instantaneous resetting. After each return, a rest with a random time at
the origin follows. The asymptotic behaviors of the mean squared displacements
with different kinds of movement dynamics, random resting time, and returning
are discussed. The stationary distributions are also considered when the
process is localized. In addition, the mean first passage time is considered
when the dynamics of the movement phase is Brownian motion.
|
We reconsider the widely held view that the Mannheim--Kazanas (MK) vacuum
solution for a static, spherically-symmetric system in conformal gravity (CG)
predicts flat rotation curves, such as those observed in galaxies, without the
need for dark matter. This prediction assumes that test particles have fixed
rest mass and follow timelike geodesics in the MK metric in the vacuum region
exterior to a spherically-symmetric representation of the galactic mass
distribution. Such geodesics are not conformally invariant, however, which
leads to an apparent discrepancy with the analogous calculation performed in
the conformally-equivalent Schwarzschild-de-Sitter (SdS) metric, where the
latter does not predict flat rotation curves. This difference arises since the
mass of particles in CG must instead be generated dynamically through
interaction with a scalar field. The energy-momentum of this required scalar
field means that, in a general conformal frame from the equivalence class of CG
solutions outside a static, spherically-symmetric matter distribution, the
spacetime is not given by the MK vacuum solution. A unique frame does exist,
however, for which the metric retains the MK form, since the scalar field
energy-momentum vanishes despite the field being non-zero and radially
dependent. Nonetheless, we show that in both this MK frame and the Einstein
frame, in which the scalar field is constant, massive particles follow timelike
geodesics of the SdS metric, thereby resolving the apparent frame dependence of
physical predictions and unambiguously yielding rotation curves with no flat
region. We also comment on how our analysis resolves the long-standing
uncertainty regarding gravitational lensing in the MK metric. (Abridged)
|
Traumatic Brain Injury (TBI) is a common cause of death and disability.
However, existing tools for TBI diagnosis are either subjective or require
extensive clinical setup and expertise. The increasing affordability and
reduction in size of relatively high-performance computing systems combined
with promising results from TBI related machine learning research make it
possible to create compact and portable systems for early detection of TBI.
This work describes a Raspberry Pi based portable, real-time data acquisition,
and automated processing system that uses machine learning to efficiently
identify TBI and automatically score sleep stages from a single-channel
Electroen-cephalogram (EEG) signal. We discuss the design, implementation, and
verification of the system that can digitize EEG signal using an Analog to
Digital Converter (ADC) and perform real-time signal classification to detect
the presence of mild TBI (mTBI). We utilize Convolutional Neural Networks (CNN)
and XGBoost based predictive models to evaluate the performance and demonstrate
the versatility of the system to operate with multiple types of predictive
models. We achieve a peak classification accuracy of more than 90% with a
classification time of less than 1 s across 16 s - 64 s epochs for TBI vs
control conditions. This work can enable development of systems suitable for
field use without requiring specialized medical equipment for early TBI
detection applications and TBI research. Further, this work opens avenues to
implement connected, real-time TBI related health and wellness monitoring
systems.
|
Active metric learning is the problem of incrementally selecting high-utility
batches of training data (typically, ordered triplets) to annotate, in order to
progressively improve a learned model of a metric over some input domain as
rapidly as possible. Standard approaches, which independently assess the
informativeness of each triplet in a batch, are susceptible to highly
correlated batches with many redundant triplets and hence low overall utility.
While a recent work \cite{kumari2020batch} proposes batch-decorrelation
strategies for metric learning, it relies on ad hoc heuristics to estimate the
correlation between two triplets at a time. We present a novel batch active
metric learning method that leverages the Maximum Entropy Principle to learn
the least biased estimate of triplet distribution for a given set of prior
constraints. To avoid redundancy between triplets, our method collectively
selects batches with maximum joint entropy, which simultaneously captures both
informativeness and diversity. We take advantage of the submodularity of the
joint entropy function to construct a tractable solution using an efficient
greedy algorithm based on Gram-Schmidt orthogonalization that is provably
$\left( 1 - \frac{1}{e} \right)$-optimal. Our approach is the first batch
active metric learning method to define a unified score that balances
informativeness and diversity for an entire batch of triplets. Experiments with
several real-world datasets demonstrate that our algorithm is robust,
generalizes well to different applications and input modalities, and
consistently outperforms the state-of-the-art.
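As a toy illustration of the greedy step (not the authors' implementation): if
the joint entropy of a batch is modeled by the log-determinant of the Gram
matrix of per-triplet feature vectors (a Gaussian surrogate), the greedy rule
reduces to repeatedly picking the candidate with the largest Gram-Schmidt
residual against the span of the already-selected set. A minimal sketch with
hypothetical toy embeddings:

```python
import numpy as np

def greedy_maxent_batch(candidates, k):
    """Greedily pick k rows of `candidates` (one feature vector per triplet)
    approximately maximizing the log-det / joint entropy of the selection,
    via Gram-Schmidt residual norms."""
    X = candidates.astype(float)
    selected, basis = [], []          # chosen indices; orthonormal basis
    for _ in range(min(k, len(X))):
        R = X.copy()                  # residuals after removing chosen span
        for q in basis:
            R -= np.outer(R @ q, q)
        scores = np.einsum('ij,ij->i', R, R)
        scores[selected] = -np.inf    # never pick a triplet twice
        i = int(np.argmax(scores))
        if scores[i] <= 1e-12:        # remaining candidates are redundant
            break
        selected.append(i)
        basis.append(R[i] / np.linalg.norm(R[i]))
    return selected

rng = np.random.default_rng(0)
pool = rng.normal(size=(500, 16))     # stand-in triplet embeddings
print(greedy_maxent_batch(pool, k=10))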
|
The quality of the semiconductor-barrier interface plays a pivotal role in
the demonstration of high quality reproducible quantum dots for quantum
information processing. In this work, we have measured SiMOSFET Hall bars on
undoped Si substrates in order to investigate the quality of the devices
fabricated in a full CMOS process. We report a record mobility of 17,500
cm$^2$/Vs with a sub-10 nm oxide thickness, indicating a high-quality
interface suitable for future qubit applications. We also study the influence
of gate materials on
the mobilities and discuss the underlying mechanisms, giving insight into
further material optimization for large scale quantum processors.
|
Recently, the authors have proposed and analyzed isogeometric tearing and
interconnecting (IETI-DP) solvers for multi-patch discretizations in
Isogeometric Analysis. Conforming and discontinuous Galerkin settings have been
considered. In both cases, we have assumed that the interfaces between the
patches consist of whole edges. In this paper, we present a generalization that
allows us to drop this requirement. This means that the patches can meet in
T-junctions, which increases the flexibility of the geometric model
significantly. We use vertex-based primal degrees of freedom. For the
T-junctions, we propose to follow the idea of "fat vertices".
|
In a series of publications, Kocherginsky and Gruebele presented a systematic
framework for chemical transport and thermodiffusion to predict the Soret
coefficients from thermodynamics. A macroscopic derivation of the Onsager
reciprocal relations without recourse to microscopic fluctuations or equations
of motion was also discussed. Their important contributions are reviewed, and
some points of confusion are addressed.
|
A planetary system consists of a host star and one or more planets, arranged
into a particular configuration. Here, we consider what information belongs to
the configuration, or ordering, of 4286 Kepler planets in their 3277 planetary
systems. First, we train a neural network model to predict the radius and
period of a planet based on the properties of its host star and the radii and
periods of its neighbors. The mean absolute error (MAE) of the predictions of
the
trained model is a factor of 2.1 better than the MAE of the predictions of a
naive model which draws randomly from dynamically allowable periods and radii.
Second, we adapt a model used for unsupervised part-of-speech tagging in
computational linguistics to investigate whether planets or planetary systems
fall into natural categories with physically interpretable "grammatical rules."
The model identifies two robust groups of planetary systems: (1) compact
multi-planet systems and (2) systems around giant stars ($\log{g} \lesssim
4.0$), although the latter group is strongly sculpted by the selection bias of
the transit method. These results reinforce the idea that planetary systems are
not random sequences -- instead, as a population, they contain predictable
patterns that can provide insight into the formation and evolution of planetary
systems.
|
The unprecedented wide bandgap tunability (~1 eV) of
Al$_x$In$_{1-x}$As$_y$Sb$_{1-y}$ lattice-matched to GaSb enables the
fabrication of photodetectors over a wide range from the near-infrared to the
mid-infrared. In this paper, the valence band offsets in
Al$_x$In$_{1-x}$As$_y$Sb$_{1-y}$ with
different Al compositions are analyzed by tight-binding calculations and X-ray
photoelectron spectroscopy (XPS) measurements. The observed weak variation in
valence band offsets is consistent with the lack of any minigaps in the valence
band, compared to the conduction band.
|
In this paper, we give upper and lower bounds for the spectral norms of
r-circulant matrices with the generalized bi-periodic Fibonacci numbers.
Moreover, we investigate the eigenvalues and determinants of these matrices.
|
The looping pendulum is a simple physical system consisting of two masses
connected by a string that passes over a rod. We derive equations of motion for
the looping pendulum using Newtonian mechanics, and show that these equations
can be solved numerically to give a good description of the system's dynamics.
The numerical solution captures complex aspects of the looping pendulum's
behavior, and is in good agreement with the experimental results.
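A minimal sketch of one common frictionless, point-rod model of the looping
pendulum (massless inextensible string, no wrapping around the rod; the masses
M and m are assumed values, and the paper's actual equations may include rod
radius and friction), integrated with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81
M, m = 0.200, 0.010        # heavy and light masses in kg (assumed values)

def rhs(t, y):
    # y = [s, s_dot, theta, theta_dot]; s is the string length on the light
    # side, theta the light mass's angle from the downward vertical.
    s, sd, th, thd = y
    sdd = (m * s * thd**2 + m * g * np.cos(th) - M * g) / (M + m)
    thdd = -(g * np.sin(th) + 2.0 * sd * thd) / s
    return [sd, sdd, th, thdd]

def near_rod(t, y):        # stop before the light side shrinks to zero
    return y[0] - 1e-3
near_rod.terminal = True

sol = solve_ivp(rhs, (0.0, 1.5), [0.30, 0.0, np.pi / 2, 0.0],
                events=near_rod, max_step=1e-3)
print("stopped at t =", sol.t[-1], "s; light-side length =", sol.y[0, -1])
```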
|
We consider Markov Decision Processes (MDPs) with deterministic transitions
and study the problem of regret minimization, which is central to the analysis
and design of optimal learning algorithms. We present logarithmic
problem-specific regret lower bounds that explicitly depend on the system
parameters (in contrast to previous minimax approaches) and thus truly quantify
the fundamental limit of performance achievable by any learning algorithm.
Deterministic MDPs can be interpreted as graphs and analyzed in terms of their
cycles, a fact which we leverage in order to identify a class of deterministic
MDPs whose regret lower bound can be determined numerically. We further
exemplify this result on a deterministic line search problem, and a
deterministic MDP with state-dependent rewards, whose regret lower bounds we
can state explicitly. These bounds share similarities with the known
problem-specific bound of the multi-armed bandit problem and suggest that
navigation on a deterministic MDP need not have an effect on the performance of
a learning algorithm.
|
All the laws of particle physics are time-reversible. A time arrow emerges
only when ensembles of classical particles are treated probabilistically,
outside of physics laws, and the second law of thermodynamics is introduced. In
quantum physics, despite its intrinsically probabilistic nature, no mechanism
for a time arrow has been proposed. We propose an entropy for quantum physics,
which may conduce to the emergence of a time arrow. The proposed entropy is a
measure of randomness over the degrees of freedom of a quantum state. It is
dimensionless, it is a relativistic scalar, it is invariant under coordinate
transformation of position and momentum that maintain conjugate properties, and
under CPT transformations; and its minimum is positive due to the uncertainty
principle.
|
In many real-world games, such as traders repeatedly bargaining with
customers, it is very hard for a single AI trader to make good deals with
various customers in a few turns, since customers may adopt different
strategies even the strategies they choose are quite simple. In this paper, we
model this problem as fast adaptive learning in the finitely repeated games. We
believe that past game history plays a vital role in such a learning procedure,
and therefore we propose a novel framework (named, F3) to fuse the past and
current game history with an Opponent Action Estimator (OAE) module that uses
past game history to estimate the opponent's future behaviors. The experiments
show that the agent trained by F3 can quickly defeat opponents who adopt
unknown new strategies. The F3 trained agent obtains more rewards in a fixed
number of turns than the agents that are trained by deep reinforcement
learning. Further studies show that the OAE module in F3 contains
meta-knowledge that can even be transferred across different games.
|
A recent result by Adachi-Iyama-Reiten establishes a bijective correspondence
between support $\tau$-tilting modules and functorially finite torsion classes.
On the other hand, the techniques of gluing torsion classes along a recollement
were investigated by Liu-Vit\'oria-Yang and Ma-Huang. In this article, we show
that gluing torsion classes can be restricted to functorially finiteness
condition in the symmetric ladders of height 2. Then using the above
correspondence with support $\tau$-tilting modules, we present explicit
constructions of gluing of support $\tau$-tilting modules via symmetric ladders
of height 2. Finally, we apply the results to triangular matrix algebras to
give a more detailed version of Jasso's known reduction and to study maximal
green sequences.
|
Let $(\Sigma, g)$ be a closed, oriented, negatively curved surface, and fix
pairwise disjoint simple closed geodesics $\gamma_{\star,1}, \dots
\gamma_{\star, r}$. We give the asymptotic growth, as $L \to +\infty$, of the
number of primitive closed geodesics of length less than $L$ intersecting
$\gamma_{\star,j}$ exactly $n_j$ times, where $n_1, \dots, n_r$ are fixed
nonnegative integers. This is done by introducing a dynamical scattering
operator associated to the surface with boundary obtained by cutting $\Sigma$
along $\gamma_{\star,1}, \dots, \gamma_{\star, r}$ and by using the theory of
Pollicott-Ruelle resonances for open systems.
|
We present exact black hole solutions endowed with magnetic charge coming
from exponential and logarithmic nonlinear electrodynamics (NLED). Classically,
we analyze the null and timelike geodesics, all of which contain both the bound
and the scattering orbits. Using the effective geometry formalism, we find
that photons can have nontrivial stable (both circular and non-circular) bound
orbits. The noncircular bound orbits for the one-horizon case mostly take the
form of a precessing ellipse. For the extremal and three-horizon cases we find
many-world orbits where a photon crosses the outer horizon but bounces back
without hitting the true (or second, respectively) horizon, producing the
epicycloid and epitrochoid paths. Semiclassically, we investigate their Hawking
temperature, stability, and phase transition. The nonlinearity enables black
hole stability with a smaller radius than its RN counterpart. However, in the
very strong nonlinear regime, the thermodynamic behavior tends to be
Schwarzschild-like.
|
Autoencoders are widely used in machine learning applications, in particular
for anomaly detection. Hence, they have been introduced in high energy physics
as a promising tool for model-independent new physics searches. We scrutinize
the usage of autoencoders for unsupervised anomaly detection based on
reconstruction loss to show their capabilities, but also their limitations. As
a particle physics benchmark scenario, we study the tagging of top jet images
in a background of QCD jet images. Although we reproduce the positive results
from the literature, we show that the standard autoencoder setup cannot be
considered as a model-independent anomaly tagger by inverting the task: due to
the sparsity and the specific structure of the jet images, the autoencoder
fails to tag QCD jets if it is trained on top jets even in a semi-supervised
setup. Since the same autoencoder architecture can be a good tagger for a
specific example of an anomaly and a bad tagger for a different example, we
suggest improved performance measures for the task of model-independent anomaly
detection. We also improve the capability of the autoencoder to learn
non-trivial features of the jet images, such that it is able to achieve both
top jet tagging and the inverse task of QCD jet tagging with the same setup.
However, we want to stress that a truly model-independent and powerful
autoencoder-based unsupervised jet tagger still needs to be developed.
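For orientation, a minimal reconstruction-loss anomaly tagger of the kind
scrutinized here might look as follows (illustrative dense architecture on
flattened 40x40 images with random stand-in data; not the authors' setup). As
the abstract stresses, good performance when trained on QCD does not transfer
to the inverted task:

```python
import torch, torch.nn as nn

# Small dense autoencoder on flattened 40x40 "jet images"; the anomaly score
# is the per-image reconstruction error. Sizes are illustrative.
ae = nn.Sequential(
    nn.Linear(1600, 128), nn.ReLU(),
    nn.Linear(128, 8),                 # narrow bottleneck
    nn.Linear(8, 128), nn.ReLU(),
    nn.Linear(128, 1600),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

def train_step(batch):                 # batch: (B, 1600) background images
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(batch), batch)
    loss.backward(); opt.step()
    return loss.item()

def anomaly_score(images):             # higher = more anomalous
    with torch.no_grad():
        return ((ae(images) - images) ** 2).mean(dim=1)

x = torch.rand(256, 1600)              # stand-in for background training data
for _ in range(5):
    train_step(x)
print(anomaly_score(torch.rand(4, 1600)))
```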
|
We introduce a family of Markov Chain Monte Carlo (MCMC) methods designed to
sample from target distributions with irregular geometry using an adaptive
scheme. In cases where targets exhibit non-Gaussian behaviour, we propose that
adaptation should be regional in nature as opposed to global. Our algorithms
minimize the information projection side of the Kullback-Leibler (KL)
divergence between the proposal distribution class and the target to encourage
proposals distributed similarly to the regional geometry of the target. Unlike
traditional adaptive MCMC, this procedure rapidly adapts to the geometry of the
current position as it explores the space without the need for a large batch of
samples. We extend this approach to multimodal targets by introducing a heavily
tempered chain to enable faster mixing between regions of interest. The
divergence minimization algorithms are tested on target distributions with
multiple irregularly shaped modes and we provide results demonstrating the
effectiveness of our methods.
|
Within a structural health monitoring (SHM) framework, we propose a
simulation-based classification strategy to move towards online damage
localization. The procedure combines parametric Model Order Reduction (MOR)
techniques and Fully Convolutional Networks (FCNs) to analyze raw vibration
measurements recorded on the monitored structure. First, a dataset of possible
structural responses under varying operational conditions is built through a
physics-based model, allowing for a finite set of predefined damage scenarios.
Then, the dataset is used for the offline training of the FCN. Because of the
extremely large number of model evaluations required by the dataset
construction, MOR techniques are employed to reduce the computational burden.
The trained classifier is shown to be able to map unseen vibrational
recordings, e.g. collected on-the-fly from sensors placed on the structure, to
the actual damage state, thus providing information concerning the presence and
also the location of damage. The proposed strategy has been validated by means
of two case studies, concerning a 2D portal frame and a 3D portal frame railway
bridge; MOR techniques have allowed us to speed up the analyses by factors of
about 30 and 420, respectively. In both case studies, the trained classifier
attained an accuracy greater than 85%.
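A minimal sketch of an FCN classifier of the kind described, mapping raw
multi-sensor vibration recordings to one of a finite set of predefined damage
scenarios (layer sizes, sensor count, and class count are illustrative, not
the paper's architecture):

```python
import torch, torch.nn as nn

n_sensors, n_damage = 4, 8
fcn = nn.Sequential(
    nn.Conv1d(n_sensors, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(64, n_damage, kernel_size=1),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # global pooling -> class logits
)
x = torch.randn(16, n_sensors, 2048)         # batch of raw recordings
logits = fcn(x)                              # (16, n_damage)
print(logits.argmax(dim=1))                  # predicted damage scenario
```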
|
Polymer blends consisting of two or more polymers are important for a wide
variety of industries and processes, but the precise mechanism of their
thermomechanical behaviour is incompletely understood. To understand it
clearly, it is essential to determine the miscibility and interactions between
the components in a polymer blend and relate them to its macroscopic
thermomechanical properties. In this study, we performed experiments on SEBS
and isotactic PP blends (SP) as well as molecular dynamics simulations, aiming
to understand the role played by molecular interactions in the
thermomechanical properties. To investigate the glass transition temperature
(Tg) of SEBS, PP and their blends at different ratios, the unit cell of the
polymer molecular structure of each was established.
density, specific volume, free volume, enthalpy, kinetic energy, potential
energy and bond energy. The Tg values of the SEBS, PP and SP blends were
predicted by analysing these properties. Interestingly, the simulated Tg
values of SEBS, PP and their blends showed good agreement with our
experimental results obtained from dynamic mechanical analysis (DMA). The
technique used in this work can be applied to studying the glass transition of
other complex polymer blends.
|
This article aims to present a unified framework for grading-based voting
processes. The idea is to represent the grades of each voter on d candidates as
a point in R^d and to define the winner of the vote using the deepest point of
the scatter plot. The deepest point is obtained by the maximization of a depth
function. Universality, unanimity, and neutrality properties are proved to be
satisfied. Monotonicity and Independence to Irrelevant Alternatives are also
studied. It is shown that usual voting processes correspond to specific choices
of depth functions. Finally, some basic paradoxes are explored for these voting
processes.
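As an illustration of the framework with one specific, assumed depth function
(the paper treats general depth functions): taking spatial depth, whose
maximizer is the geometric median, the consensus grade vector can be computed
by Weiszfeld iterations and the winner read off as its largest coordinate:

```python
import numpy as np

def geometric_median(X, n_iter=100, tol=1e-9):
    """Weiszfeld iterations for the deepest point under spatial depth."""
    y = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - y, axis=1), tol)
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

grades = np.array([[7, 3, 5],   # one row per voter,
                   [6, 4, 4],   # one column per candidate
                   [8, 2, 6],
                   [3, 7, 5],
                   [7, 3, 6]], dtype=float)
consensus = geometric_median(grades)
print("consensus grades:", consensus)
print("winner: candidate", int(np.argmax(consensus)))
```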
|
Zero-shot learning (ZSL) aims to discriminate images from unseen classes by
exploiting relations to seen classes via their attribute-based descriptions.
Since attributes are often related to specific parts of objects, many recent
works focus on discovering discriminative regions. However, these methods
usually require additional complex part detection modules or attention
mechanisms. In this paper, 1) we show that common ZSL backbones (without
explicit attention or part detection) can implicitly localize attributes, yet
this property is not exploited. 2) Exploiting it, we then propose SELAR, a
simple method that further encourages attribute localization, surprisingly
achieving very competitive generalized ZSL (GZSL) performance when compared
with more complex state-of-the-art methods. Our findings provide useful insight
for designing future GZSL methods, and SELAR provides an easy-to-implement yet
strong baseline.
|
We have studied the charged BTZ black holes in noncommutative spaces arising
from two independent approaches. First, by using the Seiberg-Witten map
followed by a dynamic choice of gauge in the Chern-Simons gauge theory. Second,
by inducing the fuzziness in the mass and charge by a Lorentzian distribution
function with the width being the same as the minimal length of the associated
noncommutativity. In the first approach, we have found the existence of
non-static and non-stationary BTZ black holes in noncommutative spaces for the
first time in the literature, while the second approach enables us to
introduce a proper bound on the noncommutative parameter so that the
corresponding black hole becomes stable and physical. We have used a
contemporary tunneling formalism to study the thermodynamics of the black holes
arising from both approaches and analyze their behavior in this context.
|
The genericity of Arnold diffusion in the analytic category is an open
problem. In this paper, we study this problem in the following a priori
unstable Hamiltonian system with a time-periodic perturbation
\[\mathcal{H}_\varepsilon(p,q,I,\varphi,t)=h(I)+\sum_{i=1}^n\pm
\left(\frac{1}{2}p_i^2+V_i(q_i)\right)+\varepsilon H_1(p,q,I,\varphi, t), \]
where $(p,q)\in \mathbb{R}^n\times\mathbb{T}^n$,
$(I,\varphi)\in\mathbb{R}^d\times\mathbb{T}^d$ with $n, d\geq 1$, $V_i$ are
Morse potentials, and $\varepsilon$ is a small non-zero parameter. The
unperturbed Hamiltonian is not necessarily convex, and the induced inner
dynamics does not need to satisfy a twist condition. Using geometric methods we
prove that Arnold diffusion occurs for generic analytic perturbations $H_1$.
Indeed, the set of admissible $H_1$ is $C^\omega$ dense and $C^3$ open (a
fortiori, $C^\omega$ open). Our perturbative technique for the genericity is
valid in the $C^k$ topology for all $k\in [3,\infty)\cup\{\infty, \omega\}$.
|
We address the fluid-structure interaction of flexible fin models oscillated
in a water flow. Here, we investigate in particular the dependence of
hydrodynamic force distributions on fin geometry and flapping frequency. For
this purpose, we employ state-of-the-art techniques in pressure evaluation to
describe fluid force maps with high temporal and spatial resolution on the
deforming surfaces of the hydrofoils. Particle tracking velocimetry (PTV) is
used to measure the 3D fluid velocity field, and the hydrodynamic stress tensor
is subsequently calculated based on the Navier-Stokes equation. The shape and
kinematics of the fin-like foils are linked to their ability to generate
propulsive thrust efficiently, as well as the accumulation of external contact
forces and the resulting internal tension throughout a flapping cycle.
|
In this paper, we shall consider spherically symmetric spacetime solutions
describing the interior of stellar compact objects, in the context of
higher-order curvature theory of the $\mathrm{f(R)}$ type. We shall derive the
non-vacuum field equations of the higher-order curvature theory, without
assuming any
specific form of the $\mathrm{f(R)}$ theory, specifying the analysis for a
spherically symmetric spacetime with two unknown functions. We obtain a system
of highly non-linear differential equations, which consists of four
differential equations with six unknown functions. To solve such a system, we
assume a specific form of metric potentials, using the Krori-Barua ansatz. We
successfully solve the system of differential equations, and we derive all the
components of the energy-momentum tensor. Moreover, we derive the non-trivial
general form of $\mathrm{f(R)}$ that may generate such solutions and calculate
the dynamic Ricci scalar of the anisotropic star. Accordingly, we calculate the
asymptotic form of the function $\mathrm{f(R)}$, which is a polynomial
function. We match the derived interior solution with the exterior one, which
was derived in \cite{Nashed:2019tuk}, with the latter also resulting in a
non-trivial form of the Ricci scalar. Notably, though expectedly, the exterior
solution differs from the Schwarzschild one in the context of general
relativity. The matching procedure will eventually relate two constants with
the mass and radius of the compact stellar object. We list the necessary
conditions that any compact anisotropic star must satisfy and explain in
detail that our model fulfills all of these conditions for the specific
compact star \textit{Her X-1}, which has an estimated mass of $0.85 \pm
0.15\,M_{\odot}$ and radius of $8.1 \pm 0.41$ km.
|
The deformation theory of curves is studied by using the canonical ideal. The
problem of lifting curves with automorphisms is reduced to a lifting problem of
linear representations. As an application we prove that the dihedral group
$D_{p^h}$ of order $2p^h$ is a local Oort group.
|
RISC-V is a relatively new, open instruction set architecture with a mature
ecosystem and an official formal machine-readable specification. It is
therefore a promising playground for formal-methods research.
However, we observe that different formal-methods research projects are
interested in different aspects of RISC-V and want to simplify, abstract,
approximate, or ignore the other aspects. Often, they also require different
encoding styles, resulting in each project starting a new formalization
from scratch. We set out to identify the commonalities between projects and to
represent the RISC-V specification as a program with holes that can be
instantiated differently by different projects.
Our formalization of the RISC-V specification is written in Haskell and
leverages existing tools rather than requiring new domain-specific tools,
contrary to other approaches. To our knowledge, it is the first RISC-V
specification able to serve as the interface between a processor-correctness
proof and a compiler-correctness proof, while supporting several other projects
with diverging requirements as well.
|
Facial Expression Recognition (FER) in the wild is an extremely challenging
task in computer vision due to varying backgrounds, low-quality facial images,
and the subjectiveness of annotators. These uncertainties make it difficult for
neural networks to learn robust features on limited-scale datasets. Moreover,
the networks can be easily disturbed by the above factors and make incorrect
decisions. Recently, the vision transformer (ViT) and data-efficient image
transformers (DeiT) have demonstrated significant performance in traditional
classification tasks. The self-attention mechanism gives transformers a
global receptive field in the first layer, which dramatically enhances the
feature extraction capability. In this work, we first propose a novel pure
transformer-based mask vision transformer (MVT) for FER in the wild, which
consists of two modules: a transformer-based mask generation network (MGN) to
generate a mask that can filter out complex backgrounds and occlusion of face
images, and a dynamic relabeling module to rectify incorrect labels in FER
datasets in the wild. Extensive experimental results demonstrate that our MVT
outperforms state-of-the-art methods on RAF-DB with 88.62%, FERPlus with
89.22%, and AffectNet-7 with 64.57%, respectively, and achieves a comparable
result on AffectNet-8 with 61.40%.
|
We study the emergence of chaotic behavior of Follow-the-Regularized Leader
(FoReL) dynamics in games. We focus on the effects of increasing the population
size or the scale of costs in congestion games, and generalize recent results
on unstable, chaotic behaviors in the Multiplicative Weights Update dynamics to
a much larger class of FoReL dynamics. We establish that, even in simple linear
non-atomic congestion games with two parallel links and any fixed learning
rate, unless the game is fully symmetric, increasing the population size or the
scale of costs causes learning dynamics to become unstable and eventually
chaotic, in the sense of Li-Yorke and positive topological entropy.
Furthermore, we show the existence of novel non-standard phenomena such as the
coexistence of stable Nash equilibria and chaos in the same game. We also
observe the simultaneous creation of a chaotic attractor as another chaotic
attractor gets destroyed. Lastly, although FoReL dynamics can be strange and
non-equilibrating, we prove that the time average still converges to an exact
equilibrium for any choice of learning rate and any scale of costs.
|
We derive Legendre polynomials using Cauchy determinants with a
generalization to power functions with real exponents greater than -1/2. We
also provide a geometric formulation of Gram-Schmidt orthogonalization using
the Hodge star operator.
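For reference, the classical Cauchy determinant identity that such
constructions typically rest on (standard form; the paper's precise variant
and normalization may differ):
\[ \det\left(\frac{1}{x_i + y_j}\right)_{i,j=1}^{n}
   = \frac{\prod_{1 \le i < j \le n} (x_j - x_i)(y_j - y_i)}
          {\prod_{i,j=1}^{n} (x_i + y_j)}. \]
Specializing to $x_i = i$, $y_j = j - 1$ gives determinants of the Hilbert
matrix $H_{ij} = 1/(i+j-1)$, the Gram matrix of the monomials on $[0,1]$,
whose Gram-Schmidt orthogonalization yields the (shifted) Legendre
polynomials.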
|
We establish the validity of the Euler$+$Prandtl approximation for solutions
of the Navier-Stokes equations in the half plane with Dirichlet boundary
conditions, in the vanishing viscosity limit, for initial data which are
analytic only near the boundary, and Sobolev smooth away from the boundary. Our
proof does not require higher order correctors, and works directly by
estimating an $L^{1}$-type norm for the vorticity of the error term in the
expansion Navier-Stokes$-($Euler$+$Prandtl$)$. An important ingredient in the
proof is the propagation of local analyticity for the Euler equation, a result
of independent interest.
|
Thermal machines exploit interactions with multiple heat baths to perform
useful tasks, such as work production and refrigeration. In the quantum regime,
tasks with no classical counterpart become possible. Here, we consider the
minimal setting for quantum thermal machines, namely two-qubit autonomous
thermal machines that use only incoherent interactions with their environment,
and investigate the fundamental resources needed to generate entanglement. Our
investigation is systematic, covering different types of interactions, bosonic
and fermionic environments, and different resources that can be supplied to the
machine. We adopt an operational perspective in which we assess the
nonclassicality of the generated entanglement through its ability to perform
useful tasks such as Einstein-Podolsky-Rosen steering, quantum teleportation
and Bell nonlocality. We provide both constructive examples of nonclassical
effects and general no-go results that demarcate the fundamental limits in
autonomous entanglement generation. Our results open up a path toward
understanding nonclassical phenomena in thermal processes.
|
Electricity supply must be matched with demand at all times. This helps
reduce the chances of load-frequency-control issues and of electricity
blackouts. To gain a better understanding of the load that is likely to be
required over the next 24 hours, estimations under uncertainty are
needed. This is especially difficult in a decentralized electricity market with
many micro-producers which are not under central control.
In this paper, we investigate the impact of eleven offline learning and five
online learning algorithms to predict the electricity demand profile over the
next 24 hours. We achieve this through integration within the long-term
agent-based model, ElecSim. By predicting the electricity demand profile over
the next 24 hours, we can simulate the predictions made for a day-ahead
market. Once we
have made these predictions, we sample from the residual distributions and
perturb the electricity market demand using the simulation, ElecSim. This
enables us to understand the impact of errors on the long-term dynamics of a
decentralized electricity market.
We show we can reduce the mean absolute error by 30% using an online
algorithm when compared to the best offline algorithm, whilst reducing the
tendered national grid reserve required. This reduction in national
grid reserves leads to savings in costs and emissions. We also show that large
errors in prediction accuracy have a disproportionate impact on investments
made over a 17-year time frame, as well as on the electricity mix.
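A minimal sketch of the online-learning setup on synthetic data: a model that
is updated each day with the realized hourly load and then queried for the
next day-ahead forecast (the features, synthetic data, and SGD learner are
illustrative stand-ins for the algorithms compared in the paper):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)
model = SGDRegressor(learning_rate="constant", eta0=0.01)

def features(day, hours=np.arange(24)):
    # Toy inputs: time-of-day harmonics and weekday; real inputs would
    # include weather, lagged demand, etc.
    return np.c_[np.sin(2*np.pi*hours/24), np.cos(2*np.pi*hours/24),
                 np.full(24, day % 7) / 6.0]

for day in range(365):
    X = features(day)
    y = 30 + 10*np.sin(2*np.pi*(np.arange(24)-8)/24) + rng.normal(0, 1, 24)
    if day > 0:
        pred = model.predict(X)        # day-ahead forecast
    model.partial_fit(X, y)            # online update with realized load
print("last day-ahead MAE:", np.abs(pred - y).mean())
```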
|
Two-dimensional (2D) Dirac states with linear band dispersion have attracted
enormous interest since the discovery of graphene. However, to date, 2D Dirac
semimetals are still very rare due to the fact that 2D Dirac states are
generally fragile against perturbations such as spin-orbit couplings.
Nonsymmorphic crystal symmetries can enforce the formation of Dirac nodes,
providing a new route to establishing symmetry-protected Dirac states in 2D
materials. Here we report the symmetry-protected Dirac states in nonsymmorphic
alpha-antimonene. The antimonene was synthesized by molecular beam epitaxy.
Two Dirac cones with large anisotropy were observed by
angle-resolved photoemission spectroscopy. The Dirac state in alpha-antimonene
is of spin-orbit type in contrast to the spinless Dirac states in graphene. The
result extends the 'graphene' physics into a new family of 2D materials where
spin-orbit coupling is present.
|
Neutron diffraction and X-ray pair distribution function (XPDF) experiments
were performed in order to investigate the magnetic and local crystal
structures of Ba2FeSbSe5 and to compare them to the average (i.e. long-range)
structural model, previously obtained by single crystal X-ray diffraction.
Changes in the local crystal structure (i.e. in the second coordination sphere)
are observed upon cooling from 295 K to 95 K resulting in deviations from the
average (i.e. long-range) crystal structure. This work demonstrates that these
observations cannot be explained by local or long-range magnetoelastic effects
involving Fe-Fe correlations. Instead, we found that the observed differences
between the local and average crystal structures can be explained by Sb-5s
lone pair dynamics. We also find that below the N\'eel temperature (TN = 58
K), the two
distinct magnetic Fe3+ sites order collinearly, such that a combination of
antiparallel and parallel spin arrangements along the b-axis results. The
nearest-neighbor arrangement (J1 = 6 {\AA}) is fully antiferromagnetic, while
next-nearest-neighbor interactions are ferromagnetic in nature.
|
A growing number of applications require the reconstruction of 3D objects from
a very small number of views. In this research, we consider the problem of
reconstructing a 3D object from only 4 Flash X-ray CT views taken during the
impact of a Kolsky bar. For such ultra-sparse view datasets, even model-based
iterative reconstruction (MBIR) methods produce poor quality results.
In this paper, we present a framework based on a generalization of
Plug-and-Play, known as Multi-Agent Consensus Equilibrium (MACE), for
incorporating complex and nonlinear prior information into ultra-sparse CT
reconstruction. The MACE method allows any number of agents to simultaneously
enforce their own prior constraints on the solution. We apply our method on
simulated and real data and demonstrate that MACE reduces artifacts, improves
reconstructed image quality, and uncovers image features which were otherwise
indiscernible.
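A toy sketch of the MACE construction on a 1D deblurring problem (the blur
operator and smoothing agent are illustrative stand-ins for the CT forward
model and the paper's priors): each agent enforces its own constraint through
a proximal or denoising map, an averaging operator enforces consensus, and
Mann iterations find the equilibrium:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)               # normalized blur operator
x_true = (idx > n // 2).astype(float)           # a step edge
y = A @ x_true + 0.01 * rng.normal(size=n)

sigma2 = 0.05
P = np.linalg.inv(A.T @ A / sigma2 + np.eye(n)) # prox of the data term

def F1(v):  # argmin_x ||Ax - y||^2 / (2 sigma2) + ||x - v||^2 / 2
    return P @ (A.T @ y / sigma2 + v)

def F2(v):  # "prior" agent: light neighbor averaging (a crude denoiser)
    return 0.5 * v + 0.25 * (np.roll(v, 1) + np.roll(v, -1))

W = np.stack([y, y])                            # one state copy per agent
rho = 0.5
for _ in range(300):
    FW = np.stack([F1(W[0]), F2(W[1])])
    Z = 2.0 * FW - W                            # reflected agent outputs
    GZ = np.broadcast_to(Z.mean(axis=0), Z.shape)
    W = (1.0 - rho) * W + rho * (2.0 * GZ - Z)  # Mann iteration
x_hat = FW.mean(axis=0)                         # consensus reconstruction
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```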
|
Ultrasound tongue imaging is widely used for speech production research, and
it has attracted increasing attention as its potential applications seem to be
evident in many different fields, such as the visual biofeedback tool for
second language acquisition and silent speech interface. Unlike previous
studies, here we explore the feasibility of age estimation using the ultrasound
tongue image of the speakers. Motivated by the success of deep learning, this
paper leverages deep learning on this task. We train a deep convolutional
neural network model on the UltraSuite dataset. The deep model achieves mean
absolute error (MAE) of 2.03 for the data from typically developing children,
while the MAE is 4.87 for the data from children with speech sound disorders,
which suggests that age estimation using ultrasound is more challenging for
children with speech sound disorders. The developed method can be used as a
tool to evaluate the performance of speech therapy sessions. It is also
worthwhile to
notice that, although we leverage the ultrasound tongue imaging for our study,
the proposed methods may also be extended to other imaging modalities (e.g.
MRI) to assist the studies on speech production.
|
This paper presents U-LanD, a framework for joint detection of key frames and
landmarks in videos. We tackle a specifically challenging problem, where
training labels are noisy and highly sparse. U-LanD builds upon a pivotal
observation: a deep Bayesian landmark detector solely trained on key video
frames, has significantly lower predictive uncertainty on those frames vs.
other frames in videos. We use this observation as an unsupervised signal to
automatically recognize key frames on which we detect landmarks. As a test-bed
for our framework, we use ultrasound imaging videos of the heart, where sparse
and noisy clinical labels are only available for a single frame in each video.
Using data from 4,493 patients, we demonstrate that U-LanD can substantially
outperform the state-of-the-art non-Bayesian counterpart by a noticeable
absolute margin of 42% in R2 score, with almost no overhead imposed on the
model size. Our approach is generic and can be potentially applied to other
challenging data with noisy and sparse training labels.
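A minimal sketch of the pivotal observation (the network and selection rule
are illustrative, not the authors' architecture): with Monte Carlo dropout as
the Bayesian approximation, frames whose sampled landmark predictions agree
most are flagged as key frames:

```python
import torch, torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
    nn.Linear(8 * 32 * 32, 4),     # (x, y) for two landmarks
)

def predictive_std(frames, n_samples=20):
    net.train()                    # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([net(frames) for _ in range(n_samples)])
    return samples.std(dim=0).mean(dim=1)   # one uncertainty per frame

video = torch.rand(50, 1, 32, 32)           # 50 stand-in frames
u = predictive_std(video)
key_frames = torch.argsort(u)[:5]           # 5 most confident frames
print(key_frames)
```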
|
In this paper we prove a Faber-Krahn type inequality for the first eigenvalue
of the Hermite operator with Robin boundary conditions. We prove that the
optimal set is a half-space, and we also address the equality case in this
inequality.
|
Motivated by kidney exchange, we study the following mechanism-design
problem: On a directed graph (of transplant compatibilities among patient-donor
pairs), the mechanism must select a simple path (a chain of transplantations)
starting at a distinguished vertex (an altruistic donor) such that the total
length of this path is as large as possible (a maximum number of patients
receive a kidney). However, the mechanism does not have direct access to the
graph. Instead, the vertices are partitioned over multiple players (hospitals),
and each player reports a subset of her vertices to the mechanism. In
particular, a player may strategically omit vertices to increase how many of
her vertices lie on the path returned by the mechanism.
Our objective is to find mechanisms that limit incentives for such
manipulation while producing long paths. Unfortunately, in worst-case
instances, competing with the overall longest path is impossible while
incentivizing (approximate) truthfulness, i.e., requiring that hiding nodes
cannot increase a player's utility by more than a factor of $1 + o(1)$. We
therefore adopt a semi-random model where a small ($o(n)$) number of random
edges are added to worst-case instances. While it remains impossible for
truthful mechanisms to compete with the overall longest path, we give a
truthful mechanism that competes with a weaker but non-trivial benchmark: the
length of any path whose subpaths within each player have a minimum average
length. In fact, our mechanism satisfies even a stronger notion of
truthfulness, which we call matching-time incentive compatibility. This notion
of truthfulness requires that each player not only reports her nodes truthfully
but also does not stop the returned path at any of her nodes in order to divert
it to a continuation inside her own subgraph.
|
The iterative Green's function method, based on cyclic reduction of block
tridiagonal matrices, has been the algorithm of choice for computing, via
tight-binding models, the surface density of states of semi-infinite
topological electronic materials. In this paper, we apply this method to
photonic and acoustic crystals, using finite-element discretizations and a
generalized eigenvalue formulation, to calculate the local density of states
on a single surface of
semi-infinite lattices. The three-dimensional (3D) examples of gapless
helicoidal surface states in Weyl and Dirac crystals are shown and the
computational cost, convergence and accuracy are analyzed.
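For the tight-binding case, the cyclic-reduction iteration is the classical
Sancho-Lopez-Sancho decimation; a minimal sketch on a toy two-band chain (the
paper's contribution is the generalization to finite-element discretizations
and a generalized eigenvalue formulation):

```python
import numpy as np

def surface_gf(E, H00, H01, eta=1e-3, tol=1e-12):
    """Surface Green's function of a semi-infinite chain with onsite block
    H00 and inter-cell hopping H01, by decimation (cyclic reduction)."""
    I = np.eye(H00.shape[0])
    z = (E + 1j * eta) * I
    eps_s, eps = H00.copy(), H00.copy()
    alpha, beta = H01.copy(), H01.conj().T.copy()
    while np.linalg.norm(alpha) > tol:
        g = np.linalg.inv(z - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha, beta = alpha @ g @ alpha, beta @ g @ beta
    return np.linalg.inv(z - eps_s)

H00 = np.array([[0.0, 1.0], [1.0, 0.0]])   # toy onsite block
H01 = np.array([[0.0, 0.5], [0.0, 0.0]])   # toy hopping block
for E in np.linspace(-3, 3, 7):
    G = surface_gf(E, H00, H01)
    print(E, -np.imag(np.trace(G)) / np.pi)  # surface local DOS
```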
|
Electroweak probes are a potential tool for studying the properties of the hot
and dense strongly interacting matter produced in relativistic nuclear
collisions due to their unique nature. A selection of new experimental
analyses and results from theory calculations on electromagnetic and weak
probes presented at Hard Probes 2020 is discussed in this contribution.
|
The goal of the paper is to provide a detailed explanation on how the
(continuous) cosine transform and the discrete(-time) cosine transform arise
naturally as certain manifestations of the celebrated Gelfand transform. We
begin with the introduction of the cosine convolution $\star_c$, which can be
viewed as an "arithmetic mean" of the classical convolution and its "twin
brother", the anticonvolution. The d'Alambert property of $\star_c$ plays a
pivotal role in establishing the bijection between $\Delta(L^1(G),\star_c)$ and
the cosine class $\mathcal{COS}(G),$ which turns out to be an open map if
$\mathcal{COS}(G)$ is equipped with the topology of uniform convergence on
compacta $\tau_{ucc}$. Subsequently, if $G = \mathbb{R},\mathbb{Z}, S^1$ or
$\mathbb{Z}_n$ we find a relatively simple topological space which is
homeomorphic to $\Delta(L^1(G),\star_c).$ Finally, we witness the "reduction"
of the Gelfand transform to the aforementioned cosine transforms.
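For concreteness, on $G = \mathbb{R}$ one common set of conventions reads as
follows (conventions vary between authors, so these formulas are indicative
rather than the paper's exact definitions):
\[ (f * g)(x) = \int_{\mathbb{R}} f(t)\,g(x - t)\,dt, \qquad
   (f \circ g)(x) = \int_{\mathbb{R}} f(t)\,g(x + t)\,dt, \]
\[ (f \star_c g)(x) = \tfrac{1}{2}\bigl[(f * g)(x) + (f \circ g)(x)\bigr]
   = \tfrac{1}{2}\int_{\mathbb{R}} f(t)\bigl[g(x - t) + g(x + t)\bigr]\,dt, \]
so that $\star_c$ is literally the arithmetic mean of the convolution and the
anticonvolution, with the d'Alembert kernel $\tfrac{1}{2}[g(x-t)+g(x+t)]$
appearing inside the integral.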
|
Magnetic braking (MB) likely plays a vital role in the evolution of low-mass
X-ray binaries (LMXBs). However, the physics of MB is still uncertain, and
there are various proposed scenarios for MB in the literature. To examine and
compare the efficiency of MB, we investigate LMXB evolution with five proposed
MB laws. Combining detailed binary evolution calculations with binary
population synthesis, we obtain the expected properties of LMXBs and their
descendants, binary millisecond pulsars. We then discuss the strengths and
weaknesses of each MB law by comparing the calculated results with
observations.
We conclude that the $\tau$-boosted MB law seems to best match the
observational characteristics.
|
Atomtronics experiments with ultracold atomic gases allow us to explore
quantum transport phenomena of a weakly-interacting Bose-Einstein condensate
(BEC). Here, we focus on two-terminal transport of such a BEC in the vicinity
of zero temperature. By using the tunnel Hamiltonian and Bogoliubov theory, we
obtain a DC Josephson current expression in the BEC and apply it to
experimentally relevant situations such as quantum point contact and planar
junction. Due to the absence of Andreev bound states but the presence of
couplings associated with condensation elements, the current-phase relation in
the BEC is found to be different from that in an s-wave superconductor. In
addition, it turns out that the DC Josephson current in the BEC depends on the
sign of the tunneling elements, which allows one to realize the so-called
$\pi$ junction by using techniques of artificial gauge fields.
|
The Galactic B[e] supergiant MWC 137 is surrounded by a large-scale optical
nebula. To shed light on the physical conditions and kinematics of the nebula,
we analyze the optical forbidden emission lines [NII] 6548,6583 and [SII]
6716,6731 in long-slit spectra taken with ALFOSC at the Nordic Optical
Telescope. The radial velocities display a complex behavior but, in general,
the northern nebular features are predominantly approaching while the southern
ones are mostly receding. The electron density shows strong variations across
the nebula with values spreading from about zero to ~800 cm$^{-3}$. Higher
densities are found closer to MWC~137 and in regions of intense emission,
whereas in regions with high radial velocities the density decreases
significantly. We also observe the entire nebula in the two [SII] lines with
the scanning Fabry-Perot interferometer attached to the 6-m telescope of the
Special Astrophysical Observatory. These data reveal a new bow-shaped feature
at PA = 225-245 degrees and at a distance of 80" from MWC 137. A new
H$\alpha$ image has been
taken with the Danish 1.54-m telescope on La Silla. No expansion or changes in
the nebular morphology appear within 18.1 years. We derive a mass of 37 (+9/-5)
solar masses and an age of $4.7\pm0.8$ Myr for MWC 137. Furthermore, we detect
a period of 1.93 d in the time series photometry collected with the TESS
satellite, which could suggest stellar pulsations. Other, low-frequency
variability is seen as well. Whether these signals are caused by internal
gravity waves in the early-type star or by variability in the wind and
circumstellar matter currently cannot be distinguished.
|
This article proposes an artificial data generating algorithm that is simple
and easy to customize. The fundamental concept is to perform random
permutations of Monte Carlo-generated random numbers that conform to the
unconditional probability distribution of the original real time series.
Similar to
constraint surrogate methods, random permutations are only accepted if a given
objective function is minimized. The objective function is selected in order to
describe the most important features of the stochastic process. The algorithm
is demonstrated by producing simulated log-returns of the S\&P 500 stock index.
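A minimal sketch of the accept/reject permutation loop with a single-feature
objective, here the lag-1 autocorrelation (the target value and the marginal
are assumed stand-ins for statistics estimated from real S&P 500 log-returns;
a real objective would encode more stylized facts):

```python
import numpy as np

rng = np.random.default_rng(42)

def lag1(x):
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

target_rho = -0.05                       # assumed stylized-fact target
x = rng.standard_t(df=4, size=2000)      # heavy-tailed i.i.d. draws
obj = abs(lag1(x) - target_rho)

for _ in range(50_000):
    i, j = rng.integers(len(x), size=2)
    x[i], x[j] = x[j], x[i]              # propose a swap
    new_obj = abs(lag1(x) - target_rho)
    if new_obj < obj:
        obj = new_obj                    # accept the permutation
    else:
        x[i], x[j] = x[j], x[i]          # reject: undo the swap

print("achieved lag-1 autocorrelation:", lag1(x))
```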
|
Creating programs with block-based programming languages like Scratch is easy
and fun. Block-based programs can nevertheless contain bugs, in particular when
learners have misconceptions about programming. Even when they do not, Scratch
code is often of low quality and contains code smells, further inhibiting
understanding, reuse, and fun. To address this problem, in this paper we
introduce LitterBox, a linter for Scratch programs. Given a program or its
public project ID, LitterBox checks the program against patterns of known bugs
and code smells. For each issue identified, LitterBox provides not only the
location in the code, but also a helpful explanation of the underlying reason
and possible misconceptions. Learners can access LitterBox through an
easy-to-use web interface with visual information about the errors in the
block code,
while for researchers LitterBox provides a general, open source, and extensible
framework for static analysis of Scratch programs.
|
In the first paper of this series (Rhea et al. 2020), we demonstrated that
neural networks can robustly and efficiently estimate kinematic parameters for
optical emission-line spectra taken by SITELLE at the Canada-France-Hawaii
Telescope. This paper expands upon this notion by developing an artificial
neural network to estimate the line ratios of strong emission-lines present in
the SN1, SN2, and SN3 filters of SITELLE. We construct a set of 50,000
synthetic spectra using line ratios taken from the Mexican Million Model
database replicating Hii regions. Residual analysis of the network on the test
set reveals the network's ability to apply tight constraints to the line
ratios. We verified the network's efficacy by constructing an activation map,
checking the [N ii] doublet fixed ratio, and applying a standard k-fold
cross-correlation. Additionally, we apply the network to SITELLE observations of
M33; the residuals between the algorithm's estimates and values calculated
using standard fitting methods show general agreement. Moreover, the neural
network reduces the computational costs by two orders of magnitude. Although
standard fitting routines perform consistently well depending on the
signal-to-noise ratio of the spectral features, the neural network can also
excel at
predictions in the low signal-to-noise regime within the controlled environment
of the training set as well as on observed data when the source spectral
properties are well constrained by models. These results reinforce the power of
machine learning in spectral analysis.
|
We present a polarization-resolved, high-resolution Raman scattering study of
the three consecutive charge density wave (CDW) regimes in $1T$-TaS$_2$ single
crystals, supported by \textit{ab initio} calculations. Our analysis of the
spectra within the low-temperature commensurate (C-CDW) regime shows
$\mathrm{P3}$ symmetry of the system, thus excluding the previously proposed
triclinic stacking of the "star-of-David" structure, and promoting trigonal or
hexagonal stacking instead. The spectra of the high-temperature incommensurate
(IC-CDW) phase directly project the phonon density of states due to the
breaking of the translational invariance, supplemented by sizeable
electron-phonon coupling. Between 200 and 352\,K, our Raman spectra show
contributions from both the IC-CDW and the C-CDW phase, indicating their
coexistence in the so-called nearly-commensurate (NC-CDW) phase. The
temperature-dependence of the symmetry-resolved Raman conductivity indicates
the stepwise reduction of the density of states in the CDW phases, followed by
a Mott transition within the C-CDW phase. We determine the size of the Mott gap
to be $\Omega_{\rm gap}\approx 170-190$ meV, and track its temperature
dependence.
|
Often neglected in traditional education, spatial thinking has played a
critical role in science, technology, engineering, and mathematics (STEM)
education. Spatial thinking skills can be enhanced by training, life
experience, and practice. One approach to train these skills is through 3D
modeling (also known as Computer-Aided Design or CAD). Although 3D modeling
tools have shown promising results in training and enhancing spatial thinking
skills in undergraduate engineering students, for novices, especially middle
and high-school students, they are not sufficient to provide a rich 3D
experience, since the 3D models created in CAD are isolated from the actual 3D
physical world. As a result, novice students find it difficult to create
error-free 3D models that would 3D print successfully. This leads to
frustration, where students are not motivated to create 3D models themselves;
instead, they prefer to download them from online repositories. To address this
problem, researchers are focusing on integrating 3D models and displays into
the physical world with the help of technologies like Augmented Reality (AR).
In this demo, we present an AR application, 3DARVisualizer, that helps us
explore the role of AR as a 3D model debugger, including enhancing 3D modeling
abilities and spatial thinking skills of middle- and high-school students.
|
The stochastic block model (SBM) and degree-corrected block model (DCBM) are
network models often selected as the fundamental setting in which to analyze
the theoretical properties of community detection methods. We consider the
problem of spectral clustering of SBM and DCBM networks under a local form of
edge differential privacy. Using a randomized response privacy mechanism called
the edge-flip mechanism, we develop theoretical guarantees for differentially
private community detection, demonstrating conditions under which this strong
privacy guarantee can be upheld while achieving spectral clustering convergence
rates that match the known rates without privacy. We prove the strongest
theoretical results are achievable for dense networks (those with node degree
linear in the number of nodes), while weak consistency is achievable under mild
sparsity (node degree greater than $\sqrt{n}$). We empirically demonstrate our
results on a number of network examples.
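A minimal sketch of the edge-flip mechanism followed by spectral clustering on
a toy two-block SBM (densities and privacy budget are illustrative; consistent
with the paper's findings, denser graphs tolerate privatization better):

```python
import numpy as np

# Edge-flip randomized response: each potential edge is flipped independently
# with probability p = 1 / (1 + e^eps), giving eps-edge local differential
# privacy; spectral clustering then runs on the privatized adjacency matrix.
rng = np.random.default_rng(0)
n, eps = 400, 1.5
z = np.repeat([0, 1], n // 2)                       # ground-truth communities
P = np.where(z[:, None] == z[None, :], 0.30, 0.05)  # within/between densities
A = np.triu(rng.random((n, n)) < P, k=1)
A = (A | A.T).astype(float)                         # symmetric adjacency

p = 1.0 / (1.0 + np.exp(eps))                       # flip probability
F = np.triu(rng.random((n, n)) < p, k=1)
F = (F | F.T).astype(float)
A_priv = np.abs(A - F)                              # XOR: flip selected pairs

vals, vecs = np.linalg.eigh(A_priv)                 # spectral clustering via
labels = (vecs[:, -2] > 0).astype(int)              # the 2nd top eigenvector
acc = max(np.mean(labels == z), np.mean((1 - labels) == z))
print("community recovery accuracy:", acc)
```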
|
A beautifully simple free generating set for the commutator subgroup of a
free group was constructed by Tomaszewski. We give a new geometric proof of his
theorem, and show how to give a similar free generating set for the commutator
subgroup of a surface group. We also give a simple representation-theoretic
description of the structure of the abelianizations of these commutator
subgroups and calculate their homology.
|
Accurate and comprehensive diatomic molecular spectroscopic data have long
been vital in a wide variety of applications for measuring and monitoring
astrophysical, industrial and other gaseous environments. These data are also
used extensively for benchmarking quantum chemistry and applications from
quantum computers, ultracold chemistry and the search for physics beyond the
standard model. Useful data can be highly detailed like line lists or summative
like molecular constants, and obtained from theory, experiment or a
combination. There are plentiful (though not yet sufficient) data available,
but these data are often scattered. For example, molecular constants have not
been compiled since 1979 despite the existing compilation still being cited
more than 200 times annually. Further, the data are interconnected but updates
in one type of data are not yet routinely applied to update interconnected
data: in particular, new experimental and ab-initio data are not routinely
unified with other data on the molecule. This paper provide information and
strategies to strengthen the connection between data producers (e.g. ab-initio
electronic structure theorists and experimental spectroscopists), data
modellers (e.g. line list creators and others who connect data on one aspect of
the molecule to the full energetic and spectroscopic description) and data
users (astronomers, chemical physicists etc). All major data types are
described including their source, use, compilation and interconnectivity.
Explicit advice is provided for theoretical and experimental data producers,
data modellers and data users to facilitate optimal use of new data with
appropriate attribution.
|
Data markets have the potential to foster new data-driven applications and
help growing data-driven businesses. When building and deploying such markets
in practice, regulations such as the European Union's General Data Protection
Regulation (GDPR) impose constraints and restrictions on these markets
especially when dealing with personal or privacy-sensitive data. In this paper,
we present a candidate architecture for a privacy-preserving personal data
market, relying on cryptographic primitives such as multi-party computation
(MPC) capable of performing privacy-preserving computations on the data.
Besides specifying the architecture of such a data market, we also present a
privacy-risk analysis of the market following the LINDDUN methodology.
|