text (string, 11-9.77k chars) | label (string, 2-104 chars) |
---|---|
In this paper we study the well-posedness of the Cauchy problem for a wave equation with multiplicities and space-dependent irregular coefficients. As in \cite{GR:14}, in order to give a meaningful notion of solution, we employ the notion of very weak solution, whose construction is based on a parameter-dependent regularisation of the coefficients via mollifiers. We prove that, even with distributional coefficients, a very weak solution exists for our Cauchy problem and that it converges to the classical one when the coefficients are smooth. The dependence of very weak solutions on the mollifiers is investigated at the end of the paper in some instructive examples. | mathematics |
In this work, we present a study of the stationary structures and the breathing mode behavior of a two-dimensional self-bound binary Bose droplet. We employ an analytical approach using a variational ansatz with a super-Gaussian trial order parameter and compare it with the numerical solutions of the extended Gross-Pitaevskii equation. We find that the super-Gaussian is superior to the often used Gaussian ansatz in describing the stationary and dynamical properties of the system. We find that for sufficiently large non-rotating droplets the breathing mode is energetically favourable compared to the self-evaporating process. For small self-bound systems our results differ based on the ansatz. When angular momentum is induced by imprinting multiply quantized vortices at the droplet center, this preference for the breathing mode persists independently of the norm. | condensed matter |
A significant manifestation of the interplay between superconductivity and charge density waves, spin density waves or magnetism is the dome-like variation of the superconducting critical temperature (Tc) in cuprate, iron-based and heavy-fermion superconductors. The overall behavior is that the ordering temperature is gradually suppressed while Tc is enhanced under external control parameters. Many phenomena, such as the pseudogap, quantum critical points and strange metals, emerge in different doping ranges. Exploring dome-shaped Tc in new superconductors is important for detecting emergent effects. Here, we report the observation of superconductivity in the new layered Cu-based compound RE2Cu5As3O2 (RE=La, Pr, Nd), in which Tc exhibits a dome-like variation with maximum Tc of 2.5 K, 1.2 K and 1.0 K, respectively, upon substituting Cu with a large amount of Ni ions. The transitions at T* in the former two compounds can be suppressed by either Ni doping or rare-earth replacement. Simultaneously, structural parameters such as the As-As bond length and the c/a ratio exhibit unusual variations as the Ni-doping level goes through the optimal value. The robustness of superconductivity, up to 60% Ni doping, reveals an unexpected impurity effect on inducing and enhancing superconductivity in this novel family of layered materials. | condensed matter |
The extraction of filamentary structure from a point cloud is discussed. The filaments are modeled as ridge lines or higher dimensional ridges of an underlying density. We propose two novel algorithms, and provide theoretical guarantees for their convergence. We consider the new algorithms as alternatives to the Subspace Constrained Mean Shift (SCMS) algorithm that do not suffer from a shortcoming of the SCMS that is also revealed in this paper. | statistics |
Given an oriented surface bounding a handlebody, we study the subgroup of its mapping class group defined as the intersection of the handlebody group and the second term of the Johnson filtration: $\mathcal{A} \cap J_2$. We introduce two trace-like operators, inspired by Morita's trace, and show that their kernels coincide with the images by the second Johnson homomorphism $\tau_2$ of $J_2$ and $\mathcal{A} \cap J_2$, respectively. In particular, we answer in the negative a question asked by Levine about an algebraic description of $\tau_2(\mathcal{A} \cap J_2)$. By the same techniques, and for a Heegaard surface in $S^3$, we also compute the image by $\tau_2$ of the intersection of the Goeritz group $\mathcal{G}$ with $J_2$. | mathematics |
In this paper we introduce Feature Gradients, a gradient-based search algorithm for feature selection. Our approach extends a recent result on the estimation of learnability in the sublinear data regime by showing that the calculation can be performed iteratively (i.e., in mini-batches) and in linear time and space with respect to both the number of features D and the sample size N. This, along with a discrete-to-continuous relaxation of the search domain, allows for an efficient, gradient-based search algorithm among feature subsets for very large datasets. Crucially, our algorithm is capable of finding higher-order correlations between features and targets for both the N > D and N < D regimes, as opposed to approaches that do not consider such interactions and/or only consider one regime. We provide experimental demonstration of the algorithm in small and large sample- and feature-size settings. | statistics |
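As a rough illustration of the discrete-to-continuous relaxation idea in the abstract above: give every feature a continuous inclusion logit, gate the features by its sigmoid, and update the logits by mini-batch gradient descent. The toy data, the logistic-regression proxy objective, and the sparsity penalty below are assumptions for this sketch, not the authors' actual implementation.

```python
import torch

# Hypothetical sketch of gradient-based feature selection via relaxation:
# each feature d has a logit s[d]; sigmoid(s[d]) in (0,1) softly gates it.
torch.manual_seed(0)
N, D = 512, 50
X = torch.randn(N, D)
w_true = torch.zeros(D); w_true[:5] = 2.0          # only 5 informative features
y = (X @ w_true + 0.1 * torch.randn(N) > 0).float()

s = torch.zeros(D, requires_grad=True)             # feature-inclusion logits
w = torch.zeros(D, requires_grad=True)             # proxy linear model
opt = torch.optim.Adam([s, w], lr=0.05)

for step in range(500):                            # mini-batch gradient search
    idx = torch.randint(0, N, (64,))
    gate = torch.sigmoid(s)                        # continuous relaxation of {0,1}
    logits = (X[idx] * gate) @ w
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y[idx])
    loss = loss + 0.05 * gate.sum() / D            # sparsity pressure on the gates
    opt.zero_grad(); loss.backward(); opt.step()

selected = torch.topk(torch.sigmoid(s), k=5).indices
print(sorted(selected.tolist()))                   # ideally recovers features 0..4
```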
In this paper, we consider a hybrid mobile edge computing (H-MEC) platform, which includes ground stations (GSs), ground vehicles (GVs) and unmanned aerial vehicles (UAVs), all with mobile edge cloud installed to enable user equipments (UEs) or Internet of Things (IoT) devices with intensive computing tasks to offload. Our objective is to obtain an online offloading algorithm to minimize the energy consumption of all the UEs, by jointly optimizing the positions of GVs and UAVs, user association and resource allocation in real-time, while considering the dynamic environment. To this end, we propose a hybrid deep learning based online offloading (H2O) framework where a large-scale path-loss fuzzy c-means (LSFCM) algorithm is first proposed and used to predict the optimal positions of GVs and UAVs. Secondly, a fuzzy membership matrix U-based particle swarm optimization (U-PSO) algorithm is applied to solve the mixed integer nonlinear programming (MINLP) problems and generate the sample datasets for the deep neural network (DNN), where the fuzzy membership matrix can capture the small-scale fading effects and the information of mutual interference. Thirdly, a DNN with a scheduling layer is introduced to provide user association and computing resource allocation under the practical latency requirements of the tasks and the limited available computing resources of the H-MEC. In addition, different from traditional DNN predictors, we input only one UE's information to the DNN at a time, which is suitable for scenarios where the number of UEs varies and avoids the curse of dimensionality in the DNN. | electrical engineering and systems science |
Low-frequency gravitational-wave astronomy can perform precision tests of general relativity and probe fundamental physics in a regime previously inaccessible. A space-based detector will be a formidable tool to explore gravity's role in the cosmos, potentially telling us if and where Einstein's theory fails and providing clues about some of the greatest mysteries in physics and astronomy, such as dark matter and the origin of the Universe. | astrophysics |
Methods for estimating heterogeneous treatment effects in observational data have largely focused on continuous or binary outcomes, and have been relatively less vetted with survival outcomes. Using flexible machine learning methods in the counterfactual framework is a promising approach to address challenges due to complex individual characteristics, to which treatments need to be tailored. To evaluate the operating characteristics of recent survival machine learning methods for the estimation of treatment effect heterogeneity and inform better practice, we carry out a comprehensive simulation study presenting a wide range of settings describing confounded heterogeneous survival treatment effects and varying degrees of covariate overlap. Our results suggest that the nonparametric Bayesian Additive Regression Trees within the framework of the accelerated failure time model (AFT-BART-NP) consistently yields the best performance, in terms of bias, precision and expected regret. Moreover, the credible interval estimators from AFT-BART-NP provide close to nominal frequentist coverage for the individual survival treatment effect when the covariate overlap is at least moderate. Including a non-parametrically estimated propensity score as an additional fixed covariate in the AFT-BART-NP model formulation can further improve its efficiency and frequentist coverage. Finally, we demonstrate the application of flexible causal machine learning estimators through a comprehensive case study examining the heterogeneous survival effects of two radiotherapy approaches for localized high-risk prostate cancer. | statistics |
X-ray flares can be regarded as a hint of late-time activity of gamma-ray burst (GRB) central engines. Such long-term activity has been described by several models, one of which is the viscous evolution of the outer disc fragments, proposed by Perna et al. (2006) and developed quantitatively by Dall'Osso et al. (2017). Here, we reconstruct the Dall'Osso et al. (2017) framework by taking both small- and large-scale effects of the magnetic field into account. To consider the magnetic barrier as a possible mechanism that might govern the accretion process of each magnetized clump, we adopt a simple boundary-condition prescription through which this mechanism might operate. We probe the influence of the various model parameters and draw some key comparisons between our model predictions and previous phenomenological estimates, for two different choices of boundary conditions (with and without a magnetic barrier). Our model is remarkably capable of matching flare bolometric and X-ray light-curves, as well as reproducing their statistical properties, such as the ratio between rise and decay times, the width parameter and peak time, and the power-law correlation between peak luminosity and peak time. Combining our results with the conclusions of previous studies, we are led to interpret the magnetic barrier as a less probable mechanism for controlling the evolution of these clumps, especially the later-created (or viscously evolved) ones. | astrophysics |
Conventional information processors freely convert information between different physical carriers to process, store, or transmit information. It seems plausible that quantum information will also be held by different physical carriers in applications such as tests of fundamental physics, quantum-enhanced sensors, and quantum information processing. Quantum-controlled molecules in particular could transduce quantum information across a wide range of quantum-bit (qubit) frequencies, from a few kHz for transitions within the same rotational manifold, a few GHz for hyperfine transitions, up to a few THz for rotational transitions, to hundreds of THz for fundamental and overtone vibrational and electronic transitions, possibly all within the same molecule. Here, we report the first demonstration of entanglement between states of the rotation of a $\rm^{40}CaH^+$ molecular ion and internal states of a $\rm^{40}Ca^+$ atomic ion. The qubit addressed in the molecule has a frequency of either 13.4 kHz or 855 GHz, highlighting the versatility of molecular qubits. This work demonstrates how molecules can transduce quantum information between qubits with different frequencies to enable hybrid quantum systems. We anticipate that quantum control and measurement of molecules as demonstrated here will create opportunities for quantum information science, quantum sensors, fundamental and applied physics, and controlled quantum chemistry. | quantum physics |
This paper discusses the opportunity of bringing the concept of zero-shot adaptation into learning-based millimeter-wave (mmWave) communication systems, particularly in environments with unstable urban infrastructures. Here, zero-shot adaptation implies that a learning agent adapts to unseen scenarios during training without any adaptive fine-tuning. By considering learning-based beam-tracking of a mmWave node placed on an overhead messenger wire, we first discuss the importance of zero-shot adaptation. More specifically, we confirm that the gap between the values of wire tension and total wire mass in training and test scenarios deteriorates the beam-tracking performance in terms of the received power. Motivated by this discussion, we propose a robust beam-tracking method to adapt to a broad range of test scenarios in a zero-shot manner, i.e., without requiring any retraining to adapt to the scenarios. The key idea is to leverage a recent, robust adversarial reinforcement learning technique, where such training and test gaps are regarded as disturbances from adversaries. In our case, a beam-tracking agent trains competitively against an intelligent adversary who causes beam misalignments. Numerical evaluations confirm the feasibility of zero-shot adaptation by showing that the on-wire node achieves feasible beam-tracking performance without any adaptive fine-tuning in unseen scenarios. | computer science |
Leveraging health administrative data (HAD) datasets for predicting the risk of chronic diseases, including diabetes, has gained a lot of attention in the machine learning community recently. In this paper, we use the largest health records datasets of patients in Ontario, Canada. Provided by the Institute of Clinical Evaluative Sciences (ICES), this database is age, gender and ethnicity-diverse. The datasets include demographics, lab measurements, drug benefits, healthcare system interactions, ambulatory and hospitalization records. We perform one of the first large-scale machine learning studies with this data to study the task of predicting diabetes in a range of 1-10 years ahead, which requires no additional screening of individuals. In the best setup, we reach a test AUC of 80.3 with a single model trained on an observation window of 5 years with a one-year buffer using all datasets. A subset of the top 15 features alone (out of a total of 963) could provide a test AUC of 79.1. In this paper, we provide extensive machine learning model performance and feature contribution analysis, which enables us to narrow down to the most important features useful for diabetes forecasting. Examples include chronic conditions such as asthma and hypertension, lab results, diagnostic codes in insurance claims, age and geographical information. | statistics |
The relativistic charged spinor matter field is quantized in the background of a straight cosmic string with nonvanishing transverse size. The most general boundary conditions ensuring the impossibility for matter to penetrate through the edge of the string core are considered. The role of discrete symmetries is elucidated, and analytic expressions for the temporal and spatial components of the induced vacuum current are derived in the case of either P or CT invariant boundary conditions with two parameters varying arbitrarily from point to point of the edge. The requirement of physical plausibility for the global induced vacuum characteristics is shown to completely remove the arbitrariness in the boundary conditions. We find that a magnetic field is induced in the vacuum and that a sheath in the form of a tube of magnetic flux lines encloses the cosmic string. The dependence of the induced vacuum magnetic field strength on the string flux and tension, as well as on the transverse size of the string and on the distance from the string, is unambiguously determined. | high energy physics theory |
The renormalization procedure is proved to be a rigorous way to get finite answers in a renormalizable class of field theories. We claim, however, that it is redundant if one reduces the requirement of finiteness to S-matrix elements only and does not require finiteness of intermediate quantities like the off-shell Green functions. We suggest a novel view on the renormalization procedure. It is based on the usual BPHZ R-operation, which is equally applicable to any local QFT, renormalizable or not. The key point is the replacement of the multiplicative renormalization, used in renormalizable theories, by an operation in which the renormalization constants depend on the fields and momenta that have to be integrated inside the subgraphs. This approach does not distinguish between renormalizable and non-renormalizable interactions and provides the basis for getting finite scattering amplitudes in both cases. The arbitrariness of the subtraction procedure is fixed by imposing a normalization condition on the scattering amplitude as a whole rather than on an infinite series of new operators appearing in non-renormalizable theories. Using the property of locality of the counter-terms, we get recurrence relations connecting leading, subleading, etc., UV divergences in all orders of perturbation theory in any local theory. This allows one to get generalized RG equations that have an integro-differential form and sum up the leading logarithms. This way one can cure the problem of violation of unitarity in non-renormalizable theories by summing up the leading asymptotics. We illustrate the basic features of our approach by several examples. Our main statement is that non-renormalizable theories are self-consistent, they can be well treated within the usual BPHZ R-operation, and the arbitrariness can be fixed to a finite number of parameters, just as in the renormalizable case. | high energy physics theory |
Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category. This paper explores data augmentation -- a technique particularly suitable for training with limited data -- for this few-shot, highly-multiclass text classification setting. On four diverse text classification tasks, we find that common data augmentation techniques can improve the performance of triplet networks by up to 3.0% on average. To further boost performance, we present a simple training strategy called curriculum data augmentation, which leverages curriculum learning by first training on only original examples and then introducing augmented data as training progresses. We explore a two-stage and a gradual schedule, and find that, compared with standard single-stage training, curriculum data augmentation trains faster, improves performance, and remains robust to high amounts of noising from augmentation. | computer science |
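A minimal sketch of what the two curriculum schedules described above could look like in code; the schedule names, the toy token-swap augmentation, and all parameter values are illustrative assumptions, not the paper's exact recipe.

```python
import random

# Curriculum data augmentation: augmentation is off (or weak) early in
# training and ramps up as training progresses.

def two_stage_schedule(epoch: int, switch_epoch: int = 10) -> float:
    """Stage 1: original data only; stage 2: full-strength augmentation."""
    return 0.0 if epoch < switch_epoch else 1.0

def gradual_schedule(epoch: int, total_epochs: int = 50) -> float:
    """Linearly ramp augmentation strength from 0 to 1 over training."""
    return min(1.0, epoch / max(1, total_epochs - 1))

def augment(text: str, strength: float) -> str:
    """Apply a simple noising augmentation with probability `strength`."""
    tokens = text.split()
    if random.random() < strength and len(tokens) > 1:
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]  # random token swap
    return " ".join(tokens)

for epoch in range(50):
    strength = gradual_schedule(epoch, total_epochs=50)
    batch = [augment(x, strength) for x in ["a tiny example sentence"]]
    # ...train the triplet network on `batch` here...
```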
In this paper we explore the application of methods for classical judgment aggregation in pooling probabilistic opinions on logically related issues. To this end, we first modify the Boolean judgment aggregation framework in a way that allows handling probabilistic judgments and then define probabilistic aggregation functions obtained by generalization of the classical ones. In addition, we discuss essential desirable properties for the aggregation functions and explore impossibility results. | computer science |
Integrated mode-locked lasers with high power are of utmost importance for next-generation optical systems that can be field-deployable and mass produced. Here we study fully integrated mode-locked laser designs that have the potential to generate ultrashort, high-power, and high-quality pulses. We explore a large mode area laser for high-power pulse generation and study the various mode-locking regimes of dispersion-managed soliton pulses in net-anomalous and net-normal dispersion cavities. Furthermore, we study numerically and experimentally the general properties and tunability of a fast integrated saturable absorber based on a low-loss silicon nitride nonlinear interferometer. We believe this work guides the exploration of future integrated high-power mode-locked lasers. | physics |
Ram pressure (RP) can influence the evolution of the cold gas content and star formation rates of galaxies. One of the key parameters for the strength of RP is the density of the intra-group medium ($\rho_{\rm igm}$), which is difficult to estimate if the X-ray emission from it is too weak to be observed. We propose a new way to constrain $\rho_{\rm igm}$ through an application of convolutional neural networks (CNNs) to simulated gas density and kinematic maps of galaxies under strong RP. We train CNNs using $9\times{}10^4$ 2D images of galaxies under various RP conditions, then validate performance with $10^4$ new test images. This new method can be applied to real observational data from the ongoing WALLABY and SKA surveys to quickly obtain estimates of $\rho_{\rm igm}$. Simulated galaxy images have $1.0$ kpc resolution, which is consistent with that expected from the future WALLABY survey. The trained CNN models predict the normalised IGM density $\hat{\rho}_{\rm igm}$, where $0.0 \le \hat{\rho}_{\rm igm} < 10.0$, accurately, with root mean squared error ($\rm RMSE$) values of $0.72$, $0.83$ and $0.74$ for the density, kinematic and joined 2D maps, respectively. The trained models are unable to predict the relative velocity of galaxies with respect to the IGM ($v_{\rm rel}$) precisely, and struggle to generalise to different RP conditions. We apply our CNNs to the observed HI column density map of NGC 1566 in the Dorado group to estimate its IGM density. | astrophysics |
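To make the setup concrete, here is a minimal PyTorch sketch of a CNN that regresses a single scalar density from a 2D map; the architecture, input size, and training details are illustrative assumptions rather than the network used in the study.

```python
import torch
import torch.nn as nn

# Hypothetical CNN regressor: one 2D gas-density (or kinematic) map in,
# one scalar estimate of the normalised IGM density out.
class DensityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # regression output: rho_igm estimate

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(-1)

model = DensityCNN()
maps = torch.randn(8, 1, 64, 64)         # batch of simulated 2D maps (toy size)
target = torch.rand(8) * 10.0            # normalised densities in [0, 10)
loss = nn.functional.mse_loss(model(maps), target)  # RMSE = sqrt(MSE)
```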
The open charm strong decay widths and a certain ratio of branching fractions of the charmed strange baryon $\Xi_c(2970)$ are calculated in a $^3P_0$ model. The results are compatible with the latest experimental data. The theoretical ratio of decay branching fractions is $R = \mathcal{B} [ \Xi_{c}(2970)^+ \rightarrow \Xi_c(2645)^{0}\pi^{+} ] / \mathcal{B} [ \Xi_{c}(2970)^+ \rightarrow \Xi_c^{\prime0}\pi^{+}] \approx 1.0$. The spin-parity assignments $J^P = 1/2^+$ and $3/2^+$ are analyzed. From the results of our calculation, $\Xi_{c}(2970)$ can be interpreted as a $2S$-wave state with $J^P(s_l) = 1/2^+(0)$. Distinguishing between the $2S$-wave $n_\rho$- and $n_\lambda$-excitation states, between states with $s_l = 0$ and $s_l = 1$, and between states with total spin $1/2$ and $3/2$ ($s_l = 1$) is also discussed. | high energy physics phenomenology |
In this article, we study the axialvector-diquark-axialvector-antidiquark ($AA$) type and scalar-diquark-scalar-antidiquark ($SS$) type fully open flavor $cs\bar{u}\bar{d}$ tetraquark states with the spin-parity $J^P={0}^+$ via the QCD sum rules. The predicted masses $M_{AA}=2.91\pm0.12\,\rm{GeV}$ and $M_{SS}=3.05\pm0.10\,\rm{GeV}$ support assigning the $X_0(2900)$ to be the $AA$-type scalar $cs\bar{u}\bar{d}$ tetraquark state. | high energy physics phenomenology |
We define bulk/boundary maps corresponding to quantum gravity states in the tensorial group field theory formalism, for quantum geometric models sharing the same type of quantum states of loop quantum gravity. The maps are defined in terms of a partition of the quantum geometric data associated to a graph with open edges into bulk and boundary ones, in the spin representation. We determine the general condition on the entanglement structure of the state that makes the bulk/boundary map isometric (a necessary condition for holographic behaviour), and we analyse different types of quantum states, identifying those that define isometric bulk/boundary maps. | high energy physics theory |
The observed light curves and other properties of the two extragalactic fast X-ray transients, CDF-S XT1 and CDF-S XT2, which were discovered recently in archival data of the Chandra Deep Field-South (CDF-S) observations, indicate that they belong to two different populations of X-ray transients. XT1 seems to be an X-ray flash (XRF), i.e., a narrowly beamed long-duration gamma-ray burst viewed far off-axis, while XT2 seems to be nebular emission powered by a newly born millisecond pulsar formed in a binary neutron star merger. | astrophysics |
Spectroscopy of 333 NGC 6819 stars and Gaia astrometry are used to map Li evolution from the giant branch tip to 0.5 mag below the Li dip. Isochrone comparison with [Fe/H] = -0.04, based upon neural network spectroscopic analysis, produces an age of 2.25 (2.4) Gyr for E(B-V) = 0.16 (0.14) and (m-M) = 12.40 (12.29). Despite originating outside the Li dip, only 10% of single subgiants/giants have measurable Li. Above the Li dip, the limiting A(Li) for single stars is 3.2 +/- 0.1 but the lower range is comparable to that found within the dip. The F-dwarf Li dip profile agrees with the Hyades/Praesepe, evolved forward. The Li level among stars populating the plateau fainter than the Li dip is A(Li) = 2.83 +/- 0.16; the dispersion is larger than expected from spectroscopic error alone. Comparison of Li and V_ROT distributions among turnoff stars in NGC 7789, NGC 2506, NGC 3680, and NGC 6819 indicates that rotational spindown from the main sequence is critical in defining the boundaries of the Li dip. For higher mass dwarfs, spindown is likewise correlated with Li depletion, creating a second dip, but at higher mass and on a longer time scale. The Li distribution among evolved stars of NGC 6819 is more representative of the older M67, where subgiant and giant stars emerge from within the Li dip, than the younger NGC 7789, where a broad range in V_ROT among the turnoff stars likely produces a range in mass among the giants. | astrophysics |
Pulmonary lobe segmentation is an important preprocessing task for the analysis of lung diseases. Traditional methods relying on fissure detection or other anatomical features, such as the distribution of pulmonary vessels and airways, could provide reasonably accurate lobe segmentations. Deep learning based methods can outperform these traditional approaches, but require large datasets. Deep multi-task learning is expected to utilize labels of multiple different structures. However, such labels are commonly distributed over multiple datasets. In this paper, we propose a multi-task semi-supervised model that can leverage information about multiple structures from unannotated datasets and datasets annotated with different structures. A focused alternating training strategy is presented to balance the different tasks. We evaluated the trained model on an external independent CT dataset. The results show that our model significantly outperforms single-task alternatives, improving the mean surface distance from 7.174 mm to 4.196 mm. We also demonstrate that our approach is successful for different network architectures as backbones. | electrical engineering and systems science |
The phase-integral and worldline-instanton methods are two widely used methods to calculate Schwinger pair-production densities in electric fields of fixed direction that depend on just one time or space coordinate in the same fixed plane of the electromagnetic field tensor. We show that for charged spinless bosons the leading results of the phase-integral method integrated up through quadratic momenta are equivalent to those of the worldline-instanton method including prefactors. We further apply the phase-integral method to fermion production and time-dependent electric fields parallel to a constant magnetic field. | high energy physics theory |
The cutoff energy in the synchrotron radiation spectrum of a supernova remnant (SNR) contains a key parameter of ongoing particle acceleration. We systematically analyze 11 young SNRs, including all historical SNRs, to measure the cutoff energy, thus shedding light on the nature of particle acceleration at the early stage of SNR evolution. The nonthermal (synchrotron) dominated spectra in filament-like outer rims are selectively extracted and used for spectral fitting because our model assumes that accelerated electrons are concentrated in the vicinity of the shock front due to synchrotron cooling. The cutoff energy parameter ($\varepsilon_0$) and shock speed ($v_{\rm sh}$) are related as $ \varepsilon_0 \propto v_{\rm sh}^2 \eta^{-1}$ with a Bohm factor of $\eta$. Five SNRs provide us with spatially resolved $\varepsilon_0$-$v_{\rm sh}$ plots across the remnants, indicating a variety of particle acceleration. With all SNRs considered together, the systematic tendency of $\eta$ clarifies a correlation between $\eta$ and age $t$ (or the expansion parameter $m$) as $\eta \propto t^{-0.4}$ ($\eta \propto m^{4}$). This might be interpreted as the magnetic field becoming more turbulent and self-generated over time, so that particles are accelerated at a greater rate. The maximum energy achieved in SNRs can be higher if we consider the newly observed time dependence of $\eta$. | astrophysics |
We present bright [CII] 158 $\mu$m line detections from a strongly magnified and multiply-imaged ($\mu\sim20-160$) sub-$L^{*}$ ($M_{\rm UV}$ = $-19.75^{+0.55}_{-0.44}$) Lyman-break galaxy (LBG) at $z=6.0719\pm0.0004$ from the ALMA Lensing Cluster Survey (ALCS). Emission lines are identified at 268.7 GHz at $\geq$ 8$\sigma$ exactly at positions of two multiple images of the LBG behind the massive galaxy cluster RXCJ0600$-$2007. Our lens models, updated with the latest spectroscopy from VLT/MUSE, indicate that a sub-region of the LBG crosses the caustic and is lensed into a long ($\sim6''$) arc with a local magnification of $\mu\sim 160$, for which the [CII] line is also significantly detected. The source-plane reconstruction resolves the interstellar medium (ISM) structure, showing that the [CII] line is co-spatial with the rest-frame UV continuum at the scale of $\sim$300 pc. The [CII] line properties suggest that the LBG is a rotation-dominated system whose velocity gradient explains a slight difference of redshifts between the whole LBG and its sub-region. The star formation rate (SFR)-$L_{\rm [CII]}$ relations from the sub to the whole regions of the LBG are consistent with those of local galaxies. We evaluate the lower limit of the faint-end of the [CII] luminosity function at $z=6$, and find that it is consistent with predictions from semi-analytical models and from the local SFR-$L_{\rm [CII]}$ relation with an SFR function at $z=6$. These results imply that the local SFR-$L_{\rm [CII]}$ relation is universal for a wide range of scales including the spatially resolved ISM, the whole regions of galaxies, and the cosmic scale, even in the epoch of reionization. | astrophysics |
This study simulated negative-capacitance double-gate FinFETs with channel lengths ranging from 25 nm to 100 nm using TCAD. The results show that negative capacitance significantly reduces the subthreshold swing as well as drain-induced barrier lowering effects. The improvement is found to be significantly more prominent for short-channel devices than long ones, which demonstrates the tremendous advantage of a negative-capacitance gate stack for scaled MOSFETs. A compact analytical formulation is developed to quantify the subthreshold swing improvement for short-channel devices. | physics |
We use archival daily spot coverage measurements from Howard et al. (1984) to study the rotational modulation of the Sun as though it were a distant star. A quasi-periodic Gaussian process measures the solar rotation period $P_\mathrm{rot} = 26.3 \pm 0.1$ days, and activity cycle period $P_\mathrm{cyc} = 10.7 \pm 0.3$ years. We attempt to search for evidence of differential rotation in variations of the apparent rotation period throughout the activity cycle and do not detect a clear signal of differential rotation, consistent with the null results of the hare-and-hounds exercise of Aigrain et al. (2015). The full reconstructed solar light curve is available online. | astrophysics |
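For context, a commonly used quasi-periodic covariance for stellar rotational modulation multiplies a squared-exponential spot-evolution term by a periodic term in the rotation period. The sketch below uses this standard form with illustrative parameter values; the study's exact kernel parameterisation may differ.

```python
import numpy as np

# A standard quasi-periodic kernel for rotational modulation:
#   k(tau) = A^2 * exp(-tau^2 / (2 l^2)) * exp(-Gamma * sin^2(pi * tau / P))
def quasi_periodic_kernel(t1, t2, A=1.0, ell=100.0, gamma=2.0, P=26.3):
    tau = t1[:, None] - t2[None, :]                         # pairwise lags (days)
    decay = np.exp(-tau**2 / (2.0 * ell**2))                # spot-evolution decay
    periodic = np.exp(-gamma * np.sin(np.pi * tau / P)**2)  # rotation period P
    return A**2 * decay * periodic

t = np.arange(0.0, 300.0, 1.0)                              # daily sampling
K = quasi_periodic_kernel(t, t) + 1e-6 * np.eye(t.size)     # jitter for stability
sample = np.random.default_rng(0).multivariate_normal(np.zeros(t.size), K)
# `sample` mimics a spot-modulated light curve with P ~ 26.3 days.
```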
We consider the problem of efficiently simulating random quantum states and random unitary operators, in a manner which is convincing to unbounded adversaries with black-box oracle access. This problem has previously only been considered for restricted adversaries. Against adversaries with an a priori bound on the number of queries, it is well-known that $t$-designs suffice. Against polynomial-time adversaries, one can use pseudorandom states (PRS) and pseudorandom unitaries (PRU), as defined in a recent work of Ji, Liu, and Song; unfortunately, no provably secure construction is known for PRUs. In our setting, we are concerned with unbounded adversaries. Nonetheless, we are able to give stateful quantum algorithms which simulate the ideal object in both settings of interest. In the case of Haar-random states, our simulator is polynomial-time, has negligible error, and can also simulate verification and reflection through the simulated state. This yields an immediate application to quantum money: a money scheme which is information-theoretically unforgeable and untraceable. In the case of Haar-random unitaries, our simulator takes polynomial space, but simulates both forward and inverse access with zero error. These results can be seen as the first significant steps in developing a theory of lazy sampling for random quantum objects. | quantum physics |
We consider an exactly solvable model for topological phases in (3+1)d whose input data is a strict 2-group. This model, which has a higher gauge theory interpretation, provides a lattice Hamiltonian realisation of the Yetter homotopy 2-type topological quantum field theory. The Hamiltonian yields bulk flux and charge composite excitations that are either point-like or loop-like. Applying a generalised tube algebra approach, we reveal the algebraic structure underlying these excitations and derive the irreducible modules of this algebra, which in turn classify the elementary excitations of the model. As a further application of the tube algebra approach, we demonstrate that the ground state subspace of the three-torus is described by the central subalgebra of the tube algebra for torus boundary, demonstrating the ground state degeneracy is given by the number of elementary loop-like excitations. | condensed matter |
Commonly, comparisons of treatment groups versus a control are performed for location effects only, where possible scale effects are considered a nuisance. Sometimes scale effects are also relevant, as a kind of early indicator of changes. Here, several approaches for Dunnett-type tests for location or scale effects are proposed and compared in a simulation study. Two real data examples are analysed accordingly, and the related R code is available in the Appendix. | statistics |
We study the quantum evolution under the combined action of the exponentials of two not necessarily commuting operators. We consider the limit in which the two evolutions alternate at infinite frequency. This case appears in a plethora of situations, both in physics (Feynman integral) and mathematics (product formulas). We focus on the case in which the two evolution times are scaled differently in the limit and generalize standard techniques and results. | quantum physics |
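The alternating-evolution limit referred to here reduces, in the equal-scaling case, to the Lie-Trotter product formula $\lim_{n\to\infty}(e^{At/n}e^{Bt/n})^n = e^{(A+B)t}$. Below is a small numerical check of this standard special case (not the paper's differently-scaled generalization); the choice of Pauli generators is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm

# Lie-Trotter check for two non-commuting Hermitian generators A and B:
# the alternating product (e^{-iA t/n} e^{-iB t/n})^n approaches e^{-i(A+B)t}.
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
A, B, t = sx, sz, 1.0

exact = expm(-1j * (A + B) * t)
for n in (1, 10, 100, 1000):
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    trotter = np.linalg.matrix_power(step, n)
    print(n, np.linalg.norm(trotter - exact))    # error shrinks roughly as 1/n
```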
This paper investigates further how the presence of a single reflecting plane wall modifies the usual Planckian forms in the thermodynamics of the massless scalar radiation in $N$-dimensional Minkowski spacetime. This is done in a rather unconventional way by integrating the energy density over space to obtain the internal energy and from that the Helmholtz free energy. The reflecting wall is modelled by assuming the Dirichlet or the Neumann boundary conditions on the wall. It is found that when $N>2$ integration over space eliminates dependence on the curvature coupling parameter $\xi$. Unexpectedly though, when $N=2$, the internal energy and the corresponding thermodynamics turn out to be dependent on $\xi$. For instance, the correction to the two-dimensional Planckian heat capacity is $\mp \xi k_{B}$ (minus for Dirichlet, plus for Neumann). Other aspects of this dependence on $\xi$ are also discussed. Results are confronted with those in the literature concerning related setups of reflecting walls (such as slabs) where conventional (i.e., global) approaches to obtain thermodynamics have been used. | high energy physics theory |
Let $p\in(0,1)$, $\alpha:=1/p-1$ and, for any $\tau\in [0,\infty)$, $\Phi_{p}(\tau):=\tau/(1+\tau^{1-p})$. Let $H^p(\mathbb R^n)$, $h^p(\mathbb R^n)$ and $\Lambda_{n\alpha}(\mathbb{R}^n)$ be, respectively, the Hardy space, the local Hardy space and the inhomogeneous Lipschitz space on $\mathbb{R}^n$. In this article, applying the inhomogeneous renormalization of wavelets, the authors establish a bilinear decomposition for multiplications of elements in $h^p(\mathbb R^n)$ [or $H^p(\mathbb R^n)$] and $\Lambda_{n\alpha}(\mathbb{R}^n)$, and prove that these bilinear decompositions are sharp in some sense. As applications, the authors also obtain some estimates of the product of elements in the local Hardy space $h^p(\mathbb R^n)$ with $p\in(0,1]$ and its dual space, respectively, with zero $\lfloor n\alpha\rfloor$-inhomogeneous curl and zero divergence, where $\lfloor n\alpha\rfloor$ denotes the largest integer not greater than $n\alpha$. Moreover, the authors find new structures of $h^{\Phi_p}(\mathbb R^n)$ and $H^{\Phi_p}(\mathbb R^n)$ by showing that $h^{\Phi_p}(\mathbb R^n)=h^1(\mathbb R^n)+h^p(\mathbb R^n)$ and $H^{\Phi_p}(\mathbb R^n)=H^1(\mathbb R^n)+H^p(\mathbb R^n)$ with equivalent quasi-norms, and also prove that the dual spaces of both $h^{\Phi_p}(\mathbb R^n)$ and $h^p(\mathbb R^n)$ coincide. These results give a complete picture on the multiplication between the local Hardy space and its dual space. | mathematics |
Expanding on [1], we analyze in detail single-field chaotic inflationary models plus a cosine modulation term, augmented by a light scalar field with an inflaton-dependent oscillatory mass term. We work out in detail the Feynman diagrams and compute one- and two-loop, and in general estimate higher-loop, two- and three-point functions in the in-in formulation. We explicitly establish how the oscillatory mass term can amplify one-loop effects to dominate over the tree-level as well as the higher-loop contributions. The power spectrum of curvature perturbations of this model is hence enhanced compared to the simple single-field chaotic model. As a consequence, one can suppress the tensor-to-scalar ratio r and obtain different expressions for the scalar spectral tilt and the running of the tilt, opening the way to reconciling chaotic models having convex potentials with the Planck data. As in monodromy inflation models, we also have a cosine modulation in the spectral tilt. We also analyze the bispectrum, which can be dominated by the amplified one-loop effects, yielding a new shape of non-Gaussianity. We discuss the bounds on the parameter space from all available CMB observables and possible implications for reheating. | high energy physics theory |
We present a concise derivation of geometric optics in the presence of axionic fields in a curved space-time. Whenever light can be described via geometric optics (the eikonal approximation), the only difference to the situation without axionic field is the phenomenon of achromatic birefringence. Consequently, redshift of light and distance estimates based on propagating light rays, as well as shear and magnification due to gravitational lensing are not affected by the interaction of light with an axionic field. | high energy physics phenomenology |
We present multiwavelength light curves and polarimetric data of the Flat Spectrum Radio Quasar 3C 273 over 8 years. The wavelength range of our data set extends from radio to gamma-rays. We found that the optical emission in this source is dominated by the accretion disk during the entire time-frame of study. We additionally find that in contrast with the observed behaviour in other blazars, 3C 273 does not show a correlation between the gamma-ray spectral index and the gamma-ray luminosity. Finally, we identified an anti-correlation between the 15 GHz and V-band light curves for the time-range $JD_{245} = 4860 - 5760$, which we speculate is the consequence of the inner part of the accretion disk falling into the black hole, followed by the ejection of a component into the jet. | astrophysics |
We develop a new technique for computing a class of four-point correlation functions of heavy half-BPS operators in planar N=4 SYM theory which admit factorization into a product of two octagon form factors with an arbitrary bridge length. We show that the octagon can be expressed as the Fredholm determinant of the integrable Bessel operator and demonstrate that this representation is very efficient in finding the octagons both at weak and strong coupling. At weak coupling, in the limit when the four half-BPS operators become null separated in a sequential manner, the octagon obeys the Toda lattice equations and can be found in a closed form. At strong coupling, we exploit the strong Szego limit theorem to derive the leading asymptotic behavior of the octagon and, then, apply the method of differential equations to determine the remaining subleading terms of the strong coupling expansion to any order in the inverse coupling. To achieve this goal, we generalize results available in the literature for the asymptotic behavior of the determinant of the Bessel operator. As a byproduct of our analysis, we formulate a Szego-Akhiezer-Kac formula for the determinant of the Bessel operator with a Fisher-Hartwig singularity and develop a systematic approach to account for subleading power suppressed contributions. | high energy physics theory |
We compute the symbol of the two-loop five-point scattering amplitude in $\mathcal{N}$ = 4 super-Yang-Mills theory, including its full color dependence. This requires constructing the symbol of all two-loop five-point nonplanar massless master integrals, for which we give explicit results. | high energy physics theory |
Packet loss is a common problem in data transmission using Voice over IP (VoIP). It is an old problem, and a variety of classical approaches have been developed to overcome it. However, with the rise of deep learning and generative models like Generative Adversarial Networks and Autoencoders, a new avenue has emerged for attempting to solve packet loss using deep learning, by generating replacements for lost packets. In this mini-survey, we review all the literature we found to date that attempts to solve packet loss in speech using deep learning methods. Additionally, we briefly review how the problem of packet loss in a realistic setting is modelled, and how to evaluate Packet Loss Concealment (PLC) techniques. Moreover, we review a few modern deep learning techniques in related domains that have shown promising results. These techniques shed light on future, potentially better solutions for PLC and additional challenges that need to be considered simultaneously with packet loss. | electrical engineering and systems science |
Streaming session-based recommendation (SSR) is a challenging task that requires the recommender system to perform session-based recommendation (SR) in the streaming scenario. In real-world applications of e-commerce and social media, a sequence of user-item interactions generated within a certain period is grouped as a session, and these sessions arrive consecutively in the form of streams. Most of the recent SR research has focused on the static setting where the training data is first acquired and then used to train a session-based recommender model. These methods need several epochs of training over the whole dataset, which is infeasible in the streaming setting. Besides, they can hardly capture long-term user interests well because of the neglect or simple usage of user information. Although some streaming recommendation strategies have been proposed recently, they are designed for streams of individual interactions rather than streams of sessions. In this paper, we propose a Global Attributed Graph (GAG) neural network model with a Wasserstein reservoir for the SSR problem. On one hand, when a new session arrives, a session graph with a global attribute is constructed based on the current session and its associated user. Thus, the GAG can take both the global attribute and the current session into consideration to learn more comprehensive representations of the session and the user, yielding a better performance in the recommendation. On the other hand, for adaptation to the streaming session scenario, a Wasserstein reservoir is proposed to help preserve a representative sketch of the historical data. Extensive experiments on two real-world datasets have been conducted to verify the superiority of the GAG model compared with the state-of-the-art methods. | computer science |
We consider a model with three Higgs doublets and a discrete $B - L\times \mathbb{Z}_3$ symmetry. Two of the scalar doublets are inert due to the $\mathbb{Z}_3$ symmetry. We calculate the full mass spectra in the scalar and lepton sectors and accommodate the leptonic mixing matrix as well. | high energy physics phenomenology |
Particles in a yet unexplored dark sector with sufficiently large mass and small gauge coupling may form purely gravitational atoms (quantum gravitational bound states) with a rich phenomenology. In particular, we investigate the possibility of having an observable signal of gravitational waves or ultra high energy cosmic rays from the decay of gravitational atoms. We show that if ordinary Einstein gravity holds up to the Planck scale, then, within the $\Lambda \text{CDM}$ model, the frequency of the gravitational wave signal produced by the decays is always higher than $10^{13} \, \text{Hz}$. An observable signal of gravitational waves with smaller frequency from such decays, in addition to probing near Planckian dark physics, would also imply a departure from Einstein gravity near the Planck scale or an early epoch of non-standard cosmology. As an example, we consider an early universe cosmology with a matter-dominated phase, violating our assumption that the universe is radiation dominated after reheating, which gives a signal in an interesting frequency range for near Planckian bound states. We also show how gravitational atoms arise in the minimal PIDM scenario and compute their gravitational wave signature. | high energy physics phenomenology |
We use the Auriga cosmological simulations of Milky Way (MW)-mass galaxies and their surroundings to study the satellite populations of dwarf galaxies in $\Lambda$CDM. As expected from prior work, the number of satellites above a fixed stellar mass is a strong function of the mass of the primary dwarf. For galaxies as luminous as the Large Magellanic Cloud (LMC), and for halos as massive as expected for the LMC (determined by its rotation speed), the simulations predict about 3 satellites with stellar masses exceeding $M_*>10^5\, M_\odot$. If the LMC is on its first pericentric passage, then these satellites should be near the LMC and should have orbital angular momenta roughly coincident with that of the LMC. We use 3D positions and velocities from the 2nd data release of the Gaia mission to revisit which of the "classical" MW dwarf spheroidals could plausibly be LMC satellites. The new proper motions of the Fornax and Carina dwarf spheroidals place them on orbits closely aligned with the orbital plane of the Magellanic Clouds, hinting at a potential Magellanic association. Together with the Small Magellanic Cloud (SMC), this result raises to $3$ the number of LMC satellites with $M_*>10^5\, M_\odot$, as expected from simulations. This also fills the 12-mag luminosity gap between the SMC and the ultra-faints Hyi1, Car2, Hor1, and Car3, the few ultra-faint satellites confirmed to have orbits consistent with a Magellanic origin. | astrophysics |
Regular expressions with backreferences (regex, for short), as supported by most modern libraries for regular expression matching, have an NP-complete matching problem. We define a complexity parameter of regex, called active variable degree, such that regex with this parameter bounded by a constant can be matched in polynomial-time. Moreover, we formulate a novel type of determinism for regex (on an automaton-theoretic level), which yields the class of memory-deterministic regex that can be matched in time O(|w|p(|r|)) for a polynomial p (where r is the regex and w the word). Natural extensions of these concepts lead to properties of regex that are intractable to check. | computer science |
Multilayers of heavy metals and ferromagnets are the basis for the formation of various structures with inhomogeneous chiral magnetic distributions. The most informative method for studying magnetic states in such structures with a high spatial resolution is Lorentz transmission electron microscopy (L-TEM). Here, we report on the observation of chiral N\'eel domain walls by means of L-TEM in Co/Pt multilayers with perpendicular magnetic anisotropy. We explain the existence of the chiral N\'eel internal structure by the overcritical, giant interface-induced Dzyaloshinskii-Moriya interaction. Our results are confirmed by micromagnetic modeling, which allows taking into account the effect of local inhomogeneity of the magnetic anisotropy of polycrystalline films on the formation of spin textures. | condensed matter |
We theoretically investigate the fluorescence intensity correlation (FIC) of Ar clusters and Mo-doped iron oxide nanoparticles subjected to intense, femtosecond and sub-femtosecond XFEL pulses for high-resolution and elemental-contrast imaging. We present the FIC of K$\alpha$ and K$\alpha^h$ emission in Ar clusters and discuss the impact of sample damage on retrieving high-resolution structural information, comparing the obtained structural information with that from the coherent diffractive imaging (CDI) approach. We found that, while sub-femtosecond pulses will substantially benefit the CDI approach, few-femtosecond pulses may be sufficient for achieving high-resolution information with FIC. Furthermore, we show that the fluorescence intensity correlation computed from the fluorescence of Mo atoms in Mo-doped iron oxide nanoparticles can be used to image dopant distributions. | physics |
We present simple conditions under which the limiting genealogical process associated with a class of interacting particle systems with non-neutral selection mechanisms, as the number of particles grows, is a time-rescaled Kingman coalescent. Sequential Monte Carlo algorithms are popular methods for approximating integrals in problems such as non-linear filtering and smoothing which employ this type of particle system. Their performance depends strongly on the properties of the induced genealogical process. We verify the conditions of our main result for standard sequential Monte Carlo algorithms with a broad class of low-variance resampling schemes, as well as for conditional sequential Monte Carlo with multinomial resampling. | statistics |
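For readers less familiar with the genealogies in question, here is a minimal sketch of one multinomial-resampling step of a generic particle filter, recording the ancestor indices that make up the genealogical tree; the toy state-space model and weights are illustrative assumptions.

```python
import numpy as np

# One pass of a particle filter with multinomial resampling. The `ancestry`
# array stores, at each step, the parent index of every particle; tracing it
# backwards reconstructs the genealogical tree studied above.
rng = np.random.default_rng(1)
N, T = 100, 50
particles = rng.normal(size=N)
ancestry = np.empty((T, N), dtype=int)

for t in range(T):
    particles = particles + rng.normal(scale=0.1, size=N)  # propagate (toy model)
    logw = -0.5 * particles**2                             # toy log-weights
    w = np.exp(logw - logw.max()); w /= w.sum()            # normalise stably
    parents = rng.choice(N, size=N, p=w)                   # multinomial resample
    ancestry[t] = parents                                  # record genealogy
    particles = particles[parents]
```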
One of the most exciting technology breakthroughs in the last few years has been the rise of deep learning. State-of-the-art deep learning models are being widely deployed in academia and industry, across a variety of areas, from image analysis to natural language processing. These models have grown from fledgling research subjects to mature techniques in real-world use. The increasing scale of data, computational power and the associated algorithmic innovations are the main drivers for the progress we see in this field. These developments also have a huge potential for the automotive industry and therefore the interest in deep learning-based technology is growing. A lot of the product innovations, such as self-driving cars, parking and lane-change assist or safety functions, such as autonomous emergency braking, are powered by deep learning algorithms. Deep learning is poised to offer gains in performance and functionality for most ADAS (Advanced Driver Assistance System) solutions. Virtual sensing for vehicle dynamics application, vehicle inspection/heath monitoring, automated driving and data-driven product development are key areas that are expected to get the most attention. This article provides an overview of the recent advances and some associated challenges in deep learning techniques in the context of automotive applications. | computer science |
In this paper we describe the recent 3GPP Release 16 specification for positioning in 5G networks. It specifies positioning signals, measurements, procedures, and architecture to meet requirements from a plethora of regulatory, commercial and industrial use cases. 5G thereby significantly extends positioning capabilities compared to what was possible with LTE. The indicative positioning performance is evaluated in agreed representative 3GPP simulation scenarios, showing a 90th-percentile accuracy of a few meters down to a few decimeters, depending on scenarios and assumptions. | computer science |
There is a growing interest in using cold-atom systems to explore the effects of strong interactions in topological band structures. Here we investigate interacting bosons in a Creutz ladder, which is characterised by topological flat energy bands, where it has been proposed that interactions can lead to the formation of bound atomic pairs giving rise to pair superfluidity. By investigating realistic experimental implementations, we understand how the lattice topology enhances the properties of bound pairs, giving rise to relatively large effective pair-tunnelling in these systems, which can lead to robust pair superfluidity, and we find lattice supersolid phases involving only pairs. We identify schemes for the preparation of these phases via time-dependent parameter variation and look at ways to detect and characterise these systems in a lattice. This work provides a starting point for investigating the interplay between the effects of topology, interactions and pairing in more general systems, with potential future connections to quantum simulation of topological materials. | condensed matter |
We investigate a novel cross-lingual multi-speaker text-to-speech synthesis approach for generating high-quality native or accented speech for native/foreign seen/unseen speakers in English and Mandarin. The system consists of three separately trained components: an x-vector speaker encoder, a Tacotron-based synthesizer and a WaveNet vocoder. It is conditioned on 3 kinds of embeddings: (1) a speaker embedding, so that the system can be trained with speech from many speakers with little data from each speaker; (2) a language embedding with shared phoneme inputs; (3) stress and tone embeddings, which improve the naturalness of synthesized speech, especially for a tonal language like Mandarin. By adjusting the various embeddings, MOS results show that our method can generate high-quality, natural and intelligible native speech for native/foreign seen/unseen speakers. Intelligibility and naturalness of accented speech are low, as expected. Speaker similarity is good for native speech from native speakers. Interestingly, speaker similarity is also good for accented speech from foreign speakers. We also find that normalizing speaker embedding x-vectors by L2-norm normalization or whitening improves output quality a lot in many cases, and the WaveNet performance seems to be language-independent: our WaveNet is trained with Cantonese speech and can be used to generate Mandarin and English speech very well. | electrical engineering and systems science |
We construct a continuous one parameter family of $AdS_4\times S^1\times S^5$ S-fold solutions of type IIB string theory which have nontrivial $SL(2,\mathbb{Z})$ monodromy in the $S^1$ direction. The solutions span a subset of a conformal manifold that contains the known $\mathcal{N}=4$ S-fold SCFT in $d=3$, and generically preserve $\mathcal{N}=2$ supersymmetry. We also construct RG flows across dimensions, from $AdS_5\times S^5$, dual to $\mathcal{N}=4$, $d=4$ SYM compactified with a twisted spatial circle, to various $AdS_4\times S^1\times S^5$ S-fold solutions, dual to $d=3$ SCFTs. We construct additional flows between the $AdS_5$ dual of the Leigh-Strassler SCFT and an $\mathcal{N}=2$ S-fold as well as RG flows between various S-folds. | high energy physics theory |
The data from the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope have recently been updated. We thus re-analyze the LAT data for the supernova remnant (SNR) SN 1006. Two parts of the gamma-ray emission from the region are clearly resolved, corresponding to the north-east (NE) and south-west (SW) limbs of the SNR. The former was detected in the previous LAT data (Xing et al. 2016), but the latter is newly detected in this work. The detections of the two limbs are at a 4 sigma significance level, and the spectral results for the NE limb are consistent with those obtained in previous detection analyses. We construct the broadband spectral energy distribution (SED) for the SW limb. Different scenarios are considered for the SED in gamma-ray energies. We conclude that, very similar to the NE limb, the high-energy and very-high-energy emission from the SW limb is likely dominated by the leptonic process, in which high-energy electrons accelerated in the shell region of the SNR inverse-Compton scatter background photons to gamma-rays. | astrophysics |
Water quality is of great importance for humans and for the environment and has to be monitored continuously. It is determinable through proxies such as the chlorophyll a concentration, which can be monitored by remote sensing techniques. This study focuses on the trade-off between the spatial and the spectral resolution of six simulated satellite-based data sets when estimating the chlorophyll a concentration with supervised machine learning models. The initial dataset for the spectral simulation of the satellite missions contains spectrometer data and measured chlorophyll a concentrations from 13 different inland waters. Focusing on the regression performance, it appears that the machine learning models achieve almost as good results with the simulated Sentinel data as with the simulated hyperspectral data. Regarding the applicability, the Sentinel 2 mission is the best choice for small inland waters due to its high spatial and temporal resolution in combination with a suitable spectral resolution. | electrical engineering and systems science |
We quantum-simulated particle-antiparticle pair production with a bosonic quantum gas in an optical lattice by emulating the requisite 1d Dirac equation and uniform electric field. We emulated field strengths far in excess of Sauter-Schwinger's limit for pair production in quantum electrodynamics, and therefore readily produced particles from "the Dirac vacuum" in quantitative agreement with theory. The observed process is equivalently described by Landau-Zener tunneling familiar in the atomic physics context. | quantum physics |
Controlling the temporal mode shape of quantum light pulses has wide-ranging applications in quantum information science and technology. Techniques have been developed to control the bandwidth, allow shifting in the time and frequency domains, and perform mode-selective beam-splitter-like transformations. However, there is at present no scheme to perform targeted multimode unitary transformations on temporal modes. Here we present a practical approach to realize general transformations for temporal modes. We show theoretically that any unitary transformation on temporal modes can be performed using a series of phase operations in the time and frequency domains. Numerical simulations show that several key transformations on temporal modes can be performed with greater than 95% fidelity using experimentally feasible specifications. | physics |
A superoscillatory lens (SOL) is known to produce a sub-diffraction hotspot that is useful for high-resolution imaging. However, high-energy rings called sidelobes coexist with the central hotspot. Additionally, SOLs have not yet been directly used to image reflective objects due to low efficiency and poor imaging properties. We propose a novel reflection confocal nanoscope which mitigates these issues by relaying the SOL intensity pattern onto the object and using conventional optics for detection. We experimentally demonstrate super-resolution by imaging double bars with 330 nm separation using 632.8 nm excitation and a 0.95 NA objective. We also discuss the enhanced contrast properties of the SOL nanoscope against a laser confocal microscope, and the degradation of performance while imaging large objects. | physics |
Autonomous vehicles operate in highly dynamic environments necessitating an accurate assessment of which aspects of a scene are moving and where they are moving to. A popular approach to 3D motion estimation -- termed scene flow -- is to employ 3D point cloud data from consecutive LiDAR scans, although such approaches have been limited by the small size of real-world, annotated LiDAR data. In this work, we introduce a new large scale benchmark for scene flow based on the Waymo Open Dataset. The dataset is $\sim$1,000$\times$ larger than previous real-world datasets in terms of the number of annotated frames and is derived from the corresponding tracked 3D objects. We demonstrate how previous works were bounded based on the amount of real LiDAR data available, suggesting that larger datasets are required to achieve state-of-the-art predictive performance. Furthermore, we show how previous heuristics for operating on point clouds, such as artificial down-sampling, heavily degrade performance, motivating a new class of models that are tractable on the full point cloud. To address this issue, we introduce the model architecture FastFlow3D that provides real time inference on the full point cloud. Finally, we demonstrate that this problem is amenable to techniques from semi-supervised learning by highlighting open problems for generalizing methods for predicting motion on unlabeled objects. We hope that this dataset may provide new opportunities for developing real world scene flow systems and motivate a new class of machine learning problems. | computer science |
In a recent very interesting and illuminating proposal, Yao et al. (2014) have discussed the use of the strain energy release rate (SERR) as a parameter to characterize fatigue delamination growth in composite materials. They consider fatigue delamination data strongly affected by R-curve behaviour due to fibre bridging and argue that a better approach is to correlate the crack advance with the total work per cycle measured in the testing machine. This seems to work better than estimating the SERR in the classical Linear Elastic Fracture Mechanics framework, with the compliance obtained from a linear fit of experimental curves using the Modified Compliance Calibration equations of the ASTM standards. We show however that if we indeed assume linear behaviour (i.e. LEFM), the approach they introduce is perfectly equivalent to the SERR one, i.e. to Paris-type laws. As is well known from Barenblatt and Botvina, fatigue crack growth is a weak form of scaling, and it gives the classical Paris dependence only when the crack is much longer than any other characteristic size. Paris' law is not a fundamental law of physics and is not an energy balance equation like Griffith's, and strong size effects due to cohesive zones have already been found in concrete by Bazant. The proposal is very simple and interesting, as it would seem to suggest that a proper scaling with a cohesive model at the crack tip could be predicted, although this does not seem to have been attempted in the literature. The main drawback of the present proposal is that it is not predictive, but purely observational, as it requires the actual measurement of the work input during the fatigue process. | condensed matter |
Domain mismatch is a noteworthy issue in acoustic event detection tasks, as the target domain data is difficult to access in most real applications. In this study, we propose a novel CNN-based discriminative training framework as a domain compensation method to handle this issue. It uses a parallel CNN-based discriminator to learn a pair of high-level intermediate acoustic representations. Together with a binary discriminative loss, the discriminators are forced to maximally exploit the discrimination of heterogeneous acoustic information in each audio clip with target events, which results in robust paired representations that can well discriminate the target events and background/domain variations separately. Moreover, to better learn the transient characteristics of target events, a frame-wise classifier is designed to perform the final classification. In addition, a two-stage training with the CNN-based discriminator initialization is further proposed to enhance the system training. All experiments are performed on the DCASE 2018 Task3 datasets. Results show that our proposal significantly outperforms the official baseline on cross-domain conditions in AUC by a relative $1.8-12.1$% without any performance degradation on in-domain evaluation conditions. | electrical engineering and systems science |
We propose two optimization-based heuristics for structure selection and identification of PieceWise Affine (PWA) models with exogenous inputs. The first method determines the number of affine sub-models assuming a known model order of the sub-models, while the second approach estimates the model order for a given number of affine sub-models. Both approaches rely on the use of regularization-based shrinking strategies, which are exploited within a coordinate-descent algorithm. This allows us to estimate the structure of the PWA models along with their model parameters. Starting from an over-parameterized model, the key idea is to alternate between an identification step and structure refinement, based on the sparse estimates of the model parameters. The performance of the presented strategies is assessed over two benchmark examples. | electrical engineering and systems science |
Neural Networks are currently one of the most widely deployed machine learning algorithms. In particular, Convolutional Neural Networks (CNNs) are gaining popularity and are evaluated for deployment in safety critical applications such as self driving vehicles. Modern CNNs feature enormous memory bandwidth and high computational needs, challenging existing hardware platforms to meet throughput, latency and power requirements. Functional safety and error tolerance need to be considered as additional requirements in safety critical systems. In general, fault tolerant operation can be achieved by adding redundancy to the system, which further exacerbates the computational demands. Furthermore, the question arises whether pruning and quantization methods for performance scaling turn out to be counterproductive with regards to fail safety requirements. In this work we present a methodology to evaluate the impact of permanent faults affecting Quantized Neural Networks (QNNs) and how to effectively decrease their effects in hardware accelerators. We use FPGA-based hardware accelerated error injection in order to enable fast evaluation. A detailed analysis is presented showing that QNNs containing convolutional layers are by far not as robust to faults as commonly believed and can lead to accuracy drops of up to 10%. To circumvent that, we propose two different methods to increase their robustness: 1) selective channel replication, which adds significantly less redundancy than the common triple modular redundancy, and 2) a fault-aware scheduling of processing elements for folded implementations. | electrical engineering and systems science |
Double parton distribution functions (dPDFs), entering the double parton scattering (DPS) cross section, are unknown fundamental quantities encoding new interesting properties of hadrons. Here, the pion dPDFs are investigated within different holographic QCD quark models in order to access their basic features. Results of the calculations, obtained within the AdS/QCD soft-wall approach, have been compared with predictions of lattice QCD evaluations of the pion two-current correlation functions. The present analysis confirms that double parton correlations, affecting dPDFs, are very important and not directly accessible from generalised parton distribution functions and electromagnetic form factors. The comparison between lattice data and quark model calculations unveils the relevance of the contributions of high partonic Fock states in the pion. Nevertheless, by using a completely general procedure, results of lattice QCD have been used, for the first time, to estimate the mean value of the so-called $\sigma_{eff}$, a relevant experimental observable for DPS processes. In addition, the results of the first calculations of the $\rho$ meson dPDFs are discussed in order to make predictions. | high energy physics phenomenology |
As renewable sources increasingly replace existing conventional generation, the dynamics of the grid drastically change, posing new challenges for transmission system operations, but also creating new opportunities, as converter-based generation is highly controllable on faster timescales. This paper investigates grid stability under the massive integration of grid-forming converters. We utilize detailed converter and synchronous machine models and describe frequency behavior under different penetration levels. First, we show that the transition from 0% to 100% can be achieved from a frequency stability point of view. This is achieved by re-tuning power system stabilizers at high penetration values. Second, we explore the evolution of the nadir and RoCoF for each generator as a function of the amount of inverter-based generation in the grid. This work sheds some light on two major challenges in low- and no-inertia systems: defining novel performance metrics that better characterize grid behaviour, and adapting present paradigms in PSS design. | electrical engineering and systems science |
Vehicular fog computing (VFC) pushes the cloud computing capability to the distributed fog nodes at the edge of the Internet, enabling compute-intensive and latency-sensitive computing services for vehicles through task offloading. However, a heterogeneous mobility environment introduces uncertainties in terms of resource supply and demand, which are inevitable bottlenecks for the optimal offloading decision. Also, these uncertainties bring extra challenges to task offloading under the oblivious adversary attack and data privacy risks. In this article, we develop a new adversarial online algorithm with bandit feedback based on the adversarial multi-armed bandit theory, to enable scalable and low-complexity offloading decision making on the fog node selection toward minimizing the offloading service cost in terms of delay and energy. The key is to implicitly tune the exploration bonus in the selection and assessment rules of the designed algorithm, taking into account volatile resource supply and demand. We theoretically prove that the input-size dependent selection rule allows to choose a suitable fog node without exploring the sub-optimal actions, and also that an appropriate score patching rule allows to quickly adapt to evolving circumstances, which reduces variance and bias simultaneously, thereby achieving a better exploitation-exploration balance. Simulation results verify the effectiveness and robustness of the proposed algorithm. | computer science |
Cosmic-ray electrons (CREs) originating from the star-forming discs of spiral galaxies frequently form extended radio haloes that are best observable in edge-on galaxies. For the present study we selected two nearby edge-on galaxies from the CHANG-ES survey, NGC 891 and 4565, which differ largely in halo size and SFR. To figure out how such differences are related to the CRE transport in disc and halo, we use wide-band 1.5 and 6 GHz VLA observations obtained in the B, C, and D configurations, and combine the 6 GHz images with Effelsberg observations to correct for missing short spacings. We study the spatially resolved non-thermal spectral index distribution in terms of CRE spectral ageing, compute total magnetic field strengths assuming energy equipartition between CRs and magnetic fields, and also determine synchrotron scale heights. Based on the vertical profiles of synchrotron intensity and spectral index, we create purely advective and purely diffusive CRE transport models by numerically solving the 1D diffusion-loss equation. In particular, we investigate for the first time the radial dependence of synchrotron and magnetic field scale heights, advection speeds and diffusion coefficients in these two galaxies. We find the spectral index distribution of NGC 891 to be mostly consistent with continuous CRE injection, while in NGC 4565 the local synchrotron spectra are more in line with discrete-epoch CRE injection (JP or KP models). This implies that CRE injection timescales are lower than the synchrotron cooling timescales. The scale height of NGC 891 increases with radius, indicating that synchrotron losses are significant. NGC 891 is probably dominated by advective CRE transport at a velocity of $\gtrsim150\,\mathrm{km\,s^{-1}}$. In contrast, NGC 4565 is diffusion-dominated up to $z=1$ kpc or higher, with a diffusion coefficient of $\geq2\times10^{28}\,\mathrm{cm^2\,s^{-1}}$. | astrophysics |
Transportation agencies monitor freeway performance using various measures such as VMT (Vehicle Miles Traveled), VHD (Vehicle Hours of Delay), and VHT (Vehicle Hours Traveled). Public transportation agencies typically rely on point detector data to estimate these freeway performance measures. Point detectors, such as inductive loops, cannot capture the travel time for a corridor, which can lead to inaccurate performance measure estimation. This research develops a hybrid method, which estimates freeway performance measures using a mix of probe data from third parties and data from traditional point detectors. Using a simulated I-210 model, the overall framework using multiple data sources is evaluated and compared with the traditional point detector-based estimation method. In the traditional method, point speeds are estimated with the flow and occupancy values using the g-factors. Data from 5% of the total vehicles are used to generate the third-party vendor provided travel time data. The analysis is conducted for multiple scenarios, including peak and off-peak periods. Findings suggest that fusing data from both third-party vendors and point detectors can help estimate performance measures better, compared to the traditional method, in scenarios that have noticeable traffic demand on freeways. | electrical engineering and systems science |
Relativistic outflows are believed to be a common feature of black hole X-ray binaries at the lowest accretion rates, when they are in their `quiescent' spectral state. However, we still lack a detailed understanding of how quiescent jet emission varies with time. Here we present 24 years of archival radio observations (from the Very Large Array and the Very Long Baseline Array) of the black hole X-ray binary V404 Cygni in quiescence (totalling 150 observations from 1.4 -- 22 GHz). The observed flux densities follow lognormal distributions with means and standard deviations of (<log f_nu>, sigma) = (-0.53, 0.19) and (-0.53, 0.30) at 4.9 and 8.4 GHz, respectively (where f_nu is the flux density in units of mJy). As expected, the average radio spectrum is flat with a mean and standard deviation of (<alpha_r >, sigma_alpha_r)= (0.02, 0.65) where f_nu \propto nu^alpha_r. We find that radio flares that increase the flux density by factors of 2 -- 4 over timescales as short as <10 min are commonplace, and that long-term variations (over 10--4000 day timescales) are consistent with shot noise impulses that decay to stochastic variations on timescales <10 days (and perhaps as short as tens of minutes to several hours). We briefly compare the variability characteristics of V404 Cygni to jetted active galactic nuclei, and we conclude with recommendations on how to account for variability when placing quiescent black hole X-ray binary candidates with radio luminosities comparable to V404 Cygni (L_r ~ 1e28 erg/s) onto the radio/X-ray luminosity plane. | astrophysics |
The panel data regression models have gained increasing attention in different areas of research including but not limited to econometrics, environmental sciences, epidemiology, behavioral and social sciences. However, the presence of outlying observations in panel data may often lead to biased and inefficient estimates of the model parameters, resulting in unreliable inferences when the least squares (LS) method is applied. We propose extensions of the M-estimation approach with a data-driven selection of tuning parameters to achieve a desirable level of robustness against outliers without loss of estimation efficiency. The consistency and asymptotic normality of the proposed estimators have also been proved under some mild regularity conditions. The finite sample properties of the existing and proposed robust estimators have been examined through an extensive simulation study and an application to macroeconomic data. Our findings reveal that the proposed methods often exhibit improved estimation and prediction performance in the presence of outliers and are consistent with the traditional LS method when there is no contamination. | statistics |
Understanding the dynamics of authors is relevant to predict and quantify performance in science. While the relationship between recent and future citation counts is well-known, many relationships between scholarly metrics at the author-level remain unknown. In this context, we performed an analysis of author-level metrics extracted from subsequent periods, focusing on visibility, productivity and interdisciplinarity. First, we investigated how metrics controlled by the authors (such as references diversity and productivity) affect their visibility and citation diversity. We also explore the relation between authors' interdisciplinarity and citation counts. The analysis in a subset of Physics papers revealed that there is no strong correlation between authors' productivity and future visibility for most of the authors. A higher fraction of strong positive correlations though was found for those with a lower number of publications. We also found that reference diversity computed at the author-level may positively impact authors' future visibility. The analysis of metrics impacting future interdisciplinarity suggests that productivity may play a role only for low-productivity authors. We also found a surprisingly strong positive correlation between references diversity and interdisciplinarity, suggesting that an increase in diverse citing behavior may be related to a future increase in authors' interdisciplinarity. Finally, interdisciplinarity and visibility were found to be moderately positively associated: significant positive correlations were observed for 30% of authors with lower productivity. | computer science |
We study the orientifold of the ${\mathcal{N}} = 1$ superconformal field theories describing D3-branes probing the Suspended Pinch Point singularity, as well as the orientifolds of its non-chiral $\mathbb{Z}'_n$ orbifolds. Using $a$-maximization, we find that these models realize a mechanism analogous to the one recently found for the orientifold of the complex Calabi-Yau cone over the Pseudo del Pezzo surface PdP$_{3c}$: they all flow to a new IR fixed point such that the value of the $a$-charge at large $N$ is less than half that of the oriented theory. We also find that the value of $a$ coincides with the charge of specific orientifolds of the toric singularities $L^{(\bar{n},\bar{n},\bar{n})}$ with $\bar{n}=3n/2$ for $n$ even or $L^{(\bar{n},\bar{n}+1,\bar{n})}$ with $\bar{n}=(3n{-}1)/2$ for $n$ odd, suggesting the existence of an IR duality. | high energy physics theory |
In type II superstring theory, the vacuum amplitude at a given loop order $g$ can receive contributions from the boundary of the compactified, genus $g$ supermoduli space of curves $\overline{\mathfrak M}_g$. These contributions capture the long distance or infrared behaviour of the amplitude. The boundary parametrises degenerations of genus $g$ super Riemann surfaces. A holomorphic projection of the supermoduli space onto its reduced space would then provide a way to integrate the holomorphic, superstring measure and thereby give the superstring vacuum amplitude at $g$-loop order. However, such a projection does not generally exist over the bulk of the supermoduli spaces in higher genera. Nevertheless, certain boundary divisors in $\partial\overline{\mathfrak M}_g$ may holomorphically map onto a bosonic space upon composition with universal morphisms, thereby enabling an integration of the holomorphic, superstring measure here. Making use of ansatz factorisations of the superstring measure near the boundary, our analysis shows that the boundary contributions to the three loop vacuum amplitude will vanish in closed oriented type II superstring theory with unbroken spacetime supersymmetry. | high energy physics theory |
The mean values of a non-homogeneously parameterized generating exponential are obtained and investigated for the periodic Heisenberg XX model. The norm-trace generating function of boxed plane partitions with fixed volume of their diagonal parts is obtained as the N-particle average of the generating exponential. The generating function of self-avoiding walks of random-turn vicious walkers is obtained in terms of circulant matrices, which leads to generalizations of Ramus's identity. Under various specifications of the generating exponential, N-particle averages arise for a set of inconsecutive flipped spins and for powers of the first moment of the flipped-spins distribution at large length of the chain. These averages are expressed through the numbers of closed trajectories with constrained initial/final positions. The estimates at large temporal parameter are expressed through the numbers of diagonally restricted plane partitions characterized by fixed values of the main diagonal trace or by fixed heights of the diagonal columns in one-to-one correspondence with the flipped-spin positions. | condensed matter |
In this paper, we investigate a five-dimensional Dirac fermion on a quantum graph that consists of a single vertex and $N$ loops. We find that the model possesses a rich structure of boundary conditions for wavefunctions on the quantum graph and they can be classified into $(2N+1)$ distinct categories. It is then shown that there appear degenerate four-dimensional chiral massless fermions in the four-dimensional mass spectrum. We briefly discuss how our model could naturally solve the problems of the fermion generation, the fermion mass hierarchy and the origin of the $CP$-violating phase. | high energy physics theory |
Aggregation of large databases in a specific format is a frequently used process to make the data easily manageable. Interval-valued data is one of the data types that is generated by such an aggregation process. Using traditional methods to analyze interval-valued data results in loss of information, and thus, several interval-valued data models have been proposed to gather reliable information from such data types. On the other hand, recent technological developments have led to high dimensional and complex data in many application areas, which may not be analyzed by traditional techniques. Functional data analysis is one of the most commonly used techniques to analyze such complex datasets. While the functional extensions of many traditional statistical techniques are available, the functional form of interval-valued data has not been studied well. This paper introduces the functional forms of some well-known regression models that take interval-valued data. The proposed methods are based on the function-on-function regression model, where both the response and predictor/s are functional. Through several Monte Carlo simulations and empirical data analysis, the finite sample performance of the proposed methods is evaluated and compared with the state-of-the-art. | statistics |
The long-term behavior of chaotic flows is investigated by means of time-dependent frequency analysis. The system under test consists of an electrically conducting fluid, confined between two differentially rotating spheres. The spherical setup is exposed to an axial magnetic field. The classical Fourier Transform method provides a first estimation of the time dependence of the frequencies associated with the flow, as well as its volume-averaged properties. It is however unable to detect strange attractors close to regular solutions in the Feigenbaum as well as Newhouse-Ruelle-Takens bifurcation scenarios. It is shown that Laskar's frequency algorithm is sufficiently accurate to identify these strange attractors and thus is an efficient tool for the classification of chaotic flows in high-dimensional dynamical systems. Our analysis of several chaotic solutions, obtained at different magnetic field strengths, reveals a strong robustness of the main frequency of the flow. This frequency is associated with an azimuthal drift and it is very close to the frequency of the underlying unstable rotating wave. In contrast, the main frequency of volume-averaged properties can vary almost one order of magnitude as the magnetic forcing is decreased. We conclude that, at the moderate differential rotation considered, unstable rotating waves provide a good description of the variation of the main time scale of any flow with respective variations in the magnetic field. | physics |
Light storage in an optical fiber is an attractive component in quantum optical delay line technologies. Although silica-core optical fibers are excellent at transmitting broadband optical signals, it is challenging to tailor their dispersive properties to slow down a light pulse or store it in the silica core for a long delay time. Coupling a dispersive and coherent medium with an optical fiber is promising for supporting long optical delays. Here, we load cold Rb atomic vapor into an optical trap inside a hollow-core photonic crystal fiber, store the phase of the light in a long-lived spin-wave formed by atoms, and retrieve it after a fully controllable delay time using electromagnetically-induced-transparency (EIT). We achieve over 50 ms of storage time, and the result is equivalent to 8.7x10^-5 dB s^-1 of propagation loss in an optical fiber. Our demonstration could be used for buffering and regulating classical and quantum information flow between remote networks. | physics |
We examine the propagation and flavor oscillations of neutrinos under the influence of gravitational waves (GWs) with an arbitrary polarization. We rederive the effective Hamiltonian for the system of three neutrino flavors using the perturbative approach. Then, using this result, we consider the evolution of neutrino flavors in stochastic GWs with a general energy density spectrum. The equation for the density matrix for neutrino flavors is obtained and solved analytically. As an application, we study the flavor content of a neutrino beam emitted in a supernova type-II explosion. We obtain the analytical expressions for the contributions of GWs to the neutrino fluxes and for the damping decrement, which describes the attenuation of the fluxes to their asymptotic values. We are able to evaluate qualitatively the effect of various sources of stochastic GWs on the evolution of neutrino fluxes. We prove that the major contribution is by GWs emitted by merging supermassive black holes. The implication of the obtained results for the measurement of astrophysical neutrinos with neutrino telescopes is discussed. | high energy physics phenomenology |
It has been recently claimed by two different groups that the spectral modulation observed in gamma rays from Galactic pulsars and supernova remnants can be due to conversion of photons into ultra-light axion-like-particles (ALPs) in large-scale Galactic magnetic fields. While we show the required best-fit photon-ALP coupling, $g_{a\gamma} \sim 2 \times 10^{-10}$ GeV${}^{-1}$, to be consistent with constraints from observations of photon-ALP mixing in vacuum, this is in conflict with other bounds, specifically from the CAST solar axion limit, from the helium-burning lifetime in globular clusters, and from the non-observation of gamma rays in coincidence with SN 1987A. In order to reconcile these different results, we propose that environmental effects in matter would suppress the ALP production in dense astrophysical plasma, allowing to relax previous bounds and make them compatible with photon-ALP conversions in the low-density Galactic medium. If this explanation is correct, the claimed ALP signal would be within the reach of next-generation laboratory experiments such as ALPS II. | high energy physics phenomenology |
Despite the advent of Grover's algorithm for unstructured search, its successful implementation on near-term quantum devices is still limited. We apply three strategies to reduce the errors associated with implementing quantum search algorithms. Our improved search algorithms have been implemented on the IBM quantum processors. Using them, we demonstrate three- and four-qubit search algorithms with higher success probabilities compared to previous works. We present the successful execution of the five-qubit search on the IBM quantum processor for the first time. The results have been benchmarked using the degraded ratio, which is the ratio between the experimental and the theoretical success probabilities. The fast decay of the degraded ratio supports our divide-and-conquer strategy. Our proposed strategies are also useful for the implementation of quantum search algorithms in the post-NISQ era. | quantum physics |
The self-organization of strongly interacting electrons into superlattice structures underlies the properties of many quantum materials. How these electrons arrange within the superlattice dictates what symmetries are broken and what ground states are stabilized. Here we show that cryogenic scanning transmission electron microscopy enables direct mapping of local symmetries and order at the intra-unit-cell level in the model charge-ordered system Nd$_{1/2}$Sr$_{1/2}$MnO$_{3}$. In addition to imaging the prototypical site-centered charge order, we discover the nanoscale coexistence of an exotic intermediate state which mixes site and bond order and breaks inversion symmetry. We further show that nonlinear coupling of distinct lattice modes controls the selection between competing ground states. The results demonstrate the importance of lattice coupling for understanding and manipulating the character of electronic self-organization and highlight a novel method for probing local order in a broad range of strongly correlated systems. | condensed matter |
In this work we study equisingularity in a one-parameter flat family of generically reduced curves. We consider several equisingularity criteria, such as topological triviality, Whitney equisingularity and strong simultaneous resolution. In this context, we prove that Whitney equisingularity is equivalent to strong simultaneous resolution and also to the constancy of the Milnor number and the multiplicity of the fibers. These results extend known results on reduced curves to the case of flat deformations of generically reduced curves. When the family $(X,0)$ is topologically trivial, we also characterize Whitney equisingularity through the Cohen-Macaulay property of a certain local ring associated to the parameter space of the family. | mathematics |
We present a generalized phenomenological formalism for analyzing the original E\"{o}tv\"{o}s experiment in the presence of gravity and a generic "5th force." To date no evidence for a 5th force has emerged since its presence was suggested by a 1986 reanalysis of the 1922 publication coauthored by E\"{o}tv\"{o}s, Pek\`ar, and Fekete (EPF). However, our generalized analysis introduces new mechanisms capable in principle of accounting for the EPF data, while at the same time avoiding detection by most recent experiments carried out to date. As an example, some of these mechanisms raise the possibility that the EPF signal could have arisen from an unexpected direction if it originated from the motion of the Earth through a medium. | high energy physics phenomenology |
The formation of the solar system's terrestrial planets concluded with a period of giant impacts. Previous works examining the volatile loss caused by the impact shock in the moon-forming impact find atmospheric losses of at most 20-30 per cent and essentially no loss of oceans. However, giant impacts also result in thermal heating, which can lead to significant atmospheric escape via a Parker-type wind. Here we show that H2O and other high-mean molecular weight outgassed species can be efficiently lost through this thermal wind if present in a hydrogen-dominated atmosphere, substantially altering the final volatile inventory of terrestrial planets. Specifically, we demonstrate that a giant impact of a Mars-sized embryo with a proto-Earth can remove several Earth oceans' worth of H2O, and other heavier volatile species, together with a primordial hydrogen-dominated atmosphere. These results may offer an explanation for the observed depletion in Earth's light noble gas budget and for its depleted xenon inventory, which suggest that Earth underwent significant atmospheric loss by the end of its accretion. Because planetary embryos are massive enough to accrete primordial hydrogen envelopes and because giant impacts are stochastic and occur concurrently with other early atmospheric evolutionary processes, our results suggest a wide diversity in terrestrial planet volatile budgets. | astrophysics |
The fifth generation of mobile communication networks will support a large set of new services and applications. One important use case is remote area coverage for broadband Internet access. This use case has significant social and economic impact, since a considerable percentage of the global population living in low-populated areas does not have Internet access, and the communication infrastructure in rural areas can be used to improve agribusiness productivity. The aim of this paper is to analyze the performance of a 5G for Remote Areas transceiver, implemented on field programmable gate array based hardware for real-time processing. This transceiver employs the latest digital communication techniques, such as the generalized frequency division multiplexing waveform combined with a 2 by 2 multiple-input multiple-output diversity scheme and polar channel coding. The performance of the prototype is evaluated regarding its out-of-band emissions and bit error rate under an AWGN channel. | electrical engineering and systems science |
It has been shown that if $T$ is a complex matrix, then $\omega(T)=\frac{1}{n}\sup\left\{|\mathrm{Tr}\, X|;\ X\in W^n(T)\right\}=\frac{1}{n}\sup\left\{\|X\|_1;\ X\in W^n(T)\right\}=\sup\left\{\omega(X);\ X\in W^n(T)\right\}$, where $n$ is a positive integer, $\omega(T)$ is the numerical radius and $W^n(T)$ is the $n$'th matricial range of $T$. | mathematics |
The powerful FR II radio galaxy Cygnus A exhibits primary and secondary hotspots in each lobe. A 2 Msec Chandra X-ray image of Cygnus A has revealed an approximately circular hole, with a radius of 3.9 kpc, centered on the primary hotspot in the eastern radio lobe, hotspot E. We infer the distribution of X-ray emission on our line-of-sight from an X-ray surface brightness profile of the radio lobe adjacent to the hole and use it to argue that the hole is excavated from the radio lobe. The surface brightness profile of the hole implies a depth at least 1.7 $\pm$ 0.3 times greater than its projected width, requiring a minimum depth of 13.3 $\pm$ 2.3 kpc. A similar hole observed in the 5 GHz VLA radio map reinforces the argument for a cavity lying within the lobe. We argue that the jet encounters the shock compressed intracluster medium at hotspot E, passing through one or more shocks as it is deflected back into the radio lobe. The orientation of Cygnus A allows the outflow from hotspot E to travel almost directly away from us, creating an elongated cavity, as observed. These results favor models for multiple hotspots in which an FR II jet is deflected at a primary hotspot, then travels onward to deposit the bulk of its power at a secondary hotspot, rather than the dentist drill model. | astrophysics |
The significant increase in world population and urbanisation has brought several important challenges, in particular regarding the sustainability, maintenance and planning of urban mobility. At the same time, the exponential increase of computing capability and of available sensor and location data has offered the potential for innovative solutions to these challenges. In this work, we focus on the challenge of traffic forecasting and review the recent development and application of graph neural networks (GNN) to this problem. GNNs are a class of deep learning methods that directly process the input as graph data. This leverages more directly the spatial dependencies of traffic data and makes use of the advantages of deep learning, producing state-of-the-art results. We introduce and review the emerging topic of GNNs, including their most common variants, with a focus on their application to traffic forecasting. We address the different ways of modelling traffic forecasting as a (temporal) graph, the different approaches developed so far to combine the graph and temporal learning components, as well as current limitations and research opportunities. | computer science |
The resilience of quantum entanglement to a classicality-inducing environment is tied to fundamental aspects of quantum many-body systems. The dynamics of entanglement has recently been studied in the context of measurement-induced entanglement transitions, where the steady-state entanglement collapses from a volume-law to an area-law at a critical measurement probability $p_{c}$. Interestingly, there is a distinction in the value of $p_{c}$ depending on how well the underlying unitary dynamics scramble quantum information. For strongly chaotic systems, $p_{c} > 0$, whereas for weakly chaotic systems, such as integrable models, $p_{c} = 0$. In this work, we investigate these measurement-induced entanglement transitions in a system where the underlying unitary dynamics are many-body localized (MBL). We demonstrate that the emergent integrability in an MBL system implies a qualitative difference in the nature of the measurement-induced transition depending on the measurement basis, with $p_{c} > 0$ when the measurement basis is scrambled and $p_{c} = 0$ when it is not. This feature is not found in Haar-random circuit models, where all local operators are scrambled in time. When the transition occurs at $p_{c} > 0$, we use finite-size scaling to obtain the critical exponent $\nu = 1.3(2)$, close to the value for 2+0D percolation. We also find a dynamical critical exponent of $z = 0.98(4)$ and logarithmic scaling of the R\'{e}nyi entropies at criticality, suggesting an underlying conformal symmetry at the critical point. This work further demonstrates how the nature of the measurement-induced entanglement transition depends on the scrambling nature of the underlying unitary dynamics. This leads to further questions on the control and simulation of entangled quantum states by measurements in open quantum systems. | quantum physics |
Tree-cut width is a graph parameter introduced by Wollan that is an analogue of treewidth for the immersion order on graphs in the following sense: the tree-cut width of a graph is functionally equivalent to the largest size of a wall that can be found in it as an immersion. In this work we propose a variant of the definition of tree-cut width that is functionally equivalent to the original one, but for which we can state and prove a tight duality theorem relating it to naturally defined dual objects: appropriately defined brambles and tangles. Using this result we also propose a game characterization of tree-cut width. | mathematics |
Quality estimation (QE) is the task of automatically evaluating the quality of translations without human-translated references. Calculating BLEU between the input sentence and its round-trip translation (RTT) was once considered as a metric for QE; however, it was found to be a poor predictor of translation quality. Recently, various pre-trained language models have made breakthroughs in NLP tasks by providing semantically meaningful word and sentence embeddings. In this paper, we apply semantic embeddings to RTT-based QE. Our method achieves the highest correlations with human judgments, compared to previous WMT 2019 quality estimation metric task submissions. While backward translation models can be a drawback when using RTT, we observe that with semantic-level metrics, RTT-based QE is robust to the choice of the backward translation system. Additionally, the proposed method shows consistent performance for both SMT and NMT forward translation systems, implying that the method does not penalize a certain type of model. | computer science |
We derive the Hamiltonian of a superconducting circuit that comprises a single-Josephson-junction flux qubit and an LC oscillator. If we keep the qubit's lowest two energy levels, the derived circuit Hamiltonian takes the form of the quantum Rabi Hamiltonian, which describes a two-level system coupled to a harmonic oscillator, regardless of the coupling strength. To investigate contributions from the qubit's higher energy levels, we numerically calculate the transition frequencies of the circuit Hamiltonian. We find that the qubit's higher energy levels mainly cause an overall shift of the entire spectrum, but the energy level structure up to the seventh excited states can still be fitted well by the quantum Rabi Hamiltonian even in the case where the coupling strength is larger than the frequencies of the qubit and the oscillator, i.e., when the qubit-oscillator circuit is in the deep-strong-coupling regime. We also confirm that some of the paradoxical properties of the quantum Rabi Hamiltonian in the deep-strong-coupling regime, e.g. the non-negligible number of photons and the nonzero expectation value of the flux in the oscillator in the ground state, arise from the circuit Hamiltonian as well. | quantum physics |
The natural BMO (bounded mean oscillation) conditions suggested by scalar-valued results are known to be insufficient for the boundedness of operator-valued paraproducts. Accordingly, the boundedness of operator-valued singular integrals has only been available under versions of the classical ``$T(1)\in BMO$'' assumptions that are not easily checkable. Recently, Hong, Liu and Mei (J. Funct. Anal. 2020) observed that the situation improves remarkably for singular integrals with a symmetry assumption, so that a classical $T(1)$ criterion still guarantees their $L^2$-boundedness on Hilbert-space-valued functions. Here, these results are extended to general UMD (unconditional martingale differences) spaces with the same natural BMO condition for symmetrised paraproducts, and requiring in addition only the usual replacement of uniform bounds by $R$-bounds in the case of general singular integrals. In particular, under these assumptions, we obtain boundedness results on non-commutative $L^p$ spaces for all $1<p<\infty$, without the need to replace the domain or the target by a related non-commutative Hardy space as in the results of Hong et al. for $p\neq 2$. | mathematics |
The design of new devices and experiments in science and engineering has historically relied on the intuitions of human experts. This credo, however, has changed. In many disciplines, computer-inspired design processes, also known as inverse design, have augmented the capability of scientists. Here we visit different fields of physics in which computer-inspired designs are applied. We will meet vastly diverse computational approaches based on topological optimization, evolutionary strategies, deep learning, reinforcement learning or automated reasoning. Then we turn our attention specifically to quantum physics. In the quest for designing new quantum experiments, we face two challenges: first, quantum phenomena are unintuitive; second, the number of possible configurations of quantum experiments explodes combinatorially. To overcome these challenges, physicists began to use algorithms for computer-designed quantum experiments. We focus on the most mature and \textit{practical} approaches that scientists have used to find new complex quantum experiments, which experimentalists have subsequently realized in the laboratory. The underlying idea is a highly efficient topological search, which allows for scientific interpretability. In that way, some of the computer designs have led to the discovery of new scientific concepts and ideas -- demonstrating how computer algorithms can genuinely contribute to science by providing unexpected inspirations. We discuss several extensions and alternatives based on optimization and machine learning techniques, with the potential of accelerating the discovery of practical computer-inspired experiments or concepts in the future. Finally, we discuss what we can learn from the different approaches in the fields of physics, and raise several fascinating possibilities for future research. | quantum physics |
Disease classification relying solely on imaging data attracts great interest in medical image analysis. Current models could be further improved, however, by also employing Electronic Health Records (EHRs), which contain rich information on patients and findings from clinicians. It is challenging to incorporate this information into disease classification due to the high reliance on clinician input in EHRs, limiting the possibility for automated diagnosis. In this paper, we propose \textit{variational knowledge distillation} (VKD), which is a new probabilistic inference framework for disease classification based on X-rays that leverages knowledge from EHRs. Specifically, we introduce a conditional latent variable model, where we infer the latent representation of the X-ray image with the variational posterior conditioning on the associated EHR text. By doing so, the model acquires the ability to extract the visual features relevant to the disease during learning and can therefore perform more accurate classification for unseen patients at inference based solely on their X-ray scans. We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs. The results show that the proposed variational knowledge distillation can consistently improve the performance of medical image classification and significantly surpasses current methods. | electrical engineering and systems science |
This work (Part (I)), together with its companion (Part (II) [45]), develops a new framework for stochastic functional Kolmogorov equations, which are nonlinear stochastic differential equations depending on the current as well as the past states. Because of the complexity of the results, it seems instructive to divide our contributions into two parts. In contrast to the existing literature, our effort is to advance the knowledge by allowing delay and past dependence, yielding essential utility to a wide range of applications. A long-standing question of fundamental importance pertaining to biology and ecology is: What are the minimal necessary and sufficient conditions for long-term persistence and extinction (or for long-term coexistence of interacting species) of a population? Regardless of the particular applications encountered, persistence and extinction are properties shared by Kolmogorov systems. While there are many excellent treatises of stochastic-differential-equation-based Kolmogorov equations, the work on stochastic Kolmogorov equations with past dependence is still scarce. Our aim here is to answer the aforementioned basic question. This work, Part (I), is devoted to the characterization of persistence, whereas its companion, Part (II) [45], is devoted to extinction. The main techniques used in this paper include the newly developed functional It\^o formula and asymptotic coupling and Harris-like theory for infinite dimensional systems specialized to functional equations. General theorems for stochastic functional Kolmogorov equations are developed first. Then a number of applications are examined to obtain new results substantially covering, improving, and extending the existing literature. Furthermore, these conditions reduce to those of Kolmogorov systems when there is no past dependence. | mathematics |
We consider various probabilistic games with piles for one player or two players. In each round of a game, a player randomly chooses to add $a$ or $b$ chips to his pile, where $a$ and $b$ are not necessarily positive. If a player has a negative number of chips after making his play, then the number of chips he collects will stay at $0$ and the game will continue. All the games we consider satisfy these rules. A game ends when one player collects $n$ chips for the first time. Each player is allowed to start with $s$ chips, where $s\geq 0$. We consider various cases of $(a,b)$, including in particular the pairs $(1,-1)$ and $(2,-1)$. We investigate the probability generating functions of the number of turns required to end the games. We derive interesting recurrence relations for the sequences of such functions in $n$ and write these generating functions as rational functions. As an application, we derive other statistics for the games, which include the average number of turns required to end a game and other higher moments. | mathematics |