text | label
---|---|
Imputation of missing values is a strategy for handling non-responses in surveys or data loss in measurement processes, and it may be more effective than ignoring them. When the variable represents a count, the literature dealing with this issue is scarce. Likewise, if problems of over- or under-dispersion are observed, generalisations of the Poisson distribution are recommended for carrying out imputation. In order to assess the performance of various regression models in the imputation of a discrete variable compared to classical counting models, this work presents a comprehensive simulation study considering a variety of scenarios and real data. To do so, we compared the results of estimations using only complete data with those using imputations based on the Poisson, negative binomial, Hermite, and COMPoisson distributions, and on the ZIP and ZINB models for excess zeros. The results of this work reveal that the COMPoisson distribution generally provides better results in any dispersion scenario, especially when the amount of missing information is large. When the variable presenting missing values is a count, the most widely used method is to assume that a classical Poisson model is the best alternative for imputing the missing counts; however, in real-life research this assumption is not always correct, and it is common to find count variables exhibiting overdispersion or underdispersion, for which the Poisson model is no longer the best to use in imputation. In several of the scenarios considered, the performance of the methods analysed differs, which indicates that it is important to analyse dispersion and the possible presence of excess zeros before deciding on the imputation method to use. The COMPoisson model performs well because it is flexible in handling counts exhibiting over-, under-, or equidispersion.
|
statistics
|
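As a rough illustration of the kind of model-based count imputation compared in the entry above, the following minimal Python sketch fits a Poisson regression on the complete cases and draws imputed values for the missing counts. The data frame, column names and the plain Poisson choice are assumptions for illustration only; the study itself also compares negative binomial, Hermite, COMPoisson, ZIP and ZINB fits.

```python
# Illustrative sketch: impute missing counts from a fitted count model.
# The simulated data and the Poisson choice are assumptions, not the study's setup.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = rng.poisson(np.exp(0.5 + 0.8 * df["x"]))   # true counts
df.loc[rng.random(200) < 0.3, "y"] = np.nan           # 30% missing at random

obs = df.dropna()
model = sm.GLM(obs["y"], sm.add_constant(obs["x"]),
               family=sm.families.Poisson()).fit()    # fit on complete cases only

miss = df["y"].isna()
mu = model.predict(sm.add_constant(df.loc[miss, "x"]))
df.loc[miss, "y"] = rng.poisson(mu)                   # draw imputed counts from the model
```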
For a fixed set of positive integers $R$, we say $\mathcal{H}$ is an $R$-uniform hypergraph, or $R$-graph, if the cardinality of each edge belongs to $R$. An $R$-graph $\mathcal{H}$ is \emph{covering} if every vertex pair of $\mathcal{H}$ is contained in some hyperedge. For a graph $G=(V,E)$, a hypergraph $\mathcal{H}$ is called a \textit{Berge}-$G$, denoted by $BG$, if there exists an injection $f: E(G) \to E(\mathcal{H})$ such that for every $e \in E(G)$, $e \subseteq f(e)$. In this note, we define a new type of Ramsey number, namely the \emph{cover Ramsey number}, denoted as $\hat{R}^R(BG_1, BG_2)$, as the smallest integer $n_0$ such that for every covering $R$-uniform hypergraph $\mathcal{H}$ on $n \geq n_0$ vertices and every $2$-edge-coloring (blue and red) of $\mathcal{H}$ , there is either a blue Berge-$G_1$ or a red Berge-$G_2$ subhypergraph. We show that for every $k\geq 2$, there exists some $c_k$ such that for any finite graphs $G_1$ and $G_2$, $R(G_1, G_2) \leq \hat{R}^{[k]}(BG_1, BG_2) \leq c_k \cdot R(G_1, G_2)^3$. Moreover, we show that for each positive integer $d$ and $k$, there exists a constant $c = c(d,k)$ such that if $G$ is a graph on $n$ vertices with maximum degree at most $d$, then $\hat{R}^{[k]}(BG,BG) \leq cn$.
|
mathematics
|
Seeds have been packed in a dielectric barrier device where cold atmospheric plasma has been generated to improve their germinative properties. Special attention has been paid to understanding the resulting plasma electrical properties through an equivalent electrical model whose experimental validity has been demonstrated here. In this model, the interelectrode gap is subdivided into 4 types of elementary domains, according to whether they contain electric charges (or not) and according to their type of medium (gas, seed or insulator). The model enables the study of the influence of seeds on the plasma electrical properties by measuring and deducing several parameters (charge per filament, gas capacitance, plasma power, ...) either in the no-bed configuration (i.e. no seeds in the reactor) or in the packed-bed configuration (seeds in the reactor). In the latter case, we have investigated how seeds can influence the plasma electrical parameters, considering six specimens of seeds (beans, radishes, coriander, lentils, sunflowers and corn). The influence of molecular oxygen (0-100 sccm) mixed with a continuous helium flow (2 slm) is also investigated, especially through the filaments' breakdown voltages, the charge per filament and the plasma power. It is demonstrated that such bed-packing leads to an increase in the gas capacitance, to a decrease in the beta-parameter and to variations of the filaments' breakdown voltages in a seed-dependent manner. Finally, we show how the equivalent electrical model can be used to assess the total volume of the contact points and the capacitance of the seeds in the packed-bed configuration, and we demonstrate that germinative effects can be induced by plasma on four of the six agronomical specimens.
|
physics
|
Many exact coherent states (ECS) arising in wall-bounded shear flows have an asymptotic structure at extreme Reynolds number Re in which the effective Reynolds number governing the streak and roll dynamics is O(1). Consequently, these viscous ECS are not suitable candidates for quasi-coherent structures away from the wall that necessarily are inviscid in the mean. Specifically, viscous ECS cannot account for the singular nature of the inertial domain, where the flow self-organizes into uniform momentum zones (UMZs) separated by internal shear layers and the instantaneous streamwise velocity develops a staircase-like profile. In this investigation, a large-Re asymptotic analysis is performed to explore the potential for a three-dimensional, short streamwise- and spanwise-wavelength instability of the embedded shear layers to sustain a spatially-distributed array of much larger-scale, effectively inviscid streamwise roll motions. In contrast to other self-sustaining process theories, the rolls are sufficiently strong to differentially homogenize the background shear flow, thereby providing a mechanistic explanation for the formation and maintenance of UMZs and interlaced shear layers that respects the leading-order balance structure of the mean dynamics.
|
physics
|
Hadronic showers transfer a sizeable fraction of their energy to electromagnetic sub-showers. We show that the generation of "secondary" dark photons in these sub-showers is significant and typically dominates the production at low dark photon masses. The resulting dark photons are, however, substantially less energetic than the ones originating from meson decays. We illustrate this point both semi-analytically and through Monte Carlo simulations. Existing limits on vector-mediator scenarios for light dark matter are updated with the inclusion of the new production processes.
|
high energy physics phenomenology
|
We identify two Batalin-Vilkovisky algebra structures, one obtained by Kowalzig and Krahmer on the Hochschild cohomology of an Artin-Schelter regular algebra with semisimple Nakayama automorphism and the other obtained by Lambre, Zhou and Zimmermann on the Hochschild cohomology of a Frobenius algebra also with semisimple Nakayama automorphism, provided that these two algebras are Koszul dual to each other.
|
mathematics
|
A radiatively stable de Sitter spacetime is constructed by considering an intrinsically non-commutative and generalized-geometric formulation of string theory, which is related to a family of F-theory models endowed with non-trivial anisotropic axion-dilaton backgrounds. In particular, the curvature of the canonically conjugate dual space provides for a positive cosmological constant to leading order, that satisfies a radiatively stable see-saw-like formula, which in turn induces the dark energy in the observed spacetime. We also comment on the non-commutative phase of the non-perturbative formulations of string theory/quantum gravity implied by this approach.
|
high energy physics theory
|
Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance. DeepSphere, a method based on a graph representation of the sampled sphere, strikes a controllable balance between these two desiderata. This contribution is twofold. First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of vertices and neighbors. Second, we evaluate DeepSphere on relevant problems. Experiments show state-of-the-art performance and demonstrate the efficiency and flexibility of this formulation. Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay. Our code is available at https://github.com/deepsphere
|
computer science
|
The lack of adequate training data is one of the major hurdles in WiFi-based activity recognition systems. In this paper, we propose Wi-Fringe, a WiFi CSI-based, device-free human gesture recognition system that recognizes named gestures, i.e., activities and gestures that have a semantically meaningful name in the English language, as opposed to arbitrary free-form gestures. Given a list of activities (only their names in English text), along with zero or more training examples (WiFi CSI values) per activity, Wi-Fringe is able to detect all activities at runtime. In other words, a subset of the activities that Wi-Fringe detects does not require any training examples at all.
|
electrical engineering and systems science
|
We construct a $C^1$ symplectic twist map $g$ of the annulus that has an essential invariant curve $\Gamma$ such that $\Gamma$ is not differentiable and $g$ restricted to $\Gamma$ is minimal.
|
mathematics
|
Distributed generation is being widely utilized, so the basic theme of this research is to gain hands-on experience in synchronizing a Distributed Energy Resource (DER) with the Mains Grid. A control algorithm is implemented for energy supply from both sources, i.e. the DER and the Mains Grid, according to the cost and availability of each source, so that synchronization is not lost when the load changes. LabVIEW software is used for the implementation of the desired system. Validation is done by experimenting in the lab. In order to keep the system stable, the algorithm has been developed to shift the load from one bus to the other depending upon the load requirements. The project is a test bed that can be used for further experimentation and can prove helpful in developing a basic understanding of Smart Switching using LabVIEW.
|
electrical engineering and systems science
|
We prove over fields of power series the analogues of several Diophantine approximation results obtained over the field of real numbers. In particular, we establish the power series analogue of Kronecker's theorem for matrices, together with a quantitative form of it, which can also be seen as a transference inequality between uniform approximation and inhomogeneous approximation. Special attention is devoted to the one-dimensional case. Namely, we give a necessary and sufficient condition on an irrational power series $\alpha$ which ensures that, for some positive $\varepsilon$, the set of $\theta$ satisfying $$ \liminf_{Q \in \mathbb{F}_q[z], \,\, \deg Q \to \infty} \| Q \| \cdot |\langle Q \alpha - \theta \rangle| \geq \varepsilon $$ has full Hausdorff dimension.
|
mathematics
|
It is well-known that 1-planar graphs have minimum degree at most 7, and not hard to see that some 1-planar graphs have minimum degree exactly 7. In this note we show that any such 1-planar graph has at least 24 vertices, and this is tight.
|
mathematics
|
We study magnetoelectric (magneto-current) effects in a diamond structure under antiferro quadrupole (AFQ) orders. The AFQ orders break the spatial inversion symmetry and cause a current-induced magnetization. The current-induced magnetization strongly depends on the type of the order parameters and the direction of the current, which provides a way for the experimental identification of AFQ order parameters. We also discuss how the current-induced magnetization under the AFQ orders in the diamond structure can be intuitively understood in terms of charge-imbalanced solenoids.
|
condensed matter
|
One of the hallmarks of quantum statistics, tightly entwined with the concept of topological phases of matter, is the prediction of anyons. Although anyons are predicted to be realized in certain fractional quantum Hall systems, they have not yet been unambiguously detected in experiment. Here we introduce a simple quantum impurity model, where bosonic or fermionic impurities turn into anyons as a consequence of their interaction with the surrounding many-particle bath. A cloud of phonons dresses each impurity in such a way that it effectively attaches fluxes/vortices to it and thereby converts it into an Abelian anyon. The corresponding quantum impurity model, first, provides a new approach to the numerical solution of the many-anyon problem, along with a new concrete perspective on anyons as emergent quasiparticles built from composite bosons or fermions. More importantly, the model paves the way towards realizing anyons using impurities in crystal lattices as well as ultracold gases. In particular, we consider two heavy electrons interacting with a two-dimensional lattice crystal in a magnetic field, and show that when the impurity-bath system is rotated at the cyclotron frequency, impurities behave as anyons as a consequence of the angular momentum exchange between the impurities and the bath. A possible experimental realization is proposed by identifying the statistics parameter in terms of the mean square distance of the impurities and the magnetization of the impurity-bath system, both of which are accessible to experiment. Another proposed application is impurities immersed in a two-dimensional weakly interacting Bose gas.
|
condensed matter
|
We present the first linear-polarization mosaicked observations performed by the Atacama Large Millimeter/submillimeter Array (ALMA). We mapped the Orion Kleinmann-Low (Orion-KL) nebula using super-sampled mosaics at 3.1 and 1.3 mm as part of the ALMA Extension and Optimization of Capabilities (EOC) program. We derive the magnetic field morphology in the plane of the sky by assuming that dust grains are aligned with respect to the ambient magnetic field. At the center of the nebula, we find a quasi-radial magnetic field pattern that is aligned with the explosive CO outflow up to a radius of approximately 12 arc-seconds (~ 5000 au), beyond which the pattern smoothly transitions into a quasi-hourglass shape resembling the morphology seen in larger-scale observations by the James Clerk Maxwell Telescope (JCMT). We estimate an average magnetic field strength $\langle B\rangle = 9.4$ mG and a total magnetic energy of 2 x 10^45 ergs, which is three orders of magnitude less than the energy in the explosive CO outflow. We conclude that the field has been overwhelmed by the outflow and that a shock is propagating from the center of the nebula, where the shock front is seen in the magnetic field lines at a distance of ~ 5000 au from the explosion center.
|
astrophysics
|
The OSIRIS-REx asteroid sample-return mission is investigating primitive near-Earth asteroid (101955) Bennu. Thousands of images will be acquired by the MapCam instrument onboard the spacecraft, an imager with four color filters based on the Eight-Color Asteroid Survey (ECAS): $b$' (473 nm), $v$ (550 nm), $w$ (698 nm), and $x$ (847 nm). This set of filters will allow identification and characterization of the absorption band centered at 700 nm and associated with hydrated silicates. In this work, we present and validate a spectral clustering methodology for application to the upcoming MapCam images of the surface of Bennu. Our procedure starts with the projection, calibration, and photometric correction of the images. In a second step, we apply a K-means algorithm and we use the Elbow criterion to identify natural clusters. This methodology allows us to find distinct areas with spectral similarities, which are characterized by parameters such as the spectral slope $S$' and the center and depth of the 700-nm absorption band, if present. We validate this methodology using images of (1) Ceres from NASA's Dawn mission. In particular, we analyze the Occator crater and Ahuna Mons. We identify one spectral cluster--located in the outer parts of the Occator crater interior--showing the 700-nm hydration band centered at 698 $\pm$ 7 nm and with a depth of 3.4 $\pm$ 1.0 \%. We interpret this finding in the context of the crater's near-surface geology.
|
astrophysics
|
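The clustering step described in the entry above (K-means with the Elbow criterion applied to calibrated multi-band pixel spectra) can be sketched as follows. The synthetic four-band reflectance array and the by-inspection elbow choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch of K-means clustering of per-pixel multi-band spectra,
# with the number of clusters chosen from the inertia ("elbow") curve.
# Synthetic data; the band values are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
pixels = rng.normal(size=(5000, 4))        # 5000 pixels x 4 filter bands (b', v, w, x)

inertias = {}
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    inertias[k] = km.inertia_              # within-cluster sum of squares

for k, v in inertias.items():
    print(k, round(v, 1))                  # inspect the elbow of this curve to pick k

k_best = 3                                 # assumed choice from the elbow for this toy run
labels = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(pixels)
print(np.bincount(labels))                 # pixels per spectral cluster
```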
The Quantum Approximate Optimisation Algorithm was proposed as a heuristic method for solving combinatorial optimisation problems on near-term quantum computers and may be among the first algorithms to perform useful computations in the post-supremacy, noisy, intermediate scale era of quantum computing. In this work, we exploit the recently proposed digital-analog quantum computation paradigm, in which the versatility of programmable universal quantum computers and the error resilience of quantum simulators are combined to improve platforms for quantum computation. We show that the digital-analog paradigm is suited to the variational quantum approximate optimisation algorithm, due to its inherent resilience against coherent errors, by performing large-scale simulations and providing analytical bounds for its performance in devices with finite single-qubit operation times. We observe regimes of single-qubit operation speed in which the considered variational algorithm provides a significant improvement over non-variational counterparts.
|
quantum physics
|
Network models provide an efficient way to represent many real-life problems mathematically. In the last few decades, the field of network optimization has witnessed an upsurge of interest among researchers and practitioners. The network models considered in this thesis are broadly classified into four types: the transportation problem, the shortest path problem, the minimum spanning tree problem and the maximum flow problem. Quite often, we come across situations when the decision parameters of network optimization problems are not precise and are characterized by various forms of uncertainty arising from factors like insufficient or incomplete data, lack of evidence, inappropriate judgements and randomness. In the deterministic environment, there exist several studies on network optimization problems. However, the literature contains few investigations of single- and multi-objective network optimization problems under diverse uncertain frameworks. This thesis proposes seven different network models under different uncertain paradigms. The uncertain programming techniques used to formulate the uncertain network models are (i) the expected value model, (ii) the chance-constrained model and (iii) the dependent chance-constrained model. Subsequently, the corresponding crisp equivalents of the uncertain network models are solved using different solution methodologies. The solution methodologies used in this thesis can be broadly categorized as classical methods and evolutionary algorithms. The classical methods used in this thesis are the Dijkstra and Kruskal algorithms, the modified rough Dijkstra algorithm, the global criterion method, the epsilon constraint method and the fuzzy programming method. Among the evolutionary algorithms, we have proposed a varying-population genetic algorithm with indeterminate crossover and considered two multi-objective evolutionary algorithms.
|
computer science
|
We introduce a framework for Bayesian experimental design (BED) with implicit models, where the data-generating distribution is intractable but sampling from it is still possible. In order to find optimal experimental designs for such models, our approach maximises mutual information lower bounds that are parametrised by neural networks. By training a neural network on sampled data, we simultaneously update network parameters and designs using stochastic gradient-ascent. The framework enables experimental design with a variety of prominent lower bounds and can be applied to a wide range of scientific tasks, such as parameter estimation, model discrimination and improving future predictions. Using a set of intractable toy models, we provide a comprehensive empirical comparison of prominent lower bounds applied to the aforementioned tasks. We further validate our framework on a challenging system of stochastic differential equations from epidemiology.
|
statistics
|
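A minimal sketch of the idea in the entry above, gradient-based experimental design with a neural mutual-information lower bound, is given below. The linear toy simulator, the network size and the NWJ-style bound are assumptions made for illustration; the framework described in the abstract supports several such bounds and much more general implicit models.

```python
# Illustrative sketch: jointly train a critic network and a design by
# stochastic gradient ascent on an NWJ-style mutual-information lower bound.
# The "simulator" below is a toy assumption, not the paper's models.
import torch
import torch.nn as nn

torch.manual_seed(0)
critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
design = torch.tensor(0.1, requires_grad=True)            # scalar design d
opt = torch.optim.Adam(list(critic.parameters()) + [design], lr=1e-2)

def simulate(theta, d):
    # toy differentiable simulator: y = theta * d + noise
    return theta * d + 0.1 * torch.randn_like(theta)

for step in range(2000):
    theta = torch.randn(256, 1)                            # prior samples
    y = simulate(theta, design)
    joint = critic(torch.cat([theta, y], dim=1))           # samples from p(theta, y | d)
    y_shuf = y[torch.randperm(y.shape[0])]                 # approximate product of marginals
    marg = critic(torch.cat([theta, y_shuf], dim=1))
    mi_lb = joint.mean() - torch.exp(marg - 1.0).mean()    # NWJ lower bound on MI
    loss = -mi_lb                                          # ascend the bound
    opt.zero_grad(); loss.backward(); opt.step()

print(float(design), float(mi_lb))                         # optimised design and bound value
```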
At high latitudes, many cities adopt a centralized heating system to improve energy generation efficiency and to reduce pollution. In multi-tier systems, so-called district heating, there are few efficient approaches for flow rate control during the heating process. In this paper, we describe theoretical methods to solve this problem by deep reinforcement learning and propose a cloud-based heating control system for implementation. A real-world case study shows the effectiveness and practicability of the proposed system when controlled by humans, and simulated experiments with deep reinforcement learning show that about 1985.01 gigajoules of heat and 42276.45 tons of water are saved per hour compared with manual control.
|
electrical engineering and systems science
|
We introduce a general scheme to consistently truncate equations of motion for Green's functions. Our scheme is guaranteed to generate physical Green's functions with real excitation energies and positive spectral weights. There are free parameters in our scheme, akin to mean field parameters, that may be determined to get as good an approximation to the physics as possible. As a test case we apply our scheme to a two-pole approximation for the 2D Hubbard model. At half-filling we find an insulating solution with several interesting properties: it has a low expectation value of the energy and it gives upper and lower Hubbard bands with the full non-interacting bandwidth in the large U limit. Away from half-filling, in particular in the intermediate interaction regime, our scheme allows for several different phases with different numbers of Fermi surfaces and topologies.
|
condensed matter
|
For the field of high energy physics to continue to have a bright future, priority within the field must be given to investments in the development of both evolutionary and transformational detector development that is coordinated across the national laboratories and with the university community, international partners and other disciplines. While the fundamental science questions addressed by high energy physics have never been more compelling, there is acute awareness of the challenging budgetary and technical constraints when scaling current technologies. Furthermore, many technologies are reaching their sensitivity limit and new approaches need to be developed to overcome the currently irreducible technological challenges. This situation is unfolding against a backdrop of declining funding for instrumentation, both at the national laboratories and in particular at the universities. This trend has to be reversed for the country to continue to play a leadership role in particle physics, especially in this most promising era of imminent new discoveries that could finally break the hugely successful, but limited, Standard Model of fundamental particle interactions. In this challenging environment it is essential that the community invest anew in instrumentation and optimize the use of the available resources to develop new innovative, cost-effective instrumentation, as this is our best hope to successfully accomplish the mission of high energy physics. This report summarizes the current status of instrumentation for high energy physics, the challenges and needs of future experiments and indicates high priority research areas.
|
physics
|
It is shown that field propagation in linear and quadratic gradient-index (GRIN) media obeys the same rules as free propagation, in the sense that a field propagating in free space has a (mathematical) form that may be {\it exported} to those particular GRIN media. The Bohm potential is introduced in order to explain the reason for such behavior: it changes the dynamics by modifying the original potential. The concrete cases of two different initial conditions for each potential are analyzed.
|
physics
|
The roll-out of a flexible ramping product provides Independent System Operators (ISOs) with the ability to address ramping capacity shortages. ISOs procure flexible ramping capability by committing more generating units or reserving a certain amount of headroom of committed units. In this paper, we raise the concern that the procured ramping capability may not be deployable in real-time operations. As a solution to this non-delivery issue, we provide a new ramping product designed to improve reliability and reduce the expected operating cost. The trajectory of start-up and shutdown processes is also considered in determining the ramping capability. A new optimization problem is formulated using mixed-integer linear programming so that it can be readily applied to practical power system operation. The performance of the proposed method is verified through simulations on a small-scale system and the IEEE 118-bus system. The simulation results demonstrate that the proposed formulation can realize improved generation scheduling, alleviating capacity shortages.
|
mathematics
|
The processes that shape the extended atmospheres of red supergiants (RSGs), heat their chromospheres, create molecular reservoirs, drive mass loss, and create dust remain poorly understood. Betelgeuse's V-band "Great Dimming" event of 2019 September/2020 February and its subsequent rapid brightening provide a rare opportunity to study these phenomena. Two different explanations have emerged to explain the dimming: new dust appeared in our line of sight attenuating the photospheric light, or a large portion of the photosphere had cooled. Here we present five years of Wing three-filter (A, B, and C band) TiO and near-IR photometry obtained at the Wasatonic Observatory. These reveal that parts of the photosphere had a mean effective temperature ($T_{\rm eff}$) significantly lower than that found by Levesque & Massey (2020). Synthetic photometry from MARCS model photospheres and spectra reveals that the V-band, TiO-index, and C-band photometry, and the previously reported 4000-6800 Angstrom spectra, can be quantitatively reproduced if there are multiple photospheric components, as hinted at by VLT-SPHERE images (Montarges et al. 2020). If the cooler component is $\Delta T_{\rm eff} \ge 250$ K cooler than 3650 K, then no new dust is required to explain the available empirical constraints. A coincidence of the dominant short-period ($\sim 430$ day) and long-period ($\sim 5.8$ yr) V-band variations occurred near the time of deep minimum (Guinan et al. 2019). This is in tandem with the strong correlation between V magnitude and photospheric radial velocities recently reported by Dupree et al. (2020b). These suggest that the cooling of a large fraction of the visible star has a dynamic origin related to the photospheric motions, perhaps arising from pulsation or large-scale convective motions.
|
astrophysics
|
We study the evolution of the binary black hole (BBH) mass distribution across cosmic time. The second gravitational-wave transient catalog (GWTC-2) from LIGO/Virgo contains BBH events out to redshifts $z \sim 1$, with component masses in the range $\sim5$--$80\,M_\odot$. In this catalog, the biggest black holes, with $m_1 \gtrsim 45\,M_\odot$, are only found at the highest redshifts, $z \gtrsim 0.4$. We ask whether the absence of high-mass BBH observations at low redshift indicates that the astrophysical BBH mass distribution evolves: the biggest BBHs only merge at high redshift, and cease merging at low redshift. Alternatively, this feature might be explained by gravitational-wave selection effects. Modeling the BBH primary mass spectrum as a power law with a sharp maximum mass cutoff (Truncated model), we find that the cutoff increases with redshift ($> 99.9\%$ credibility). An abrupt cutoff in the mass spectrum is expected from (pulsational) pair instability supernova simulations; however, GWTC-2 is only consistent with a Truncated mass model if the location of the cutoff increases from $45^{+13}_{-5}\,M_\odot$ at $z < 0.4$ to $80^{+16}_{-13}\,M_\odot$ at $z > 0.4$. Alternatively, if the primary mass spectrum has a break in the power law (Broken power law) at ${38^{+15}_{-8}\,M_\odot}$, rather than a sharp cutoff, the data are consistent with a non-evolving mass distribution. In this case, the overall rate of mergers, at all masses, increases with increasing redshift. Future observations will confidently distinguish between a sharp maximum mass cutoff that evolves with redshift and a non-evolving mass distribution with a gradual taper, such as a Broken power law. After $\sim 100$ BBH merger observations, a continued absence of high-mass, low-redshift events would provide a clear signature that the mass distribution evolves with redshift.
|
astrophysics
|
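For concreteness, a minimal sketch of a truncated power-law primary-mass spectrum with a redshift-dependent upper cutoff, as referenced in the entry above; the normalization and the two-bin parameterization of the cutoff are illustrative assumptions.

```latex
% Sketch of a Truncated primary-mass model with a redshift-dependent cutoff;
% normalization omitted and the two-bin cutoff form is an assumption.
p(m_1 \mid \alpha, m_{\min}, m_{\max}(z)) \;\propto\;
\begin{cases}
  m_1^{-\alpha}, & m_{\min} \le m_1 \le m_{\max}(z),\\[2pt]
  0, & \text{otherwise},
\end{cases}
\qquad
m_{\max}(z<0.4) \approx 45^{+13}_{-5}\,M_\odot,\quad
m_{\max}(z>0.4) \approx 80^{+16}_{-13}\,M_\odot .
```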
Skyrmion crystals (SkX) are periodic alignments of magnetic skyrmions, i.e., a type of topologically protected spin texture. Compared with ordinary crystals, they can be drastically deformed under anisotropic effects because they are composed of field patterns whose deformation does not cause any bond breaking. This exotic ductility of SkX brings about great tunability of their collective excitations, called emergent phonons, which are vital for magnonics applications. The question is how to quantitatively determine the emergent phonons of a distorted SkX. Here we systematically study the long-wavelength emergent phonons of SkX distorted by (a) a negative exchange anisotropy, and (b) a tilted magnetic field. In both cases, deformation and structural transitions of the SkX thoroughly influence the frequency, the anisotropy of the vibrational pattern and dispersion relation, and the coupling between lattice vibration and in-lattice vibration for all modes. Tilted magnetic fields are very effective in tuning the emergent phonons, such that all modes except the Goldstone mode can be excited by AC magnetic fields when a tilted bias field is present.
|
condensed matter
|
Explicit string models which can realize inflation and low-energy supersymmetry are notoriously difficult to achieve. Given that sequestering requires very specific configurations, supersymmetric particles are in general expected to be very heavy implying that the neutralino dark matter should be overproduced in a standard thermal history. However, in this paper we point out that this is generically not the case since early matter domination driven by string moduli can dilute the dark matter abundance down to the observed value. We argue that generic features of string compactifications, namely a high supersymmetry breaking scale and late time epochs of modulus domination, might imply superheavy neutralino dark matter with mass around $10^{10}-10^{11}$ GeV. Interestingly, this is the right range to explain the recent detection of ultra-high-energy neutrinos by IceCube and ANITA via dark matter decay.
|
high energy physics phenomenology
|
We present results of a computation of NLO QCD corrections to the production of an off-shell top--antitop pair in association with an off-shell $\text{W}^+$ boson in proton--proton collisions. As the calculation is based on the full matrix elements for the process $\text{p}\text{p}\to {\text{e}}^+\nu_{\text{e}}\,\mu^-\bar{\nu}_\mu\,\tau^+\nu_\tau\,{\text{b}}\,\bar{\text{b}}$, all off-shell, spin-correlation, and interference effects are included. The NLO QCD corrections are about $20\%$ for the integrated cross-section. Using a dynamical scale, the corrections to most distributions are at the same level, while some distributions show much larger $K$-factors in suppressed regions of phase space. We have performed a second calculation based on a double-pole approximation. While the corresponding results agree with the full calculation within a few per cent for integrated cross-sections, the discrepancy can reach $10\%$ and more in regions of phase space that are not dominated by top--antitop production. As a consequence, on-shell calculations should only be trusted to this level of accuracy.
|
high energy physics phenomenology
|
We design sub-wavelength, high-performance non-reciprocal optical devices using recently discovered magnetic Weyl semimetals. These passive bulk topological materials exhibit an anomalous Hall effect, which results in magneto-optical effects that are orders of magnitude larger than those in conventional materials, without the need for any external magnetic bias. We design two optical isolators, in the Faraday and Voigt geometries respectively. These isolators have dimensions that are reduced by three orders of magnitude compared to conventional magneto-optical configurations. Our results indicate that magnetic Weyl semimetals may open up new avenues in photonics for the design of various nonreciprocal components.
|
physics
|
The Zakharov-Kuznetsov equation in spatial dimension $d\geq 5$ is considered. The Cauchy problem is shown to be globally well-posed for small initial data in critical spaces and it is proved that solutions scatter to free solutions as $t \to \pm \infty$. The proof is based on i) novel endpoint non-isotropic Strichartz estimates which are derived from the $(d-1)$-dimensional Schr\"odinger equation, ii) transversal bilinear restriction estimates, and iii) an interpolation argument in critical function spaces. Under an additional radiality assumption, a similar result is obtained in dimension $d=4$.
|
mathematics
|
We consider a scheme for on-demand teleportation of a dual-rail electron qubit state, based on single-electron sources and detectors. The scheme has a maximal efficiency of 25%, which is limited both by the shared entangled state as well as the Bell-state measurement. We consider two experimental implementations, realizable with current technology. The first relies on surface acoustic waves, where all the ingredients are readily available. The second is based on Lorentzian voltage pulses in quantum Hall edge channels. As single-electron detection is not yet experimentally established in these systems, we consider a tomographic detection of teleportation using current correlators up to (and including) third order. For both implementations we take into account environmental effects.
|
quantum physics
|
We study a simple extension of the Zee model, in which a discrete $Z_2$ symmetry imposed in the original model is replaced by a global $U(1)$ symmetry retaining the same particle content. Due to the $U(1)$ symmetry with flavor-dependent charge assignments, the lepton sector has an additional source of flavor-violating Yukawa interactions with a controllable structure, while the quark sector does not at tree level. We show that current neutrino oscillation data can be explained, under constraints from lepton-flavor-violating decays of charged leptons, for a successful charge assignment of the $U(1)$ symmetry. In such a scenario, we find a characteristic pattern of lepton-flavor-violating decays of the additional Higgs bosons, which can be a smoking-gun signature at collider experiments.
|
high energy physics phenomenology
|
In dimension $n = 2m-2 \geq 4$, adjunction theoretic scrolls over a smooth $m$-fold may fail to be classical scrolls, due to the existence of divisorial fibers. A $4$-dimensional scroll $(X,L)$ over $\mathbb P^3$ of this type is considered, and the equation of its Hilbert curve $\Gamma$ is determined in two ways, one of which relies on the fact that $(X,L)$ is at the same time a classical scroll over a threefold $Y \not=\mathbb P^3$. It turns out that $\Gamma$ does not perceive divisorial fibers. The equation we obtain also shows that a question raised in a previous article by Beltrametti, Lanteri and Sommese has a negative answer in general for non-classical scrolls over a $3$-fold. More precisely, the answer for $(X,L)$ is negative or positive according to whether $(X,L)$ is regarded as an adjunction theoretic scroll or as a classical scroll; in other words, it is the answer to this question that distinguishes whether jumping fibers exist or not.
|
mathematics
|
We study the long-term behaviour of the sunspot penumbra-to-umbra area ratio by analyzing the recently digitized Kodaikanal white-light data (1923-2011). We implement an automatic umbra extraction method and compute the ratio over eight solar cycles (Cycles 16-23). Although the average ratio does not show any variation with spot latitude, cycle phase or strength, it increases from 5.5 to 6 as the sunspot size increases from 100 $\mu$hem to 2000 $\mu$hem. Interestingly, our analysis also reveals that the ratio for smaller sunspots (area $<$ 100 $\mu$hem) does not show the long-term systematic trend that was earlier reported from the Royal Observatory, Greenwich (RGO) photographic results. To verify this, we apply our automated extraction technique to Solar and Heliospheric Observatory (SOHO)/Michelson Doppler Imager (MDI) continuum images (1996-2010). Results from these data not only confirm our previous findings, but also show the robustness of our analysis method.
|
astrophysics
|
We consider the kinetic theory of dilute gases in the Boltzmann--Grad limit. We propose a new perspective based on a large deviation estimate for the probability of the empirical distribution dynamics. Assuming Boltzmann molecular chaos hypothesis (Stosszahlansatz), we derive a large deviation rate function, or action, that describes the stochastic process for the empirical distribution. The quasipotential for this action is the negative of the entropy, as should be expected. While the Boltzmann equation appears as the most probable evolution, corresponding to a law of large numbers, the action describes a genuine reversible stochastic process for the empirical distribution, in agreement with the microscopic reversibility. As a consequence, this large deviation perspective gives the expected meaning to the Boltzmann equation and explains its irreversibility as the natural consequence of limiting the physical description to the most probable evolution. More interestingly, it also quantifies the probability of any dynamical evolution departing from solutions of the Boltzmann equation. This picture is fully compatible with the heuristic classical view of irreversibility, but makes it much more precise in various ways. We also explain that this large deviation action provides a natural gradient structure for the Boltzmann equation.
|
condensed matter
|
We consider the quantisation of the Artin dynamical system defined on the fundamental region of the modular group. In the classical regime, the geodesic flow in the fundamental region represents one of the most chaotic dynamical systems: it has mixing of all orders, Lebesgue spectrum and non-zero Kolmogorov entropy. As a result, the classical correlation functions decay exponentially. In order to investigate the influence of the classical chaotic behaviour on the quantum-mechanical properties of the Artin system, we calculated the corresponding thermal quantum-mechanical correlation functions. It was conjectured by Maldacena, Shenker and Stanford that classical chaos can be diagnosed in thermal quantum systems by using an out-of-time-order correlation function as well as the square of the commutator of operators separated in time. We demonstrated that the two- and four-point correlation functions of the Liouville-like operators decay exponentially with a temperature-dependent exponent. As conjectured, the square of the commutator of the Liouville-like operators separated in time grows exponentially, similar to the exponential divergence of trajectories in the classical regime. The corresponding exponent does not saturate the maximal growth condition.
|
high energy physics theory
|
With the increase of dirty data, data cleaning has become a crucial step in data analysis. Most existing algorithms rely on either qualitative techniques (e.g., data rules) or quantitative ones (e.g., statistical methods). In this paper, we present a novel hybrid data cleaning framework on top of Markov logic networks (MLNs), termed MLNClean, which is capable of cleaning both schema-level and instance-level errors. MLNClean mainly consists of two cleaning stages, namely first cleaning multiple data versions separately (each of which corresponds to one data rule), and then deriving the final clean data based on the multiple data versions. Moreover, we propose a series of techniques/concepts, e.g., the MLN index and the concepts of reliability score and fusion score, to facilitate the cleaning process. Extensive experimental results on both real and synthetic datasets demonstrate the superiority of MLNClean over the state-of-the-art approach in terms of both accuracy and efficiency.
|
computer science
|
A concise approach is proposed to determine a reduced-order, control-design-oriented dynamical model of a multi-stage hot sheet metal forming process, starting from a high-dimensional coupled thermo-mechanical model. The obtained reduced-order nonlinear parametric model serves as the basis for the design of an Extended Kalman filter to estimate the spatio-temporal temperature distribution in the sheet metal blank during the forming process based on sparse local temperature measurements. To address modeling and approximation errors and to capture physical effects neglected during the approximation, such as the phase transformation from austenite to martensite, a disturbance model is integrated into the Kalman filter to achieve joint state and disturbance estimation. The extension to spatio-temporal property estimation is introduced. The approach is evaluated for a hole-flanging process using a thermo-mechanical simulation model evaluated in LS-DYNA. Here, the number of states is reduced from approximately 17 000 to 30 while preserving the relevant dynamics, and the computational time is 1000 times shorter. The performance of the combined temperature and disturbance estimation is validated in different simulation scenarios with three spatially fixed temperature measurements.
|
electrical engineering and systems science
|
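The joint state and disturbance estimation mentioned in the entry above can be illustrated with a toy linear Kalman filter on an augmented state; the scalar thermal model, noise levels and random-walk disturbance model are assumptions for illustration only (the paper itself uses an Extended Kalman filter on a reduced-order nonlinear model).

```python
# Illustrative sketch: joint state/disturbance estimation with a linear Kalman
# filter on the augmented state [temperature, disturbance]. Toy model; all
# numbers are assumptions, not the paper's forming-process model.
import numpy as np

rng = np.random.default_rng(0)
a = 0.95                                  # toy first-order cooling dynamics
A = np.array([[a, 1.0],                   # x_{k+1} = a*x_k + d_k
              [0.0, 1.0]])                # d_{k+1} = d_k (random-walk disturbance)
H = np.array([[1.0, 0.0]])                # only the temperature is measured
Q = np.diag([1e-3, 1e-4])                 # process noise (state, disturbance)
R = np.array([[0.05]])                    # measurement noise

x_hat = np.zeros(2)                       # [temperature, disturbance] estimate
P = np.eye(2)

x_true, d_true = 20.0, 0.5                # unknown truth used to generate data
for k in range(200):
    x_true = a * x_true + d_true
    y = x_true + rng.normal(scale=np.sqrt(R[0, 0]))

    # predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (np.array([y]) - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print(x_hat)                              # estimated temperature and disturbance
```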
Despite quantum networking concepts, designs, and hardware becoming increasingly mature, there is no consensus on the optimal wavelength for free-space systems. We present an in-depth analysis of a daytime free-space quantum channel as a function of wavelength and atmospheric spatial coherence (Fried coherence length). We choose quantum key distribution bit yield as a performance metric in order to reveal the ideal wavelength choice for an actual qubit-based protocol under realistic atmospheric conditions. Furthermore, our analysis represents a rigorous framework to analyze requirements for spatial, spectral, and temporal filtering. These results will help guide the development of free-space quantum communication and networking systems. In particular, our results suggest that shorter wavelengths should be considered for free-space quantum communication systems. Our results are also interpreted in the context of atmospheric compensation by higher-order adaptive optics.
|
quantum physics
|
The Square Kilometre Array (SKA) will be the first low-frequency instrument with the capability to directly image the structures of the Epoch of Reionization (EoR). Indeed, deep imaging of the EoR over 5 targeted fields of 20 square degrees each has been selected as the highest-priority science objective for SKA1. Aiming to prepare for this highly challenging observation, we perform an extensive pre-selection of the `quietest' and `cleanest' candidate fields in the southern sky suited for deep imaging of the EoR, using existing catalogs and observations over a broad frequency range. The candidate fields should meet a number of strict criteria to avoid contamination from foreground structures and sources. The candidate fields should also exhibit both the lowest average surface brightness and the smallest variance to ensure uniformity and high-quality deep imaging over the fields. Our selection eventually yields a sample of 7 `ideal' fields of 20 square degrees in the southern sky that could be targeted for deep imaging of the EoR. Finally, these selected fields are convolved with the synthesized beam of SKA1-low stations to ensure that the effect of sidelobes from far-field bright sources is also weak.
|
astrophysics
|
The energy-energy correlator (EEC) is an event shape observable which probes the angular correlations of energy depositions in detectors at high energy collider facilities. It has been investigated extensively in the context of precision QCD. In this work, we introduce a novel definition of EEC adapted to the Breit frame in deep-inelastic scattering (DIS). In the back-to-back limit, the observable we propose is sensitive to the universal transverse momentum dependent (TMD) parton distribution functions and fragmentation functions, and it can be studied within the traditional TMD factorization formalism. We further show that the new observable is insensitive to experimental pseudorapidity cuts, often imposed in the Laboratory frame due to detector acceptance limitations. In this work the singular distributions for the new observable are obtained in soft collinear effective theory up to $\mathcal{O}(\alpha_s^3)$ and are verified by the full QCD calculations up to $\mathcal{O}(\alpha_s^2)$. The resummation in the singular limit is performed up to next-to-next-to-next-to-leading logarithmic accuracy. After incorporating non-perturbative effects, we present a comparison of our predictions to PYTHIA 8 simulations.
|
high energy physics phenomenology
|
Computer vision (CV) techniques try to mimic human capabilities of visual perception to support labor-intensive and time-consuming tasks like the recognition and localization of critical objects. Nowadays, CV increasingly relies on artificial intelligence (AI) to automatically extract useful information from images that can be utilized for decision support and business process automation. However, the focus of extant research is often exclusively on technical aspects when designing AI-based CV systems while neglecting socio-technical facets, such as trust, control, and autonomy. For this purpose, we consider the design of such systems from a hybrid intelligence (HI) perspective and aim to derive prescriptive design knowledge for CV-based HI systems. We apply a reflective, practice-inspired design science approach and accumulate design knowledge from six comprehensive CV projects. As a result, we identify four design-related mechanisms (i.e., automation, signaling, modification, and collaboration) that inform our derived meta-requirements and design principles. This can serve as a basis for further socio-technical research on CV-based HI systems.
|
computer science
|
In supernovae, neutrinos are emitted from a region with a width $r_{\rm eff}$ of a few kilometers (rather than from a surface of infinitesimal width). We study the effect of integration (averaging) over such an extended emission region on collective oscillations. The averaging leads to an additional suppression of the correlation (the off-diagonal element of the density matrix) by a factor $\sim 1/(r_{\rm eff} V_e) \sim 10^{-10}$, where $V_e$ is the matter potential. This factor enters the initial condition for further collective oscillations and, consequently, leads to a delay of the strong flavour transitions. We justify and quantify this picture using a simple example of collective effects in two intersecting fluxes. We have derived the evolution equation for the density matrix elements integrated over the emission region and solved it both numerically and analytically. For the analytic solution we have used linearized equations. We show that the delay of the development of the instability and of the collective oscillations depends logarithmically on the suppression factor due to the averaging (integration). If the instability develops inside the production region, the integration leads not only to a delay but also to a modification of the exponential growth.
|
high energy physics phenomenology
|
The Next Generation Transit Survey (NGTS) is a photometric survey for transiting exoplanets, consisting of twelve identical 0.2-m telescopes. We report a measurement of the transit of HD106315c using a novel observing mode in which multiple NGTS telescopes observed the same target with the aim of increasing the signal-to-noise. Combining the data allows the robust detection of the transit, which has a depth less than 0.1 per cent, rivalling the performance of much larger telescopes. We demonstrate the capability of NGTS to contribute to the follow-up of K2 and TESS discoveries using this observing mode. In particular, NGTS is well-suited to the measurement of shallow transits of bright targets. This is particularly important to improve orbital ephemerides of relatively long-period planets, where only a small number of transits are observed from space.
|
astrophysics
|
We show that the low-temperature phase of a conjugate pair of uncoupled, quantum chaotic, non-Hermitian systems, such as the Sachdev-Ye-Kitaev (SYK) model or the Ginibre ensemble of random matrices, is dominated by replica-symmetry-breaking configurations with a nearly flat free energy that terminates in a first-order phase transition. In the case of the SYK model, we show explicitly that the spectrum of the effective replica theory has a gap. These features are strikingly similar to those induced by wormholes in the gravity path integral, which suggests a close relation between the two configurations. For a non-chaotic SYK model, the results are qualitatively different: the spectrum is gapless in the low-temperature phase and there is an infinite number of second-order phase transitions unrelated to the restoration of replica symmetry.
|
high energy physics theory
|
Random graph alignment refers to recovering the underlying vertex correspondence between two random graphs with correlated edges. This can be viewed as an average-case and noisy version of the well-known NP-hard graph isomorphism problem. For the correlated Erd\"os-R\'enyi model, we prove an impossibility result for partial recovery in the sparse regime, with constant average degree and correlation, as well as a general bound on the maximal reachable overlap. Our bound is tight in the noiseless case (the graph isomorphism problem) and we conjecture that it is still tight with noise. Our proof technique relies on a careful application of the probabilistic method to build automorphisms between tree components of a subcritical Erd\"os-R\'enyi graph.
|
statistics
|
Multi-class classification with a very large number of classes, or extreme classification, is a challenging problem from both statistical and computational perspectives. Most of the classical approaches to multi-class classification, including one-vs-rest or multi-class support vector machines, require the exact estimation of the classifier's margin at both the training and the prediction steps, making them intractable in extreme classification scenarios. In this paper, we study the impact of computing an approximate margin using approximate nearest-neighbor (ANN) search structures combined with locality-sensitive hashing (LSH). This approximation makes it possible to dramatically reduce both the training and the prediction time without a significant loss in performance. We theoretically prove that this approximation does not lead to a significant increase in the risk of the model and provide empirical evidence over five publicly available large-scale datasets, showing that the proposed approach is highly competitive with respect to state-of-the-art approaches in terms of time, memory and performance measures.
|
statistics
|
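A crude sketch of the approximate-margin idea in the entry above: score only the classes retrieved from a random-projection LSH bucket instead of all classes. The sizes, the single hash table and the plain linear scorer are assumptions for illustration; practical systems use several tables and multi-probe lookups, and the paper combines ANN search structures with LSH inside the actual classifiers.

```python
# Illustrative sketch: approximate the multi-class margin by scoring only the
# classes that fall into the query's LSH bucket. All sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_bits = 10_000, 64, 8
W = rng.normal(size=(n_classes, dim))                 # one weight vector per class

planes = rng.normal(size=(n_bits, dim))               # random hyperplanes for LSH
def lsh_code(v):
    bits = (planes @ v.T > 0).astype(np.uint64)       # sign pattern per column
    return bits.T @ (1 << np.arange(n_bits, dtype=np.uint64))  # pack into bucket id

codes = lsh_code(W)                                    # bucket every class once
buckets = {}
for cls, c in enumerate(codes):
    buckets.setdefault(int(c), []).append(cls)

x = rng.normal(size=dim)                               # query example
candidates = buckets.get(int(lsh_code(x[None, :])[0]), [])
if not candidates:                                     # fall back to scoring everything
    candidates = np.arange(n_classes)
scores = W[candidates] @ x                             # margins on candidates only
best = np.asarray(candidates)[np.argmax(scores)]
print(len(candidates), best)                           # far fewer than n_classes scored
```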
A graph algorithm is truly subquadratic if it runs in ${\cal O}(m^b)$ time on connected $m$-edge graphs, for some positive $b < 2$. Roditty and Vassilevska Williams (STOC'13) proved that under plausible complexity assumptions, there is no truly subquadratic algorithm for computing the diameter of general graphs. In this work, we present positive and negative results on the existence of such algorithms for computing the diameter on some special graph classes. Specifically, three vertices in a graph form an asteroidal triple (AT) if between any two of them there exists a path that avoids the closed neighbourhood of the third one. We call a graph AT-free if it does not contain an AT. We first prove that for all $m$-edge AT-free graphs, one can compute all the eccentricities in truly subquadratic ${\cal O}(m^{3/2})$ time. Then, we extend our study to several subclasses of chordal graphs -- all of them generalizing interval graphs in various ways --, as an attempt to understand which of the properties of AT-free graphs, or natural generalizations of the latter, can help in the design of fast algorithms for the diameter problem on broader graph classes. For instance, for all chordal graphs with a dominating shortest path, there is a linear-time algorithm for computing a diametral pair if the diameter is at least four. However, already for split graphs with a dominating edge, under plausible complexity assumptions, there is no truly subquadratic algorithm for deciding whether the diameter is either $2$ or $3$.
|
computer science
|
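As a baseline for the diameter and eccentricity computations discussed in the entry above, the following sketch computes all eccentricities by one BFS per vertex, i.e. in O(nm) time on unweighted graphs; the truly subquadratic O(m^{3/2}) algorithm for AT-free graphs relies on structural arguments not reproduced here.

```python
# Baseline sketch: all eccentricities (hence the diameter) via one BFS per vertex.
from collections import deque

def eccentricities(adj):
    """adj: dict mapping each vertex to an iterable of neighbours (connected graph)."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())
    return ecc

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a path on 5 vertices
ecc = eccentricities(adj)
print(ecc, max(ecc.values()))                              # diameter = 4
```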
Following the recent discovery of the Lauricella string scattering amplitudes (LSSA) and their associated exact SL(K+3,C) symmetry, we give a brief comment on the Gross conjecture regarding "high energy symmetry of string theory".
|
high energy physics theory
|
Extreme events that arise spontaneously in chaotic dynamical systems often have an adverse impact on the system or the surrounding environment. As such, their mitigation is highly desirable. Here, we introduce a novel control strategy for mitigating extreme events in a turbulent shear flow. The controller combines a probabilistic prediction of the extreme events with a deterministic actuator. The predictions are used to actuate the controller only when an extreme event is imminent. When actuated, the controller only acts on the degrees of freedom that are involved in the formation of the extreme events, exerting minimal interference with the flow dynamics. As a result, the attractors of the controlled and uncontrolled systems share the same chaotic core (containing the non-extreme events) and only differ in the tail of their distributions. We propose that such adaptive low-dimensional controllers should be used to mitigate extreme events in general chaotic dynamical systems, beyond the shear flow considered here.
|
physics
|
Recently deep generative models have achieved impressive progress in modeling the distribution of training data. In this work, we present for the first time a generative model for 4D light field patches using variational autoencoders to capture the data distribution of light field patches. We develop a generative model conditioned on the central view of the light field and incorporate this as a prior in an energy minimization framework to address diverse light field reconstruction tasks. While pure learning-based approaches do achieve excellent results on each instance of such a problem, their applicability is limited to the specific observation model they have been trained on. On the contrary, our trained light field generative model can be incorporated as a prior into any model-based optimization approach and therefore extend to diverse reconstruction tasks including light field view synthesis, spatial-angular super resolution and reconstruction from coded projections. Our proposed method demonstrates good reconstruction, with performance approaching end-to-end trained networks, while outperforming traditional model-based approaches on both synthetic and real scenes. Furthermore, we show that our approach enables reliable light field recovery despite distortions in the input.
|
electrical engineering and systems science
|
Two alternative routes are taken to derive, on the basis of the dynamics of a finite number of dumbbells, viscoelasticity in terms of a conformation tensor with fluctuations. The first route is a direct approach using stochastic calculus only, and it serves as a benchmark for the second route, which is guided by thermodynamic principles. In the latter, the Helmholtz free energy and a generalized relaxation tensor play a key role. It is shown that the results of the two routes agree only if a finite-size contribution to the Helmholtz free energy of the conformation tensor is taken into account. Using statistical mechanics, this finite-size contribution is derived explicitly in this paper for a large class of models; this contribution is non-zero whenever the number of dumbbells in the volume of observation is finite. It is noted that the generalized relaxation tensor for the conformation tensor does not need any finite-size correction.
|
condensed matter
|
The electroweak phase transition (EWPT) is considered in the framework of the 3-3-1-1 model for Dark Matter. The phase structure with two or three periods is approximated for the theory with many vacuum expectation values (VEVs) at the TeV and electroweak scales. In this model there are two pictures. The first picture contains two periods of EWPT: a transition $SU(3) \rightarrow SU(2)$ at the 6 TeV scale, and an $SU(2) \rightarrow U(1)$ transition which is the standard-model-like EWPT. The second picture is an EWPT structure containing three periods, in which the first two are similar to those of the first picture, while the third is the symmetry-breaking process of the $U(1)_N$ subgroup. Our study leads to the conclusion that the EWPTs are first-order phase transitions when new bosons act as triggers and their masses are in the range of a few TeV. In particular, in both pictures the maximum strength of the $SU(2) \rightarrow U(1)$ phase transition is equal to 2.12, so this EWPT is not strong. Moreover, neutral fermions, which are candidates for Dark Matter and obey the Fermi-Dirac distribution, can be a negative trigger for the EWPT; however, they do not destroy the first-order EWPT at the TeV scale. Furthermore, in order to obtain a strong first-order EWPT at the TeV scale, the symmetry-breaking processes must produce more bosons than fermions, or the boson masses must be much larger than the fermion masses.
|
high energy physics phenomenology
|
Maximum likelihood (ML) and adversarial learning are two popular approaches for training generative models, and from many perspectives these techniques are complementary. ML learning encourages the capture of all data modes, and it is typically characterized by stable training. However, ML learning tends to distribute probability mass diffusely over the data space, $e.g.$, yielding blurry synthetic images. Adversarial learning is well known to synthesize highly realistic natural images, despite practical challenges like mode dropping and delicate training. We propose an $\alpha$-Bridge to unify the advantages of ML and adversarial learning, enabling the smooth transfer from one to the other via the $\alpha$-divergence. We reveal that generalizations of the $\alpha$-Bridge are closely related to approaches developed recently to regularize adversarial learning, providing insights into that prior work, and further understanding of why the $\alpha$-Bridge performs well in practice.
|
computer science
|
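As a side note on the $\alpha$-Bridge entry above: one common parametrization of the $\alpha$-divergence (Amari's convention; the paper's exact convention may differ) that interpolates between the two KL directions is
$$ D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)}\left(1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\right), $$
with $\lim_{\alpha \to 1} D_\alpha(p\|q) = \mathrm{KL}(p\|q)$ (the mass-covering, ML-like direction) and $\lim_{\alpha \to 0} D_\alpha(p\|q) = \mathrm{KL}(q\|p)$ (the mode-seeking, adversarial-like direction), which is the sense in which a single parameter can bridge the two training regimes.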
Geometrical frustration, quantum entanglement and disorder may prevent long-range order of localized spins with strong exchange interactions, resulting in a novel state of matter. $\kappa$-(BEDT-TTF)$_2$-Cu$_2$(CN)$_3$ is considered the best approximation of this elusive quantum-spin-liquid state, but its ground-state properties remain puzzling. Here we present a multi-frequency electron-spin resonance study down to millikelvin temperatures, revealing a rapid drop of the spin susceptibility at $T^*=6\,\mathrm{K}$. This opening of a spin gap, accompanied by structural modifications, suggests the enigmatic `$6\,\mathrm{K}$-anomaly' as the transition to a valence-bond-solid ground state. We identify an impurity contribution that becomes dominant when the intrinsic spins form singlets. Only probing the electrons directly manifests the pivotal role of defects for the low-energy properties of quantum-spin systems without magnetic order.
|
condensed matter
|
Given a polynomial ring $P$ over a field $K$, an element $g \in P$, and a $K$-subalgebra $S$ of $P$, we deal with the problem of saturating $S$ with respect to $g$, i.e. computing $Sat_g(S) = S[g, g^{-1}]\cap P$. In the general case we describe a procedure/algorithm to compute a set of generators for $Sat_g(S)$ which terminates if and only if it is finitely generated. Then we consider the more interesting case when $S$ is graded. In particular, if $S$ is graded by a positive matrix $W$ and $g$ is an indeterminate, we show that if we choose a term ordering $\sigma$ of $g$-DegRev type compatible with $W$, then the two operations of computing a $\sigma$-SAGBI basis of $S$ and saturating $S$ with respect to $g$ commute. This fact opens the doors to nice algorithms for the computation of $Sat_g(S)$. In particular, under special assumptions on the grading one can use the truncation of a $\sigma$-SAGBI basis and get the desired result. Notably, this technique can be applied to the problem of directly computing some $U$-invariants, classically called semi-invariants, even in the case that $K$ is not the field of complex numbers.
|
mathematics
|
The closest potentially habitable worlds outside our Solar system orbit a different kind of star than our Sun: smaller red dwarf stars. Such stars can flare frequently, bombarding their planets with biologically damaging high-energy UV radiation, placing planetary atmospheres at risk of erosion and bringing the habitability of these worlds into question. However, the surface UV flux on these worlds is unknown. Here we show the first models of the surface UV environments of the four closest potentially habitable exoplanets: Proxima-b, TRAPPIST-1e, Ross-128b, and LHS-1140b assuming different atmospheric compositions, spanning Earth-analogue to eroded and anoxic atmospheres and compare them to levels for Earth throughout its geological evolution. Even for planet models with eroded and anoxic atmospheres, surface UV radiation remains below early Earth levels, even during flares. Given that the early Earth was inhabited, we show that UV radiation should not be a limiting factor for the habitability of planets orbiting M stars. Our closest neighbouring worlds remain intriguing targets for the search for life beyond our Solar system.
|
astrophysics
|
The reported ${17~}$MeV boson - which has been proposed as an explanation of the $^{8}$Be and $^{4}$He anomalies - is investigated in the context of its possible influence on neutron star structure. Implementing $m_{v}=17$ MeV in the nuclear equation of state using different incompressibility values K$_{0}$=245 MeV and K$_{0}$=260 MeV and solving the Tolman-Oppenheimer-Volkoff equations, we estimate an upper limit of ${M_{TOV}\leqslant 2.4M_{\odot}}$ for a non-rotating neutron star, with a radius ${R}$ spanning from ${11.5~}$km to ${14~}$km. Moving away from ${\beta}$-equilibrium with an admixture of 10\% protons and simulating a possible softening of the equation of state due to hyperons, we find that our estimated limits fit quite well within the newest reported studies coming from neutron star merger analyses.
|
high energy physics phenomenology
|
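The neutron-star entry above rests on integrating the Tolman-Oppenheimer-Volkoff (TOV) equations for a chosen equation of state. Below is a minimal Python sketch in geometrized units (G = c = 1) using an illustrative polytropic equation of state as a stand-in for the nuclear EoS discussed in the abstract; the constants, central pressure, and EoS are assumptions for demonstration, not the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative polytropic EoS, P = K * eps**Gamma (geometrized units, G = c = 1).
K, Gamma = 100.0, 2.0
def eps_of_P(P):
    return (P / K) ** (1.0 / Gamma)      # invert the polytrope

def tov_rhs(r, y):
    P, m = y
    if P <= 0.0:
        return [0.0, 0.0]
    eps = eps_of_P(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dPdr, dmdr]

def surface(r, y):                        # stop when the pressure reaches (almost) zero
    return y[0] - 1e-12
surface.terminal = True

def mass_radius(P_c, r_max=100.0):
    r0 = 1e-6                             # start slightly off-centre to avoid r = 0
    sol = solve_ivp(tov_rhs, (r0, r_max), [P_c, 0.0],
                    events=surface, rtol=1e-8, atol=1e-12)
    return sol.t[-1], sol.y[1, -1]        # (radius, gravitational mass) in code units

R, M = mass_radius(P_c=1e-3)              # illustrative central pressure
print(f"R = {R:.3f}, M = {M:.3f} (geometrized code units)")
```

Scanning over central pressures with such an integrator is how a mass-radius curve (and hence an upper mass limit) is obtained for a given equation of state.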
We describe a model for unitary black hole evaporation with no information loss in terms of a quantum computation. We assume that there is a fine tuned interaction between the qubits of the black hole Bell states which is the inverse of premeasurement. Evaporation is unitary due to a projection on the black hole state at the end of each computation whereas information comes out of the black hole by teleportation. The model requires only four operations; the qubit interactions, the Hadamard and CNOT gates and the projection which is nonunitary. The model is a concrete quantum computation that realizes the final black hole state idea with some modifications.
|
high energy physics theory
|
Open quantum systems and the study of decoherence are important for our fundamental understanding of quantum physical phenomena. For practical purposes, there exists a large number of quantum protocols exploiting quantum resources, e.g. entanglement, which allow one to go beyond what is possible to achieve by classical means. We combine concepts from open quantum systems and quantum information science, and give a proof-of-principle experimental demonstration -- with teleportation -- that it is possible to implement efficiently a quantum protocol via a non-Markovian open system. The results show that, at the time of implementation of the protocol, it is not necessary to have the quantum resource in the degree of freedom used for the basic protocol -- as long as there exists some other degree of freedom, or environment of an open system, which contains useful resources. The experiment is based on a pair of photons, where their polarizations act as open-system qubits and frequencies as their environments -- while the path degree of freedom of one of the photons represents the state of Alice's qubit to be teleported to Bob's polarization qubit.
|
quantum physics
|
We revisit several results concerning club principles and nonsaturation of the nonstationary ideal, attempting to improve them in various ways. So we typically deal with a (not necessarily normal) ideal $J$ extending the nonstationary ideal on a regular uncountable (not necessarily successor) cardinal $\kappa$, our goal being to witness the nonsaturation of $J$ by the existence of towers (of length possibly greater than $\kappa^+$).
|
mathematics
|
Non-reciprocal plasmons in current-driven, isotropic, and homogeneous graphene with proximal metallic gates are theoretically explored. Nearby metallic gates screen the Coulomb interactions, leading to linearly dispersive acoustic plasmons residing close to the particle-hole continuum. We show that the applied bias leads to spectrally broadband focused plasmons whose resonance linewidth depends on the angular direction relative to the current flow due to Landau damping. We predict that forward-focused non-reciprocal plasmons are possible with accessible experimental parameters and setup.
|
condensed matter
|
Audio-Visual Scene-Aware Dialog (AVSD) is a task to generate responses when chatting about a given video, which is organized as a track of the 8th Dialog System Technology Challenge (DSTC8). To solve the task, we propose a universal multimodal transformer and introduce the multi-task learning method to learn joint representations among different modalities as well as generate informative and fluent responses. Our method extends the natural language generation pre-trained model to multimodal dialogue generation task. Our system achieves the best performance in both objective and subjective evaluations in the challenge.
|
computer science
|
We propose an approach to estimate the effect of multiple simultaneous interventions in the presence of hidden confounders. To overcome the problem of hidden confounding, we consider the setting where we have access to not only the observational data but also sets of single-variable interventions in which each of the treatment variables is intervened on separately. We prove identifiability under the assumption that the data is generated from a nonlinear continuous structural causal model with additive Gaussian noise. In addition, we propose a simple parameter estimation method by pooling all the data from different regimes and jointly maximizing the combined likelihood. We also conduct comprehensive experiments to verify the identifiability result as well as to compare the performance of our approach against a baseline on both synthetic and real-world data.
|
statistics
|
Automatic height and age estimation of speakers using acoustic features is widely used for the purpose of human-computer interaction, forensics, etc. In this work, we propose a novel approach of using an attention mechanism to build an end-to-end architecture for height and age estimation. The attention mechanism is combined with a Long Short-Term Memory (LSTM) encoder which is able to capture long-term dependencies in the input acoustic features. We modify the conventionally used attention -- which calculates context vectors as the sum of attention only across timeframes -- by introducing a modified context vector which takes into account total attention across encoder units as well, giving us a new cross-attention mechanism. Apart from this, we also investigate a multi-task learning approach for jointly estimating speaker height and age. We train and test our model on the TIMIT corpus. Our model outperforms several approaches in the literature. We achieve a root mean square error (RMSE) of 6.92 cm and 6.34 cm for male and female heights respectively, and an RMSE of 7.85 years and 8.75 years for male and female ages respectively. By tracking the attention weights allocated to different phones, we find that vowel phones are most important while stop phones are least important for the estimation task.
|
computer science
|
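As a rough illustration of the attention pooling idea in the height/age entry above, here is a minimal NumPy sketch of additive attention over a sequence of encoder states. It is a generic single-head attention over timeframes only, not the paper's specific cross-attention over encoder units, and all dimensions, weights, and the read-out layer are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 120, 64                       # frames, encoder state size (placeholders)
H = rng.normal(size=(T, d))          # stand-in for LSTM encoder outputs

# Additive (Bahdanau-style) attention pooling over time.
W = rng.normal(size=(d, d)) * 0.1    # placeholder projection matrix
v = rng.normal(size=(d,)) * 0.1      # placeholder scoring vector

scores = np.tanh(H @ W) @ v          # (T,) unnormalised attention scores
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                 # softmax over timeframes

context = alpha @ H                  # (d,) context vector = weighted sum of states

# A linear read-out would then map the context to (height, age) estimates.
W_out, b_out = rng.normal(size=(2, d)) * 0.1, np.zeros(2)
height_age = W_out @ context + b_out
print(context.shape, height_age)
```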
The intelligibility of natural speech is seriously degraded when exposed to adverse noisy environments. In this work, we propose a deep learning-based speech modification method to compensate for the intelligibility loss, with the constraint that the root mean square (RMS) level and duration of the speech signal are maintained before and after modifications. Specifically, we utilize an iMetricGAN approach to optimize the speech intelligibility metrics with generative adversarial networks (GANs). Experimental results show that the proposed iMetricGAN outperforms conventional state-of-the-art algorithms in terms of objective measures, i.e., speech intelligibility in bits (SIIB) and extended short-time objective intelligibility (ESTOI), under a Cafeteria noise condition. In addition, formal listening tests reveal significant intelligibility gains when both noise and reverberation exist.
|
electrical engineering and systems science
|
One of the main open problems in quantum communication is the design of efficient quantum-secured networks. This is a challenging goal, because it requires protocols that guarantee both unconditional security and high communication rates, while increasing the number of users. In this scenario, continuous-variable systems provide an ideal platform where high rates can be achieved by using off-the-shelf optical components. At the same time, the measurement-device independent architecture is also appealing for its feature of removing a substantial portion of practical weaknesses. Driven by these ideas, here we introduce a modular design of continuous-variable network where each individual module is a measurement-device-independent star network. In each module, the users send modulated coherent states to an untrusted relay, creating multipartite secret correlations via a generalized Bell detection. Using one-time pad between different modules, the network users may share a quantum-secure conference key over arbitrary distances at constant rate.
|
quantum physics
|
In this paper, we study how to alleviate highway traffic congestion by encouraging plug-in hybrid and electric vehicles to stop at a charging station around peak congestion times. Specifically, we design a pricing policy to make the charging price dynamic and dependent on the traffic congestion, predicted via the cell transmission model, and the availability of charging spots. Furthermore, we develop a novel framework to model how this policy affects the drivers' decisions by formulating a mixed-integer potential game. Technically, we introduce the concept of "road-to-station" (r2s) and "station-to-road" (s2r) flows, and show that the selfish actions of the drivers converge to charging schedules that are individually optimal in the sense of Nash. In the second part of this work, submitted as a separate paper (Part II: Case Study), we validate the proposed strategy on a simulated highway stretch between The Hague and Rotterdam, in The Netherlands.
|
electrical engineering and systems science
|
Phase I dose-escalation trials must be guided by a safety model in order to avoid exposing patients to unacceptably high risk of toxicities. Traditionally, these trials are based on one type of schedule. In more recent practice, however, there is often a need to consider more than one schedule, which means that in addition to the dose itself, the schedule needs to be varied in the trial. Hence, the aim is finding an acceptable dose-schedule combination. However, most established methods for dose-escalation trials are designed to escalate the dose only, and ad-hoc choices must be made to adapt these to the more complicated setting of finding an acceptable dose-schedule combination. In this paper, we introduce a Bayesian time-to-event model which explicitly takes the dose amount and schedule into account through the use of pharmacokinetic principles. The model uses a time-varying exposure measure to account for the risk of a dose-limiting toxicity over time. The dose-schedule decisions are informed by an escalation with overdose control criterion. The model is formulated using interpretable parameters which facilitates the specification of priors. In a simulation study, we compared the proposed method with an existing method. The simulation study demonstrates that the proposed method yields similar or better results compared to an existing method in terms of recommending acceptable dose-schedule combinations, yet reduces the number of patients enrolled in most scenarios. The \texttt{R} and \texttt{Stan} code to implement the proposed method is publicly available from Github (\url{https://github.com/gunhanb/TITEPK_code}).
|
statistics
|
We investigate the $\Lambda$-polytopes, a convex-linear structure recently defined and applied to the classical simulation of quantum computation with magic states by sampling. There is one such polytope, $\Lambda_n$, for every number $n$ of qubits. We establish two properties of the family $\{\Lambda_n, n\in \mathbb{N}\}$, namely (i) Any extremal point (vertex) $A_\alpha \in \Lambda_m$ can be used to construct vertices in $\Lambda_n$, for all $n>m$. (ii) For vertices obtained through this mapping, the classical simulation of quantum computation with magic states can be efficiently reduced to the classical simulation based on the preimage $A_\alpha$. In addition, we describe a new class of vertices in $\Lambda_2$ which is outside the known classification. While the hardness of classical simulation remains an open problem for most extremal points of $\Lambda_n$, the above results extend efficient classical simulation of quantum computations beyond the presently known range.
|
quantum physics
|
Detection of counterfeit chips has emerged as a crucial concern. Physically-unclonable-function (PUF)-based techniques are widely used for authentication; however, they require dedicated hardware and a large signature database. In this work, we show intrinsic & database-free authentication using back-end capacitors. The discussed technique simplifies the authentication setup and reduces the test cost. We show that an analog-to-digital converter (ADC) can be modified for back-end capacitor-based authentication in addition to its regular functionality; hence, a dedicated authentication module is not necessary. Moreover, since back-end capacitors are much less sensitive to temperature- and aging-induced variations than transistors, the discussed technique results in more reliable authentication than transistor-PUF-based authentication. The modifications to the conventional ADC incur 3.2% power overhead and 75% active-area overhead; however, arguably, the advantages of the discussed intrinsic & database-free authentication outweigh the overheads. The full version of this article is published in IEEE TVLSI.
|
electrical engineering and systems science
|
GeV-scale dark matter is an increasingly attractive target for direct detection, indirect detection, and collider searches. Its annihilation into hadronic final states produces a challenging zoo of light hadronic resonances. We update Herwig7 to study the photon and positron spectra from annihilation through a vector mediator. It covers dark matter masses between 250 MeV and 5 GeV and includes an error estimate.
|
high energy physics phenomenology
|
An intersecting D3-D3' system contains magnetic monopole solutions due to D-strings stretched between the two branes. These magnetic charges satisfy the usual Dirac quantization relation. We show that this quantization condition can also be obtained directly by SUSY and gauge invariance arguments of the theory, and conclude that the independence of physics from a shift of holonomy is exactly equivalent to adopting a {\it Fayet-Iliopoulos (FI) gauge} for our set-up. So we are led to conjecture that there is a correspondence between the topological point of view of magnetic charges and SYM considerations of their theories. This picture implies that one can attribute a definite quantity to the integration of the vector multiplet over the singular region such that we can identify it with the magnetic flux. It also indicates that the FI parameter is proportional to the magnetic charge, so it is a quantized number.
|
high energy physics theory
|
The antiferromagnetic topological insulator has attracted a lot of attention recently, as its intrinsic magnetism and topological properties make it a potential material for realizing the quantum anomalous Hall effect (QAHE) at relatively high temperature. Until now, only MnBi$_2$Te$_4$ has been predicted and grown successfully. The other predicted MB$_2$T$_4$-family materials (MB$_2$T$_4$: M=transition-metal or rare-earth element, B=Bi or Sb, T=Te, Se, or S), which exhibit not only antiferromagnetic topological properties but also rich and exotic topological quantum states and dynamically stable (or metastable) structures, have not yet been fully realized experimentally. Here, MnBi$_2$Te$_4$ single crystals have been grown successfully and characterized. They show typical antiferromagnetic character, with a Neel temperature of 24.5 K and a spin-flop transition at H$\thickapprox$35000 Oe at 1.8 K. After obtaining MnBi$_2$Te$_4$ single crystals, we tried to synthesize the other members of the MB$_2$T$_4$ family, but so far without success. This inspires us to discuss the growth mechanism of MnBi$_2$Te$_4$. The growth mode may be a layer-inserting growth mode based on symmetry, which is supported by our X-ray photoelectron spectroscopy (XPS) measurements. The XPS measurements, combined with $Ar^+$ ion sputtering, are used to investigate the chemical state of MnBi$_2$Te$_4$. Binding energies (BE) of the MnBi$_2$Te$_4$-related contributions to the Mn 2p and Te 3d spectra agree well with those of the inserting material $\alpha$-MnTe. A rising intensity of the Mn 2p satellite for divalent Mn (bound to chalcogen) with the atomic number of the ligand (from MnO to MnBi$_2$Te$_4$) has been observed, thus suggesting classification of MnBi$_2$Te$_4$ as a charge-transfer compound. Understanding the growth mode of MnBi$_2$Te$_4$ can help us grow the other members of the MB$_2$T$_4$ family.
|
condensed matter
|
This paper describes variable-speed wind turbine (Types 3 and 4, IEC 61400-27-1) simulations based on an open-source solution, intended for use in Bachelor and Master degree courses. It is an attempt to improve the quality of education on this sustainable energy source by providing an open-source experimental environment for both undergraduate and graduate students. Indeed, among the renewable sources, wind energy is currently becoming essential in most power systems. The simulations include both one-mass and two-mass mechanical models, as well as pitch angle control. A general overview of the structure, control, and operation of the variable-speed wind turbine is provided by these easy-to-use interactive virtual experiments. In addition, a comparison between commercial and open-source software packages is described and discussed in detail. Examples and extensive results are also included in the paper. The models are available on the Scilab-Xcos file exchange for the power-system education and research communities.
|
electrical engineering and systems science
|
Inflation can be supported in very steep potentials if it is generated by rapidly turning fields, which can be natural in negatively curved field spaces. The curvature perturbation, $\zeta$, of these models undergoes an exponential, transient amplification around the time of horizon crossing, but can still be compatible with observations at the level of the power spectrum. However, a recent analysis (based on a proposed single-field effective theory with an imaginary speed of sound) found that the trispectrum and other higher-order, non-Gaussian correlators also undergo similar exponential enhancements. This arguably leads to `hyper-large' non-Gaussianities in stark conflict with observations, and even to the loss of perturbative control of the calculations. In this paper, we provide the first analytic solution of the growth of the perturbations in two-field rapid-turn models, and find it in good agreement with previous numerical and single-field EFT estimates. We also show that the nested structure of commutators of the in-in formalism has subtle and crucial consequences: accounting for these commutators, we show analytically that the naively leading-order piece (which indeed is exponentially large) cancels exactly in all relevant correlators. The remaining non-Gaussianities of these models are modest, and there is no problem with perturbative control from the exponential enhancement of $\zeta$. Thus, rapid-turn inflation with negatively curved field spaces remains a viable and interesting class of candidate theories of the early universe.
|
high energy physics theory
|
Most Markov chain Monte Carlo methods operate in discrete time and are reversible with respect to the target probability. Nevertheless, it is now understood that the use of non-reversible Markov chains can be beneficial in many contexts. In particular, the recently-proposed Bouncy Particle Sampler leverages a continuous-time and non-reversible Markov process and empirically shows state-of-the-art performances when used to explore certain probability densities; however, its implementation typically requires the computation of local upper bounds on the gradient of the log target density. We present the Discrete Bouncy Particle Sampler, a general algorithm based upon a guided random walk, a partial refreshment of direction, and a delayed-rejection step. We show that the Bouncy Particle Sampler can be understood as a scaling limit of a special case of our algorithm. In contrast to the Bouncy Particle Sampler, implementing the Discrete Bouncy Particle Sampler only requires point-wise evaluation of the target density and its gradient. We propose extensions of the basic algorithm for situations when the exact gradient of the target density is not available. In a Gaussian setting, we establish a scaling limit for the radial process as dimension increases to infinity. We leverage this result to obtain the theoretical efficiency of the Discrete Bouncy Particle Sampler as a function of the partial-refreshment parameter, which leads to a simple and robust tuning criterion. A further analysis in a more general setting suggests that this tuning criterion applies more generally. Theoretical and empirical efficiency curves are then compared for different targets and algorithm variations.
|
statistics
|
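The sampler in the entry above only needs point-wise evaluations of the target and its gradient. A much-simplified Python sketch of a guided random walk with direction flips and partial refreshment is given below; it omits the gradient-based bounces and the delayed-rejection step, so it illustrates some of the ingredients rather than the actual Discrete Bouncy Particle Sampler, and the step size and refreshment rate are arbitrary assumptions.

```python
import numpy as np

def guided_walk(logpi, x0, n_steps=10_000, delta=0.1, refresh=0.1, seed=0):
    """Guided random walk: keep the direction on acceptance, flip it on rejection,
    and partially refresh it at every step (illustrative simplification only)."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.asarray(x0, dtype=float)
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    samples = np.empty((n_steps, d))
    for t in range(n_steps):
        x_prop = x + delta * v
        if np.log(rng.uniform()) < logpi(x_prop) - logpi(x):
            x = x_prop                      # accept: keep moving in direction v
        else:
            v = -v                          # reject: flip the direction
        # Partial refreshment of the direction to ensure ergodicity.
        v = (1.0 - refresh) * v + refresh * rng.normal(size=d)
        v /= np.linalg.norm(v)
        samples[t] = x
    return samples

# Usage on a standard 2-D Gaussian target.
logpi = lambda x: -0.5 * np.dot(x, x)
chain = guided_walk(logpi, x0=np.zeros(2))
print(chain.mean(axis=0), chain.var(axis=0))
```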
Deep learning has been widely used for medical image segmentation, and a large number of papers have been presented recording its success in the field. In this paper, we present a comprehensive thematic survey on medical image segmentation using deep learning techniques. This paper makes two original contributions. Firstly, compared to traditional surveys that directly divide the literature on deep learning for medical image segmentation into many groups and introduce each group in detail, we classify the currently popular literature according to a multi-level structure from coarse to fine. Secondly, this paper focuses on supervised and weakly supervised learning approaches, without including unsupervised approaches, since they have been covered in many older surveys and are not currently popular. For supervised learning approaches, we analyze the literature in three aspects: the selection of backbone networks, the design of network blocks, and the improvement of loss functions. For weakly supervised learning approaches, we investigate the literature according to data augmentation, transfer learning, and interactive segmentation, separately. Compared to existing surveys, this survey classifies the literature quite differently, making it more convenient for readers to understand the relevant rationale and guiding them toward appropriate improvements in medical image segmentation based on deep learning approaches.
|
electrical engineering and systems science
|
We extract the imaginary part of the heavy-quark potential using classical-statistical simulations of real-time Yang-Mills dynamics in classical thermal equilibrium. The $r$-dependence of the imaginary part of the potential is extracted by measuring the temporal decay of Wilson loops of spatial length $r$. We compare our results to continuum expressions obtained using hard thermal loop theory and to semi-analytic lattice perturbation theory calculations using the hard classical loop formalism. We find that, when plotted as a function of $m_D r$, where $m_D$ is the hard classical loop Debye mass, the imaginary part of the heavy-quark potential is independent of the lattice spacing at small $r$ and agrees well with the semi-analytic hard classical loop result. For large quark-antiquark separations, we quantify the magnitude of the non-perturbative long-range corrections to the imaginary part of the heavy-quark potential. We present our results for a wide range of temperatures, lattice spacings, and lattice volumes. Based on our results, we extract an estimate of the heavy-quark transport coefficient $\kappa$ from the short-distance behavior of the classical potential and compare our result with $\kappa$ obtained using hard thermal loops and hard classical loops. This work sets the stage for extracting the imaginary part of the heavy-quark potential in an expanding non-equilibrium Yang Mills plasma.
|
high energy physics phenomenology
|
Medical image annotations are prohibitively time-consuming and expensive to obtain. To alleviate annotation scarcity, many approaches have been developed to efficiently utilize extra information, e.g., semi-supervised learning further exploring plentiful unlabeled data, and domain adaptation including multi-modality learning and unsupervised domain adaptation resorting to prior knowledge from an additional modality. In this paper, we aim to investigate the feasibility of simultaneously leveraging abundant unlabeled data and well-established cross-modality data for annotation-efficient medical image segmentation. To this end, we propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher, where the student model not only learns from labeled target data (e.g., CT), but also explores unlabeled target data and labeled source data (e.g., MR) by two teacher models. Specifically, the student model learns the knowledge of unlabeled target data from the intra-domain teacher by encouraging prediction consistency, as well as the shape priors embedded in labeled source data from the inter-domain teacher via knowledge distillation. Consequently, the student model can effectively exploit the information from all three data resources and comprehensively integrate them to achieve improved performance. We conduct extensive experiments on the MM-WHS 2017 dataset and demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance, outperforming semi-supervised learning and domain adaptation methods by a large margin.
|
computer science
|
Machine Learning has been applied in a wide range of tasks throughout the last years, ranging from image classification to autonomous driving and natural language processing. The Restricted Boltzmann Machine (RBM) has received recent attention and relies on an energy-based structure to model data probability distributions. Notwithstanding, such a technique is susceptible to adversarial manipulation, i.e., slightly or profoundly modified data. An alternative to overcome the adversarial problem lies in Generative Adversarial Networks (GAN), capable of modeling data distributions and generating adversarial data that resemble the original ones. Therefore, this work proposes to artificially generate RBMs using Adversarial Learning, where pre-trained weight matrices serve as the GAN inputs. Furthermore, it proposes to sample copious amounts of matrices and combine them into ensembles, alleviating the burden of training new models. Experimental results demonstrate the suitability of the proposed approach under image reconstruction and image classification tasks, and describe how artificially generated ensembles are alternatives to pre-training vast amounts of RBMs.
|
computer science
|
In this work we extend our formalism to study meson-baryon interactions by including $s$- and $u$-channel diagrams for pseudoscalar-baryon systems. We study the coupled systems with strangeness $-1$ and focus on studying the isospin-1 resonance(s), especially in the energy region around 1400 MeV. By constraining the model parameters to fit the cross section data available on several processes involving relevant channels, we find resonances in the isoscalar as well as the isovector sector in the energy region around 1400 MeV.
|
high energy physics phenomenology
|
In light of the recent LHCb observation of CP violation in the charm sector, we review standard model (SM) predictions in the charm sector and in particular for $\Delta A_{CP}$. We get as an upper bound in the SM $| \Delta A_{CP} ^{\rm SM}| \leq 3.6 \times 10^{-4}$, which can be compared to the measurement of $\Delta A_{CP} ^{\rm LHCb2019} = (-15.4 \pm 2.9) \times 10^{-4}$. We discuss resolving this tension within an extension of the SM that includes a flavour violating $Z'$ that couples only to $\bar{s}s$ and $\bar{c}u$. We show that for masses below 80 GeV and flavour violating coupling of the order of $10^{-4}$, this model can successfully resolve the tension and avoid constraints from dijet searches, $D^0-\overline{D}^0$ mixing and measurements of the $Z$ width.
|
high energy physics phenomenology
|
Multiphase titanium alloys are critical materials in high value engineering components, for instance in aero engines. Microstructural complexity is exploited through interface engineering during mechanical processing to realise significant improvements in fatigue and fracture resistance and strength. In this work, we explore the role of select interfaces using in-situ micromechanical testing with concurrent observations from high angular resolution electron backscatter diffraction (HR-EBSD). Our results are supported with post mortem transmission electron microscopy (TEM). Using micro-pillar compression, we performed in-depth analysis of the role of select {\beta}-titanium (body centred cubic) ligaments which separate neighbouring {\alpha}-titanium (hexagonal close packed) regions and inhibit the dislocation motion and impact strength during mechanical deformation. These results shed light on the strengthening mechanisms and those that can lead to strain localisation during fatigue and failure.
|
condensed matter
|
This survey deals with the Cremona group via its subgroups.
|
mathematics
|
The measurement of single quanta in a collection of coherently interacting objects is transformative in the investigations of emergent quantum phenomena. An isolated nuclear-spin ensemble is a remarkable platform owing to its coherence, but detecting its single spin excitations has remained elusive. Here, we use an electron spin qubit in a semiconductor quantum dot to sense a single nuclear-spin excitation (a nuclear magnon) with 1.9-ppm precision via the 200-kHz hyperfine shift on the 28-GHz qubit frequency. We demonstrate this single-magnon precision across multiple modes identified by nuclear species and polarity. Finally, we monitor the coherent dynamics of a nuclear magnon and the emergence of quantum correlations competing against decoherence. A direct extension of this work is to probe engineered quantum states of the ensemble including long-lived memory states.
|
quantum physics
|
A novel framework of intelligent reflecting surface (IRS)-aided multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) network is proposed, where a base station (BS) serves multiple clusters with an unfixed number of users in each cluster. The goal is to maximize the sum rate of all users by jointly optimizing the passive beamforming vector at the IRS, the decoding order, the power allocation coefficient vector and the number of clusters, subject to the rate requirements of users. In order to tackle the formulated problem, a three-step approach is proposed. More particularly, a long short-term memory (LSTM) based algorithm is first adopted for predicting the mobility of users. Secondly, a K-means based Gaussian mixture model (K-GMM) algorithm is proposed for user clustering. Thirdly, a deep Q-network (DQN) based algorithm is invoked for jointly determining the phase shift matrix and the power allocation policy. Simulation results are provided to demonstrate that the proposed algorithm outperforms the benchmarks, while a throughput gain of 35% can be achieved by invoking the NOMA technique instead of orthogonal multiple access (OMA).
|
electrical engineering and systems science
|
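For the user-clustering step in the entry above, one simple reading of a "K-means based Gaussian mixture model" is a GMM initialized with K-means centroids. A minimal scikit-learn sketch under that assumption is shown below; the synthetic user positions and the cluster count are illustrative placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D user positions (placeholder for predicted user locations).
users = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 2))
                   for c in [(0, 0), (5, 1), (2, 6)]])

n_clusters = 3                              # illustrative; chosen adaptively in the paper
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(users)

# Use the K-means centroids to initialise the Gaussian mixture ("K-GMM" reading).
gmm = GaussianMixture(n_components=n_clusters, means_init=km.cluster_centers_,
                      random_state=0).fit(users)
labels = gmm.predict(users)
print(np.bincount(labels))                  # users per NOMA cluster
```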
We consider general self-adjoint polynomials in several independent random matrices whose entries are centered and have the same variance. We show that under certain conditions the local law holds up to the optimal scale, i.e., the eigenvalue density on scales just above the eigenvalue spacing follows the global density of states which is determined by free probability theory. We prove that these conditions hold for general homogeneous polynomials of degree two and for symmetrized products of independent matrices with i.i.d. entries, thus establishing the optimal bulk local law for these classes of ensembles. In particular, we generalize a similar result of Anderson for anticommutator. For more general polynomials our conditions are effectively checkable numerically.
|
mathematics
|
The paper describes the peculiarities of the acquisition and interpretation of images of the current distribution over the sample surface when operating in I-AFM mode. It shows that the I-AFM and SCM modes can be successfully used only for small scanning fields (no more than 5x5 square micrometers), since during scanning the area of the probe tip, and therefore the contact area between the probe and the sample surface, changes continuously because of tip abrasion. At the same time, the electrical modes of AFM cannot be recommended for the investigation of nano-objects, because a number of difficulties arise in the interpretation of results, caused by the large curvature radius of the probe tip and, therefore, the large surface area of the electrical contact. The paper also demonstrates the peculiarities of acquiring and interpreting CVCs for individual points on the sample surface in I/V Spectroscopy mode. It is shown that it is practically impossible to use I/V Spectroscopy mode for additional investigation of the surface by acquiring CVCs at points of interest (where heterogeneities in topography or in the current through the surface are observed) identified in I-AFM mode, because of the large temperature drift and hysteresis of the piezoceramics. Recommendations for improving the capabilities of the method are given in the paper.
|
physics
|
When an energetic parton propagates in a hot and dense QCD medium it loses energy by elastic scatterings or by medium-induced gluon radiation. The gluon radiation spectrum is suppressed at high frequency due to the LPM effect and encompasses two regimes that are known analytically: at high frequencies $\omega >\omega_c = \hat q L^2$, where $\hat q $ is the jet quenching transport coefficient and $L$ the length of the medium, the spectrum is dominated by a single hard scattering, whereas the regime $\omega <\omega_c$ is dominated by multiple low momentum transfers. In this paper, we extend a recent approach (dubbed the Improved Opacity Expansion (IOE)), which allows an analytic (and systematic) treatment beyond the multiple soft scattering approximation, matching this result with the single hard emission spectrum. We calculate in particular the NNLO correction analytically and numerically and show that it is strongly suppressed compared to the NLO indicating a fast convergence of the IOE scheme and thus, we conclude that it is sufficient to truncate the series at NLO. We also propose a prescription to compare the GW and the HTL potentials and relate their parameters for future phenomenological works.
|
high energy physics phenomenology
|
Hyperspectral images involve abundant spectral and spatial information, playing an irreplaceable role in land-cover classification. Recently, based on deep learning technologies, an increasing number of HSI classification approaches have been proposed, which demonstrate promising performance. However, previous studies suffer from two major drawbacks: 1) the architecture of most deep learning models is manually designed, relies on specialized knowledge, and is relatively tedious. Moreover, in HSI classifications, datasets captured by different sensors have different physical properties. Correspondingly, different models need to be designed for different datasets, which further increases the workload of designing architectures; 2) the mainstream framework is a patch-to-pixel framework. The overlap regions of patches of adjacent pixels are calculated repeatedly, which increases computational cost and time cost. Besides, the classification accuracy is sensitive to the patch size, which is artificially set based on extensive investigation experiments. To overcome the issues mentioned above, we firstly propose a 3D asymmetric neural network search algorithm and leverage it to automatically search for efficient architectures for HSI classifications. By analysing the characteristics of HSIs, we specifically build a 3D asymmetric decomposition search space, where spectral and spatial information are processed with different decomposition convolutions. Furthermore, we propose a new fast classification framework, i.e., a pixel-to-pixel classification framework, which has no repetitive operations and reduces the overall cost. Experiments on three public HSI datasets captured by different sensors demonstrate that the networks designed by our 3D-ANAS achieve competitive performance compared to several state-of-the-art methods, while having a much faster inference speed.
|
computer science
|
Lattice dynamics in low-dimensional materials and, in particular, the quadratic behaviour of the flexural acoustic modes play a fundamental role in their thermomechanical properties. A first-principles evaluation of these can be very demanding, and can be affected by numerical noise that breaks translational or rotational invariance. In order to overcome these challenges, we study the Gartstein internal-coordinate potential and tune its 13 parameters on the first-principles interatomic force constants for graphene. We show that the resulting potential not only reproduces very well the phonon dispersions of graphene, but also those of carbon nanotubes of any diameter and chirality. The addition of a cubic term allows also to reproduce the dominant anharmonic terms, leading to a very good estimate of the lattice thermal conductivity. Finally, this potential form works very well also for boron nitride, provided it is fitted on the short-range (analytical) part of the interatomic force constants, and augmented thereafter with the long-range dielectric contribution. This consideration underscores how potentials based on short-ranged descriptors should be fit, in polar materials, to the short-range part of the first-principles interactions, and complemented by long-range analytical dielectric models parametrized on the same first-principles calculations.
|
condensed matter
|
In multivariate nonparametric regression, additive models are very useful when a suitable parametric model is difficult to find. The backfitting algorithm is a powerful tool to estimate the additive components. However, due to the complexity of the estimators, the asymptotic $p$-value of the associated test is difficult to calculate without a Monte Carlo simulation. Moreover, the conventional tests assume that the predictor variables are strictly continuous. In this paper, a new test is introduced for the additive components with discrete or categorical predictors, where the model may contain continuous covariates. This method is also applied to semiparametric regression to test the goodness-of-fit of the model. These tests are asymptotically optimal in terms of the rate of convergence, as they can detect a specific class of contiguous alternatives at a rate of $n^{-1/2}$. An extensive simulation study is presented to support the theoretical results derived in this paper. Finally, the method is applied to real data to model the diamond price based on its quality attributes and physical measurements.
|
statistics
|
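The additive-model entry above relies on the backfitting algorithm. Below is a minimal NumPy sketch of backfitting with a crude nearest-neighbour smoother; the smoother, the synthetic data, and the stopping rule are illustrative assumptions, and the test statistic discussed in the abstract is not implemented here.

```python
import numpy as np

def smooth(x, r, window=15):
    """Crude nearest-neighbour smoother of residuals r against covariate x."""
    order = np.argsort(x)
    fitted = np.empty_like(r)
    half = window // 2
    for rank, idx in enumerate(order):
        nb = order[max(0, rank - half): rank + half + 1]
        fitted[idx] = r[nb].mean()
    return fitted

def backfit(X, y, n_iter=50):
    """Backfitting for y = alpha + sum_j f_j(X[:, j]) + noise."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - f.sum(axis=1) + f[:, j]   # residual without component j
            f[:, j] = smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()                        # centre f_j for identifiability
    return alpha, f

# Usage on synthetic data with two additive components.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=300)
alpha, f = backfit(X, y)
print(alpha, f.std(axis=0))
```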
I show that a finite density of near-zero localized Dirac modes in the chirally-broken phase of a gauge theory can lead to the disappearance of the massless excitations predicted by the Goldstone theorem at finite temperature.
|
high energy physics theory
|
Venus shares many similarities with the Earth, but concomitantly, some of its features are extremely original. This is especially true for its atmosphere, where high pressures and temperatures are found at the ground level. In these conditions, carbon dioxide, the main component of Venus' atmosphere, is a supercritical fluid. The analysis of VeGa-2 probe data has revealed the high instability of the region located in the last few kilometers above the ground level. Recent works have suggested an explanation based on the existence of a vertical gradient of molecular nitrogen abundances, around 5 ppm per meter. Our goal was then to identify which physical processes could lead to the establishment of this intriguing nitrogen gradient, in the deep atmosphere of Venus. Using an appropriate equation of state for the binary mixture CO2-N2 under supercritical conditions, and also molecular dynamics simulations, we have investigated the separation processes of N2 and CO2 in the Venusian context. Our results show that molecular diffusion is strongly inefficient, and potential phase separation is an unlikely mechanism. We have compared the quantity of CO2 required to form the proposed gradient with what could be released by a diffuse degassing from a low volcanic activity. The needed fluxes of CO2 are not so different from what can be measured over some terrestrial volcanic systems, suggesting a similar effect at work on Venus.
|
astrophysics
|
The automation of posterior inference in Bayesian data analysis has enabled experts and nonexperts alike to use more sophisticated models, engage in faster exploratory modeling and analysis, and ensure experimental reproducibility. However, standard automated posterior inference algorithms are not tractable at the scale of massive modern datasets, and modifications to make them so are typically model-specific, require expert tuning, and can break theoretical guarantees on inferential quality. Building on the Bayesian coresets framework, this work instead takes advantage of data redundancy to shrink the dataset itself as a preprocessing step, providing fully-automated, scalable Bayesian inference with theoretical guarantees. We begin with an intuitive reformulation of Bayesian coreset construction as sparse vector sum approximation, and demonstrate that its automation and performance-based shortcomings arise from the use of the supremum norm. To address these shortcomings we develop Hilbert coresets, i.e., Bayesian coresets constructed under a norm induced by an inner-product on the log-likelihood function space. We propose two Hilbert coreset construction algorithms---one based on importance sampling, and one based on the Frank-Wolfe algorithm---along with theoretical guarantees on approximation quality as a function of coreset size. Since the exact computation of the proposed inner-products is model-specific, we automate the construction with a random finite-dimensional projection of the log-likelihood functions. The resulting automated coreset construction algorithm is simple to implement, and experiments on a variety of models with real and synthetic datasets show that it provides high-quality posterior approximations and a significant reduction in the computational cost of inference.
|
statistics
|
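A minimal sketch of the importance-sampling variant described in the coreset entry above, using a random finite-dimensional projection of the per-datum log-likelihoods; the toy Gaussian model, the weighting distribution, and the sensitivity-style weights here are simplified assumptions rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: inference on a Gaussian mean; per-datum log-likelihood up to constants.
data = rng.normal(loc=1.0, scale=2.0, size=5000)
def loglik(theta, x):                     # log N(x | theta, 2^2), constants dropped
    return -0.5 * (x[:, None] - theta[None, :]) ** 2 / 4.0

# Random finite-dimensional projection: evaluate each L_n at J parameter draws
# from a rough weighting distribution (here a wide Gaussian guess).
J = 100
theta_proj = rng.normal(loc=0.0, scale=5.0, size=J)
V = loglik(theta_proj, data)              # (n, J): projected log-likelihood vectors
V -= V.mean(axis=0, keepdims=True)        # work with centred vectors

# Importance-sampling coreset: sample points with probability ~ vector norm.
sigma = np.linalg.norm(V, axis=1)
p = sigma / sigma.sum()
M = 200                                   # coreset size (illustrative)
counts = rng.multinomial(M, p)
idx = np.nonzero(counts)[0]
weights = counts[idx] * sigma.sum() / (M * sigma[idx])

print(f"coreset size: {len(idx)}, total weight: {weights.sum():.1f} (n = {len(data)})")
# (idx, weights) would then be handed to any standard posterior inference routine.
```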
We study plane trees as a model for RNA secondary structure, assigning energy to each tree based on the Nearest Neighbor Thermodynamic Model, and defining a corresponding Gibbs distribution on the trees. Through a bijection between plane trees and 2-Motzkin paths, we design a Markov chain converging to the Gibbs distribution, and establish fast mixing time results by estimating the spectral gap of the chain. The spectral gap estimate is established through a series of decompositions of the chain and also by building on known mixing time results for other chains on Dyck paths. In addition to the mathematical aspects of the result, the resulting algorithm can be used as a tool for exploring the branching structure of RNA and its dependence on energy model parameters. The pseudocode implementing the Markov chain is provided in an appendix.
|
mathematics
|
Optimal control problems with a very large time horizon can be tackled with the Receding Horizon Control (RHC) method, which consists in solving a sequence of optimal control problems with small prediction horizon. The main result of this article is the proof of the exponential convergence (with respect to the prediction horizon) of the control generated by the RHC method towards the exact solution of the problem. The result is established for a class of infinite-dimensional linear-quadratic optimal control problems with time-independent dynamics and integral cost. Such problems satisfy the turnpike property: the optimal trajectory remains most of the time very close to the solution to the associated static optimization problem. Specific terminal cost functions, derived from the Lagrange multiplier associated with the static optimization problem, are employed in the implementation of the RHC method.
|
mathematics
|
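For the receding-horizon idea in the entry above, a minimal discrete-time linear-quadratic sketch is given below: at every step a finite-horizon LQR problem is solved by a backward Riccati recursion and only the first control is applied. The system matrices, horizon, and the absence of a terminal cost are illustrative assumptions; the paper itself works in an infinite-dimensional setting with specific terminal costs derived from the static problem's Lagrange multiplier.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N, QN=None):
    """Backward Riccati recursion; returns feedback gains K_0, ..., K_{N-1}."""
    P = Q if QN is None else QN
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

def receding_horizon(A, B, Q, R, x0, horizon=10, steps=200):
    """Apply only the first control of each finite-horizon solution (RHC loop)."""
    x, traj = np.array(x0, dtype=float), []
    for _ in range(steps):
        K0 = finite_horizon_lqr(A, B, Q, R, horizon)[0]
        u = -K0 @ x                      # first control of the predicted sequence
        x = A @ x + B @ u
        traj.append(x.copy())
    return np.array(traj)

# Usage: a simple unstable 2-D system stabilised by RHC.
A = np.array([[1.1, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
traj = receding_horizon(A, B, Q, R, x0=[5.0, -3.0])
print(np.linalg.norm(traj[-1]))          # should be close to 0 if RHC stabilises
```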