text: string, lengths 11–9.77k
label: string, lengths 2–104
Young giant planets and brown dwarf companions emit near-infrared radiation that can be linearly polarized up to several percent. This polarization can reveal the presence of a circumsubstellar accretion disk, rotation-induced oblateness of the atmosphere, or an inhomogeneous distribution of atmospheric dust clouds. We measured the near-infrared linear polarization of 20 known directly imaged exoplanets and brown dwarf companions with the high-contrast imager SPHERE-IRDIS at the VLT. We reduced the data using the IRDAP pipeline to correct for the instrumental polarization and crosstalk with an absolute polarimetric accuracy <0.1% in the degree of polarization. We report the first detection of polarization originating from substellar companions, with a polarization of several tenths of a percent for DH Tau B and GSC 6214-210 B in H-band. By comparing the measured polarization with that of nearby stars, we find that the polarization is unlikely to be caused by interstellar dust. Because the companions have previously measured hydrogen emission lines and red colors, the polarization most likely originates from circumsubstellar disks. Through radiative transfer modeling, we constrain the position angles of the disks and find that the disks must have high inclinations. The presence of these disks as well as the misalignment of the disk of DH Tau B with the disk around its primary star suggest in situ formation of the companions. For the 18 other companions, we do not detect significant polarization and place subpercent upper limits on their degree of polarization. These non-detections may indicate the absence of circumsubstellar disks, a slow rotation rate of young companions, the upper atmospheres containing primarily submicron-sized dust grains, and/or limited cloud inhomogeneity. Finally, we present images of the circumstellar disks of DH Tau, GQ Lup, PDS 70, Beta Pic, and HD 106906.
astrophysics
Anomaly detection is a classical problem where the aim is to detect anomalous data that do not belong to the normal data distribution. Current state-of-the-art methods for anomaly detection on complex high-dimensional data are based on the generative adversarial network (GAN). However, the traditional GAN loss is not directly aligned with the anomaly detection objective: it encourages the distribution of the generated samples to overlap with the real data and so the resulting discriminator has been found to be ineffective as an anomaly detector. In this paper, we propose simple modifications to the GAN loss such that the generated samples lie at the boundary of the real data distribution. With our modified GAN loss, our anomaly detection method, called Fence GAN (FGAN), directly uses the discriminator score as an anomaly threshold. Our experimental results using the MNIST, CIFAR10 and KDD99 datasets show that Fence GAN yields the best anomaly classification accuracy compared to state-of-the-art methods.
computer science
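The Fence GAN abstract above lends itself to a small sketch. Below is a numpy toy of the modified generator objective: an "encirclement" term that pushes discriminator scores on generated samples toward a boundary value rather than toward 1, plus a "dispersion" term that discourages the samples from collapsing onto a single boundary point. The function names, the cross-entropy form, and the default `alpha`/`beta` values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fence_generator_loss(d_scores, gen_samples, alpha=0.5, beta=15.0):
    """Illustrative sketch of a Fence-GAN-style generator loss.

    encirclement: cross-entropy against the soft target `alpha`, so the
    discriminator's score on generated samples is pulled toward the
    decision boundary instead of toward 1 (values here are assumptions).
    dispersion: penalizes generated samples collapsing onto their
    centroid, so they spread out along the boundary.
    """
    eps = 1e-8
    encirclement = -np.mean(
        alpha * np.log(d_scores + eps) + (1 - alpha) * np.log(1 - d_scores + eps)
    )
    center = gen_samples.mean(axis=0)
    dispersion = 1.0 / (np.mean(np.linalg.norm(gen_samples - center, axis=1)) + eps)
    return encirclement + beta * dispersion

def anomaly_score(d_score):
    # at test time the discriminator score itself ranks anomalies:
    # low D(x) suggests x lies outside the "fenced" normal-data region
    return 1.0 - d_score
```

At test time the discriminator score is used directly as the anomaly score, which is the abstract's central point.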
The equivariant Kazhdan-Lusztig polynomial of a matroid was introduced by Gedeon, Proudfoot, and Young. Gedeon conjectured an explicit formula for the equivariant Kazhdan-Lusztig polynomials of thagomizer matroids with an action of symmetric groups. In this paper, we discover a new formula for these polynomials which is related to the equivariant Kazhdan-Lusztig polynomials of uniform matroids. Based on our new formula, we confirm Gedeon's conjecture by the Pieri rule.
mathematics
Using time-adaptive probabilistically shaped 64QAM driven by a simple SNR prediction algorithm, we demonstrate $>$450 Gbps transmission over a 55-m free-space optics link with enhanced resilience towards rainy weather conditions. Through continuous measurement over 3 hours, we demonstrate $\sim$40~Gbps average bit-rate gain over unsupervised fixed modulation.
electrical engineering and systems science
Proposals are made to describe 1D, N = 4 supersymmetrical systems that extend SYK models by compactifying from 4D, N = 1 supersymmetric Lagrangians involving chiral, vector, and tensor supermultiplets. Quartic fermionic vertices are generated via integrals over the whole superspace, while 2(q - 1)-point fermionic vertices are generated via superpotentials. The coupling constants in the superfield Lagrangians are arbitrary, and can be chosen to be Gaussian random. In that case, these 1D, N = 4 supersymmetric SYK models would exhibit Wishart-Laguerre randomness, a feature shared with the 1D, N = 1 and N = 2 models in the literature. One difference, though, is that our models contain dynamical bosons, but this is consistent with other 1D, N = 4 and 2D, N = 2 models in the literature. Added conjectures on duality and possible mirror symmetry realizations in these models are noted.
high energy physics theory
We introduce a notion of geodesic curvature $k_{\zeta}$ for a smooth horizontal curve $\zeta$ in a three-dimensional contact sub-Riemannian manifold, measuring how much a horizontal curve is far from being a geodesic. We show that the geodesic curvature appears as the first corrective term in the Taylor expansion of the sub-Riemannian distance between two points on a unit speed horizontal curve $$ d_{SR}^2( \zeta(t),\zeta(t+\epsilon))=\epsilon^2-\frac{k_{\zeta}^2(t)}{720} \epsilon^6 +o(\epsilon^{6}). $$ The sub-Riemannian distance is not smooth on the diagonal, hence the result contains the existence of such an asymptotics. This can be seen as a higher-order differentiability property of the sub-Riemannian distance along smooth horizontal curves. It generalizes the previously known results on the Heisenberg group.
mathematics
Previous work has cast doubt on the general framework of uniform convergence and its ability to explain generalization in neural networks. By considering a specific dataset, it was observed that a neural network completely misclassifies a projection of the training data (adversarial set), rendering any existing generalization bound based on uniform convergence vacuous. We provide an extensive theoretical investigation of the previously studied data setting through the lens of infinitely-wide models. We prove that the Neural Tangent Kernel (NTK) also suffers from the same phenomenon and we uncover its origin. We highlight the important role of the output bias and show theoretically as well as empirically how a sensible choice completely mitigates the problem. We identify sharp phase transitions in the accuracy on the adversarial set and study its dependency on the training sample size. As a result, we are able to characterize critical sample sizes beyond which the effect disappears. Moreover, we study decompositions of a neural network into a clean and noisy part by considering its canonical decomposition into its different eigenfunctions and show empirically that for too small bias the adversarial phenomenon still persists.
computer science
The leptophilic weakly interacting massive particle (WIMP) is realized in a minimal renormalizable model scenario where scalar mediators with lepton number establish the WIMP interaction with the standard model (SM) leptons. We perform a comprehensive analysis of such a WIMP scenario for two distinct cases with an SU(2) doublet or singlet mediator, considering all the relevant theoretical, cosmological and experimental constraints at present. We show that mono-photon searches at near-future lepton collider experiments (ILC, FCC-ee, CEPC, etc.) can play a significant role in probing the yet-unexplored parameter range allowed by the WIMP relic density constraint. This will complement the search prospects at the near-future hadron collider experiment (HL-LHC). Furthermore, we discuss the combined model scenario including both the doublet and singlet mediators. The combined model is capable of explaining the long-standing muon (g-2) anomaly, which is an additional advantage. We demonstrate that the region allowed by the anomalous muon (g-2) explanation can also be probed at the future colliders, thus providing a simultaneous test of the model scenario.
high energy physics phenomenology
Numerous studies of integrated starlight, stellar counts, and kinematics have confirmed that the Milky Way is a barred galaxy. However, far fewer studies have investigated the bar's stellar population properties, which carry valuable independent information regarding the bar's formation history. Here we conduct a detailed analysis of chemical abundance distributions ([Fe/H] and [Mg/Fe]) in the on-bar and off-bar regions to study the azimuthal variation of star formation history (SFH) in the inner Galaxy. We find that the on-bar and off-bar stars at Galactocentric radii 3 $< r_{\rm GC}<$ 5 kpc have remarkably consistent [Fe/H] and [Mg/Fe] distribution functions and [Mg/Fe]--[Fe/H] relation, suggesting a common SFH shared by the long bar and the disc. In contrast, the bar and disc at smaller radii (2 $< r_{\rm GC} <$ 3 kpc) show noticeable differences, with relatively more very metal-rich ([Fe/H]~0.4) stars but fewer solar abundance stars in the bar. Given the three-phase star formation history proposed for the inner Galaxy in Lian et al. (2020b), these differences could be explained by the off-bar disc having experienced either a faster early quenching process or recent metal-poor gas accretion. Vertical variations of the abundance distributions at small $r_{\rm GC}$ suggest a wider vertical distribution of low-$\alpha$ stars in the bar, which may serve as chemical evidence for vertical heating through the bar buckling process. The lack of such vertical variations outside the bulge may then suggest a lack of vertical heating in the long bar.
astrophysics
Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems. These systems are complex and sometimes biased, but they nevertheless make decisions that impact our lives. XAI systems are frequently algorithm-focused: they start and end with an algorithm that implements a basic, untested idea about explainability. These systems are often not tested to determine whether the algorithm helps users accomplish any goals, so their explainability remains unproven. We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems, and implement algorithms to serve that purpose. In this paper, we review some of the basic concepts that have been used for user-centered XAI systems over the past 40 years of research. Based on these, we describe the "Self-Explanation Scorecard", which can help developers understand how they can empower users by enabling self-explanation. Finally, we present a set of empirically-grounded, user-centered design principles that may guide developers to create successful explainable systems.
computer science
We provide a numerical investigation of two families of subsystem quantum codes that are related to hypergraph product codes by gauge-fixing. The first family consists of the Bravyi-Bacon-Shor (BBS) codes which have optimal code parameters for subsystem quantum codes local in 2-dimensions. The second family consists of the constant rate "generalized Shor" codes of Bacon and Cassicino \cite{bacon2006quantum}, which we re-brand as subsystem hypergraph product (SHP) codes. We show that any hypergraph product code can be obtained by entangling the gauge qubits of two SHP codes. To evaluate the performance of these codes, we simulate both small and large examples. For circuit noise, a $[[21,4,3]]$ BBS code and a $[[49,16,3]]$ SHP code have pseudothresholds of $2\times10^{-3}$ and $8\times10^{-4}$, respectively. Simulations for phenomenological noise show that large BBS and SHP codes start to outperform surface codes with similar encoding rate at physical error rates $1\times 10^{-6}$ and $4\times10^{-4}$, respectively.
quantum physics
Using a 3D Lagrangian tracking technique, we determine experimentally the trajectories of non-tumbling E. coli mutants swimming in a Poiseuille flow. We identify a typology of trajectories in agreement with a kinematic "active Bretherton-Jeffery" model, featuring an axi-symmetric self-propelled ellipsoid. In particular, we recover the "swinging" and "shear tumbling" kinematics predicted theoretically by Z\"ottl et al. Moreover, using this model, we derive analytically new features, such as quasi-planar piece-wise trajectories associated with the high aspect ratio of the bacteria, as well as the existence of a drift angle around which bacteria perform closed cyclic trajectories. However, the agreement between the model predictions and the experimental results remains local in time, due to the presence of Brownian rotational noise.
physics
By using the general framework of affine Gaudin models, we construct a new class of integrable sigma models. They are defined on a coset of the direct product of $N$ copies of a Lie group over some diagonal subgroup and they depend on $3N-2$ free parameters. For $N=1$ the corresponding model coincides with the well-known symmetric space sigma model. Starting from the Hamiltonian formulation, we derive the Lagrangian for the $N=2$ case and show that it admits a remarkably simple form in terms of the classical $\mathcal{R}$-matrix underlying the integrability of these models. We conjecture that a similar form of the Lagrangian holds for arbitrary $N$. Specifying our general construction to the case of $SU(2)$ and $N=2$, and eliminating one of the parameters, we find a new three-parametric integrable model with the manifold $T^{1,1}$ as its target space. We further comment on the connection of our results with those existing in the literature.
high energy physics theory
We propose a discrete spacetime formulation of quantum electrodynamics in one-dimension (a.k.a the Schwinger model) in terms of quantum cellular automata, i.e. translationally invariant circuits of local quantum gates. These have exact gauge covariance and a maximum speed of information propagation. In this picture, the interacting quantum field theory is defined as a "convergent" sequence of quantum cellular automata, parameterized by the spacetime lattice spacing---encompassing the notions of continuum limit and renormalization, and at the same time providing a quantum simulation algorithm for the dynamics.
quantum physics
Building dialogue systems that naturally converse with humans is an attractive and active research domain. Multiple systems are designed every day and several datasets are becoming available. For this reason, it is hard to keep an up-to-date state of the art. In this work, we present the latest and most relevant retrieval-based dialogue systems and the available datasets used to build and evaluate them. We discuss their limitations and provide insights and guidelines for future work.
computer science
Van der Waals heterostructures offer attractive opportunities to design quantum materials. For instance, transition metal dichalcogenides (TMDs) possess three quantum degrees of freedom: spin, valley index, and layer index. Further, twisted TMD heterobilayers can form moir\'e patterns that modulate the electronic band structure according to atomic registry, leading to spatial confinement of interlayer excitons (IXs). Here we report the observation of spin-layer locking of IXs trapped in moir\'e potentials formed in a heterostructure of bilayer 2H-MoSe$_2$ and monolayer WSe$_2$. The phenomenon of locked electron spin and layer index leads to two quantum-confined IX species with distinct spin-layer-valley configurations. Furthermore, we observe that the atomic registries of the moir\'e trapping sites in the three layers are intrinsically locked together due to the 2H-type stacking characteristic of bilayer TMDs. These results identify the layer index as a useful degree of freedom to engineer tunable few-level quantum systems in two-dimensional heterostructures.
condensed matter
Granger causality is a fundamental technique for causal inference in time series data, commonly used in the social and biological sciences. Typical operationalizations of Granger causality make a strong assumption that every time point of the effect time series is influenced by a combination of other time series with a fixed time delay. However, the assumption of the fixed time delay does not hold in many applications, such as collective behavior, financial markets, and many natural phenomena. To address this issue, we develop variable-lag Granger causality, a generalization of Granger causality that relaxes the assumption of the fixed time delay and allows causes to influence effects with arbitrary time delays. In addition, we propose a method for inferring variable-lag Granger causality relations. We demonstrate our approach on an application for studying coordinated collective behavior and show that it performs better than several existing methods in both simulated and real-world datasets. Our approach can be applied in any domain of time series analysis.
computer science
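As a toy illustration of the variable-lag idea in the abstract above, the sketch below lets each time step of the effect series pick its own lag from a small window, rather than fixing one delay for the whole series. The greedy per-step search and the helper name are assumptions for illustration; the actual method aligns the series more carefully (e.g., with DTW-style alignment) before testing for Granger causality.

```python
import numpy as np

def variable_lag_fit(x, y, max_lag=5):
    """Toy sketch of variable-lag alignment: at each time step, pick
    the lag (from 1..max_lag) that best explains y[t] from x, instead
    of assuming one fixed delay.  Returns the chosen lags and the
    mean squared residual under those per-step lags."""
    lags, resid = [], []
    for t in range(max_lag, len(y)):
        errs = [(y[t] - x[t - k]) ** 2 for k in range(1, max_lag + 1)]
        k_best = int(np.argmin(errs)) + 1
        lags.append(k_best)
        resid.append(errs[k_best - 1])
    return np.array(lags), float(np.mean(resid))
```

On a synthetic series whose delay alternates between two values, any fixed-lag fit leaves a large residual while the per-step search recovers both delays, which is the point of relaxing the fixed-lag assumption.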
We study equilibrium and nonequilibrium properties of the single-impurity Anderson model with a power-law pseudogap in the density of states. In equilibrium, the model is known to display a quantum phase transition from a generalized Kondo to a local moment phase. In the present work, we focus on the extension of these phases beyond equilibrium, i.e. under the influence of a bias voltage. Within the auxiliary master equation approach combined with a scheme based on matrix product states (MPS), we are able to directly address the current-carrying steady state. Starting with the equilibrium situation, we first corroborate our results by comparing with a direct numerical evaluation of ground state spectral properties of the system by MPS. Here, a scheme to locate the phase boundary by extrapolating the power-law exponent of the self energy produces very good agreement with previous results obtained by the numerical renormalization group. Our nonequilibrium study as a function of the applied bias voltage is then carried out for two points on either side of the phase boundary. In the Kondo regime the resonance in the spectral function splits with increasing bias voltage. The local moment regime, instead, displays a dip in the spectrum near the position of the chemical potentials. Similar features are observed in the corresponding self energies. The Kondo split peaks approximately obey a power-law behavior as a function of frequency, whose exponents depend only slightly on voltage. Finally, the differential conductance in the Kondo regime shows a peculiar maximum at finite voltages, whose height, however, is below the accuracy level.
condensed matter
In this work, we present a comparative study of three seesaw models, viz. the type II, inverse and linear seesaw models, to investigate light neutrino masses and mixings, flavour structure, neutrinoless double beta decay ($0\nu\beta\beta$) and the charged lepton flavour violation (cLFV) decay ($\mu\rightarrow e\gamma$). We consider the $A_{4}$ flavour symmetry, while some other symmetries, like $U(1)_{X}$, $Z_4$ and $Z_5$, are also included to forbid unwanted terms in the Lagrangian. Taking into account the present experimental data for the known light neutrino parameters from recent global fits, we compute the currently unknown neutrino parameters, such as the lightest neutrino mass ($m_1$), the CPV phases (Dirac and Majorana), and the effective light neutrino mass in neutrinoless double beta decay, by considering different VEV alignments of the triplet scalar flavon fields. We also elucidate the octant of the atmospheric neutrino mixing angle, $\theta_{23}$, in the light of our predicted results. Finally, we present the regions of parameter space of $m_1$, the CPV phases, the octant of $\theta_{23}$ and the effective mass measurable in neutrinoless double beta decay experiments, which can be tested in future experiments. We observe that the branching ratio of $\mu\rightarrow e\gamma$ can help discriminate the three seesaw models. Further, the favoured octant of the atmospheric mixing angle $\theta_{23}$ changes with the VEV alignment of the triplet flavon field - i.e., the internal flavour structure of the neutrinos is reflected in their composition (mixing). The constant F, which determines the scale of flavour symmetry breaking, the seesaw scale and the coupling constants of the three seesaw models, has also been computed; this places a constraint among them allowed by the present experimental results.
high energy physics phenomenology
We theoretically present a design of self-starting operation of microcombs based on laser-cavity solitons in a system composed of a micro-resonator nested in and coupled to an amplifying laser cavity. We demonstrate that it is possible to engineer the modulational-instability gain of the system's zero state to allow the start-up with a well-defined number of robust solitons. The approach can be implemented by using the system parameters, such as the cavity length mismatch and the gain shape, to control the number and repetition rate of the generated solitons. Because the setting does not require saturation of the gain, the results offer an alternative to standard techniques that provide laser mode-locking.
physics
We investigate dust obscuration as parameterised by the infrared excess IRX$\equiv$$L_{\rm IR}/L_{\rm UV}$ in relation to global galaxy properties, using a sample of $\sim$32$\,$000 local star-forming galaxies (SFGs) selected from SDSS, GALEX and WISE. We show that IRX generally correlates with stellar mass ($M_\ast$), star formation rate (SFR), gas-phase metallicity ($Z$), infrared luminosity ($L_{\rm IR}$) and the half-light radius ($R_{\rm e}$). A weak correlation of IRX with axial ratio (b/a) is driven by the inclination and thus seen as a projection effect. By examining the tightness and the scatter of these correlations, we find that SFGs obey an empirical relation of the form $IRX$=$10^\alpha\,(L_{\rm IR})^{\beta}\,R_{\rm e}^{-\gamma}\,(b/a)^{-\delta}$ where the power-law indices all increase with metallicity. The best-fitting relation yields a scatter of $\sim$0.17$\,$dex and no dependence on stellar mass. Moreover, this empirical relation also holds for distant SFGs out to $z=3$ in a population-averaged sense, suggesting it to be universal over cosmic time. Our findings reveal that IRX approximately increases with $L_{\rm IR}/R_{\rm e}^{[1.3 - 1.5]}$ instead of $L_{\rm IR}/R_{\rm e}^{2}$ (i.e., surface density). We speculate this may be due to differences in the spatial extent of stars versus star formation and/or complex star-dust geometries. We conclude that not stellar mass but IR luminosity, metallicity and galaxy size are the key parameters jointly determining dust obscuration in SFGs.
astrophysics
Decades of research show that students learn more in classes that utilize active learning than they do in traditional, lecture-only classes. Active learning also reduces the achievement gaps that are often present between various demographic groups. Given these well-established results, instructors of upper-division astronomy courses may decide to search the astronomy education research literature in hopes of finding some guidance on common student difficulties, as well as research-validated and research-based active learning curricula. But their search will be in vain. The current literature on upper-division astronomy is essentially non-existent. This is a shame, since many upper-division astronomy students will experience conceptual and problem-solving difficulties with the quantitative problems they encounter. These difficulties may exist even if students have a strong background in mathematics. In this paper, I examine one quantitative problem that is representative of those that upper-division astronomy students are expected to solve. I list many of the subtle pieces of information that students need to understand in order to advance toward a solution and I describe how such a list can be used to generate Peer Instruction (PI) questions. I also provide guidelines for instructors who wish to develop and implement their own PI questions. These PI questions can be used to increase the amount of active learning that occurs in an upper-division astronomy course. They help develop students' understanding of symbolic, mathematical representations and they help improve students' problem-solving skills.
physics
Imaging and manipulating individual atoms with submicrometer separation can be instrumental for quantum simulation of condensed matter Hamiltonians and quantum computation with neutral atoms. Quantum gas microscope experiments in most cases rely on quite costly solutions. Here we present an open-source design of a microscope objective for atomic strontium consisting solely of off-the-shelf lenses that is diffraction-limited for 461${\,}$nm light. A prototype built with a simple stacking design is measured to have a resolution of 0.63(4)${\,\mu}$m, which is in agreement with the predicted value. This performance, together with the near diffraction-limited performance for 532${\,}$nm light makes this design useful for both quantum gas microscopes and optical tweezer experiments with strontium. Our microscope can easily be adapted to experiments with other atomic species such as erbium, ytterbium, and dysprosium, as well as Rydberg experiments with rubidium.
physics
The pulse morphology of fast radio bursts (FRBs) provides key information in both understanding progenitor physics and the plasma medium through which the burst propagates. We present a study of the profiles of 33 bright FRBs detected by the Australian Square Kilometre Array Pathfinder. We identify seven FRBs with measurable intrinsic pulse widths, including two FRBs that have been seen to repeat. In our modest sample we see no evidence for bimodality in the pulse width distribution. We also identify five FRBs with evidence of millisecond timescale pulse broadening caused by scattering in inhomogeneous plasma. We find no evidence for a relationship between pulse broadening and extragalactic dispersion measure. The scattering could be either caused by extreme turbulence in the host galaxy or chance propagation through foreground galaxies. With future high time resolution observations and detailed study of host galaxy properties we may be able to probe line-of-sight turbulence on gigaparsec scales.
astrophysics
Transport of electrolytic solutions under influence of electric fields occurs in phenomena ranging from biology to geophysics. Here, we present a continuum model for single-phase electrohydrodynamic flow, which can be derived from fundamental thermodynamic principles. This results in a generalized Navier-Stokes-Poisson-Nernst-Planck system, where fluid properties such as density and permittivity depend on the ion concentration fields. We propose strategies for constructing numerical schemes for this set of equations, where solving the electrochemical and the hydrodynamic subproblems are decoupled at each time step. We provide time discretizations of the model that suffice to satisfy the same energy dissipation law as the continuous model. In particular, we propose both linear and non-linear discretizations of the electrochemical subproblem, along with a projection scheme for the fluid flow. The efficiency of the approach is demonstrated by numerical simulations using several of the proposed schemes.
physics
The mitotic spindle lies at the heart of the spatio-temporal control over cellular components during cell division. The spindle consists of microtubules, which are not only crosslinked by motor proteins but also by passive binding proteins. These passive crosslinkers stabilize the highly dynamic mitotic spindle by generating friction forces between sliding filaments. However, it remains unclear how the friction coefficient depends on the number of crosslinkers and the size of the overlap between the microtubules. Here, we use theory and computer simulations to study the friction between two filaments that are crosslinked by passive proteins, which can hop between neighboring binding sites while physically excluding each other. The simulations reveal that the movement of one microtubule relative to the other is limited by free-energy barrier crossings, causing rare and discrete jumps of the microtubule that span the distance between adjacent crosslinker binding sites. We derive an exact analytical expression for the free-energy landscape and identify the reaction coordinate that governs the relative movement, which allows us to determine the effective barrier height for the microtubule jumps. Both through simulations and reaction rate theory, we make the experimentally testable prediction that the friction between the microtubules increases superexponentially with the density of crosslinkers.
physics
We consider the task of generating draws from a Markov jump process (MJP) between two time-points at which the process is known. Resulting draws are typically termed bridges and the generation of such bridges plays a key role in simulation-based inference algorithms for MJPs. The problem is challenging due to the intractability of the conditioned process, necessitating the use of computationally intensive methods such as weighted resampling or Markov chain Monte Carlo. An efficient implementation of such schemes requires an approximation of the intractable conditioned hazard/propensity function that is both cheap and accurate. In this paper, we review some existing approaches to this problem before outlining our novel contribution. Essentially, we leverage the tractability of a Gaussian approximation of the MJP and suggest a computationally efficient implementation of the resulting conditioned hazard approximation. We compare and contrast our approach with existing methods using three examples.
statistics
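For intuition on the bridge problem in the abstract above, here is a minimal sketch of the one case where the conditioned hazard is available in closed form: a Poisson process on [0, T] conditioned on its endpoint count, whose conditioned intensity is (n - N_t)/(T - t). The discretized simulation below stands in for the generally intractable conditioned hazard that the paper approximates; the function and its crude time-stepping are illustrative, not the paper's algorithm.

```python
import numpy as np

def poisson_bridge(n_end, T, dt=1e-3, rng=None):
    """Sample event times of a rate-lambda Poisson process on [0, T]
    conditioned on exactly n_end events (the simplest MJP 'bridge').
    The conditioned hazard (n_end - N_t)/(T - t) is exact here and,
    notably, no longer depends on lambda."""
    rng = np.random.default_rng(rng)
    t, count, events = 0.0, 0, []
    while t < T - dt:
        hazard = (n_end - count) / (T - t)  # remaining events / remaining time
        if rng.random() < hazard * dt:
            events.append(t)
            count += 1
        t += dt
    # force in any events the coarse discretization missed before T
    while count < n_end:
        events.append(T - dt)
        count += 1
    return np.array(events)
```

A known check: conditioned on the endpoint, the event times of a Poisson process are distributed as order statistics of uniforms on [0, T], so bridge draws should average near T/2.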
Electromagnetic waves in a dynamical axion background exhibit superluminal group velocities at high frequencies and instabilities at low frequencies, altering how photons propagate through space. Local disturbances propagate causally, but unlike in ordinary Maxwell theory, propagation occurs inside as well as on the lightcone. For the unstable modes, the energy density in the electromagnetic field grows exponentially along timelike displacements. In this paper we derive retarded Green functions in axion electrodynamics in various limits and study the time-domain properties of propagating signals.
high energy physics phenomenology
An unmanned autonomous vehicle (UAV) is sent on a mission to explore and reconstruct an unknown environment from a series of measurements collected by Bayesian optimization. The success of the mission is judged by the UAV's ability to faithfully reconstruct any anomalous features present in the environment, with emphasis on the extremes (e.g., extreme topographic depressions or abnormal chemical concentrations). We show that the criteria commonly used for determining which locations the UAV should visit are ill-suited for this task. We introduce a number of novel criteria that guide the UAV towards regions of strong anomalies by leveraging previously collected information in a mathematically elegant and computationally tractable manner. We demonstrate superiority of the proposed approach in several applications, including reconstruction of seafloor topography from real-world bathymetry data, as well as tracking of dynamic anomalies. A particularly attractive property of our approach is its ability to overcome adversarial conditions, that is, situations in which prior beliefs about the locations of the extremes are imprecise or erroneous.
statistics
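The abstract above contrasts standard acquisition criteria with ones that reward departure from the typical field level. As a toy stand-in (not the paper's actual criteria), the function below scores candidate locations by the absolute departure of the posterior mean from its median, plus an uncertainty bonus, assuming a surrogate model has already produced `mu` and `sigma` on a candidate grid.

```python
import numpy as np

def anomaly_acquisition(mu, sigma, kappa=1.0):
    """Hypothetical anomaly-seeking acquisition (illustrative only):
    reward locations whose posterior mean departs strongly from the
    field's typical level, plus an exploration bonus proportional to
    the posterior standard deviation."""
    departure = np.abs(mu - np.median(mu))
    return departure + kappa * sigma
```

Unlike a plain upper-confidence-bound rule, which chases high means, this score is drawn equally to extreme depressions and extreme peaks, matching the mission objective of reconstructing anomalies of either sign.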
We consider two dimensional conformal field theory (CFT) with large central charge c in an excited state obtained by the insertion of an operator \Phi with large dimension \Delta_\Phi ~ O(c) at spatial infinities in the thermal state. We argue that correlation functions of light operators in such a state can be viewed as thermal correlators with a rescaled effective temperature. The effective temperature controls the growth of out-of-time order (OTO) correlators and results in a violation of the universal upper bound on the associated Lyapunov exponent when \Delta_\Phi <0 and the CFT is nonunitary. We present a specific realization of this situation in the holographic Chern-Simons formulation of a CFT with {W}^{(2)}_3 symmetry also known as the Bershadsky-Polyakov algebra. We examine the precise correspondence between the semiclassical (large-c) representations of this algebra and the Chern-Simons formulation, and infer that the holographic CFT possesses a discretuum of degenerate ground states with negative conformal dimension \Delta_\Phi =- c/8. Using the Wilson line prescription to compute entanglement entropy and OTO correlators in the holographic CFT undergoing a local quench, we find the Lyapunov exponent \lambda_L = 4\pi/ \beta, violating the universal chaos bound.
high energy physics theory
We consider double $gg\rightarrow g$ production in the presence of a bias on the unintegrated gluon distribution of the colliding hadrons or nuclei. Such a bias could be due to the selection of configurations with a greater number of gluons or higher mean transverse momentum squared or, more generally, due to a modified spectral shape of the gluon distribution in the hadrons. Hence, we consider reweighted functional averages over the stochastic ensemble of small-x gluons. We evaluate explicitly the double inclusive gluon transverse momentum spectrum in high-energy collisions, and its azimuthal correlations, for a few simple examples of biases.
high energy physics phenomenology
In this work we construct an approximate time evolution operator for a system composed of two coupled Jaynes-Cummings Hamiltonians. We express the full time evolution operator as a product of exponentials and analyze the validity of our approximations by contrasting our analytical results with those obtained by purely numerical methods.
quantum physics
Retinal vessels are important biomarkers for many ophthalmological and cardiovascular diseases. It is of great significance to develop an accurate and fast vessel segmentation model for computer-aided diagnosis. Existing methods such as U-Net follow the encoder-decoder pipeline, where detailed information is lost in the encoder in order to achieve a large field of view. Although detailed information can be recovered in the decoder via multi-scale fusion, it still contains noise. In this paper, we propose a deep segmentation model, called the detail-preserving network (DPN), for efficient vessel segmentation. To preserve detailed spatial information and learn structural information at the same time, we design the detail-preserving block (DP-Block). We then stack eight DP-Blocks together to form the DPN. Importantly, there are no down-sampling operations among these blocks. As a result, the DPN maintains a high resolution throughout processing, which helps locate the boundaries of thin vessels. To illustrate the effectiveness of our method, we conducted experiments on three public datasets. Experimental results show that, compared to state-of-the-art methods, our method achieves competitive or better performance in terms of segmentation accuracy, segmentation speed, extensibility, and number of parameters. Specifically: 1) The AUC of our method ranks first/second/third on the STARE/CHASE_DB1/DRIVE datasets, respectively. 2) Our method requires only one forward pass to generate a vessel segmentation map, and its segmentation speed is 20-160x faster than other methods on the DRIVE dataset. 3) We conducted cross-training experiments to demonstrate the extensibility of our method, and the results revealed superior performance. 4) The number of parameters of our method is only around 96k, less than that of all comparison methods.
electrical engineering and systems science
We study different maximum principles for non-local non-linear operators with non-standard growth that arise naturally in the context of fractional Orlicz-Sobolev spaces and whose most notable representative is the fractional $g-$Laplacian: \[ (-\Delta_g)^su(x):=\textrm{p.v.}\int_{\mathbb{R}^n}g\left(\frac{u(x)-u(y)}{|x-y|^s}\right)\frac{dy}{|x-y|^{n+s}}, \] being $g$ the derivative of a Young function. We further derive qualitative properties of solutions such as a Liouville type theorem and symmetry results and present several possible extensions and some interesting open questions. These are the first results of this type proved in this setting.
mathematics
Knowledge graphs (KGs) have the advantage of providing fine-grained detail for question-answering systems. Unfortunately, building a reliable KG is time-consuming and expensive as it requires human intervention. To overcome this issue, we propose a novel framework to automatically construct a KG from unstructured documents that does not require external alignment. We first extract surface-form knowledge tuples from unstructured documents and encode them with contextual information. Entities with similar context semantics are then linked through internal alignment to form a graph structure. This allows us to extract the desired information from multiple documents by traversing the generated KG without a manual process. We examine its performance in retrieval based QA systems by reformulating the WikiMovies and MetaQA datasets into a tuple-level retrieval task. The experimental results show that our method outperforms traditional retrieval methods by a large margin.
computer science
We consider continuous-variable quantum key distribution with discrete-alphabet encodings. In particular, we study protocols where information is encoded in the phase of displaced coherent (or thermal) states, even though the results can be directly extended to any protocol based on finite constellations of displaced Gaussian states. In this setting, we provide a composable security analysis in the finite-size regime assuming the realistic but restrictive hypothesis of collective Gaussian attacks. Under this assumption, we can efficiently estimate the parameters of the channel via maximum likelihood estimators and bound the corresponding error in the final secret key rate.
quantum physics
An integrable spinning extension of a free particle on the 2-sphere is constructed, in which spin degrees of freedom are represented by a 3-vector obeying the Bianchi type-V algebra. Generalizations are built that involve a scalar potential giving rise to two quadratic constants of the motion, the external field of the Dirac monopole, or motion on the group manifold of SU(2). A link to the model of a relativistic spinning particle propagating on the near-horizon 7d Myers-Perry black hole background is considered. Implications of the construction for D(2,1;a) superconformal mechanics are discussed.
high energy physics theory
The Nystrom method is a popular technique that uses a small number of landmark points to compute a fixed-rank approximation of large kernel matrices that arise in machine learning problems. In practice, to ensure high quality approximations, the number of landmark points is chosen to be greater than the target rank. However, for simplicity the standard Nystrom method uses a sub-optimal procedure for rank reduction. In this paper, we examine the drawbacks of the standard Nystrom method in terms of poor performance and lack of theoretical guarantees. To address these issues, we present an efficient modification for generating improved fixed-rank Nystrom approximations. Theoretical analysis and numerical experiments are provided to demonstrate the advantages of the modified method over the standard Nystrom method. Overall, the aim of this paper is to convince researchers to use the modified method, as it has nearly identical computational complexity, is easy to code, has greatly improved accuracy in many cases, and is optimal in a sense that we make precise.
statistics
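The standard fixed-rank Nystrom method that the abstract above critiques can be sketched as follows. This is a generic textbook illustration assuming NumPy is available, not the authors' modified method: the rank reduction is done by truncating the landmark block W to its best rank-r approximation, the simple but sub-optimal step the paper improves upon.

```python
import numpy as np

def nystrom(K, landmarks, rank):
    """Standard fixed-rank Nystrom approximation of a PSD kernel matrix K.

    K         : (n, n) PSD kernel matrix
    landmarks : indices of the m >= rank landmark points
    rank      : target rank r

    Returns C @ pinv(W_r) @ C.T, where C = K[:, landmarks], W is the
    landmark block, and W_r is the best rank-r approximation of W.
    """
    C = K[:, landmarks]                  # (n, m) cross-kernel block
    W = C[landmarks, :]                  # (m, m) landmark block
    vals, vecs = np.linalg.eigh(W)       # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:rank]  # keep the r largest
    inv_vals = np.array([1.0 / v if v > 1e-12 else 0.0 for v in vals[top]])
    W_r_pinv = (vecs[:, top] * inv_vals) @ vecs[:, top].T
    return C @ W_r_pinv @ C.T
```

When all n points are used as landmarks and the rank equals n, the approximation reproduces K exactly, which is a quick sanity check on any implementation.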
We construct new families of deformed supersymmetric field theories which break space-time symmetries but preserve half of the original supersymmetry. We do this by writing deformations as couplings to background multiplets. In many cases it is important to use the off-shell representation as auxiliary fields of the non-dynamical fields must be turned on to preserve supersymmetry. We also consider backgrounds which preserve some superconformal symmetry, finding scale-invariant field profiles, as well as $\mathcal{N} =2$ theories on $S^3$. We discuss how this is related to previous work on interface SCFTs and other holographic calculations.
high energy physics theory
The vacuum expectation value $v_s$ of a Higgs triplet field $\Delta$ carrying two units of lepton number $L$ induces neutrino masses $\propto v_s$. The neutral component of $\Delta$ gives rise to two Higgs particles, a pseudoscalar $A$ and a scalar $S$. The most general renormalizable Higgs potential $V$ for $\Delta $ and the Standard-Model Higgs doublet $\Phi$ does not permit the possibility that the mass of either $A$ or $S$ is small, of order $v_s$, while the other mass is heavy enough to forbid the decay $Z\to A S$ to comply with LEP 1 data. We present a model with additional dimension-6 terms in $V$, in which this feature is absent and either $A$ or $S$ can be chosen light. Subsequently we propose the model as a remedy to cosmological anomalies, namely the tension between observed and predicted tensor-to-scalar mode ratios in the cosmic microwave background and the different values of the Hubble constant measured at different cosmological scales. Furthermore, if $\Delta$ dominantly couples to the third-generation doublet $L_\tau=(\nu_\tau,\tau)$, the deficit of $\nu_\tau$ events at IceCube can be explained. The singly and doubly charged triplet Higgs bosons are lighter than 280 GeV and 400 GeV respectively, and could be found at the LHC.
high energy physics phenomenology
We investigate the application of a holographic entanglement negativity construction to bipartite states of single subsystems in $CFT_d$s with a conserved charge dual to bulk $AdS_{d+1}$ geometries. In this context, we obtain the holographic entanglement negativity for single subsystems with long rectangular strip geometry in $CFT_d$s dual to bulk extremal and nonextremal Reissner-Nordstr{\"o}m (RN)-$AdS_{d+1}$ black holes. Our results demonstrate that for this configuration the holographic entanglement negativity involves subtraction of the thermal entropy from the entanglement entropy confirming earlier results. This conforms to the characterization of entanglement negativity as the upper bound on the distillable entanglement in quantum information theory and constitutes an important consistency check for our higher dimensional construction.
high energy physics theory
Rendering an accurate image of an isosurface in a volumetric field typically requires large numbers of data samples. Reducing the number of required samples lies at the core of research in volume rendering. With the advent of deep learning networks, a number of architectures have been proposed recently to infer missing samples in multi-dimensional fields, for applications such as image super-resolution and scan completion. In this paper, we investigate the use of such architectures for learning the upscaling of a low-resolution sampling of an isosurface to a higher resolution, with high fidelity reconstruction of spatial detail and shading. We introduce a fully convolutional neural network to learn a latent representation generating a smooth, edge-aware normal field and ambient occlusions from a low-resolution normal and depth field. By adding a frame-to-frame motion loss into the learning stage, the upscaling can consider temporal variations and achieves improved frame-to-frame coherence. We demonstrate the quality of the network for isosurfaces which were never seen during training, and discuss remote and in-situ visualization as well as focus+context visualization as potential applications.
computer science
Integrated light (IL) spectroscopy enables studies of stellar populations beyond the Milky Way and its nearest satellites. In this paper, I will review how IL spectroscopy reveals essential information about globular clusters and the assembly histories of their host galaxies, concentrating particularly on the metallicities and detailed chemical abundances of the GCs in M31. I will also briefly mention the effects of multiple populations on IL spectra, and how observations of distant globular clusters help constrain the source(s) of light-element abundance variations. I will end with future perspectives, emphasizing how IL spectroscopy can bridge the gap between Galactic and extragalactic astronomy.
astrophysics
We study pseudo-Goldstone dark matter in the $\mathbb{Z}_{3}$ complex scalar singlet model. Because the direct detection spin-independent cross section is suppressed, such dark matter is allowed in a large mass range. Unlike in the original model stabilized by a parity, due to the cubic coupling of the singlet the $\mathbb{Z}_{3}$ model can accommodate first-order phase transitions that give rise to a stochastic gravitational wave signal potentially observable in future space-based detectors.
high energy physics phenomenology
Modern machine learning-based recognition approaches require large-scale datasets with a large number of labelled training images. However, such datasets are inherently difficult and costly to collect and annotate. Hence there is a great and growing interest in automatic dataset collection methods that can leverage the web. Collecting datasets in this way, however, requires robust and efficient ways of detecting and excluding outliers, which are common and prevalent. So far, there has been limited effort in the machine learning community to directly detect outliers for robust classification. Inspired by recent work on Pre-conditioned LASSO, this paper formulates the outlier detection task using Pre-conditioned LASSO and employs unsupervised transductive diffusion component analysis to both integrate the topological structure of the data manifold, from labeled and unlabeled instances, and reduce the feature dimensionality. Synthetic experiments as well as results on two real-world classification tasks show that our framework can robustly detect outliers and improve classification.
computer science
The projected sensitivity of the LUX-ZEPLIN (LZ) experiment to two-neutrino and neutrinoless double beta decay of $^{134}$Xe is presented. LZ is a 10-tonne xenon time projection chamber optimized for the detection of dark matter particles, which is expected to start operating in 2021 at the Sanford Underground Research Facility, USA. Its large mass of natural xenon provides an exceptional opportunity to search for the double beta decay of $^{134}$Xe, for which xenon detectors enriched in $^{136}$Xe are less effective. For the two-neutrino decay mode, LZ is predicted to exclude values of the half-life up to 1.7$\times$10$^{24}$ years at 90% confidence level (CL), and has a three-sigma observation potential of 8.7$\times$10$^{23}$ years, approaching the predictions of nuclear models. For the neutrinoless decay mode, LZ is projected to exclude values of the half-life up to 7.3$\times$10$^{24}$ years at 90% CL.
physics
A new approach to solving random matrix models directly in the large $N$ limit is developed. First, a set of numerical values for some low-pt correlation functions is guessed. The large $N$ loop equations are then used to generate values of higher-pt correlation functions based on this guess. Then one tests whether these higher-pt functions are consistent with positivity requirements, e.g., $\langle \text{tr }M^{2k} \rangle \ge 0$. If not, the guessed values are systematically ruled out. In this way, one can constrain the correlation functions of random matrices to a tiny subregion which contains (and perhaps converges to) the true solution. This approach is tested on single and multi-matrix models and handily reproduces known solutions. It also produces strong results for multi-matrix models which are not believed to be solvable. A tantalizing possibility is that this method could be used to search for new critical points, or string worldsheet theories.
high energy physics theory
Despite the wide empirical success of modern machine learning algorithms and models in a multitude of applications, they are known to be highly susceptible to seemingly small, indiscernible perturbations of the input data, known as adversarial attacks. A variety of recent adversarial training procedures have been proposed to remedy this issue. Despite the success of such procedures at increasing accuracy on adversarially perturbed inputs (robust accuracy), these techniques often reduce accuracy on natural unperturbed inputs (standard accuracy). Complicating matters further, the effect of adversarial training procedures on standard and robust accuracy is rather counterintuitive and depends radically on a variety of factors, including the perceived form of the perturbation during training, the size/quality of the data, model overparameterization, etc. In this paper we focus on binary classification problems where the data is generated according to a mixture of two Gaussians with general anisotropic covariance matrices, and derive a precise characterization of the standard and robust accuracy for a class of minimax adversarially trained models. We consider a general norm-based adversarial model, where the adversary can add perturbations of bounded $\ell_p$ norm to each input data point, for an arbitrary $p\ge 1$. Our comprehensive analysis allows us to theoretically explain several intriguing empirical phenomena and provides a precise understanding of the role of different problem parameters on standard and robust accuracies.
statistics
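As a minimal illustration of the kind of closed-form accuracy characterization discussed in the abstract above, the standard accuracy of a fixed linear classifier on a symmetric two-component Gaussian mixture has an exact Gaussian-tail expression in the special isotropic case. The paper's analysis treats general anisotropic covariances and adversarially trained models; this sketch covers neither and is purely illustrative.

```python
from math import erf, sqrt

def standard_accuracy(w, mu, sigma):
    """Exact standard accuracy of the classifier sign(<w, x>) on the mixture
    y ~ Uniform{-1, +1},  x = y * mu + sigma * z,  z ~ N(0, I).

    (Isotropic special case: covariance sigma^2 * I.)
    Accuracy = Phi(<w, mu> / (sigma * ||w||)), with Phi the standard
    normal CDF, derived from P(<w, mu> + sigma * <w, z> > 0).
    """
    dot = sum(wi * mi for wi, mi in zip(w, mu))
    norm = sqrt(sum(wi * wi for wi in w))
    z = dot / (sigma * norm)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

A classifier aligned with the mean direction at unit signal-to-noise ratio achieves accuracy Phi(1) ~ 0.84, while one orthogonal to the mean is at chance.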
High-quality ultra-thin films of niobium nitride (NbN) are developed by plasma-enhanced atomic layer deposition (PEALD) technique. Superconducting nanowire single-photon detectors (SNSPDs) patterned from this material exhibit high switching currents and saturated internal efficiencies over a broad bias range at 1550 nm telecommunication wavelength. Statistical analyses on hundreds of fabricated devices show near-unity throughput yield due to exceptional homogeneity of the films. The ALD-NbN material represents an ideal superconducting material for fabricating large single-photon detector arrays combining high efficiency, low jitter, low dark counts.
physics
The possibility of using the width of the decay $\rho \to e^{+}e^{-}$ to fix the input parameter $g_\rho=5.0$ of the $SU(2) \times SU(2)$ chiral-symmetric Nambu--Jona-Lasinio model is discussed. It is shown that a consistent simultaneous description of the processes $\rho \to e^{+}e^{-}$, $\rho \to \pi^{+}\pi^{-}$, $\tau^{-} \to \pi^{-}\pi^{0} \nu_{\tau}$, and $e^{+}e^{-} \to \pi^{+}\pi^{-}$ can be constructed. Taking into account the interaction of pions in the final state proves to be important. The obtained theoretical results for the considered processes are in satisfactory agreement with experimental data.
high energy physics phenomenology
Triangular zigzag nanographenes, such as triangulene and its pi-extended homologues, have received widespread attention as organic nanomagnets for molecular spintronics, and may serve as building blocks for high-spin networks with long-range magnetic order - of immense fundamental and technological relevance. As a first step toward these lines, we present the on-surface synthesis and a proof-of-principle experimental study of magnetism in covalently bonded triangulene dimers. On-surface reactions of rationally-designed precursor molecules on Au(111) lead to the selective formation of triangulene dimers in which the triangulene units are either directly connected through their minority sublattice atoms, or are separated via a 1,4-phenylene spacer. The chemical structures of the dimers have been characterized by bond-resolved scanning tunneling microscopy. Scanning tunneling spectroscopy and inelastic electron tunneling spectroscopy measurements reveal collective singlet-triplet spin excitations in the dimers, demonstrating efficient inter-triangulene magnetic coupling.
condensed matter
We compute the most general embedding space two-point function in arbitrary Lorentz representations in the context of the recently introduced formalism in arXiv:1905.00036 and arXiv:1905.00434. This work provides a first explicit application of this approach and furnishes a number of checks of the formalism. We project the general embedding space two-point function to position space and find a form consistent with conformal covariance. Several concrete examples are worked out in detail. We also derive constraints on the OPE coefficient matrices appearing in the two-point function, which allow us to impose unitarity conditions on the two-point function coefficients for operators in any Lorentz representations.
high energy physics theory
We examine how systems in non-equilibrium steady states close to a continuous phase transition can still be described by a Landau potential if one forgoes the assumption of analyticity. In a system simultaneously coupled to several baths at different temperatures, the non-analytic potential arises from the different density of states of the baths. In periodically driven-dissipative systems, the role of multiple baths is played by a single bath transferring energy at different harmonics of the driving frequency. The mean-field critical exponents become dependent on the low-energy features of the two most singular baths. We propose an extension beyond mean field.
condensed matter
We study the $a_1a_1$ and $Za_1$ decay channels of the next-to-lightest CP-even Higgs boson $h_2$ of the NMSSM at the LHC, where the $h_2$ is produced in gluon fusion. It is found that while the $h_2$ discovery is impossible through the latter channel, the former one in the $ 4\tau$ final state is a promising channel to discover the $h_2$ with masses up to around 250 GeV at the LHC. Such a discovery of the $h_2$ is mostly accompanied with a light $a_1$, which is a clear evidence for distinguishing the NMSSM from the MSSM since such a light $a_1$ is impossible in the MSSM.
high energy physics phenomenology
The density of states of proximitized normal nanowires interrupting superconducting rings can be tuned by the magnetic flux piercing the loop. Using these as the contacts of a single-electron transistor allows one to control the energetic mirror asymmetry of the conductor, thereby introducing rectification properties. In particular, we show that the system works as a diode that rectifies both charge and heat currents and whose polarity can be reversed by the magnetic field and a gate voltage. We emphasize the role of dissipation at the island. The coupling to substrate phonons enhances the effect and furthermore introduces a channel for phase-tunable conversion of heat exchanged with the environment into electrical current.
condensed matter
We explore practical tradeoffs in blockchain-based biometric template storage. We first discuss opportunities and challenges in the integration of blockchain and biometrics, with emphasis in biometric template storage and protection, a key problem in biometrics still largely unsolved. Blockchain technologies provide excellent architectures and practical tools for securing and managing the sensitive and private data stored in biometric templates, but at a cost. We explore experimentally the key tradeoffs involved in that integration, namely: latency, processing time, economic cost, and biometric performance. We experimentally study those factors by implementing a smart contract on Ethereum for biometric template storage, whose cost-performance is evaluated by varying the complexity of state-of-the-art schemes for face and handwritten signature biometrics. We report our experiments using popular benchmarks in biometrics research, including deep learning approaches and databases captured in the wild. As a result, we experimentally show that straightforward schemes for data storage in blockchain (i.e., direct and hash-based) may be prohibitive for biometric template storage using state-of-the-art biometric methods. A good cost-performance tradeoff is shown by using a blockchain approach based on Merkle trees.
computer science
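The Merkle-tree approach that the abstract above identifies as the good cost-performance tradeoff stores only a single root hash on-chain, with each template later provable by a logarithmic-size path. A generic sketch of root computation (illustrative only, not the authors' smart-contract implementation):

```python
import hashlib

def merkle_root(leaves):
    """Compute the Merkle root of a list of byte strings.

    Each leaf (e.g., a serialized biometric template) is hashed with
    SHA-256; adjacent node hashes are then repeatedly concatenated and
    re-hashed until a single root remains. On odd levels the last node
    is duplicated, a common (but not universal) convention.
    """
    if not leaves:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Only the 32-byte root needs on-chain storage regardless of how many templates the tree holds, which is exactly why this scheme is far cheaper than storing templates or per-template hashes directly.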
We develop ZnO/p-Si photodetectors by atomic layer deposition (ALD) of ZnO thin films on laser-microstructured silicon and investigate their electrical and optical behavior, demonstrating high sensitivity and broadband operation. Microstructured p-type silicon was obtained by ns-laser irradiation in SF6 gas, which results in the formation of quasi-ordered and uniform microspikes on the silicon surface. The irradiated silicon contains sulfur impurities, which extend its absorbance to the near infrared. A thin film of ZnO was conformally deposited on the microstructured silicon substrates by ALD. Photoluminescence measurements indicate high crystalline quality of the ZnO film after annealing. Current-voltage (I-V) measurements of the ZnO/p-Si heterodiodes in the dark show a non-linear behavior with unusually high current values in reverse bias. Under illumination, photocurrent is observed in reverse bias, even for wavelengths below the silicon bandgap in the case of the laser-microstructured photodetectors. Higher current values are measured for the microstructured photodetectors compared to planar ones. Photoconductivity measurements show enhanced responsivity across the UV-Vis-NIR spectral range for the laser-microstructured devices, due to their increased surface area and light absorption.
physics
We show that perturbative quantum gravity based on the Einstein-Hilbert action has a novel continuum limit. The renormalized trajectory emanates from the Gaussian fixed point along (marginally) relevant directions but enters the diffeomorphism invariant subspace only well below a dynamically generated scale. We show that for pure quantum gravity to second order in perturbation theory, and with vanishing cosmological constant, the result is the same as computed in the standard quantisation. Although this case is renormalizable at second order for kinematic reasons, the structure we uncover works in general. One possibility is that gravity has a genuine consistent continuum limit even though it has an infinite number of couplings. However we also suggest a possible non-perturbative mechanism, based on the parabolic properties of these flow equations, which would fix all higher-order couplings in terms of Newton's constant and the cosmological constant.
high energy physics theory
Narrow linewidth optical atomic transitions provide a valuable resource for frequency metrology, and form the basis of today's most precise and accurate clocks. Recent experiments have demonstrated that ensembles of atoms can be interfaced with the mode of an optical cavity using such transitions, and that atom-cavity interactions can dominate over decoherence processes even when the atomic transition that mediates the interactions is very weak. This scenario enables new opportunities for optical frequency metrology, including techniques for nondestructive readout and entanglement enhancement for optical lattice clocks, methods for cavity-enhanced laser frequency stabilization, and high-precision active optical frequency references based on superradiant emission. This tutorial provides a pedagogical description of the physics governing atom-cavity coupling with narrow linewidth optical transitions, and describes several examples of applications to optical frequency metrology.
physics
In the framework of the color-magnetic interaction model, we have systematically calculated the mass splittings for the S-wave triply heavy pentaquark states with the configuration $qqQQ\bar{Q}$ $(Q=c,b;q=u,d,s)$. Their masses are estimated and their stabilities are discussed according to possible rearrangement decay patterns. Our results indicate that there may exist several stable or narrow such states. We hope the present study can help experimentalists to search for exotic pentaquarks.
high energy physics phenomenology
Particle shape plays an important role in the phase behavior of colloidal self-assembly. Recent progress in particle synthesis has made particles of polyhedral shapes and dimpled spherical shapes available. Here, using computer simulations of hard particle models, we study face-centered cubic to body-centered cubic (FCC-to-BCC) phase transitions in a convex 432 polyhedral shape family and a concave dimpled-sphere family. Particles in both families have four-, three-, and two-fold rotational symmetries. Via free energy calculations we find that the FCC-to-BCC transitions in both families are first order. As previous work reported that the FCC-to-BCC phase transition is first order in a convex 332 family of hard polyhedra, our work provides additional insight into the FCC-to-BCC transition and how the convexity or concavity of particle shape affects phase transition pathways.
condensed matter
We discuss the dRGT massive gravity interacting with spin-0, spin-1/2, or spin-1 matter. The effective theory of a massive spin-2 particle coupled to matter particles is constructed directly at the amplitude level. In this setting we calculate the gravitational Compton scattering amplitudes and study their UV properties. While the Compton amplitudes generically grow with energy as $\mathcal{O}(E^6)$, we identify regions of the parameter space where they are softened to $\mathcal{O}(E^4)$ or even $\mathcal{O}(E^3)$, which allows for a larger validity range of the effective theory. In these regions, both positivity and beyond-positivity of the forward Compton amplitudes are fulfilled, and the equivalence principle automatically emerges.
high energy physics theory
Most automation in machine learning focuses on model selection and hyperparameter tuning, and many efforts overlook the challenge of automatically defining predictive tasks. We still heavily rely on human experts to define prediction tasks and generate labels by aggregating raw data. In this paper, we tackle the challenge of defining useful prediction problems on event-driven time-series data. We introduce MLFriend to address this challenge. MLFriend first generates all possible prediction tasks under a predefined space, then interacts with a data scientist to learn the context of the data and recommend good prediction tasks from all the tasks in the space. We evaluate our system on three different datasets, generating and solving a total of 2885 prediction tasks. Out of these, 722 were deemed useful by expert data scientists. We also show that an automatic prediction task discovery system is able to identify the top 10 tasks that a user may like within a batch of 100 tasks.
computer science
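The first stage described in the abstract above, generating all candidate prediction tasks under a predefined space, can be sketched as a Cartesian product over task components. The field names below are hypothetical, chosen for illustration, and are not MLFriend's actual schema:

```python
from itertools import product

def enumerate_tasks(entities, targets, operations, windows):
    """Enumerate every candidate prediction task in a predefined space.

    A task here is a combination of: which entity to predict for, which
    raw field to aggregate into a label, the aggregation operation, and
    the time window (in days) over which to aggregate.
    """
    return [
        {"entity": e, "target": t, "op": o, "window_days": w}
        for e, t, o, w in product(entities, targets, operations, windows)
    ]
```

Even a tiny space blows up combinatorially, which is why a recommendation step over the enumerated tasks, as the abstract describes, is needed before asking a human to review them.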
Within the classical emission model, where the emission region is placed within the broad line region (BLR), flat spectrum radio quasars (FSRQs) were believed not to emit photons with energies above a few tens of GeV because of absorption by the optical-UV photons from the BLR. However, photons with observed energies up to about $300 \, \rm GeV$ have been detected from a few FSRQs, the most iconic example being PKS 1441+25 at redshift $z = 0.94$. The most conservative explanation for these observations is that the emission occurs at distances comparable to the size of the dusty torus. In this case, absorption of high-energy gamma-ray photons with energies above $200-300 \, {\rm GeV}$ is dominated by the interaction with infrared radiation emitted by the torus. We investigate whether current observational data on FSRQs in the flaring state can give us information about: (i) the importance of torus absorption, and (ii) the properties of the torus, i.e. its temperature and geometry. We find that present data do not reach energies where the influence of the torus is prominent, and as a result it is currently hardly possible to infer torus properties from observations. However, with dedicated simulations, we demonstrate that observations with the forthcoming Cherenkov Telescope Array (CTA) will be able to constrain the torus parameters (temperature and geometry).
astrophysics
A methodology is devised for building optimal bases for the generalized Dicke model based on the symmetry adapted variational solution to the problem. At order zero, the matter sector is constructed by distributing $N_a$ particles in all the possible two-level subsystems connected with electromagnetic radiation; the next order is obtained when the states of $N_a-1$ particles are added and distributed again into the two-level subsystems; and so on. In the electromagnetic sector, the order zero for each mode is the direct sum of the Fock spaces, truncated to a value of the corresponding constants of motion of each two-level subsystem; by including contributions of the other modes, the next orders are obtained. As an example of the procedure we consider $4$ atoms in the $\Xi$ configuration interacting dipolarly with two modes of electromagnetic radiation. The results may be applied to situations in quantum optics, quantum information, and quantum computing.
quantum physics
We point out that the location of renormalon singularities in a theory on a circle-compactified spacetime $\mathbb{R}^{d-1} \times S^1$ (with a small radius $R \Lambda \ll 1$) can differ from that on the non-compactified spacetime $\mathbb{R}^d$. We argue this under the following assumptions, which are often realized in large $N$ theories with twisted boundary conditions: (i) a loop integrand of a renormalon diagram is volume independent, i.e. it is not modified by the compactification, and (ii) the loop momentum variable along the $S^1$ direction is not associated with the twisted boundary conditions and takes the values $n/R$ with integer $n$. We find that the Borel singularity is generally shifted by $-1/2$ in the Borel $u$-plane, where the renormalon ambiguity of $\mathcal{O}(\Lambda^k)$ is changed to $\mathcal{O}(\Lambda^{k-1}/R)$ due to the circle compactification $\mathbb{R}^d \to \mathbb{R}^{d-1} \times S^1$. The result is general for any dimension $d$ and is independent of details of the quantities under consideration. As an example, we study the $\mathbb{C} P^{N-1}$ model on $\mathbb{R} \times S^1$ with $\mathbb{Z}_N$ twisted boundary conditions in the large $N$ limit.
high energy physics theory
High-fidelity single-shot readout of spin qubits requires distinguishing states much faster than the T1 time of the spin state. One approach to improving readout fidelity and bandwidth (BW) is cryogenic amplification, where the signal from the qubit is amplified before noise sources are introduced, so that room-temperature amplifiers can operate at lower gain and higher BW. We compare the performance of two cryogenic amplification circuits: a current-biased heterojunction bipolar transistor circuit (CB-HBT) and an AC-coupled HBT circuit (AC-HBT). Both circuits are mounted on the mixing-chamber stage of a dilution refrigerator and are connected to silicon metal-oxide-semiconductor (Si-MOS) quantum dot devices on a printed circuit board (PCB). The power dissipated by the CB-HBT ranges from 0.1 to 1 {\mu}W, whereas the power of the AC-HBT ranges from 1 to 20 {\mu}W. Referred to the input, the noise spectral density is low for both circuits, in the 15 to 30 fA/$\sqrt{\textrm{Hz}}$ range. The charge sensitivity for the CB-HBT and AC-HBT is 330 {\mu}e/$\sqrt{\textrm{Hz}}$ and 400 {\mu}e/$\sqrt{\textrm{Hz}}$, respectively. For the single-shot readout performed, less than 10 {\mu}s is required for both circuits to achieve bit error rates below $10^{-3}$, which is a putative threshold for quantum error correction.
condensed matter
The intermediate-mass pre-main sequence Herbig Ae/Be stars are key to understanding the differences in formation mechanisms between low- and high-mass stars. The study of the general properties of these objects is hampered by the fact that few and mostly serendipitously discovered sources are known. Our goal is to identify new Herbig Ae/Be candidates to create a homogeneous and well defined catalogue of these objects. We have applied machine learning techniques to 4,150,983 sources with data from Gaia DR2, 2MASS, WISE, and IPHAS or VPHAS+. Several observables were chosen to identify new Herbig Ae/Be candidates based on our current knowledge of this class, which is characterised by infrared excesses, photometric variabilities, and H$\alpha$ emission lines. Classical techniques are not efficient for identifying new Herbig Ae/Be stars mainly because of their similarity with classical Be stars, with which they share many characteristics. By focusing on disentangling these two types of objects, our algorithm has also identified new classical Be stars. We have obtained a large catalogue of 8470 new pre-main sequence candidates and another catalogue of 693 new classical Be candidates with a completeness of $78.8\pm1.4\%$ and $85.5\pm1.2\%$, respectively. Of the catalogue of pre-main sequence candidates, at least 1361 sources are potentially new Herbig Ae/Be candidates according to their position in the Hertzsprung-Russell diagram. In this study we present the methodology used, evaluate the quality of the catalogues, and perform an analysis of their flaws and biases. For this assessment, we make use of observables that have not been accounted for by the algorithm and hence are selection-independent, such as coordinates and parallax based distances. The catalogue of new Herbig Ae/Be stars that we present here increases the number of known objects of the class by an order of magnitude.
astrophysics
The general objective of this work is to study the dynamics of dissipative solitons in the framework of a one-dimensional complex Ginzburg-Landau equation (CGLE) of fractional order. To estimate the shape of solitons in fractional models, we first develop the variational approximation for solitons of the fractional nonlinear Schr\"odinger equation (NLSE), and an analytical approximation for the exponentially decaying tails of the solitons. Proceeding to the numerical consideration of solitons in the fractional CGLE, we study, in necessary detail, the effects of the respective L\'evy index (LI) on the solitons' dynamics. In particular, the dependence of stability domains in the model's parameter space on the LI is identified. Pairs of in-phase dissipative solitons merge into single pulses, with the respective merger distance also determined by the LI.
physics
We develop a method to study quantum impurity models, small interacting quantum systems linearly coupled to an environment, in the presence of an additional Markovian quantum bath with a generic nonlinear coupling to the impurity. We aim at computing the evolution operator of the reduced density matrix of the impurity, obtained after tracing out all the environmental degrees of freedom. First, we derive an exact real-time hybridization expansion for this quantity, which generalizes the result obtained in the absence of the additional Markovian dissipation and which could be amenable to stochastic sampling through diagrammatic Monte Carlo. Then, we obtain a Dyson equation for this quantity and evaluate its self-energy with a resummation technique known as the Non-Crossing Approximation. We apply this novel approach to a simple fermionic impurity coupled to a zero-temperature fermionic bath in the presence of Markovian pumping, losses, and dephasing.
quantum physics
For a multigraph $H$, a graph $G$ is $H$-linked if every injective mapping $\phi: V(H)\to V(G)$ can be extended to an $H$-subdivision in $G$. We study the minimum connectivity required for a graph to be $H$-linked. A $k$-fat-triangle $F_k$ is a multigraph with three vertices and a total of $k$ edges. We determine a sharp connectivity requirement for a graph to be $F_k$-linked. In particular, any $k$-connected graph is $F_k$-linked when $F_k$ is connected. A kite is the graph obtained from $K_4$ by removing two edges at a vertex. As a nontrivial application of $F_k$-linkage, we then prove that every $8$-connected graph is kite-linked, which shows that the required connectivity for a graph to be kite-linked is $7$ or $8$.
mathematics
In this article, we study the first radially excited states of the scalar, axialvector, vector, and tensor diquark-antidiquark type $cc\bar{c}\bar{c}$ tetraquark states with QCD sum rules and obtain the masses and pole residues; we then use the Regge trajectories to obtain the masses of the second radially excited states. The predicted masses support assigning the broad structure from 6.2 to 6.8 GeV in the di-$J/\psi$ mass spectrum as the first radially excited state of the scalar, axialvector, vector, or tensor $cc\bar{c}\bar{c}$ tetraquark state, and assigning the narrow structure at about 6.9 GeV in the di-$J/\psi$ mass spectrum as the second radially excited state of the scalar or axialvector $cc\bar{c}\bar{c}$ tetraquark state.
high energy physics phenomenology
We find analytic solutions of hyperbolic black holes with scalar hair in AdS space, and they do not have spherical or planar counterparts. The system is obtained by taking a neutral limit of an Einstein-Maxwell-dilaton system whose special cases are maximal gauged supergravities, while the dilaton is kept nontrivial. There are phase transitions between these black holes and the hyperbolic Schwarzschild-AdS black hole. The AdS boundary of hyperbolic black holes is conformal to a de Sitter space in static coordinates. By holography, the system we study describes phase transitions of CFTs in de Sitter space. In addition, we give a C-metric solution as a generalization of the hyperbolic black holes with scalar hair.
high energy physics theory
As a key advancement of convolutional neural networks (CNNs), depthwise separable convolutions (DSCs) are becoming one of the most popular techniques to reduce the computation and parameter size of CNNs while maintaining model accuracy. They also have a profound impact on improving the applicability of compute- and memory-intensive CNNs to a broad range of platforms, such as mobile devices, which are generally short of computation power and memory. However, previous research on DSCs has largely focused on compositing the limited existing DSC designs, thus missing the opportunity to explore more potential designs that can achieve better accuracy and higher computation/parameter reduction. Besides, off-the-shelf convolution implementations offer limited computing schemes and therefore lack support for DSCs with different convolution patterns. To this end, we introduce DSXplore, the first optimized design for exploring DSCs on CNNs. Specifically, at the algorithm level, DSXplore incorporates a novel factorized kernel -- sliding-channel convolution (SCC) -- featuring input-channel overlap to balance accuracy and the reduction of computation and memory cost. SCC also offers an enormous space for design exploration by introducing adjustable kernel parameters. Further, at the implementation level, we carry out an optimized GPU implementation tailored for SCC by leveraging several key techniques, such as the input-centric backward design and the channel-cyclic optimization. Extensive experiments on different datasets across mainstream CNNs show the advantages of DSXplore in balancing accuracy and computation/parameter reduction over the standard convolution and existing DSCs.
computer science
The spectral gap problem - determining whether the energy spectrum of a system has an energy gap above the ground state, or if there is a continuous range of low-energy excitations - pervades quantum many-body physics. Recently, this important problem was shown to be undecidable for quantum spin systems in two (or more) spatial dimensions: there exists no algorithm that determines in general whether a system is gapped or gapless, a result which has many unexpected consequences for the physics of such systems. However, there are many indications that one-dimensional spin systems are simpler than their higher-dimensional counterparts: for example, they cannot have thermal phase transitions or topological order, and there exist highly effective numerical algorithms such as DMRG - and even provably polynomial-time ones - for gapped 1D systems, exploiting the fact that such systems obey an entropy area law. Furthermore, the spectral gap undecidability construction crucially relied on aperiodic tilings, which are not possible in 1D. So does the spectral gap problem become decidable in 1D? In this paper we prove this is not the case, by constructing a family of 1D spin chains with translationally-invariant nearest-neighbour interactions for which no algorithm can determine the presence of a spectral gap. This not only proves that the spectral gap of 1D systems is just as intractable as in higher dimensions, but also predicts the existence of qualitatively new types of complex physics in 1D spin chains. In particular, it implies there are 1D systems with constant spectral gap and non-degenerate classical ground state for all system sizes up to an uncomputably large size, whereupon they switch to a gapless behaviour with dense spectrum.
quantum physics
It is widely recognised that filament disappearances or eruptions are frequently associated with Coronal Mass Ejections (CMEs). Since CMEs are a major source of disturbances of the space environment surrounding the Earth, it is important to investigate these associations in detail for the better prediction of CME occurrence. However, the proportion of filament disappearances associated with CMEs is under debate. The estimates range from $\sim$10% to $\sim$90% and could be affected by the manner in which the events are selected. In this study, we aim to reveal what parameters control the association between filament eruptions and CMEs. We analysed the relationships between CME associations and the physical parameters of filaments, including their length, maximum ascending velocity, and direction of eruption, using 28 filament eruptions observed in H$\alpha$. We found that the product of the maximum radial velocity and the filament length is well correlated with CME occurrence. If the product is larger than 8.0$\times$10$^{6}$ km$^{2}$ s$^{-1}$, the filament will become a CME with a probability of 93%, and if the product is smaller than this value, it will not become a CME with a probability of 100%. We suggest a kinetic-energy threshold above which filament eruptions are associated with CMEs. Our findings also suggest the importance of measuring the velocity vector of filament eruptions in three-dimensional space for the better prediction of CME occurrence.
astrophysics
It is important to extract good features using an encoder to realize semantic segmentation with high accuracy. Although the loss function is optimized when training a deep neural network, layers far from those computing the loss function are difficult to train. Skip connections are effective for this problem, but some layers still remain far from the loss function. In this paper, we propose the Feature Random Enhancement Module, which randomly enhances features during training only. By emphasizing features at layers far from the loss function, we can train those layers well and improve accuracy. In experiments, we evaluated the proposed module on two kinds of cell image datasets, and it improved segmentation accuracy without increasing computational cost at test time.
electrical engineering and systems science
Lattice Gauge Theories form a very successful framework for studying nonperturbative gauge field physics, in particular in Quantum Chromodynamics. Recently, their quantum simulation on atomic and solid-state platforms has been discussed, aiming at overcoming some of the difficulties still faced by the conventional approaches (such as the sign problem and real time evolution). While the actual implementations of a lattice gauge theory on a quantum simulator may differ in terms of the simulating system and its properties, they are all directed at studying similar physical phenomena, requiring the measurement of nonlocal observables, due to the local symmetry of gauge theories. In this work, general schemes for measuring such nonlocal observables (Wilson loops and mesonic string operators) in general lattice gauge theory quantum simulators that are based merely on local operations are proposed.
quantum physics
The purpose of this paper is to prove a weak convergence result for empirical processes indexed by general classes of functions and with an underlying $\alpha$-mixing sequence of random variables. In particular, the uniform boundedness assumption on the function class, which is required in most of the existing literature, is dropped. Furthermore, under strict stationarity, a weak convergence result for the sequential empirical process indexed by function classes is obtained as a direct consequence. Two examples in mathematical statistics that cannot be treated with existing results are given as possible applications.
mathematics
We solve the long-time dynamics of a high-gain free-electron laser in the quantum regime. In this regime each electron emits at most one photon on average, independently of the initial field. In contrast, the variance of the photon statistics shows a qualitatively different behavior for different initial states of the field. We find that the realization of a seeded Quantum FEL is more feasible than self-amplified spontaneous emission.
quantum physics
We present a systematic interpretation of vector boson scattering (VBS) and diboson measurements from the LHC in the framework of the dimension-six Standard Model Effective Field Theory (SMEFT). We consider all available measurements of VBS fiducial cross-sections and differential distributions from ATLAS and CMS, in most cases based on the full Run II luminosity, and use them to constrain 16 independent directions in the dimension-six EFT parameter space. Compared to the diboson measurements, we find that VBS provides complementary information on several of the operators relevant for the description of the electroweak sector. We also quantify the ultimate EFT reach of VBS measurements via dedicated projections for the High Luminosity LHC. Our results motivate the integration of VBS processes in future global SMEFT interpretations of particle physics data.
high energy physics phenomenology
The Lieb lattice, a two-dimensional edge-depleted square lattice, has been predicted to host various exotic electronic properties due to its unusual band structure, i.e., a Dirac cone intersected by a flat band (Dirac-flat bands). Until now, although a few artificial Lieb lattices have been realized in experiments, the realization of a Lieb lattice in a real material has remained out of reach. In this article, based on tight-binding modeling and first-principles calculations, we predict that two covalent organic frameworks (COFs), i.e., sp2C-COF and sp2N-COF, which have been synthesized in recent experiments, are actually the first two material realizations of an organic-ligand-based Lieb lattice. It is found that lattice distortion can govern the bandwidth of the Dirac-flat bands and in turn determine their electronic instability against spontaneous spin polarization under carrier doping. Spin-orbit coupling effects can drive the Dirac-flat bands of a distorted Lieb lattice to exhibit nontrivial topological properties, which depend on the position of the Fermi level. Interestingly, as the hole doping concentration increases, the sp2C-COF can undergo phase transitions from a paramagnetic state to a ferromagnetic one and then to a N\'eel antiferromagnetic one. Our findings not only confirm the first material realization of the Lieb lattice in COFs, but also offer a possible route to tunable topology and magnetism in d- (f-) orbital-free organic lattices.
condensed matter
Parton Distribution Functions (PDFs) are essential non-perturbative inputs for calculation of any observable with hadronic initial states. These PDFs are released by individual groups as discrete grids as a function of the Bjorken-x and energy scale Q. The LHAPDF project at HepForge maintains a repository of PDFs from various groups in a new standardized LHAPDF6 format, as well as older formats such as the CTEQ PDS grid format. ManeParse is a package that provides PDFs within the Mathematica framework to facilitate calculating and plotting. The program is self-contained so there are no external links to any Fortran, C or C++ programs. The package includes the option to use the built-in Mathematica interpolation or a custom cubic Lagrange interpolation routine which allows for flexibility in the extrapolation (particularly at small x values). ManeParse is fast enough to enable simple calculations (involving even one or two integrations) to be easily computed in the Mathematica framework.
high energy physics phenomenology
Human-Object Interaction (HOI) consists of a human, an object, and an implicit interaction/verb. Different from previous methods that directly map pixels to HOI semantics, we propose a novel perspective for HOI learning in an analytical manner. In analogy to harmonic analysis, whose goal is to study how to represent signals as superpositions of basic waves, we propose HOI Analysis. We argue that a coherent HOI can be decomposed into an isolated human and object. Meanwhile, an isolated human and object can also be integrated into a coherent HOI again. Moreover, transformations between human-object pairs with the same HOI can also be more easily approached with integration and decomposition. As a result, the implicit verb is represented in the transformation function space. In light of this, we propose an Integration-Decomposition Network (IDN) to implement the above transformations and achieve state-of-the-art performance on widely-used HOI detection benchmarks. Code is available at https://github.com/DirtyHarryLYL/HAKE-Action-Torch/tree/IDN-(Integrating-Decomposing-Network).
computer science
This is a chapter of the planned monograph "Out of Nowhere: The Emergence of Spacetime in Quantum Theories of Gravity", co-authored by Nick Huggett and Christian W\"uthrich and under contract with Oxford University Press. (More information at https://beyondspacetime.net/.) This chapter introduces the problem of emergence of spacetime in quantum gravity. It introduces the main philosophical challenge to spacetime emergence and sketches our preferred solution to it.
physics
Hyperbolic metamaterials (HMMs), an unusual class of electromagnetic metamaterials, have found important applications in various fields due to their distinctive properties. A surprising feature of HMMs is that even continuous HMMs can possess topological edge modes. However, previous studies based on equal-frequency surface (analogy of Fermi surface) may not correctly capture the topology of entire bands. Here we develop a topological band description for continuous HMMs that can be described by a non-Hermitian Hamiltonian formulated from Maxwell's equations. We find two types of three dimensional non-Hermitian triply-degenerate points with complex linear dispersions and topological charges $\pm 2$ and 0 induced by chiral and gyromagnetic effects. Because of the photonic nature, the vacuum band plays an important role for topological edge states and bulk-edge correspondence in HMMs. The topological band results are numerically confirmed by direct simulation of Maxwell's equations. Our work presents a general non-Hermitian topological band treatment of continuous HMMs, paving the way for exploring interesting topological phases in photonic continua and device implementations of topological HMMs.
condensed matter
Mid-infrared photothermal microscopy is a new chemical imaging technology in which a visible beam senses the photothermal effect induced by a pulsed infrared laser. This technology provides infrared spectroscopic information at sub-micron spatial resolution and enables infrared spectroscopy and imaging of living cells and organisms. Yet, current mid-infrared photothermal imaging sensitivity suffers from a weak dependence of scattering on temperature, and the image quality is vulnerable to the speckles caused by scattering. Here, we present a novel version of mid-infrared photothermal microscopy in which thermo-sensitive fluorescent probes are harnessed to sense the mid-infrared photothermal effect. The fluorescence intensity can be modulated at the level of 1% per Kelvin, which is 100 times larger than the modulation of scattering intensity. In addition, fluorescence emission is free of speckles, thus much improving the image quality. Moreover, fluorophores can target specific organelles or biomolecules, thus augmenting the specificity of photothermal imaging. Spectral fidelity is confirmed through fingerprinting a single bacterium. Finally, the photobleaching issue is successfully addressed through the development of a wide-field fluorescence-enhanced mid-infrared photothermal microscope which allows video-rate bond-selective imaging of biological specimens.
physics
Inverse probability of treatment weighting (IPTW), which has been used to estimate sample average treatment effects (SATE) using observational data, tenuously relies on the positivity assumption and the correct specification of the treatment assignment model, both of which are problematic assumptions in many observational studies. Various methods have been proposed to overcome these challenges, including truncation, covariate-balancing propensity scores, and stable balancing weights. Motivated by an observational study in spine surgery, in which positivity is violated and the true treatment assignment model is unknown, we present the use of optimal balancing by Kernel Optimal Matching (KOM) to estimate SATE. By uniformly controlling the conditional mean squared error of a weighted estimator over a class of models, KOM simultaneously mitigates issues of possible misspecification of the treatment assignment model and is able to handle practical violations of the positivity assumption, as shown in our simulation study. Using data from a clinical registry, we apply KOM to compare two spine surgical interventions and demonstrate how the result matches the conclusions of clinical trials that IPTW estimates spuriously refute.
statistics
With this paper we participate to the call for ideas issued by the European Space Agency to define the Science Program and plan for space missions from 2035 to 2050. In particular we present five science cases where major advancements can be achieved thanks to space-based spectroscopic observations at ultraviolet (UV) wavelengths. We discuss the possibility to (1) unveil the large-scale structures and cosmic web in emission at redshift <~1.7; (2) study the exchange of baryons between galaxies and their surroundings to understand the contribution of the circumgalactic gas to the evolution and angular-momentum build-up of galaxies; (3) constrain the efficiency of ram-pressure stripping in removing gas from galaxies and its role in quenching star formation; (4) characterize the progenitor population of core-collapse supernovae to reveal the explosion mechanisms of stars; (5) target accreting white dwarfs in globular clusters to determine their evolution and fate. These science themes can be addressed thanks to UV (wavelength range lambda ~ 90 - 350 nm) observations carried out with a panoramic integral field spectrograph (field of view ~ 1 x 1 arcmin^2 ), and medium spectral (R = 4000) and spatial (~ 1" - 3") resolution. Such a UV-optimized instrument will be unique in the coming years, when most of the new large facilities such as the Extremely Large Telescope and the James Webb Space Telescope are optimized for infrared wavelengths.
astrophysics
We prove that there do not exist quasi-isometric embeddings of connected nonabelian nilpotent Lie groups equipped with left invariant Riemannian metrics into a metric measure space satisfying the RCD(0,N) condition with N > 1. In fact, we prove that a sub-Riemannian manifold whose generic degree of nonholonomy is not smaller than 2 cannot be bi-Lipschitzly embedded into any Banach space with the Radon-Nikodym property. We also show that every regular sub-Riemannian manifold fails to satisfy the CD(K,N) condition with N > 1. Finally, we prove that sub-Riemannian manifolds are infinitesimally Hilbertian.
mathematics
The superlattice of alternating graphene/h-BN few-layered heterostructures is found to exhibit strong dependence on the parity of the number of layers within the stack. Odd-parity systems show a unique flamingo-like pattern, whereas their even-parity counterparts exhibit regular hexagonal or rectangular superlattices. When the alternating stack consists of seven layers or more, the flamingo pattern becomes favorable, regardless of parity. Notably, the out-of-plane corrugation of the system strongly depends on the shape of the superstructure, resulting in a significant parity dependence of its mechanical properties. The predicted phenomenon originates in an intricate competition between moir\'e patterns developing at the interfaces of consecutive layers. This mechanism is of general nature and is expected to occur in other alternating stacks of closely matched rigid layered materials, as demonstrated for homogeneous alternating junctions of twisted graphene and h-BN. Our findings thus allow for the rational design of mechanomutable metamaterials based on van der Waals heterostructures.
condensed matter
We generalize the Giveon-Kutasov duality by adding possible Chern-Simons interactions for the $U(N)$ gauge group. Some of the generalized dualities are known in the literature and many others are new to the best of our knowledge. The dualities are connected to the non-supersymmetric bosonization duality via mass deformations. For $N=1$, there are an infinite number of magnetic-dual theories.
high energy physics theory
In recent years, extensive research has emerged in affective computing on topics like automatic emotion recognition and determining the signals that characterize individual emotions. Much less studied, however, is expressiveness, or the extent to which someone shows any feeling or emotion. Expressiveness is related to personality and mental health and plays a crucial role in social interaction. As such, the ability to automatically detect or predict expressiveness can facilitate significant advancements in areas ranging from psychiatric care to artificial social intelligence. Motivated by these potential applications, we present an extension of the BP4D+ dataset with human ratings of expressiveness and develop methods for (1) automatically predicting expressiveness from visual data and (2) defining relationships between interpretable visual signals and expressiveness. In addition, we study the emotional context in which expressiveness occurs and hypothesize that different sets of signals are indicative of expressiveness in different contexts (e.g., in response to surprise or in response to pain). Analysis of our statistical models confirms our hypothesis. Consequently, by looking at expressiveness separately in distinct emotional contexts, our predictive models show significant improvements over baselines and achieve comparable results to human performance in terms of correlation with the ground truth.
computer science
This study proposes a supervised learning method that does not rely on direct labels. We use variables associated with the label as indirect labels and construct an indirect physics-constrained loss based on the physical mechanism to train the model. During training, the model prediction is mapped through a projection matrix to the space of values that conform to the physical mechanism, and the model is then trained on the indirect labels. The final prediction of the model conforms to the physical mechanism relating the indirect labels to the label and also satisfies the constraints of the indirect labels. The present study also develops projection matrix normalization and prediction covariance analysis to ensure that the model can be fully trained. Finally, the effectiveness of physics-constrained indirect supervised learning is verified on a well-log generation problem.
electrical engineering and systems science
This contribution exploits the duality between a viral infection process and macroscopic air-based molecular communication. Airborne aerosol and droplet transmission through human respiratory processes is modeled as an instance of a multiuser molecular communication scenario employing respiratory-event-driven molecular variable-concentration shift keying. Modeling is aided by experiments that are motivated by a macroscopic air-based molecular communication testbed. In artificially induced coughs, a saturated aqueous solution containing a fluorescent dye mixed with saliva is released by an adult test person. The emitted particles are made visible by means of optical detection exploiting the fluorescent dye. The number of particles recorded is significantly higher in test series without mouth and nose protection than in those with a well-fitting medical mask. A simulation tool for macroscopic molecular communication processes is extended and used for estimating the transmission of infectious aerosols in different environments. Towards this goal, parameters obtained through self-experiments are used. The work is inspired by the recent outbreak of the coronavirus pandemic.
electrical engineering and systems science
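The concentration shift keying idea in the abstract above can be sketched as a toy simulation (all rates are made-up illustrative numbers, not the paper's measurements): each respiratory event releases one of two particle concentrations to encode a bit, and the receiver thresholds the Poisson-distributed number of detected particles.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy variable-concentration shift keying (illustrative rates only): bit 1 is
# transmitted with a high mean particle concentration, bit 0 with a low one.
bits = rng.integers(0, 2, size=1000)
rate_low, rate_high = 20, 80             # assumed mean detected particles/symbol
counts = rng.poisson(np.where(bits == 1, rate_high, rate_low))

# Receiver: threshold the particle count at the midpoint of the two rates.
decoded = (counts > (rate_low + rate_high) / 2).astype(int)
acc = (decoded == bits).mean()
print(acc)
```

With well-separated rates, the Poisson counts rarely cross the threshold, which mirrors the experimental finding that a mask (lowering the received concentration) sharply changes what the detector records.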
Deep Convolutional Neural Networks (DCNNs) are currently the method of choice both for generative and for discriminative learning in computer vision and machine learning. The success of DCNNs can be attributed to the careful selection of their building blocks (e.g., residual blocks, rectifiers, sophisticated normalization schemes, to mention but a few). In this paper, we propose $\Pi$-Nets, a new class of function approximators based on polynomial expansions. $\Pi$-Nets are polynomial neural networks, i.e., the output is a high-order polynomial of the input. The unknown parameters, which are naturally represented by high-order tensors, are estimated through a collective tensor factorization with factor sharing. We introduce three tensor decompositions that significantly reduce the number of parameters and show how they can be efficiently implemented by hierarchical neural networks. We empirically demonstrate that $\Pi$-Nets are very expressive and even produce good results without the use of non-linear activation functions in a large battery of tasks and signals, i.e., images, graphs, and audio. When used in conjunction with activation functions, $\Pi$-Nets produce state-of-the-art results in three challenging tasks, i.e., image generation, face verification, and 3D mesh representation learning. The source code is available at \url{https://github.com/grigorisg9gr/polynomial_nets}.
computer science
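The core idea of a polynomial network without activations can be shown in a few lines (a minimal degree-2 sketch with assumed dimensions, not the paper's full architecture): the output is a quadratic polynomial of the input, built from a Hadamard product of linear maps, and the factorization stores far fewer parameters than the dense quadratic form it represents.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 4, 8                       # input dimension and factorization rank (assumed)

# Degree-2 polynomial block with no activation function: factors U, V and
# weights c define y = sum_i c_i * (u_i . x) * (v_i . x).
U = rng.normal(size=(r, d))
V = rng.normal(size=(r, d))
c = rng.normal(size=r)
x = rng.normal(size=d)

y_fact = c @ ((U @ x) * (V @ x))  # factorized forward pass, O(r*d) parameters

# The same function as an explicit quadratic form x^T Q x, which would need
# O(d^2) parameters per output if the tensor were stored densely.
Q = sum(c[i] * np.outer(U[i], V[i]) for i in range(r))
y_full = x @ Q @ x

print(y_fact, y_full)             # identical up to floating-point error
```

Stacking such blocks hierarchically raises the polynomial degree multiplicatively, which is the mechanism the tensor decompositions in the paper exploit.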
Prompted by the recent development of efficient dense layers, this paper shows that something as simple as replacing the linear components in pointwise convolutions with structured linear decompositions also produces substantial gains in the efficiency/accuracy tradeoff. Pointwise convolutions are fully connected layers across channels and are thus natural candidates for replacement by structured transforms. Networks using such layers can learn the same tasks as those using standard convolutions and provide Pareto-optimal benefits in efficiency/accuracy, both in terms of computation (mult-adds) and parameter count (and hence memory). Code is available at https://github.com/BayesWatch/deficient-efficient.
statistics
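The equivalence between a pointwise convolution and a dense channel-mixing matrix, and its replacement by a structured transform, can be sketched as follows (a low-rank factorization is used here as one possible structured transform; the dimensions are illustrative, and the paper considers structured decompositions more broadly):

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, h, w, rank = 64, 64, 8, 8, 8
x = rng.normal(size=(c_in, h * w))       # feature map: channels x pixels

# A pointwise (1x1) convolution is just a dense matrix acting on the channel
# axis. Here it is constructed as the product of a low-rank pair A @ B.
A = rng.normal(size=(c_out, rank))
B = rng.normal(size=(rank, c_in))
W = A @ B                                # dense pointwise conv equal to A @ B

y_dense = W @ x                          # standard pointwise convolution
y_struct = A @ (B @ x)                   # structured replacement, same output

params_dense = c_out * c_in              # 4096 parameters
params_struct = rank * (c_in + c_out)    # 1024 parameters
print(np.allclose(y_dense, y_struct), params_dense, params_struct)
```

The same ratio applies to mult-adds per pixel, which is where the efficiency/accuracy gains in the paper come from when the structured layer is trained in place of the dense one.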
We report the first discovery of a fast radio burst (FRB), FRB 20200125A, by the Green Bank Northern Celestial Cap (GBNCC) Pulsar Survey conducted with the Green Bank Telescope at 350 MHz. FRB 20200125A was detected at a Galactic latitude of 58.43 degrees with a dispersion measure of 179 pc cm$^{-3}$, while electron density models predict a maximum Galactic contribution of 25 pc cm$^{-3}$ along this line of sight. Moreover, no apparent Galactic foreground sources of ionized gas that could account for the excess DM are visible in multi-wavelength surveys of this region. This argues that the source is extragalactic. The implied maximum redshift for the host galaxy is $z_{max}=0.17$, corresponding to a maximum comoving distance of approximately 750 Mpc. The measured peak flux density for FRB 20200125A is 0.37 Jy, and we measure a pulse width of 3.7 ms, consistent with the distribution of FRB widths observed at higher frequencies. Based on this detection and assuming a Euclidean flux density distribution of FRBs, we calculate an all-sky rate at 350 MHz of $3.4^{+15.4}_{-3.3} \times 10^3$ FRBs sky$^{-1}$ day$^{-1}$ above a peak flux density of 0.42 Jy for an unscattered pulse having an intrinsic width of 5 ms, consistent with rates reported at higher frequencies. Given the recent improvements in our single-pulse search pipeline, we also revisit the GBNCC survey sensitivity to various burst properties. Finally, we find no evidence of interstellar scattering in FRB 20200125A, adding to the growing evidence that some FRBs have circumburst environments where free-free absorption and scattering are not significant.
astrophysics
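The redshift-to-distance conversion quoted in the abstract above can be reproduced numerically (a sketch using illustrative flat-LambdaCDM parameters; the exact $H_0$ and $\Omega_m$ adopted by the authors are not stated here and are assumptions):

```python
import numpy as np

# Comoving distance D_C = (c/H0) * integral_0^z dz'/E(z') to z_max = 0.17,
# with E(z) = sqrt(Om*(1+z)^3 + (1-Om)). H0 and Om are assumed values.
c_km_s, H0, Om = 299792.458, 67.7, 0.31
z_max = 0.17

z = np.linspace(0.0, z_max, 2001)
inv_E = 1.0 / np.sqrt(Om * (1 + z) ** 3 + (1 - Om))
dz = z[1] - z[0]
integral = dz * (inv_E.sum() - 0.5 * (inv_E[0] + inv_E[-1]))  # trapezoid rule
d_c = (c_km_s / H0) * integral                                # Mpc

print(d_c)
```

This gives a distance of the same order as the ~750 Mpc quoted in the abstract; the exact figure shifts by a few percent with the chosen cosmological parameters.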
This paper investigates a multilevel inverter capable of producing a wide, high-quality output voltage range. The selective harmonic elimination (SHE) method is considered for a single-phase 5-level cascaded H-bridge (CHB) inverter, with the particle swarm optimization (PSO) algorithm solving the resulting nonlinear equations. However, eliminating the low-order harmonics is challenging when a low output voltage is required. To surmount this challenge and access a wide output voltage range, an adjustable dc-link is introduced that allows the inverter to increase the modulation index, resulting in a significant decrease in total harmonic distortion (THD). To regulate the dc-link voltage, a 12-pulse rectifier is employed, allowing the inverter to meet the output voltage requirements with less distortion. To validate these claims, the PSO algorithm is modified to calculate the optimal switching angles, which are then applied in MATLAB/Simulink to generate the 5-level output voltage.
electrical engineering and systems science
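The SHE problem for a 5-level CHB reduces to two transcendental equations in two switching angles: set the fundamental to the desired modulation index and null one low-order harmonic. As a minimal sketch (using a Newton-type root finder as a stand-in for the paper's PSO search, and the standard textbook equations rather than the authors' exact formulation):

```python
import numpy as np
from scipy.optimize import fsolve

# Classic SHE equations for a single-phase 5-level CHB with two switching
# angles a1 < a2: fix the fundamental at modulation index m and eliminate
# the 3rd harmonic.
def she_equations(angles, m):
    a1, a2 = angles
    return [np.cos(a1) + np.cos(a2) - 2 * m,    # fundamental amplitude target
            np.cos(3 * a1) + np.cos(3 * a2)]    # null the 3rd harmonic

m = 0.8                                         # illustrative modulation index
a1, a2 = fsolve(she_equations, x0=[0.2, 1.0], args=(m,))
print(np.degrees([a1, a2]))
```

The abstract's point is visible here: as `m` shrinks (low output voltage), the system loses solutions in the first quadrant, which is why raising the modulation index via an adjustable dc-link makes the harmonic elimination tractable.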
We propose a method for deterministic sampling of arbitrary continuous angular density functions. With deterministic sampling, good estimation results can typically be achieved with much smaller numbers of samples than with the commonly used random sampling. While the Unscented Kalman Filter uses deterministic sampling as well, it only takes the absolute minimum number of samples. Our method can draw arbitrary numbers of deterministic samples and therefore improve the quality of state estimation. Conformity between the continuous density function (the reference) and the Dirac mixture density, i.e., the sample locations (the approximation), is established by minimizing the difference between the cumulatives of many univariate projections. In other words, we compare cumulative distributions of probability densities in the Radon space.
electrical engineering and systems science
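The cumulative-matching idea can be sketched in one dimension (a minimal simplification under assumed details: samples are placed at equal-mass quantiles of a numerically integrated CDF, whereas the paper matches cumulatives of many univariate projections):

```python
import numpy as np
from scipy.stats import vonmises

# Place L deterministic samples at the ((i - 0.5)/L)-quantiles of the density's
# CDF on [0, 2*pi), so the Dirac mixture matches the continuous cumulative.
def deterministic_samples(pdf, n_samples, grid_size=10_000):
    theta = np.linspace(0.0, 2 * np.pi, grid_size)
    cdf = np.cumsum(pdf(theta))
    cdf /= cdf[-1]                                   # normalize the cumulative
    targets = (np.arange(n_samples) + 0.5) / n_samples
    return np.interp(targets, cdf, theta)            # invert the CDF numerically

# Example: 11 deterministic samples of a von Mises density centered at pi
# with concentration 2 (illustrative parameters).
samples = deterministic_samples(lambda t: vonmises.pdf(t, 2.0, loc=np.pi), 11)
print(samples)
```

Unlike random draws, these locations are reproducible and spread in proportion to probability mass, which is why far fewer samples suffice for the same estimation quality.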