text | label |
---|---|
We study the positivity properties of the leading Regge trajectory in higher-dimensional, unitary, conformal field theories (CFTs). These conditions correspond to higher spin generalizations of the averaged null energy condition (ANEC). By studying higher spin ANEC, we will derive new bounds on the dimensions of charged, spinning operators and prove that if the Hofman-Maldacena bounds are saturated, then the theory has a higher spin symmetry. We also derive new, general bounds on CFTs, with an emphasis on theories whose spectrum is close to that of a generalized free field theory. As an example, we consider the Ising CFT and show how the OPE structure of the leading Regge trajectory is constrained by causality. Finally, we use the analytic bootstrap to perform additional checks, in a large class of CFTs, that higher spin ANEC is obeyed at large and finite spin. In the process, we calculate corrections to large spin OPE coefficients to one-loop and higher in holographic CFTs. | high energy physics theory |
We derive a general formula for renormalized entanglement entropy in even dimensional CFTs holographically dual to Einstein gravity in one dimension higher. In order to renormalize, we adapt the Kounterterm method to asymptotically locally AdS manifolds with conical singularities. On the gravity side, the computation considers extrinsic counterterms and the use of the replica trick a la Lewkowycz-Maldacena. The boundary counterterm B_d is shown to satisfy a key property, in direct analogy to the Euler density: when evaluated on a conically singular manifold, it decomposes into a regular part plus a codimension-2 version of itself located at the conical singularity. The renormalized entropy thus obtained is shown to correspond to the universal part of the holographic entanglement entropy, which for spherical entangling surfaces is proportional to the central charge a that is the subject of the a-theorem. We also review and elucidate various aspects of the Kounterterm approach, including in particular its full compatibility with the Dirichlet condition for the metric at the conformal boundary, that is of standard use in holography. | high energy physics theory |
We consider a three-dimensional model of a Gaussian polymer chain in the presence of defects to understand the formation of a polymer aggregate induced by the defects. The defects act as attractive centres for the monomers and induce aggregation of the monomers of the chain around the defects. It has been shown using analytical calculations that the formation of polymer aggregates is favoured when the defects have extensions in all three possible dimensions. We have also calculated other relevant thermodynamic parameters of the polymer aggregates (i.e. the average number of monomers and the average size of the chain about the defect line) to justify our findings. | condensed matter |
A decision maker starts from a judgmental decision and moves to the closest boundary of the confidence interval. This statistical decision rule is admissible and does not perform worse than the judgmental decision with a probability equal to the confidence level, which is interpreted as a coefficient of statistical risk aversion. The confidence level is related to the decision maker's aversion to uncertainty and can be elicited with laboratory experiments using urns a la Ellsberg. The decision rule is applied to a problem of asset allocation for an investor whose judgmental decision is to keep all her wealth in cash. | statistics |
We derive an explicit formula for the trace of an arbitrary Hecke operator on spaces of twist-minimal holomorphic cusp forms with arbitrary level and character, and weight at least 2. We show that this formula provides an efficient way of computing basis elements for newform or cusp form spaces. This work was motivated by the development of a twist-minimal trace formula in the non-holomorphic case by Booker, Lee and Str\"ombergsson, as well as the presentation of a fully generalised trace formula for the holomorphic case by Cohen and Str\"omberg. | mathematics |
This is a pedagogical review on $\mathrm{T}\overline{\mathrm{T}}$ deformation of two-dimensional quantum field theories. It is based on three lectures which the author gave at ITP-CAS in December 2018. This review consists of four parts. The first part is a general introduction to $\mathrm{T}\overline{\mathrm{T}}$ deformation. Special emphasis is put on the deformed classical Lagrangian and the exact solvability of the spectrum. The second part focuses on the torus partition sum of the $\mathrm{T}\overline{\mathrm{T}}$/$\mathrm{J}\overline{\mathrm{T}}$ deformed conformal field theories and modular invariance/covariance. In the third part, different perspectives on $\mathrm{T}\overline{\mathrm{T}}$ deformation are presented, including its relation to random geometry, 2d topological gravity and holography. We summarize more recent developments up to January 2021 in the last part. | high energy physics theory |
We consider a random matrix model with both pairwise and non-pairwise contracted indices. The partition function of the matrix model is similar to that appearing in some replicated systems with random tensor couplings, such as the p-spin spherical model for the spin glass. We analyze the model using Feynman diagrammatic expansions, and provide an exhaustive characterization of the graphs which dominate when the dimensions of the pairwise and (or) non-pairwise contracted indices are large. We apply this to investigate the properties of the wave function of a toy model closely related to a tensor model in the Hamilton formalism, which is studied in a quantum gravity context, and obtain a result in favor of the consistency of the quantum probabilistic interpretation of this tensor model. | high energy physics theory |
Purpose: The gold standard for colorectal cancer metastases detection in the peritoneum is histological evaluation of a removed tissue sample. For feedback during interventions, real-time in-vivo imaging with confocal laser microscopy has been proposed for differentiation of benign and malignant tissue by manual expert evaluation. Automatic image classification could improve the surgical workflow further by providing immediate feedback. Methods: We analyze the feasibility of classifying tissue from confocal laser microscopy in the colon and peritoneum. For this purpose, we adopt both classical and state-of-the-art convolutional neural networks to directly learn from the images. As the available dataset is small, we investigate several transfer learning strategies including partial freezing variants and full fine-tuning. We address the distinction of different tissue types, as well as benign and malignant tissue. Results: We present a thorough analysis of transfer learning strategies for colorectal cancer with confocal laser microscopy. In the peritoneum, metastases are classified with an AUC of 97.1 and in the colon, the primarius is classified with an AUC of 73.1. In general, transfer learning substantially improves performance over training from scratch. We find that the optimal transfer learning strategy differs for models and classification tasks. Conclusions: We demonstrate that convolutional neural networks and transfer learning can be used to identify cancer tissue with confocal laser microscopy. We show that there is no generally optimal transfer learning strategy and model as well as task-specific engineering is required. Given the high performance for the peritoneum, even with a small dataset, application for intraoperative decision support could be feasible. | computer science |
Thermal evolution of neutron stars (NSs) is studied as a probe of physics beyond the standard model. We first review the standard cooling theory of NSs in detail, with an emphasis on the roles of nucleon superfluidity. Then we discuss non-standard evolution with axion and dark matter (DM); axion production enhances the cooling while DM accretion leads to heating of NSs. To evaluate the effect of DM heating, we need to compare it to the rotochemical heating, which is caused by the out-of-equilibrium beta processes in a NS. In the dissertation, we also investigate the rotochemical heating in the presence of both neutron and proton superfluidity. | high energy physics phenomenology |
Superconductivity in group IV semiconductors is desired for hybrid devices combining both semiconducting and superconducting properties. Following boron-doped diamond and Si, superconductivity has been observed in gallium-doped Ge; however, the specimens obtained were polycrystalline [Herrmannsd\"orfer et al., Phys. Rev. Lett. 102, 217003 (2009)]. Here, we present superconducting single-crystalline Ge hyperdoped with gallium or aluminium by ion implantation and rear-side flash lamp annealing. The maximum concentration of Al and Ga incorporated into substitutional positions in Ge is eight times higher than the equilibrium solid solubility. This corresponds to a hole concentration above 10^21 cm-3. Using density functional theory in the local density approximation and a pseudopotential plane-wave approach, we show that the superconductivity in p-type Ge is phonon-mediated. According to the ab initio calculations, the critical superconducting temperature for Al- and Ga-doped Ge is in the range of 0.45 K for 6.25 at.% dopant concentration, in qualitative agreement with the experimentally obtained values. | condensed matter |
We propose a new experiment sensitive to the detection of millicharged particles produced at the $30$ GeV proton fixed-target collisions at J-PARC. The potential site for the experiment is B2 of the Neutrino Monitor building, $280$ m away from the target. With $\textrm{N}_\textrm{POT}=10^{22}$, the experiment can provide sensitivity to particles with electric charge $3\times10^{-4}\,e$ for mass less than $0.2$ $\textrm{GeV}/\textrm{c}^2$ and $1.5\times10^{-3}\,e$ for mass less than $1.6$ $\textrm{GeV}/\textrm{c}^2$. This brings a substantial extension to the current constraints on the charge and the mass of such particles. | physics |
Single photon detection generally consists of several stages: the photon has to interact with one or more charged particles, its excitation energy will be converted into other forms of energy, and amplification to a macroscopic signal must occur, thus leading to a "click." We focus here on the part of the detection process before amplification (which we have studied in a separate publication). We discuss how networks consisting of coupled discrete quantum states and structured continua (e.g. band gaps) provide generic models for that first part of the detection process. The input to the network is a continuum (the continuum of single-photon states), the output is again a continuum describing the next irreversible step. The process of a single photon entering the network, its energy propagating through that network and finally exiting into another output continuum of modes can be described by a single dimensionless complex transmission amplitude, $T(\omega)$. We discuss how to obtain from $T(\omega)$ the photo detection efficiency, how to find sets of parameters that maximize this efficiency, as well as expressions for other input-independent quantities such as the frequency-dependent group delay and spectral bandwidth. We then study a variety of networks and discuss how to engineer different transmission functions $T(\omega)$ amenable to photo detection. | quantum physics |
Model-based recursive partitioning (MOB) can be used to identify subgroups with differing treatment effects. The detection rate of treatment-by-covariate interactions and the accuracy of identified subgroups using MOB depend strongly on the sample size. Using data from multiple randomized controlled clinical trials can overcome the problem of too small samples. However, naively pooling data from multiple trials may result in the identification of spurious subgroups as differences in study design, subject selection and other sources of between-trial heterogeneity are ignored. In order to account for between-trial heterogeneity in individual participant data (IPD) meta-analysis, random-effects models are frequently used. Commonly, heterogeneity in the treatment effect is modelled using random effects, whereas heterogeneity in the baseline risks is modelled by either fixed effects or random effects. In this article, we propose metaMOB, a procedure using the generalized mixed-effects model tree (GLMM tree) algorithm for subgroup identification in IPD meta-analysis. Although the application of metaMOB is potentially wider, e.g. randomized experiments with participants in social sciences or preclinical experiments in life sciences, we focus on randomized controlled clinical trials. In a simulation study, metaMOB outperformed GLMM trees assuming a random intercept only and model-based recursive partitioning (MOB), whose algorithm is the basis for GLMM trees, with respect to the false discovery rates, accuracy of identified subgroups and accuracy of estimated treatment effect. The most robust and therefore most promising method is metaMOB with fixed effects for modelling the between-trial heterogeneity in the baseline risks. | statistics |
We present a novel yet simple approach to producing multiple entangled photon pairs through spontaneous parametric down-conversion. We have developed Gaussian masks to subdivide the pump beam before passing it through a nonlinear medium. In this way, we are able to observe simultaneous separate down-converted emission cones with spatial overlap. The technique we employ can be used to greatly increase the dimensionality of entangled photonic systems generated from spontaneous parametric down-conversion, affording greater scalability to optical quantum computing than previously explored. | quantum physics |
Using the density matrix renormalization group method, we studied the ground state of the one-dimensional $S=1$ Bose-Hubbard model with local three-body interactions, which can be a superfluid or a Mott insulator state. We drew the phase diagram of this model for both ferromagnetic and antiferromagnetic interactions. Regardless of the sign of the spin-dependent coupling, we found that the area of the Mott lobes decreases as the spin-dependent strength increases, which means that the even-odd asymmetry of the two-body antiferromagnetic chain is absent for local three-body interactions. For antiferromagnetic coupling, we found that the density drives first-order superfluid-Mott insulator transitions for even and odd lobes. Ferromagnetic Mott insulator and superfluid states were obtained with a ferromagnetic coupling, and a tendency towards "long-range" order was observed. | condensed matter |
In this work, we provide a framework for the analysis of a dual-hop hybrid Millimeter Wave Radio Frequency (RF)/Free Space Optical (FSO) MIMO relaying system. The source is equipped with multiple antennas and employs conjugate beamforming, while the destination consists of multiple apertures with selection combining. The system also includes a relay operating in amplify-and-forward mode. The RF channels are subject to Nakagami-m fading while the optical links experience the M\'alaga distribution. In addition, we introduce hardware impairments at the relay and receiver: the relay is impaired by High Power Amplifier (HPA) nonlinearities, while the receiver suffers from In-phase and Quadrature Imbalance. Moreover, we consider two types of HPA nonlinearity impairments, the Soft Envelope Limiter (SEL) and the Traveling Wave Tube Amplifier (TWTA). Closed forms of the outage probability, the bit error probability, and the ergodic capacity are derived. Capitalizing on these results, we derive the high-SNR asymptotes to unpack insightful metrics such as the diversity gain. We also address the impacts of key factors on the system performance, such as the impairments, the interferers, the number of antennas and apertures, and the pointing errors. Finally, the analytical expressions are confirmed by Monte Carlo simulation. | electrical engineering and systems science |
Human gesture recognition has assumed a central role in industrial applications, such as Human-Machine Interaction. We propose an approach for the segmentation and classification of dynamic gestures based on a set of handcrafted features drawn from the skeleton data provided by the Kinect sensor. The module for gesture detection relies on a feedforward neural network which performs framewise binary classification. The method for gesture recognition applies a sliding window, which extracts information from both the spatial and temporal dimensions. We then combine windows of varying durations to obtain a multi-temporal-scale approach and an additional gain in performance. Encouraged by the recent success of Recurrent Neural Networks on time series domains, we also propose a method for simultaneous gesture segmentation and classification based on bidirectional Long Short-Term Memory cells, which have shown the ability to learn temporal relationships on long temporal scales. We evaluate all the different approaches on the dataset published for the ChaLearn Looking at People Challenge 2014. The most effective method achieves a Jaccard index of 0.75, which suggests a performance almost on par with that of state-of-the-art techniques. Finally, the recognized gestures are used to interact with a collaborative robot. | computer science |
Based on our previous QCD sum rule studies on hidden-charm pentaquark states, we discuss possible interpretations of the $P_c(4312)$, $P_c(4440)$, and $P_c(4457)$, which were recently observed by LHCb. Our results suggest that the $P_c(4312)$ can be well interpreted as the $[\Sigma_c^{++} \bar D^-]$ bound state with $J^P = 1/2^-$, while the $P_c(4440)$ and $P_c(4457)$ can be interpreted as the $[\Sigma_c^{+} \bar D^0]$ bound state with $J^P = 1/2^-$, the $[\Sigma_c^{*++} \bar D^{-}]$ and $[\Sigma_c^{+} \bar D^{*0}]$ bound states with $J^P = 3/2^-$, or the $[\Sigma_c^{*+} \bar D^{*0}]$ bound state with $J^P = 5/2^-$. We propose to measure their spin-parity quantum numbers to verify these assignments. | high energy physics phenomenology |
The concept of biological age (BA), although important in clinical practice, is hard to grasp mainly due to the lack of a clearly defined reference standard. For specific applications, especially in pediatrics, medical image data are used for BA estimation in a routine clinical context. Beyond this young age group, BA estimation is mostly restricted to whole-body assessment using non-imaging indicators such as blood biomarkers, genetic and cellular data. However, various organ systems may exhibit different aging characteristics due to lifestyle and genetic factors. Thus, a whole-body assessment of the BA does not reflect the deviations of aging behavior between organs. To this end, we propose a new imaging-based framework for organ-specific BA estimation. In this initial study, we focus mainly on brain MRI. As a first step, we introduce a chronological age (CA) estimation framework using deep convolutional neural networks (Age-Net). We quantitatively assess the performance of this framework in comparison to existing state-of-the-art CA estimation approaches. Furthermore, we expand upon Age-Net with a novel iterative data-cleaning algorithm to segregate atypical-aging patients (BA $\not \approx$ CA) from the given population. We hypothesize that the remaining population should approximate the true BA behavior. We apply the proposed methodology on a brain magnetic resonance image (MRI) dataset containing healthy individuals as well as Alzheimer's patients with different dementia ratings. We demonstrate the correlation between the predicted BAs and the expected cognitive deterioration in Alzheimer's patients. A statistical and visualization-based analysis has provided evidence regarding the potential and current challenges of the proposed methodology. | electrical engineering and systems science |
The recent advances in deep learning are mostly driven by availability of large amount of training data. However, availability of such data is not always possible for specific tasks such as speaker recognition where collection of large amount of data is not possible in practical scenarios. Therefore, in this paper, we propose to identify speakers by learning from only a few training examples. To achieve this, we use a deep neural network with prototypical loss where the input to the network is a spectrogram. For output, we project the class feature vectors into a common embedding space, followed by classification. Further, we show the effectiveness of capsule net in a few shot learning setting. To this end, we utilize an auto-encoder to learn generalized feature embeddings from class-specific embeddings obtained from capsule network. We provide exhaustive experiments on publicly available datasets and competitive baselines, demonstrating the superiority and generalization ability of the proposed few shot learning pipelines. | electrical engineering and systems science |
A basic operation in Convolutional Neural Networks (CNNs) is spatial resizing of feature maps. This is done either by strided convolution (downscaling) or transposed convolution (upscaling). Such operations are limited to a fixed filter moving at predetermined integer steps (strides). Spatial sizes of consecutive layers are related by integer scale factors, predetermined at architectural design, and remain fixed throughout training and inference time. We propose a generalization of the common Conv-layer, from a discrete layer to a Continuous Convolution (CC) Layer. CC Layers naturally extend Conv-layers by representing the filter as a learned continuous function over sub-pixel coordinates. This allows learnable and principled resizing of feature maps, to any size, dynamically and consistently across scales. Once trained, the CC layer can be used to output any scale/size chosen at inference time. The scale can be non-integer and differ between the axes. CC gives rise to new freedoms for architectural design, such as dynamic layer shapes at inference time, or gradual architectures where the size changes by a small factor at each layer. This gives rise to many desired CNN properties, new architectural design capabilities, and useful applications. We further show that current Conv-layers suffer from inherent misalignments, which are ameliorated by CC layers. | computer science |
We study data on perfumes and their odour descriptors - notes - to understand how note compositions, called accords, influence successful fragrance formulas. We obtain accords which tend to be present in perfumes that receive significantly more customer ratings. Our findings show that the most popular notes and the most over-represented accords are different to those that have the strongest effect on the perfume ratings. We also used network centrality to understand which notes have the highest potential to enhance note compositions. We find that large-degree notes, such as musk and vanilla, as well as generically-named notes, e.g. floral notes, are amongst the notes that enhance accords the most. This work presents a framework which would be a timely tool for perfumers to explore a multidimensional space of scent compositions. | physics |
A central line of inquiry in condensed matter science has been to understand how the competition between different states of matter gives rise to emergent physical properties. Perhaps some of the most studied systems in this respect are the hole-doped LaMnO$_3$ perovskites, with interest in the past three decades being stimulated on account of their colossal magnetoresistance (CMR). However, phase segregation between ferromagnetic (FM) metallic and antiferromagnetic (AFM) insulating states, which itself is believed to be responsible for the colossal change in resistance under applied magnetic field, has until now prevented a full atomistic-level understanding of the orbital ordered (OO) state at the optimally doped level. Here, through the detailed crystallographic analysis of the hole-doped phase diagram of a prototype system, we show that the superposition of two distinct lattice modes gives rise to a striped structure of OO Jahn-Teller active Mn$^{3+}$ and charge disordered (CD) Mn$^{3.5+}$ layers in a 1:3 ratio. This superposition leads to an exact cancellation of the Jahn-Teller-like oxygen atom displacements in the CD layers only at the 3/8th doping level, coincident with the maximum CMR response of the manganites. Furthermore, the periodic striping of layers containing Mn$^{3.5+}$, separated by layers of fully ordered Mn$^{3+}$, provides a natural mechanism through which long range OO can melt, a prerequisite for the emergence of the FM conducting state. The competition between insulating and conducting states is seen to be a key feature in understanding the properties of highly correlated electron systems, many of which, such as the CMR and high temperature superconductivity, only emerge at or near specific doping values. | condensed matter |
Using generalized enriched categories, in this paper we show that Rosick\'{y}'s proof of cartesian closedness of the exact completion of the category of topological spaces can be extended to a wide range of topological categories over $\mathsf{Set}$, like metric spaces, approach spaces, ultrametric spaces, probabilistic metric spaces, and bitopological spaces. In order to do so we prove a sufficient criterion for exponentiability of $(\mathbb{T},V)$-categories and show that, under suitable conditions, every $(\mathbb{T},V)$-injective category is exponentiable in $(\mathbb{T},V)\text{-}\mathsf{Cat}$. | mathematics |
The Laser Interferometer Space Antenna (LISA) and its metrology chain have to fulfill stringent performance requirements to enable the space-based detection of gravitational waves. This implies the necessity of performance verification methods. In particular, the extraction of the interferometric phase, implemented by a phasemeter, needs to be probed for linearity and phase noise contributions. This Letter reports on a hexagonal quasimonolithic optical bench implementing a three-signal test for this purpose. Its characterization as sufficiently stable down to picometer levels is presented as well as its usage for a benchmark phasemeter performance measurement under LISA conditions. These results make it a candidate for the core of a LISA metrology verification facility. | astrophysics |
Large-scale natural language understanding (NLU) systems have made impressive progress: they can be applied flexibly across a variety of tasks, and employ minimal structural assumptions. However, extensive empirical research has shown this to be a double-edged sword, coming at the cost of shallow understanding: inferior generalization, grounding and explainability. Grounded language learning approaches offer the promise of deeper understanding by situating learning in richer, more structured training environments, but are limited in scale to relatively narrow, predefined domains. How might we enjoy the best of both worlds: grounded, general NLU? Following extensive contemporary cognitive science, we propose treating environments as "first-class citizens" in semantic representations, worthy of research and development in their own right. Importantly, models should also be partners in the creation and configuration of environments, rather than just actors within them, as in existing approaches. To do so, we argue that models must begin to understand and program in the language of affordances (which define possible actions in a given situation) both for online, situated discourse comprehension, as well as large-scale, offline common-sense knowledge mining. To this end we propose an environment-oriented ecological semantics, outlining theoretical and practical approaches towards implementation. We further provide actual demonstrations building upon interactive fiction programming languages. | computer science |
We derive new estimators of an optimal joint testing and treatment regime under the no direct effect (NDE) assumption that a given laboratory, diagnostic, or screening test has no effect on a patient's clinical outcomes except through the effect of the test results on the choice of treatment. We model the optimal joint strategy using an optimal regime structural nested mean model (opt-SNMM). The proposed estimators are more efficient than previous estimators of the parameters of an opt-SNMM because they efficiently leverage the `no direct effect (NDE) of testing' assumption. Our methods will be of importance to decision scientists who either perform cost-benefit analyses or are tasked with the estimation of the `value of information' supplied by an expensive diagnostic test (such as an MRI to screen for lung cancer). | statistics |
We study a Sobolev critical fast diffusion equation in bounded domains with the Brezis-Nirenberg effect. We obtain extinction profiles of its positive solutions, and show that the convergence rates of the relative error in regular norms are at least polynomial. Exponential decay rates are proved for generic domains. Our proof makes use of its regularity estimates, a curvature type evolution equation, as well as blow up analysis. Results for Sobolev subcritical fast diffusion equations are also obtained. | mathematics |
The vast majority of stars in galaxy groups are contained within their constituent galaxies. Some small fraction of stars is expected, however, to follow the global dark matter potential of the group. In compact groups, interactions between the galaxies should be frequent. This leads to more intensive stripping of material from the group members, which ultimately forms an intra-group light (IGL) component. Therefore, the distribution of the IGL should be related to the distribution of the total mass in the compact group and its dynamical status. In this study we consider the distribution and fraction of the IGL in a sample of 36 Hickson compact groups (HCGs). We use deep observations of these compact groups (down to surface brightness $\sim 28$ mag\,arcsec$^{-2}$ in the $r$ band) obtained with the WISE $28$-inch telescope. For five HCGs with a bright symmetric IGL component, we carry out multicomponent photometric decomposition to simultaneously fit the galaxy profiles and the IGL. For the remaining groups, we only fit the profiles of their constituent galaxies. We find that the mean surface brightness of the IGL correlates with the mean morphology of the group: it becomes brighter in the groups with a larger fraction of early-type galaxies. On the other hand, the IGL brightness depends on the total luminosity of the group. The IGL profile tends to have a S\'ersic index $n\sim0.5-1$, which is generally consistent with the mass density profile of dark matter haloes in compact groups obtained from cosmological simulations. | astrophysics |
The molecular characterization of tumor samples by multiple omics data sets of different types or modalities (e.g. gene expression, mutation, CpG methylation) has become an invaluable source of information for assessing the expected performance of individual drugs and their combinations. Merging the relevant information from the omics data modalities provides the statistical basis for determining suitable therapies for specific cancer patients. Different data modalities may each have their specific structures that need to be taken into account during inference. In this paper, we assume that each omics data modality has a low-rank structure with only few relevant features that affect the prediction and we propose to use a composite local nuclear norm penalization for learning drug sensitivity. Numerical results show that the composite low-rank structure can improve the prediction performance compared to using a global low-rank approach or elastic net regression. | statistics |
We study the frame properties of the Gabor systems $$\mathfrak{G}(g;\alpha,\beta):=\{e^{2\pi i \beta m x}g(x-\alpha n)\}_{m,n\in\mathbb{Z}}.$$ In particular, we prove that for Herglotz windows $g$ such systems always form a frame for $L^2(\mathbb{R})$ if $\alpha,\beta>0$, $\alpha\beta\leq1$. For general rational windows $g\in L^2(\mathbb{R})$ we prove that $\mathfrak{G}(g;\alpha,\beta)$ is a frame for $L^2(\mathbb{R})$ if $0<\alpha,\beta$, $\alpha\beta<1$, $\alpha\beta\not\in\mathbb{Q}$ and $\hat{g}(\xi)\neq0$, $\xi>0$, thus confirming Daubechies' conjecture for this class of functions. We also discuss some related questions, in particular sampling in shift-invariant subspaces of $L^2(\mathbb{R})$. | mathematics
We introduce High-Relative-Degree Stochastic Control Lyapunov Functions and Barrier Functions as a means to ensure asymptotic stability of the system and to incorporate state-dependent, high-relative-degree safety constraints on non-linear stochastic systems. Our proposed formulation also provides a generalisation of the existing literature on control Lyapunov and barrier functions for stochastic systems. The control policies are evaluated using a constrained quadratic program that is based on control Lyapunov and barrier functions. Our proposed control design is validated via simulated experiments on a relative degree 2 system (2-dimensional car navigation) and a relative degree 4 system (two-link pendulum with elastic actuator). | electrical engineering and systems science
Facial recognition is changing the way we live and interact with our society. Here we discuss the two sides of facial recognition, summarizing potential risks and current concerns. We introduce current policies and regulations in different countries. Importantly, we point out that these risks and concerns are not unique to facial recognition: they apply, realistically, to other biometric recognition technologies as well, including but not limited to gait recognition, iris recognition, fingerprint recognition and voice recognition. To create a responsible future, we discuss possible technological moves and efforts that should be made to keep facial recognition (and biometric recognition in general) developing for social good. | computer science
Radiomics is an exciting new area of texture research for extracting quantitative and morphological characteristics of pathological tissue. However, to date, only single images have been used for texture analysis. We have extended radiomic texture methods to use multiparametric (mp) data to get more complete information from all the images. These mpRadiomic methods could potentially provide a platform for stratification of tumor grade as well as assessment of treatment response in brain tumors. In the brain, multiparametric MRI (mpMRI) is based on contrast enhanced T1-weighted imaging (T1WI), T2WI, Fluid Attenuated Inversion Recovery (FLAIR), Diffusion Weighted Imaging (DWI) and Perfusion Weighted Imaging (PWI). We therefore applied our multiparametric radiomic framework (mpRadiomic) to 24 patients with brain tumors (8 grade II and 16 grade IV). The mpRadiomic framework classified grade IV tumors from grade II tumors with a sensitivity and specificity of 93% and 100%, respectively, with an AUC of 0.95. For treatment response, the mpRadiomic framework classified pseudo-progression from true-progression with an AUC of 0.93. In conclusion, the mpRadiomic analysis was able to effectively capture the multiparametric brain MRI texture and could be used as a potential biomarker for distinguishing grade IV from grade II tumors as well as determining true-progression from pseudo-progression. | electrical engineering and systems science
We propose a new method for probing inflationary models of primordial black hole (PBH) production, using only CMB physics at relatively large scales. In these scenarios, the primordial power spectrum profile for curvature perturbations is characterized by a pronounced dip, followed by a rapid growth towards small scales, leading to a peak responsible for PBH formation. We focus on scales around the dip that are well separated from the peak to analytically compute expressions for the curvature power spectrum and bispectrum. The size of the squeezed bispectrum is enhanced at the position of the dip, and it acquires a characteristic scale dependence that can be probed by cross-correlating CMB $\mu$-distortions and temperature fluctuations. We quantitatively study the properties of such cross-correlations and how they depend on the underlying model, discussing how they can be tested by the next generation of CMB $\mu$-distortion experiments. This method allows one to experimentally probe inflationary PBH scenarios using well-understood CMB physics, without considering non-linearities associated with PBH formation and evolution. | astrophysics |
A new sub-grid-scale model is developed for studying the influence of the Hall term on macroscopic aspects of magnetohydrodynamic turbulence. The Hall term makes numerical simulations extremely expensive by exciting high-wave-number coefficients and renders the magnetohydrodynamic equations stiff; nevertheless, studying macroscopic aspects of magnetohydrodynamic turbulence together with the Hall term is meaningful, since this term often influences not only sub-ion scales but also macroscopic scales. To overcome these difficulties, a sub-ion-scale sub-grid-scale model for large eddy simulations (LES) of Hall magnetohydrodynamic turbulence is developed. Large eddy simulations using the new model successfully reproduce statistical properties such as the energies and probability density functions of the vorticity and current density, while retaining features intrinsic to Hall magnetohydrodynamic turbulence. The new sub-grid-scale model enables numerical simulations of homogeneous and isotropic Hall magnetohydrodynamic turbulence at a small computational cost, improving the effective resolution of an LES over that achieved with earlier models, and retaining the ion-electron separation effects of the Hall term in the grid scales. | physics
We consider the problem of minimizing a function over the manifold of orthogonal matrices. The majority of algorithms for this problem compute a direction in the tangent space, and then use a retraction to move in that direction while staying on the manifold. Unfortunately, the numerical computation of retractions on the orthogonal manifold always involves some expensive linear algebra operation, such as matrix inversion or matrix square-root. These operations quickly become expensive as the dimension of the matrices grows. To bypass this limitation, we propose the landing algorithm which does not involve retractions. The algorithm is not constrained to stay on the manifold but its evolution is driven by a potential energy which progressively attracts it towards the manifold. One iteration of the landing algorithm only involves matrix multiplications, which makes it cheap compared to its retraction counterparts. We provide an analysis of the convergence of the algorithm, and demonstrate its promise on large-scale problems, where it is faster and less prone to numerical errors than retraction-based methods. | statistics
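A minimal sketch may make the retraction-free idea above concrete. The code below is our own illustration, not the authors' implementation, and the exact landing field used in the paper may differ: the hypothetical `landing_step` combines a skew-symmetric relative-gradient term with a penalty term X (XᵀX − I) that attracts the iterate to the orthogonal manifold, using matrix multiplications only, with no inversion or square root.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def landing_step(X, G, eta=0.1, lam=1.0):
    """One retraction-free update in the spirit of the landing algorithm:
    X <- X - eta * (skew(G Xt) X + lam * X (Xt X - I)),
    where skew(A) = (A - At)/2 and G is the Euclidean gradient at X.
    Only matrix products are involved."""
    n = len(X)
    Xt = transpose(X)
    GXt = matmul(G, Xt)
    skew = [[0.5 * (GXt[i][j] - GXt[j][i]) for j in range(n)] for i in range(n)]
    XtX = matmul(Xt, X)
    defect = [[XtX[i][j] - (1.0 if i == j else 0.0) for j in range(n)]
              for i in range(n)]
    drift = matmul(skew, X)   # descent direction along the manifold
    pull = matmul(X, defect)  # attraction toward the manifold
    return [[X[i][j] - eta * (drift[i][j] + lam * pull[i][j])
             for j in range(n)] for i in range(n)]

def orth_defect(X):
    """Largest entry of |Xt X - I|: distance from the orthogonal manifold."""
    n = len(X)
    XtX = matmul(transpose(X), X)
    return max(abs(XtX[i][j] - (1.0 if i == j else 0.0))
               for i in range(n) for j in range(n))

# minimise f(X) = 0.5 * ||X - I||^2 starting from a perturbed point off the manifold
n = 3
rng = random.Random(0)
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
X = [[I[i][j] + 0.3 * rng.uniform(-1.0, 1.0) for j in range(n)] for i in range(n)]
for _ in range(500):
    G = [[X[i][j] - I[i][j] for j in range(n)] for i in range(n)]  # Euclidean gradient
    X = landing_step(X, G)
print(orth_defect(X))  # the iterate has been attracted onto the manifold
```

Note that the iterate is never projected: the penalty term alone drives XᵀX toward the identity while the skew term decreases the objective, which is what makes each step a handful of matrix products.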
Cassiopeia A is a nearby young supernova remnant that provides a unique laboratory for the study of core-collapse supernova explosions. Cassiopeia A is known to be a Type IIb supernova from the optical spectrum of its light echo, but the immediate progenitor of the supernova remains uncertain. Here we report results of near-infrared, high-resolution spectroscopic observations of Cassiopeia A where we detected the pristine circumstellar material of the supernova progenitor. Our observations revealed a strong emission line of iron (Fe) from a circumstellar clump that has not yet been processed by the supernova shock wave. A comprehensive analysis of the observed spectra, together with an HST image, indicates that the majority of Fe in this unprocessed circumstellar material is in the gas phase, not depleted onto dust grains as in the general interstellar medium. This result is consistent with a theoretical model of dust condensation in material that is heavily enriched with CNO-cycle products, supporting the idea that the clump originated near the He core of the progenitor. It has been recently found that Type IIb supernovae can result from the explosion of a blue supergiant with a thin hydrogen envelope, and our results support such a scenario for Cassiopeia A. | astrophysics |
Video anomaly detection (VAD) is currently a challenging task due to the complexity of anomalies as well as the lack of temporal annotations, which are labor-intensive to obtain. In this paper, we propose an end-to-end Global Information Guided (GIG) anomaly detection framework for anomaly detection using video-level annotations (i.e., weak labels). We propose to first mine global pattern cues by leveraging the weak labels in a GIG module. Then we build a spatial reasoning module to measure the relevance of feature vectors in the spatial domain to the global cue vectors, and select the most related feature vectors for temporal anomaly detection. The experimental results on the CityScene challenge demonstrate the effectiveness of our model. | computer science
We investigate the parton distribution functions (PDFs) of the pion and kaon from the eigenstates of a light-front effective Hamiltonian in the constituent quark-antiquark representation suitable for low-momentum scale applications. By taking these scales as the only free parameters, the valence quark distribution functions of the pion, after QCD evolution, are consistent with the E615 experiment at Fermilab. In addition, the ratio of the up quark distribution in the kaon to that in the pion also agrees with the NA3 experimental result at CERN. | high energy physics phenomenology
In this article, we investigate the $\mu^-\to e^-X$ process in a muonic atom, where $X$ is a light neutral boson. By calculating the spectrum of the emitted electron for several cases, we discuss the model-discriminating power of the process. We report the strong model dependence of the spectrum near a high-energy endpoint. Our findings show that future experiments using muonic atoms are helpful to identify the properties of exotic bosons. | high energy physics phenomenology |
In this paper, we discuss interesting potential implications for the supersymmetric (SUSY) universe in light of two cosmological problems: (1) the number of satellite galaxies of the Milky Way (the missing satellite problem) and (2) the value of the matter density fluctuation at scales around $8h^{-1}$Mpc (the $S_{8}$ tension). The implications are extracted by assuming that a gravitino of a particular mass can help alleviate the cosmological tensions. We consider two vastly separated gravitino mass regimes, namely $m_{3/2}\simeq100{\rm eV}$ and $m_{3/2}\simeq100{\rm GeV}$. We discuss non-trivial features of each supersymmetric universe associated with a specific gravitino mass by projecting potential resolutions of the cosmological problems onto each of the associated SUSY models. | high energy physics phenomenology
Topological states nurture the emergence of devices with unprecedented functions in photonics, plasmonics, acoustics and phononics. As one of the recently discovered members, higher-order topological insulators (HOTIs) have been increasingly explored, featuring lower-dimensional topological boundary states and leading to rich mechanisms for topological manipulation, guiding and trapping of classical waves. Here, we provide an overview of current developments of HOTIs in classical waves, including basic principles, unique physical properties, various experimental realizations, novel phenomena and potential applications. Based on these discussions, we remark on the trends and challenges in this field and the impacts of higher-order topology on other research fields. | condensed matter
Micromovements that occur in the joint between dental prostheses and implants can lead to wear-induced degradation. This process can be enhanced by corrosion in the oral environment influenced by the presence of solutions containing fluoride. Moreover, the eventual galvanic interactions between NiCr and Ti alloys can accelerate the wear-corrosion process. In this work, the tribocorrosion process of Ti6Al4V and NiCr alloys used in dental implant rehabilitations immersed in fluoride solutions at different pH values was investigated. The galvanic interaction effect between the alloys was also assessed. Tribocorrosion tests in corrosive media were performed with isolated Ti6Al4V and NiCr alloys, followed by testing with both alloys in contact. The media selected were based on fluoride concentrations and pH values that are possible to be found in oral environments. Analysis of the surfaces after the tribocorrosion tests was carried out using confocal laser microscopy. The wear profile and volume losses were determined by confocal measurements. It was concluded that the galvanic interaction between the alloys increased the tribocorrosion resistance of Ti6Al4V, compared with that of the isolated Ti6Al4V alloy. Ti6Al4V coupled with NiCr reduced the electrochemical potential decay during sliding. The increased resistance was explained by the electrochemical shift of the Ti6Al4V potential from active dissolution to the passive domain. | physics |
Transparent titania coatings have self-cleaning and anti-reflection (AR) properties that are of great importance to minimize the soiling effect on photovoltaic modules. In this work, TiO2 nanocolloids prepared by the polyol reduction method were successfully used as coating thin films onto borosilicate glass substrates via an adsorptive self-assembly process. The nanocolloids were characterized by transmission electron microscopy and x-ray diffraction. The average particle size was around 2.6 nm. The films, which have an average thickness of 76.2 nm and a refractive index of 1.51, showed distinctive anti-soiling properties under a desert environment. The film surface topography, uniformity, wettability, thickness and refractive index were characterized using x-ray diffraction, atomic force microscopy, scanning electron microscopy, water contact angle measurements and ellipsometry. The self-cleaning properties were investigated by optical microscopy and UV-Vis spectroscopy. The optical images show a 56% reduction of the dust deposition rate over the coated surfaces compared with bare glass substrates after 7 days of soiling. The transmission optical spectra of these films collected at normal incidence angle show high anti-reflection properties, with the coated substrates having a transmission loss of less than 6% compared to bare clean glass. | physics
In this article we study solutions to second order linear difference equations with variable coefficients. Under mild conditions we provide closed form solutions using finite continued fraction representations. The proofs of the results are elementary and based on factoring a quadratic shift operator. As an application, we obtain two new generalized continued fraction formulas for the mathematical constant $\pi^2$. | mathematics
The status of numerical evaluations of Mellin-Barnes integrals is discussed, in particular, the application of the quasi-Monte Carlo integration package QMC to the efficient calculation of multi-dimensional integrals. | high energy physics phenomenology |
Scanning transmission electron microscopy (STEM) has become the technique of choice for quantitative characterization of the atomic structure of materials, where the minute displacements of atomic columns from high-symmetry positions can be used to map strain, polarization, octahedra tilts, and other physical and chemical order parameter fields. The latter can be used as inputs into mesoscopic and atomistic models, providing insight into the correlative relationships and generative physics of materials on the atomic level. However, these quantitative applications of STEM necessitate understanding microscope-induced image distortions and developing pathways to compensate for them, both as part of a rapid calibration procedure for in situ imaging and at the post-experimental data analysis stage. Here, we explore the spatiotemporal structure of the microscopic distortions in STEM using multivariate analysis of the atomic trajectories in the image stacks. Based on the behavior of principal component analysis (PCA), we develop a Gaussian process (GP)-based regression method for quantification of the distortion function. The limitations of such an approach and possible strategies for implementation as a part of in-line data acquisition in STEM are discussed. The analysis workflow is summarized in a Jupyter notebook that can be used to retrace the analysis and analyze the reader's data. | condensed matter
The transverse momentum spectra of charged particles produced in proton(deuteron)-nucleus and nucleus-nucleus collisions at high energies are analyzed by the Hagedorn thermal model and the standard distribution in terms of multi-component. The experimental data measured in central and peripheral gold-gold (Au-Au) and deuteron-gold ($d$-Au) collisions by the PHENIX Collaboration at the Relativistic Heavy Ion Collider (RHIC), as well as in central and peripheral lead-lead (Pb-Pb) and proton-lead ($p$-Pb) collisions by the ALICE Collaboration at the Large Hadron Collider (LHC) are fitted by the two models. The initial, effective, and kinetic freeze-out temperatures are then extracted from the fitting to the transverse momentum spectra. It is shown that the initial temperature is larger than the effective temperature, and the effective temperature is larger than the kinetic freeze-out temperature. The three types of temperatures in central collisions are comparable with those in peripheral collisions, and those at the LHC are comparable with those at the RHIC. | high energy physics phenomenology |
We present the analysis of physical conditions, chemical composition and kinematic properties of two bow shocks, HH 529 II and HH 529 III, of the fully photoionized Herbig-Haro object HH 529 in the Orion Nebula. The data were obtained with the Ultraviolet and Visual Echelle Spectrograph at the 8.2m Very Large Telescope and 20 years of Hubble Space Telescope imaging. We separate the emission of the high-velocity components of HH 529 II and III from the nebular one, determining $n_{\rm e}$ and $T_{\rm e}$ in all components through multiple diagnostics, including some based on recombination lines (RLs). We derive ionic abundances of several ions, based on collisionally excited lines (CELs) and RLs. We find a good agreement between the predictions of the temperature fluctuation paradigm ($t^2$) and the abundance discrepancy factor (ADF) in the main emission of the Orion Nebula. However, $t^2$ cannot account for the higher ADF found in HH 529 II and III. We estimate that 6% of Fe is in the gas phase of the Orion Nebula, while this value increases to 14% in HH 529 II and to between 10% and 25% in HH 529 III. We find that such an increase is probably due to the destruction of dust grains in the bow shocks. We find an overabundance of C, O, Ne, S, Cl and Ar of about 0.1 dex in HH 529 II-III that might be related to the inclusion of H-deficient material from the source of the HH 529 flow. We determine the proper motions of HH 529, finding multiple discrete features. We estimate a flow angle with respect to the sky plane of $58\pm 4^{\circ}$ for HH 529. | astrophysics
The spontaneous migration of droplets on conical fibers is studied experimentally by depositing silicone oil droplets onto conical glass fibers. Their motion is recorded using optical microscopy and analysed to extract the relevant geometrical parameters of the system. The speed of the droplet can be predicted as a function of geometry and the fluid properties using a simple theoretical model, which balances viscous dissipation against the surface tension driving force. The experimental data are found to be in good agreement with the model. | condensed matter |
Pioneering studies in transition metal dichalcogenides have demonstrated convincingly the co-existence of multiple angular momentum degrees of freedom -- of spin-1/2 ($s_z = \pm 1/2$), valley ($\tau = K, K'$ or $\pm 1$), and atomic orbital ($l_z = \pm 2$) origins -- in the valence band, with strong interlocking among them, which results in noise-resilient pseudospin states ideal for spintronic-type applications. With field modulation being a powerful, universal means in physics studies and applications, this work develops, from bare models in the context of a complicated band structure, a general effective theory of field-modulated spin-valley-orbital pseudospin physics that is able to describe both intra- and inter-valley dynamics. Based on the theory, it predicts and discusses the linear response of a pseudospin to external fields of arbitrary orientations. Paradigm field configurations are identified for pseudospin control, including pseudospin flipping. As a nontrivial example, it presents a spin-valley-orbital quantum computing proposal, where the theory is applied to address all-electrical, simultaneous control of $s_z$, $\tau$, and $l_z$ for qubit manipulation. It demonstrates the viability of such control with static field effects and an additional dynamic electric field. An optimized qubit manipulation time of order ns is given. | condensed matter
Trapped ions are sensitive detectors of weak forces and electric fields that excite ion motion. Here measurements of the center-of-mass motion of a trapped-ion crystal that are phase-coherent with an applied weak external force are reported. These experiments are conducted far from the trap motional frequency on a two-dimensional trapped-ion crystal of approximately 100 ions, and determine the fundamental measurement imprecision of our protocol free from noise associated with the center-of-mass mode. The driven sinusoidal displacement of the crystal is detected by coupling the ion crystal motion to the internal spin-degree of freedom of the ions using an oscillating spin-dependent optical dipole force. The resulting induced spin-precession is proportional to the displacement amplitude of the crystal, and is measured with near-projection-noise-limited resolution. A $49\,$pm displacement is detected with a single measurement signal-to-noise ratio of 1, which is an order-of-magnitude improvement over prior phase-incoherent experiments. This displacement amplitude is $40$ times smaller than the zero-point fluctuations. With our repetition rate, a $8.4\,$pm$/\sqrt{\mathrm{Hz}}$ displacement sensitivity is achieved, which implies $12\,$yN$/\mathrm{ion}/\sqrt{\mathrm{Hz}}$ and $77\,\mu$V$/$m$/\sqrt{\mathrm{Hz}}$ sensitivities to forces and electric fields, respectively. This displacement sensitivity, when applied on-resonance with the center-of-mass mode, indicates the possibility of weak force and electric field detection below $10^{-3}\,$yN/ion and $1\,$nV/m, respectively. | physics |
In this paper we propose and examine gap statistics for assessing uniform distribution hypotheses. We provide examples relevant to data integrity testing for which max-gap statistics provide greater sensitivity than chi-square ($\chi^2$), thus allowing the new test to be used in place of or as a complement to $\chi^2$ testing for purposes of distinguishing a larger class of deviations from uniformity. We establish that the proposed max-gap test has the same sequential and parallel computational complexity as $\chi^2$ and thus is applicable for Big Data analytics and integrity verification. | statistics |
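A rough illustration of why a max-gap statistic can flag deviations that a coarse-binned chi-square test may miss: consider a sample that entirely avoids one narrow subinterval of [0, 1). The code below is a hypothetical sketch; the function names and the Monte Carlo calibration are ours, not the paper's.

```python
import random

def max_gap(sample):
    """Largest spacing between consecutive order statistics on [0, 1),
    including the circular gap from the largest point back to the smallest."""
    xs = sorted(sample)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    gaps.append(1.0 - xs[-1] + xs[0])  # wrap-around gap
    return max(gaps)

def max_gap_pvalue(observed, n, trials=2000, seed=0):
    """Monte Carlo estimate of P(max_gap >= observed) under the uniform null."""
    rng = random.Random(seed)
    hits = sum(max_gap([rng.random() for _ in range(n)]) >= observed
               for _ in range(trials))
    return hits / trials

# A sample with an empty hole (0.70, 0.75): a chi-square test with, say,
# ten equal bins only sees one half-depleted bin, while the max gap is
# conspicuously larger than the typical spacing log(n)/n.
rng = random.Random(1)
suspicious = []
while len(suspicious) < 200:
    x = rng.random()
    if not (0.70 < x < 0.75):
        suspicious.append(x)

stat = max_gap(suspicious)
pval = max_gap_pvalue(stat, len(suspicious))
print(stat, pval)
```

The hole of width 0.05 guarantees one spacing of at least 0.05, roughly twice the expected maximum spacing for 200 uniform points, so the simulated p-value comes out small.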
The universal building block which is an essential part of all atomic structures on (1 1 0) silicon and germanium surfaces and their vicinals is proposed from first-principles calculations and a comparison of results with available experimental data. The atomic models for the (1 1 0)-(16x2), (1 1 0)-c(8x10), (1 1 0)-(5x8) and (17 15 1)-(2x1) surface reconstructions are developed on the basis of the building block structure. The models exhibit very low surface energies and excellent agreement with bias-dependent scanning tunneling microscopy (STM) images. It is shown experimentally using STM that the Si(47 35 7) surface shares the same building block. Our study settles the long-standing debate on the pentagon structures of (1 1 0) silicon and germanium surfaces. | condensed matter
Neutrino oscillation parameters can be better understood by building a more complete picture of neutrino interactions. This poses a series of important theoretical and experimental challenges, given the elusive nature of the neutrino and an inherent difficulty in its detection. This work is an attempt to study neutrino interactions through a purely theoretical approach using the concept of vacuum-fluctuation particle pairs, where an antiparticle from a virtual particle pair is captured via annihilation, allowing the matter particle of the virtual pair to become free. Though experiments are trying hard to shed more light on neutrino interactions, our knowledge of neutrino oscillation parameters is not precise at different energy levels. In this context, it is worth making a bold attempt to see how far our theoretical frameworks can extend. | high energy physics phenomenology
We study the emergence of Nambu-Goldstone modes due to broken translation symmetry in field theory. Purely spontaneous breaking yields a massless phonon which develops a mass upon introducing a perturbative explicit breaking. The pseudo-phonon mass agrees with Gell-Mann-Oakes-Renner relations. We analyze the simplest possible theories featuring gradient Mexican hats and describing space-dependent order parameters. We comment on homogeneous translation breaking and the connections with holographic Q-lattices. | high energy physics theory
We introduce a novel Skyrme-like conserved current in the effective theory of pions and vector mesons based on the idea of hidden local symmetry. The associated charge is equivalent to the skyrmion charge for any smooth configuration. In addition, there exist singular configurations that can be identified as N_f=1 baryons charged under the new symmetry. Under this identification, the vector mesons play the role of the Chern-Simons vector fields living on the quantum Hall droplet that forms the N_f=1 baryon. We propose that this current is the correct effective expression for the baryon current at low energies. This proposal gives a unified picture for the two types of baryons and allows them to continuously transform one to the other in a natural way. In addition, Chern-Simons dualities on the droplet can be interpreted as a result of Seiberg-like duality between gluons and vector mesons. | high energy physics theory |
We holographically investigate the scalarization in the Einstein-Scalar-Gauss-Bonnet gravity with a negative cosmological constant. We find that instability exists for both Schwarzschild-AdS and Reissner-Nordstrom-AdS black holes with planar horizons when we have proper interactions between the scalar field and the Gauss-Bonnet curvature corrections. We relate such instability to possible holographic scalarization and construct the corresponding hairy black hole solutions in the presence of the cosmological constant. Employing the holographic principle we expect that such bulk scalarization corresponds to the boundary description of the scalar hair condensation without breaking any symmetry, and we calculate the related holographic entanglement entropy of the system. Moreover, we compare the mechanisms of holographic scalarization caused by the coupling of the scalar field to the Gauss-Bonnet term and by the holographic superconductor effect in the presence of an electromagnetic field, and unveil their differences in the effective mass of the scalar field, the temperature dependence and the optical conductivity. | high energy physics theory
We initiate a study of finite temperature transport in gapless and strongly coupled quantum theories with charge and dipole conservation using gauge-gravity duality. In a model with non-dynamical gravity, the bulk fields of our model include a suitable mixed-rank tensor which encodes the boundary multipole symmetry. We describe how such a theory can arise at low energies in a theory with a covariant bulk action. Studying response functions at zero density, we find that charge relaxes via a fourth-order subdiffusion equation, consistent with a recently-developed field-theoretic framework. | high energy physics theory |
Recently a detailed correspondence was established between, on one side, four and five-dimensional large-N supersymmetric gauge theories with $\mathcal{N}=2$ supersymmetry and adjoint matter, and, on the other side, integrable 1+1-dimensional quantum hydrodynamics. Under this correspondence the phenomenon of dimensional transmutation, familiar in asymptotically free QFTs, gets mapped to the transition from the elliptic Calogero-Moser many-body system to the closed Toda chain. In this paper we attempt to formulate the hydrodynamical counterpart of the dimensional transmutation phenomenon inspired by the identification of the periodic Intermediate Long Wave (ILW) equation as the hydrodynamical limit of the elliptic Calogero-Moser/Ruijsenaars-Schneider system. We also conjecture that the chiral flow in the vortex fluid provides the proper framework for the microscopic description of such dimensional transmutation in the 1+1d hydrodynamics. We provide a geometric description of this phenomenon in terms of the ADHM moduli space. | high energy physics theory |
We study synchronization properties of systems of Kuramoto oscillators. The problem can also be understood as a question about the properties of an energy landscape created by a graph. More formally, let $G=(V,E)$ be a connected graph and $(a_{ij})_{i,j=1}^{n}$ denotes its adjacency matrix. Let the function $f:\mathbb{T}^n \rightarrow \mathbb{R}$ be given by $$ f(\theta_1, \dots, \theta_n) = \sum_{i,j=1}^{n}{ a_{ij} \cos{(\theta_i - \theta_j)}}.$$ This function has a global maximum when $\theta_i = \theta$ for all $1\leq i \leq n$. It is known that if every vertex is connected to at least $\mu(n-1)$ other vertices for $\mu$ sufficiently large, then every local maximum is global. Taylor proved this for $\mu \geq 0.9395$ and Ling, Xu \& Bandeira improved this to $\mu \geq 0.7929$. We give a slight improvement to $\mu \geq 0.7889$. Townsend, Stillman \& Strogatz suggested that the critical value might be $\mu_c = 0.75$. | mathematics |
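The landscape $f$ above is easy to explore numerically. As an illustrative sketch (our own, not from the paper), plain gradient ascent on $f$ for a complete graph, where the density condition $\mu = 1$ trivially holds, reaches the synchronized global maximum $f = \sum_{i,j} a_{ij}$; for symmetric adjacency the gradient flow is exactly the Kuramoto dynamics with identical natural frequencies.

```python
import math
import random

def f(theta, adj):
    """Energy landscape f(theta) = sum_{i,j} a_ij * cos(theta_i - theta_j)."""
    n = len(theta)
    return sum(adj[i][j] * math.cos(theta[i] - theta[j])
               for i in range(n) for j in range(n))

def kuramoto_ascent(theta, adj, step=0.05, iters=4000):
    """Gradient ascent on f. For symmetric adj the gradient is
    df/dtheta_k = -2 * sum_j a_kj * sin(theta_k - theta_j),
    i.e. Kuramoto dynamics with identical natural frequencies."""
    n = len(theta)
    for _ in range(iters):
        grad = [-2.0 * sum(adj[k][j] * math.sin(theta[k] - theta[j])
                           for j in range(n))
                for k in range(n)]
        theta = [t + step * g for t, g in zip(theta, grad)]
    return theta

n = 5
adj = [[0.0 if i == j else 1.0 for j in range(n)] for i in range(n)]  # complete graph, mu = 1
rng = random.Random(0)
theta = kuramoto_ascent([rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)], adj)
final = f(theta, adj)
print(final)  # the synchronized maximum equals sum(a_ij) = n*(n-1) = 20
```

For sparser graphs the same ascent can get stuck in non-global local maxima, which is exactly the regime the density thresholds in the row above quantify.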
Complex models used to describe biological processes in epidemiology and ecology often have computationally intractable or expensive likelihoods. This poses significant challenges for Bayesian inference, and even more so for the design of experiments. Bayesian designs are found by maximising the expectation of a utility function over a design space, and typically this requires sampling from or approximating a large number of posterior distributions. This renders approaches adopted in inference computationally infeasible to implement in design. Consequently, optimal design in such fields has been limited to a small number of dimensions or a restricted range of utility functions. To overcome such limitations, we propose a synthetic likelihood-based Laplace approximation for approximating utility functions for models with intractable likelihoods. As will be seen, the proposed approximation is flexible in that a wide range of utility functions can be considered, and it remains computationally efficient in high dimensions. To explore the validity of this approximation, an illustrative example from epidemiology is considered. Then, our approach is used to design experiments with a relatively large number of observations in two motivating applications from epidemiology and ecology. | statistics
The physical origin of the subwavelength photonic nanojet in retrograde-reflection mode (retro-PNJ) is theoretically considered. This specific type of photonic nanojet emerges upon sequential double focusing of a plane optical wave by a transparent dielectric microparticle located near a flat reflecting mirror. For the first time, to the best of our knowledge, we report a unique property of the retro-PNJ: its focal length and intensity increase for microparticles whose optical contrast exceeds the limiting value (n > 2), which fundamentally distinguishes the retro-PNJ from regular PNJ behavior. This may drastically increase the trapping potential of PNJ-based optical tweezers. | physics
We perform a series of experiments to measure Lagrangian trajectories of settling and rising particles as they traverse a density interface of thickness $h$ using an index-matched water-salt-ethanol solution. The experiments confirm the substantial deceleration that particles experience as a result of the additional force exerted on the particle due to the sudden change in density. This stratification force is calculated from the measurement data for all particle trajectories. In the absence of suitable parameterizations in the literature, a simple phenomenological model is developed which relies on parameterizations of the effective wake volume and recovery time scale. The model accurately predicts the particle trajectories obtained in our experiments and those of \cite{Fernando1999}. Furthermore, the model demonstrates that the problem depends on four key parameters, namely the entrance Reynolds number $Re_1$, entrance Froude number $Fr$, particle to fluid density ratio $\rho_p/\rho_f$, and relative interface thickness $h/a$. | physics |
It is well known that size-based scheduling policies, which take into account job size (i.e., the time it takes to run them), can perform very desirably in terms of both response time and fairness. Unfortunately, the requirement of knowing a priori the exact job size is a major obstacle which is frequently insurmountable in practice. Often, it is possible to get a coarse estimation of job size, but unfortunately analytical results with inexact job sizes are challenging to obtain, and simulation-based studies show that several size-based algorithms are severely impacted by job estimation errors. For example, Shortest Remaining Processing Time (SRPT), which yields optimal mean sojourn time when job sizes are known exactly, can drastically underperform when it is fed inexact job sizes. Some algorithms have been proposed to better handle size estimation errors, but they are somewhat complex and this makes their analysis challenging. We consider Shortest Processing Time (SPT), a simplification of SRPT that skips the update of "remaining" job size and results in a preemptive algorithm that simply schedules the job with the shortest estimated processing time. When job sizes are inexact, SPT performs comparably to the best known algorithms in the presence of errors, while being definitely simpler. In this work, SPT is evaluated through simulation, showing near-optimal performance in many cases, with the hope that its simplicity can open the way to analytical evaluation even when inexact inputs are considered. | computer science
Our previous work has demonstrated that the Rayleigh model, which is widely used in polarized skylight navigation to describe skylight polarization patterns, does not contain three-dimensional (3D) attitude information [1]. However, it is still necessary to explore further whether skylight polarization patterns contain 3D attitude information. In this paper, a social spider optimization (SSO) method is therefore proposed to estimate the three Euler angles; it considers the difference of each pixel among polarization images based on template matching (TM) to make full use of the captured polarization information. In addition, to explore this problem, we use not only angle of polarization (AOP) and degree of polarization (DOP) information but also light intensity (LI) information. A sky model is therefore established, which combines the Berry model and the Hosek model to fully describe the AOP, DOP, and LI information in the sky, and considers the influence of the four neutral points, ground albedo, atmospheric turbidity, and wavelength. The simulation results show that the SSO algorithm can estimate 3D attitude and that the established sky model contains 3D attitude information. However, when there is measurement noise or model error, the accuracy of the 3D attitude estimation drops significantly; in the field experiment in particular, it is very difficult to estimate 3D attitude. Finally, the results are discussed in detail. | electrical engineering and systems science
Euclid, WFIRST, and HETDEX will make emission-line selected galaxies the largest observed constituent in the $z > 1$ universe. However, we only have a limited understanding of the physical properties of galaxies selected via their Ly$\alpha$ or rest-frame optical emission lines. To begin addressing this problem, we present the basic properties of $\sim 2,000$ AEGIS, COSMOS, GOODS-N, GOODS-S, and UDS galaxies identified in the redshift range $1.90 < z < 2.35$ via their [O II], H$\beta$, and [O III] emission lines. For these $z \sim 2$ galaxies, [O III] is generally much brighter than [O II] and H$\beta$, with typical rest-frame equivalent widths of several hundred Angstroms. Moreover, these strong emission-line systems span an extremely wide range of stellar mass ($\sim 3$ dex), star-formation rate ($\sim 2$ dex), and [O III] luminosity ($\sim 2$ dex). Comparing the distributions of these properties to those of continuum selected galaxies, we find that emission-line galaxies have systematically lower stellar masses and lower optical/UV dust attenuations. These measurements lay the groundwork for an extensive comparison between these rest-frame optical emission-line galaxies and Ly$\alpha$ emitters identified in the HETDEX survey. | astrophysics |
Binaries harbouring millisecond pulsars enable a unique path to determine neutron star masses: radio pulsations reveal the motion of the neutron star, while that of the companion can be characterised through studies in the optical range. PSR J1012+5307 is a millisecond pulsar in a 14.5-h orbit with a helium-core white dwarf companion. In this work we present the analysis of an optical spectroscopic campaign, where the companion star absorption features reveal one of the lightest known white dwarfs. We determine a white dwarf radial velocity semi-amplitude of K_2 = 218.9 +- 2.2 km/s, which combined with that of the pulsar derived from the precise radio timing, yields a mass ratio of q=10.44+- 0.11. We also attempt to infer the white dwarf mass from observational constraints using new binary evolution models for extremely low-mass white dwarfs, but find that they cannot reproduce all observed parameters simultaneously. In particular, we cannot reconcile the radius predicted from binary evolution with the measurement from the photometric analysis (R_WD=0.047+-0.003 Rsun). Our limited understanding of extremely low-mass white dwarf evolution, which results from binary interaction, therefore comes as the main factor limiting the precision with which we can measure the mass of the white dwarf in this system. Our conservative white dwarf mass estimate of M_WD = 0.165 +- 0.015 Msun, along with the mass ratio enables us to infer a pulsar mass of M_NS = 1.72 +- 0.16 Msun. This value is clearly above the canonical 1.4 Msun, therefore adding PSR J1012+5307 to the growing list of massive millisecond pulsars. | astrophysics |
The pair-matching problem appears in many applications where one wants to discover good matches between pairs of entities or individuals. Formally, the set of individuals is represented by the nodes of a graph where the edges, unobserved at first, represent the good matches. The algorithm queries pairs of nodes and observes the presence/absence of edges. Its goal is to discover as many edges as possible with a fixed budget of queries. Pair-matching is a particular instance of multi-armed bandit problem in which the arms are pairs of individuals and the rewards are edges linking these pairs. This bandit problem is non-standard though, as each arm can only be played once. Given this last constraint, sublinear regret can be expected only if the graph presents some underlying structure. This paper shows that sublinear regret is achievable in the case where the graph is generated according to a Stochastic Block Model (SBM) with two communities. Optimal regret bounds are computed for this pair-matching problem. They exhibit a phase transition related to the Kesten-Stigum threshold for community detection in SBM. The pair-matching problem is considered in the case where each node is constrained to be sampled less than a given amount of times. We show how optimal regret rates depend on this constraint. The paper is concluded by a conjecture regarding the optimal regret when the number of communities is larger than 2. Contrary to the two communities case, we argue that a statistical-computational gap would appear in this problem. | statistics |
Detection of the fusion rule of Majorana zero-modes is a near-term milestone on the road to topological quantum computation. An obstacle is that the non-deterministic fusion outcome of topological zero-modes can be mimicked by the merging of non-topological Andreev levels. To distinguish these two scenarios, we search for dynamical signatures of the ground-state degeneracy that is the defining property of non-Abelian anyons. By adiabatically traversing parameter space along two different pathways one can identify ground-state degeneracies from the breakdown of adiabaticity. We show that the approach can discriminate against accidental degeneracies of Andreev levels. | condensed matter |
Recently, the LHCb Collaboration reported three $P_c$ states in the ${J/\psi}p$ channel. We systematically study the mass spectrum of the hidden charm pentaquark in the framework of an extended chromomagnetic model. For the $nnnc\bar{c}$ pentaquark with $I=1/2$, we find that (i) the lowest state is $P_{c}(4327.0,1/2,1/2^{-})$ [We use $P_{c}(m,I,J^{P})$ to denote the $nnnc\bar{c}$ pentaquark], which corresponds to the $P_{c}(4312)$. Its dominant decay mode is $\Lambda_{c}\bar{D}^{*}$. (ii) We find two states in the vicinity of $P_{c}(4380)$. The first one is $P_{c}(4367.4,1/2,3/2^{-})$ and decays dominantly to $N{J/\psi}$ and $\Lambda_{c}\bar{D}^{*}$. The other one is $P_{c}(4372.4,1/2,1/2^{-})$. Its dominant decay mode is $\Lambda_{c}\bar{D}$, and its partial decay width of $N\eta_{c}$ channel is comparable to that of $N{J/\psi}$. (iii) In higher mass region, we find $P_{c}(4476.3,1/2,3/2^{-})$ and $P_{c}(4480.9,1/2,1/2^{-})$, which correspond to $P_{c}(4440)$ and $P_{c}(4457)$. In the open charm channels, both of them decay dominantly to the $\Lambda_{c}\bar{D}^{*}$. (iv) We predict two states above $4.5~\text{GeV}$, namely $P_{c}(4524.5,1/2,3/2^{-})$ and $P_{c}(4546.0,1/2,5/2^{-})$. The masses of the $nnnc\bar{c}$ state with $I=3/2$ are all over $4.6~\text{GeV}$. Moreover, we use the model to explore the $nnsc\bar{c}$, $ssnc\bar{c}$ and $sssc\bar{c}$ pentaquark states. | high energy physics phenomenology |
The document describes a numerical algorithm to simulate plasmas and fluids in three-dimensional space by the Euler method, in which the spatial meshes are fixed in space. The plasmas and fluids move through the spatial Euler mesh boundaries. The Euler method can represent a large deformation of the plasmas and fluids. On the other hand, when the plasmas or fluids are compressed to a high density, sufficient spatial resolution should be ensured to describe the density change precisely. The present 3D Euler code is developed to simulate nuclear fusion fuel ignition and burning. Therefore, the 3D Euler code includes the DT fuel reactions, the alpha-particle diffusion, the alpha-particle energy deposition heating the DT fuel, and the DT fuel depletion by the DT reactions, as well as the thermal energy diffusion based on the three-temperature compressible fluid model. | physics
In vitro fertilization (IVF) comprises a sequence of interventions concerned with the creation and culture of embryos which are then transferred to the patient's uterus. While the clinically important endpoint is birth, the responses to each stage of treatment contain additional information about the reasons for success or failure. As such, the ability to predict not only the overall outcome of the cycle, but also the stage-specific responses, can be useful. This could be done by developing separate models for each response variable, but recent work has suggested that it may be advantageous to use a multivariate approach to model all outcomes simultaneously. Here, joint analysis of the sequential responses is complicated by mixed outcome types defined at two levels (patient and embryo). A further consideration is whether and how to incorporate information about the response at each stage in models for subsequent stages. We develop a case study using routinely collected data from a large reproductive medicine unit in order to investigate the feasibility and potential utility of multivariate prediction in IVF. We consider two possible scenarios. In the first, stage-specific responses are to be predicted prior to treatment commencement. In the second, responses are predicted dynamically, using the outcomes of previous stages as predictors. In both scenarios, we fail to observe benefits of joint modelling approaches compared to fitting separate regression models for each response variable. | statistics |
Dense relativistic matter has attracted a lot of attention over many decades now, with a focus on an understanding of the phase structure and thermodynamics of dense strong-interaction matter. The analysis of dense strong-interaction matter is complicated by the fact that the system is expected to undergo a transition from a regime governed by spontaneous chiral symmetry breaking at low densities to a regime governed by the presence of a Cooper instability at intermediate and high densities. Renormalization group (RG) approaches have played and still play a prominent role in studies of dense matter in general. In the present work, we study RG flows of dense relativistic systems in the presence of a Cooper instability and analyze the role of the Silver-Blaze property. In particular, we critically assess how to apply the derivative expansion to study dense-matter systems in a systematic fashion. This also involves a detailed discussion of regularization schemes. Guided by these formal developments, we introduce a new class of regulator functions for functional RG studies which is suitable to deal with the presence of a Cooper instability in relativistic theories. We close by demonstrating its application with the aid of a simple quark-diquark model. | high energy physics phenomenology |
Among the most promising gravitational wave (GW) sources detectable by the forthcoming LISA observatory are the so-called extreme-mass ratio inspirals (EMRIs), i.e. GW-driven inspirals of stellar-mass compact objects onto supermassive black holes (SMBHs). In this paper, we suggest that supernova (SN) kicks may trigger EMRIs in galactic nuclei by scattering newborn stellar black holes and neutron stars on extremely eccentric orbits; as a consequence, the time-scale over which these compact objects are expected to inspiral onto the central SMBH via GW emission may become shorter than the time-scale for other orbital perturbations to occur. By applying this argument to the Galactic Centre, we show that the S-cluster and the clockwise disc are optimal regions for the generation of such events: one SN out of about 10 000 (100 000) occurring in the S-cluster (clockwise disc) is expected to induce an EMRI. If we assume that the natal kicks affecting stellar black holes are significantly slower than those experienced by neutron stars, we find that most SN-driven EMRIs involve neutron stars. We further estimate the time spanning from the SN to the final plunge onto the SMBH to be of the order of a few Myr. Finally, we extrapolate the rate of SN-driven EMRIs per Milky Way-like galaxy to be up to 10^-8/yr, thus we expect that LISA will detect up to a few tens of SN-driven EMRIs every year. | astrophysics
Given the symplectic polar space of type $W(5,2)$, let us call a set of five Fano planes sharing pairwise a single point a Fano pentad. Once 63 points of $W(5,2)$ are appropriately labeled by 63 non-trivial three-qubit observables, any such Fano pentad gives rise to a quantum contextual set known as Mermin pentagram. Here, it is shown that a Fano pentad also hosts another, closely related contextual set, which features 25 observables and 30 three-element contexts. Out of 25 observables, ten are such that each of them is on six contexts, while each of the remaining 15 observables belongs to two contexts only. Making use of the recent classification of Mermin pentagrams (Saniga et al., Symmetry 12 (2020) 534), it was found that 12,096 such contextual sets comprise 47 distinct types, falling into eight families according to the number ($3, 5, 7, \ldots, 17$) of negative contexts. | quantum physics |
Twisted bilayer graphene has been argued theoretically to host exceptionally flat bands when the angle between the two layers falls within a magic range near 1.1$^\circ$. This is now strongly supported by experiment, which furthermore reveals dramatic correlation effects in the magic range due to the relative dominance of interactions when the bandwidth is suppressed. Experimentally, quantum oscillations exhibit different Landau level degeneracies when the angles fall in or outside the magic range; these observations can contain crucial information about the low energy physics. In this paper, we report a thorough theoretical study of the Landau level structure of the non-interacting continuum model for twisted bilayer graphene as the magnetic field and the twist angle are tuned. We first show that a discernible difference exists in the butterfly spectra when twist angle falls in and outside the magic range. Next, we carry out semiclassical analysis in detail, which quantitatively determines the origin of the low energy Landau levels from the zero field band structure. We find that the Landau level degeneracy predicted in the above analyses is capable of partially explaining features of the quantum oscillation experiments in a natural way. Finally, topological aspects, validity, and other subtle points of the model are discussed. | condensed matter |
Cavity design is crucial for single-mode semiconductor lasers such as the distributed feedback (DFB) and vertical-cavity surface-emitting lasers (VCSEL). By recognizing that both optical resonators feature a single mid-gap mode localized at the topological defect in a one-dimensional (1D) lattice, we generalize the topological cavity design into 2D using a honeycomb photonic crystal with a vortex Dirac mass -- the analog of Jackiw-Rossi zero modes. We theoretically predict and experimentally demonstrate that such a Dirac-vortex cavity can have a tunable mode area across a few orders of magnitudes, arbitrary mode degeneracy, robustly large free-spectral-range, vector-beam output of low divergence, and compatibility with high-index substrates. This topological cavity could enable photonic crystal surface-emitting lasers (PCSEL) with stabler single-mode operation. | physics |
Recently there have been fruitful results on resource theories of quantum measurements. Here we investigate the number of measurement outcomes as a kind of resource. We cast the robustness of this resource as a semidefinite program. Its dual problem confirms that if a measurement cannot be simulated by a set of measurements with a smaller number of outcomes, there exists a state discrimination task in which it outperforms the whole latter set. An upper bound on this advantage, which can be saturated under a certain condition, is derived. We also show that the tasks that can reveal the advantage are not restricted to state discrimination and can be more general. | quantum physics
As an important field of research in Human-Machine Interactions, emotion recognition based on physiological signals has become a research hotspot. Motivated by the outstanding performance of deep learning approaches in recognition tasks, we propose a multimodal emotion recognition model that consists of a 3D convolutional neural network model, a 1D convolutional neural network model, and a biologically inspired multimodal fusion model which integrates multimodal information at the decision level for emotion recognition. We use this model to classify four emotional regions of the arousal-valence plane, i.e., low arousal and low valence (LALV), high arousal and low valence (HALV), low arousal and high valence (LAHV), and high arousal and high valence (HAHV), in the DEAP and AMIGOS datasets. The 3D CNN model and the 1D CNN model are used for emotion recognition based on electroencephalogram (EEG) signals and peripheral physiological signals, respectively, and achieve accuracies of 93.53% and 95.86% with the original EEG signals in these two datasets. Compared with single-modal recognition, the multimodal fusion model improves the accuracy of emotion recognition by 5% to 25%, and the fusion of EEG signals (decomposed into four frequency bands) with peripheral physiological signals achieves accuracies of 95.77%, 97.27% and 91.07%, 99.74% in these two datasets, respectively. By integrating EEG signals and peripheral physiological signals, the model reaches the highest accuracy of about 99% in both datasets, which demonstrates the advantages of the proposed method in solving emotion recognition tasks. | electrical engineering and systems science
Solar wind charge-changing reactions are of paramount importance to the physico-chemistry of the atmosphere of a comet because they mass-load the solar wind through an effective conversion of fast, light solar wind ions into slow, heavy cometary ions. The ESA/Rosetta mission to comet 67P/Churyumov-Gerasimenko (67P) provided a unique opportunity to study charge-changing processes in situ. An extended analytical formalism describing solar wind charge-changing processes at comets along solar wind streamlines is presented. It is based on a thorough book-keeping of available charge-changing cross sections of hydrogen and helium particles in a water gas. After presenting a general 1D solution of charge exchange at comets, we study the theoretical dependence of charge-state distributions of (He$^{2+}$, He$^+$, He$^0$) and (H$^+$, H$^0$, H$^-$) on solar wind parameters at comet 67P. We show that double charge exchange for the He$^{2+}$-H$_2$O system plays an important role below a solar wind bulk speed of 200 km/s , resulting in the production of He energetic neutral atoms, whereas stripping reactions can in general be neglected. Retrievals of outgassing rates and solar wind upstream fluxes from local Rosetta measurements deep in the coma are discussed. Solar wind ion temperature effects at 400 km/s solar wind speed are well contained during the Rosetta mission. As the comet approaches perihelion, the model predicts a sharp decrease of solar wind ion fluxes by almost one order of magnitude at the location of Rosetta, forming in effect a solar wind ion cavity. This study is the second part of a series of three on solar wind charge-exchange and ionization processes at comets, with a specific application to comet 67P and the Rosetta mission. | physics |
We discuss fermion mass generation in unified models where QCD and technicolor (or any two strongly interacting theories) have their Schwinger-Dyson equations coupled. In this case the technicolor (TC) and QCD self-energies are modified in comparison with the behavior observed in the isolated theories. In these models the pseudo-Goldstone boson masses are much higher than the ones obtained in different contexts, and phenomenological signals, except for a light scalar composite boson, will be quite difficult to observe at present collider energies. The most noticeable feature of these models is how the mass splitting between the different ordinary fermions is generated. We discuss how a necessary horizontal (or family) symmetry can be implemented in order to generate the mass splitting between fermions of different generations; how the fermionic mass spectrum may be modified due to GUT interactions; and how the mass splittings within the same fermionic generation are generated due to electroweak and GUT interactions. | high energy physics phenomenology
We establish an explicit data-driven criterion for identifying the solid-liquid transition of two-dimensional self-propelled colloidal particles in the far-from-equilibrium parameter regime, where the transition points predicted by different conventional empirical criteria for melting and freezing diverge. This is achieved by applying a hybrid machine learning approach that combines unsupervised learning with supervised learning to analyze over one million system configurations in the nonequilibrium parameter regime. Furthermore, we establish a generic data-driven evaluation function, according to which the performance of different empirical criteria can be systematically evaluated and improved. In particular, by applying this evaluation function, we identify a new nonequilibrium threshold value for the long-time diffusion coefficient, based on which the predictions of the corresponding empirical criterion are greatly improved in the far-from-equilibrium parameter regime. These data-driven approaches provide a generic tool for investigating phase transitions in complex systems where conventional empirical ones face difficulties. | condensed matter
Understanding the dynamics of phospholipid headgroups in model and biological membranes is of extreme importance for an accurate description of the dipolar interactions occurring at membrane interfaces. One fundamental question is to what extent these dynamics are coupled to the overall molecular frame, i.e., whether the dipole headgroup orientation distribution and the time scales involved depend on the structure and dynamics of the glycerol backbone and hydrophobic regions, or whether this motion is independent of the main molecular frame. Here we use solid-state nuclear magnetic resonance (NMR) spectroscopy and molecular dynamics (MD) simulations to show that the orientation and effective correlation times of choline headgroups remain completely unchanged upon incorporation of 50\% mol cholesterol in a phosphatidylcholine (PC) membrane, in contrast to the significant slowdown of the remaining phospholipid segments. Notably, our results indicate that choline headgroups interact as quasi-freely rotating dipoles at the interface, irrespective of the structural and dynamical molecular behavior in the glycerol backbone and hydrophobic regions, to the full extent of headgroup rotational dynamics. | condensed matter
Navigation inside a closed area with no GPS-signal accessibility is a highly challenging task. To tackle this problem, imaging-based methods have recently attracted the attention of many researchers. These methods either extract features (e.g., using SIFT or SOSNet) and map the descriptive ones to camera position and rotation information, or deploy an end-to-end system that directly estimates this information from RGB images, similar to PoseNet. While the former methods suffer from a heavy computational burden during testing, the latter suffer from a lack of accuracy and robustness against environmental changes and object movements. However, end-to-end systems are quite fast during testing and inference and are well suited for real-world applications, even though their training phase can be longer than that of the former methods. In this paper, a novel multimodal end-to-end system for large-scale indoor positioning is proposed, namely APS (Alpha Positioning System), which integrates a Pix2Pix GAN network that reconstructs the point-cloud pair of the input query image with a deep CNN network that robustly estimates the position and rotation information of the camera. For this integration, existing datasets lack paired RGB/point-cloud images of indoor environments; we therefore created a new dataset to handle this situation. By implementing the proposed APS system, we achieve highly accurate camera positioning with a precision level of less than a centimeter. | computer science
There has been a long-standing discrepancy between the experimental measurements of the electron and muon anomalous magnetic moments and their predicted values in the Standard Model. This is particularly relevant in the case of the muon $g-2$, which has attracted a remarkable interest in the community after the long-awaited announcement of the first results by the Muon $g-2$ collaboration at Fermilab, which confirms a previous measurement by the E821 experiment at Brookhaven and enlarges the statistical significance of the discrepancy, now at $4.2 \sigma$. In this paper we consider an extension of the inverse type-III seesaw with a pair of vector-like leptons that induces masses for neutrinos at the electroweak scale and show that one can accommodate the electron and muon anomalous magnetic moments, while being compatible with all relevant experimental constraints. | high energy physics phenomenology |
We introduce a novel simulation method that is designed to explore fluctuations of the phasonic degrees of freedom in decagonal colloidal quasicrystals. Specifically, we attain and characterise thermal equilibrium of the phason ensemble via Monte Carlo simulations with particle motions restricted to elementary phasonic flips. We find that, at any temperature, the random tiling ensemble is strongly preferred over the minimum phason-strain quasicrystal. Phasonic flips are the dominant carriers of diffusive mass transport in physical space. Sub-diffusive transients suggest cooperative flip behaviour on short time scales. In complementary space, particle mobility is geometrically restricted to a thin ring around the acceptance domain, resulting in self-confinement and persistent phasonic order. | condensed matter |
We discuss general properties of discrete time quantum symmetry breaking in degenerate parametric oscillators. Recent experiments in superconducting quantum circuit with Josephson junction nonlinearities give rise to new properties of strong parametric coupling and nonlinearities. Exact analytic solutions are obtained for the steady-state of this single-mode case of subharmonic generation. We also obtain analytic solutions for the tunneling time over which the time symmetry-breaking is lost above threshold. We find that additional anharmonic terms found in the superconducting case increase the tunneling rate, and can also lead to new regimes of tristability as well as bistability. Our analytic results are confirmed by number state calculations. | quantum physics |
Taking advantage of HST CANDELS data, we analyze the lowest redshift (z<0.5) massive galaxies in order to disentangle their structural constituents and study possible faint non-axis-symmetric features. Due to the excellent HST spatial resolution for intermediate-z objects, they are hard to model by purely automatic parametric fitting algorithms. We performed careful single and double S\'ersic fits to their galaxy surface brightness profiles. We also compare the model color profiles with the observed ones and derive multi-component global effective radii, attempting to obtain a better interpretation of the mass-size relation. Additionally, we test the robustness of our measured structural parameters via simulations. We find that the S\'ersic index does not offer a good proxy for the visual morphological type for our sample of massive galaxies. Our derived multi-component effective radii give a better description of the size of our sample galaxies than those inferred from single S\'ersic models with GALFIT. Our galaxy population lies within the scatter of the local mass-size relation, indicating that these massive galaxies do not experience significant growth in size since z~0.5. Interestingly, the few outliers are late-type galaxies, indicating that spheroids must reach the local mass-size relation earlier. For most of our sample galaxies, both single and multi-component S\'ersic models with GALFIT show substantial systematic deviations from the observed SBPs in the outskirts. These residuals may be partly due to several factors, namely a non-optimal data reduction for low surface brightness features and the existence of prominent stellar haloes for massive galaxies, and could also arise from conceptual shortcomings of parametric 2D image decomposition tools. They consequently propagate into the galaxy color profiles. | astrophysics
Spell check is a useful application which processes noisy human-generated text. Spell check for Chinese poses unresolved problems due to the large number of characters, the sparse distribution of errors, and the dearth of resources with sufficient coverage of heterogeneous and shifting error domains. For Chinese spell check, filtering using confusion sets narrows the search space and makes finding corrections easier. However, most, if not all, confusion sets used to date are fixed and thus do not include new, shifting error domains. We propose a scalable adaptable filter that exploits hierarchical character embeddings to (1) obviate the need to handcraft confusion sets, and (2) resolve sparsity problems related to infrequent errors. Our approach compares favorably with competitive baselines and obtains SOTA results on the 2014 and 2015 Chinese Spelling Check Bake-off datasets. | computer science |
The Twin Higgs mechanism keeps the scalar sector of the Standard Model (SM) natural while remaining consistent with the non-observation of new colored particles at the Large Hadron Collider (LHC). In this construction the heavy twin Higgs boson provides a portal between the SM particles and the twin sector, but is quite challenging to discover at colliders. In the Fraternal Twin Higgs setup, where light twin quarks are absent, we study a novel discovery channel for the heavy twin Higgs boson by considering its decay to a pair of light Higgs bosons, one of which subsequently decays to glueball states in the twin sector, leading to displaced vertex signatures. We estimate the sensitivity of existing LHC searches in this channel, and assess the discovery potential of the high luminosity (HL) LHC. We show that the glueballs probed by these searches are outside the sensitivity of existing searches for exotic decays of the light Higgs boson. In addition, we demonstrate that the displaced signals we consider probe a region of heavy Higgs masses beyond the reach of prompt signals. We also comment on the possibility of probing the input parameters of the microscopic physics and providing a way to test the Twin Higgs mechanism with this channel. | high energy physics phenomenology |
As the threats of small drones increase, not only the detection but also the classification of small drones has become important. Many recent studies have applied an approach that utilizes the micro-Doppler signature (MDS) for small drone classification using frequency modulated continuous wave (FMCW) radars. In this letter, we propose a novel method to extract the MDS images of small drones with an FMCW radar. Moreover, we propose a light convolutional neural network (CNN) whose structure is straightforward and whose number of parameters is quite small, enabling fast classification. The proposed method contributes to increasing the classification accuracy by improving the quality of MDS images. We classified the small drones with the MDS images extracted by the conventional method and the proposed method through the proposed CNN. The experimental results showed that the total classification accuracy was increased by 10.00 % due to the proposed method. The total classification accuracy was recorded at 97.14 % with the proposed MDS extraction method and the proposed light CNN. | electrical engineering and systems science
The Foldy-Lax equation is generalized for a medium which consists of particles with both electric and magnetic responses. The result is used to compute fields scattered from ensembles of particles. The computational complexity is reduced by hierarchical clustering techniques to enable simulations with on the order of 10^10 particles. With so many particles we are able to see the transition to bulk media behavior of the fields. For non-magnetic materials, the observable index, permittivity, and permeability of the effective bulk medium are in good agreement with the Clausius-Mossotti relation. The fields simulated for particles with both electric and magnetic responses are in good agreement with new analytical results for a generalized effective medium theory. | physics |
The analyticity properties of the scattering amplitude for a massive scalar field are reviewed in this article where the spacetime geometry is $R^{3,1}\otimes S^1$, i.e. one spatial dimension is compact. Khuri investigated the analyticity of the scattering amplitude in a nonrelativistic potential model in three dimensions with an additional compact dimension. He showed that, under certain circumstances, the forward amplitude is nonanalytic. He argued that if such a behaviour persists in high energy scattering, it would be in conflict with the established results of quantum field theory, and LHC might observe such behaviors. We envisage a real scalar massive field in flat Minkowski spacetime in five dimensions. The Kaluza-Klein (KK) compactification is implemented on a circle. The resulting four dimensional manifold is $R^{3,1}\otimes S^1$. The LSZ formalism is adopted to study the analyticity of the scattering amplitude. The nonforward dispersion relation is proved. In addition, the Jin-Martin bound and an analog of the Froissart-Martin bound are proved. A novel proposal is presented to look for evidence of the large-radius-compactification scenario. A seeming violation of the Froissart-Martin bound at LHC energy might hint that an extra dimension is decompactified. However, we find no evidence for violation of the bound in our analysis. | high energy physics theory
In this article, we present two different approaches for obtaining quantitative inequalities in the context of parabolic optimal control problems. Our model consists of a linearly controlled heat equation with Dirichlet boundary condition $(u_f)_t-\Delta u_f=f$, $f$ being the control. We seek to maximise the functional $\mathcal J_T(f):=\frac12\int_{(0;T)\times \Omega} u_f^2$ or, for some $\epsilon>0$, $\mathcal J_T^\epsilon (f):=\frac12\int_{(0;T)\times \Omega} u_f^2+\epsilon \int_\Omega u_f^2(T,\cdot)$ and to obtain quantitative estimates for these maximisation problems. We offer two approaches in the case where the domain $\Omega$ is a ball. In that case, if $f$ satisfies $L^1$ and $L^\infty$ constraints and does not depend on time, we propose a shape derivative approach that shows that, for any competitor $f=f(x)$ satisfying the same constraints, we have $\mathcal J_T(f^*)-\mathcal J_T(f)\gtrsim \Vert f-f^*\Vert_{L^1(\Omega)}^2$, $f^*$ being the maximiser. Through our proof of this time-independent case, we also show how to obtain coercivity norms for shape hessians in such parabolic optimisation problems. We also consider the case where $f=f(t,x)$ satisfies a global $L^\infty$ constraint and, for every $t\in (0;T)$, an $L^1$ constraint. In this case, assuming $\epsilon>0$, we prove an estimate of the form $\mathcal J_T^\epsilon (f^*)-\mathcal J_T^\epsilon (f)\gtrsim\int_0^T a_\epsilon (t) \Vert f(t,\cdot)-f^*(t,\cdot)\Vert_{L^1(\Omega)}^2$ where $a_\epsilon (t)>0$ for any $t\in (0;T)$. The proof of this result relies on a uniform bathtub principle. | mathematics |
We describe the creation of a new Atomic and Molecular Physics science gateway (AMPGateway). The gateway is designed to bring together a subset of the AMP community to work collectively to make their codes available and easier to use by the partners as well as others. By necessity, a project such as this requires the developers to work on issues of portability, documentation, ease of input, as well as making sure the codes can run on a variety of architectures. Here we outline our efforts to build this AMP gateway and future directions. | physics |
Truncating the low-lying modes of the lattice Dirac operator results in an emergence of the chiral-spin symmetry $SU(2)_{CS}$ and its flavor extension $SU(2N_F)$ in hadrons. These are symmetries of the quark-chromo-electric interaction and include chiral symmetries as subgroups. Hence the quark-chromo-magnetic interaction, which breaks both symmetries, is located at least predominantly in the near-zero modes. Using as a tool the expansion of propagators into eigenmodes of the Dirac operator, we here analytically study effects of a gap in the eigenmode spectrum on baryon correlators. We find that both $U(1)_A$ and $SU(2)_L \times SU(2)_R$ emerge automatically if there is a gap around zero. Emergence of the larger $SU(2)_{CS}$ and $SU(4)$ symmetries requires in addition a microscopical dynamical input about the higher-lying modes and their symmetry structure. | high energy physics phenomenology
We demonstrate that the problem of finding stable or metastable vacua in a low energy effective field theory requires solving nested NP-hard and co-NP-hard problems, while the problem of finding near-vacua is in P. Multiple problems relevant for computing effective potential contributions from string theory are shown to be instances of NP-hard problems. If P $\neq$ NP, the hardness of finding string vacua is exponential in the number of scalar fields. Cosmological implications, including for rolling solutions, are discussed in light of a recently proposed measure. | high energy physics theory
Conventionally, a habitable planet is one that can support liquid water on its surface. Habitability depends on temperature, which is set by insolation and the greenhouse effect, due mainly to CO2 and water vapor. The CO2 level is increased by volcanic outgassing, and decreased by continental and seafloor weathering. Here, I examine the climate evolution of Earth-like planets using a globally averaged climate model that includes both weathering types. Climate is sensitive to the relative contributions of continental and seafloor weathering, even when the total weathering rate is fixed. Climate also depends strongly on the dependence of seafloor weathering on CO2 partial pressure. Both these factors are uncertain. Earth-like planets have two equilibrium climate states: (i) an ice-free state where outgassing is balanced by both weathering types, and (ii) an ice-covered state where outgassing is balanced by seafloor weathering alone. The second of these has not been explored in detail before. For some planets, neither state exists, and the climate cycles between ice-covered and ice-free states. For some other planets, both equilibria exist, and the climate depends on the initial conditions. Insolation increases over time due to stellar evolution, so a planet usually encounters the ice-covered equilibrium first. Such a planet will remain ice-covered, even if the ice-free state appears subsequently, unless the climate receives a large perturbation. The ice-covered equilibrium state covers a large fraction of phase space for Earth-like planets. Many planets conventionally assigned to a star's habitable zone may be rendered uninhabitable as a result. | astrophysics |