text (stringlengths 11-9.77k) | label (stringlengths 2-104) |
---|---|
Dirichlet process mixture models (DPMMs) play a central role in Bayesian nonparametrics, with applications throughout statistics and machine learning. DPMMs are generally used in clustering problems where the number of clusters is not known in advance, and the posterior distribution is treated as providing inference for this number. Recently, however, it has been shown that the DPMM is inconsistent in inferring the true number of components in certain cases. This is an asymptotic result, and it would be desirable to understand whether it holds with finite samples, and to characterize the full posterior more completely. In this work, we provide a rigorous study of the posterior distribution of the number of clusters in DPMMs under different prior distributions on the parameters and constraints on the distributions of the data. We provide novel lower bounds on the ratios of probabilities between $s+1$ clusters and $s$ clusters when the prior distributions on parameters are chosen to be Gaussian or uniform distributions.
|
statistics
|
This paper 1) analyzes the extent to which drivers engage in multitasking additional-to-driving (MAD) under various conditions, 2) specifies odds ratios (ORs) of crashing associated with MAD compared to no task engagement, and 3) explores the structure of MAD, based on data from the Second Strategic Highway Research Program Naturalistic Driving Study (SHRP2 NDS). Sensitivity analysis in which secondary tasks were re-defined by grouping similar tasks was performed to investigate the extent to which ORs are affected by the specific task definitions in SHRP2. A novel visual representation of multitasking was developed to show which secondary tasks co-occur frequently and which ones do not. MAD occurs in 11% of control driving segments, 22% of crashes and near-crashes (CNC), 26% of Level 1-3 crashes and 39% of rear-end striking crashes, and 9%, 16%, 17% and 28% respectively for the same event types if MAD is defined in terms of general task groups. The most common co-occurrences of secondary tasks vary substantially among event types; for example, 'Passenger in adjacent seat - interaction' and 'Other non-specific internal eye glance' tend to co-occur in CNC but tend not to co-occur in control driving segments. The odds ratios of MAD compared to driving without any secondary task and the corresponding 95% confidence intervals are 2.38 (2.17-2.61) for CNC, 3.72 (3.11-4.45) for Level 1-3 crashes and 8.48 (5.11-14.07) for rear-end striking crashes. The corresponding ORs using general task groups to define MAD are slightly lower at 2.00 (1.80-2.21) for CNC, 3.03 (2.48-3.69) for Level 1-3 crashes and 6.94 (4.04-11.94) for rear-end striking crashes. The results confirm that independently of whether secondary tasks are defined according to SHRP2 or general task groups, the reduction of driving performance from MAD observed in simulator studies is manifested in real-world crashes as well.
|
statistics
|
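The odds ratios and 95% confidence intervals quoted above follow the standard 2x2-table construction. A minimal Python sketch with hypothetical placeholder counts (not the SHRP2 data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
         a = events with MAD,    b = events without MAD
         c = controls with MAD,  d = controls without MAD
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts, for illustration only
print(odds_ratio_ci(a=220, b=780, c=110, d=890))
```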
We present extensive multiconfiguration Dirac-Hartree-Fock and relativistic configuration interaction calculations including 106 states in doubly ionized silicon (Si III) and 45 states in triply ionized silicon (Si IV), which are important for astrophysical determination of plasma properties in different objects. These calculations represent an important extension and improvement of earlier calculations, especially for Si IV. The calculations are in good agreement with available experiments for excitation energies, transition properties, and lifetimes. Important deviations from the NIST database for a selection of perturbed Rydberg series are discussed in detail.
|
physics
|
Anomaly detection is critically important for intelligent surveillance systems to detect malicious activities in a timely manner. Many video anomaly detection approaches using deep learning methods focus on a single camera video stream with a fixed scenario. These deep learning methods require large-scale training data and have high model complexity. As a solution, in this paper, we show how to use pre-trained convolutional neural net models to perform feature extraction and context mining, and then use a denoising autoencoder with relatively low model complexity to provide efficient and accurate surveillance anomaly detection, which can be useful for resource-constrained devices such as edge devices of the Internet of Things (IoT). Our anomaly detection model makes decisions based on the high-level features derived from the selected embedded computer vision models such as object classification and object detection. Additionally, we derive contextual properties from the high-level features to further improve the performance of our video anomaly detection method. We use two UCSD datasets to demonstrate that our approach with relatively low model complexity can achieve performance comparable to the state-of-the-art approaches.
|
computer science
|
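The abstract above describes a pipeline of pre-trained CNN feature extraction followed by a low-complexity denoising autoencoder. A minimal PyTorch sketch of that kind of pipeline, assuming feature vectors have already been extracted by a pre-trained backbone; the dimensions, noise level and threshold are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

FEAT_DIM = 512  # assumed size of features from a pre-trained CNN backbone

class DenoisingAE(nn.Module):
    def __init__(self, feat_dim=FEAT_DIM, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, normal_feats, epochs=20, noise_std=0.1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        noisy = normal_feats + noise_std * torch.randn_like(normal_feats)
        recon = model(noisy)                 # denoise the corrupted features
        loss = loss_fn(recon, normal_feats)  # reconstruct the clean input
        opt.zero_grad(); loss.backward(); opt.step()

def anomaly_scores(model, feats):
    with torch.no_grad():
        return ((model(feats) - feats) ** 2).mean(dim=1)  # per-frame error

# Random stand-ins for CNN features of normal and test frames
normal = torch.randn(1000, FEAT_DIM)
test = torch.randn(200, FEAT_DIM)
model = DenoisingAE()
train(model, normal)
flags = anomaly_scores(model, test) > anomaly_scores(model, normal).quantile(0.99)
```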
Improving the efficiency and accuracy of energy calculations has been of significant and continued interest in the area of materials informatics, a field that applies machine learning techniques to computational materials data. Here, we present a heuristic quantum-classical algorithm to efficiently model and predict the energies of substitutionally disordered binary crystalline materials. Specifically, a quantum circuit that scales linearly in the number of lattice sites is designed and trained to predict the energies of quantum chemical simulations in an exponentially-scaling feature space. This circuit is trained by classical supervised learning using data obtained from classically-computed quantum chemical simulations. As a part of the training process, we introduce a sub-routine that is able to detect and rectify anomalies in the input data. The algorithm is demonstrated on the complex layer-structured Li-cobaltate system, a widely-used Li-ion battery cathode material component. Our results show that the proposed quantum circuit model presents a suitable choice for modelling the energies obtained from such quantum mechanical systems. Furthermore, analysis of the anomalous data provides important insights into the thermodynamic properties of the systems studied.
|
quantum physics
|
Monitoring of hybrid systems attracts both scientific and practical attention. However, monitoring algorithms suffer from the methodological difficulty of only observing sampled discrete-time signals, while real behaviors are continuous-time signals. To mitigate this problem of sampling uncertainties, we introduce a model-bounded monitoring scheme, where we use prior knowledge about the target system to prune interpolation candidates. Technically, we express such prior knowledge by linear hybrid automata (LHAs) - the LHAs are called bounding models. We introduce a novel notion of monitored language of LHAs, and we reduce the monitoring problem to the membership problem of the monitored language. We present two partial algorithms - one is via reduction to reachability in LHAs and the other is a direct one using polyhedra - and show that these methods, and thus the proposed model-bounded monitoring scheme, are efficient and practically relevant.
|
electrical engineering and systems science
|
We extend previous work on quantum stress tensor operators which have been averaged over finite time intervals to include averaging over finite regions of space as well. The space and time averaging can be viewed as describing a measurement process for a stress tensor component, such as the energy density of a quantized field in its vacuum state. Although spatial averaging reduces the probability of large vacuum fluctuations compared to time averaging alone, we find that the probability distribution decreases more slowly than exponentially as the magnitude of the measured energy density increases. This implies that vacuum fluctuations can sometimes dominate over thermal fluctuations and potentially have observable effects.
|
high energy physics theory
|
This article studies the interregional Greek road network (GRN) by applying complex network analysis (CNA) and an empirical approach. The study aims to extract the socioeconomic information immanent to the GRN's topology and to interpret the way in which this road network serves and promotes regional development. The analysis shows that the topology of the GRN is subject to spatial constraints, having lattice-like characteristics. Also, the GRN's structure is described by a gravity pattern, where places of higher population enjoy greater functionality, and its interpretation in regional terms illustrates the elementary pattern expressed by regional development through road construction. The study also reveals some interesting contrasts between the metropolitan and non-metropolitan (excluding Attica and Thessaloniki) networks. Overall, the article highlights the effectiveness of using complex network analysis in the modeling of spatial networks, and in particular of transportation systems, and promotes the use of the network paradigm in spatial and regional research.
|
physics
|
An electron-muon collider with an asymmetric collision profile targeting multi-ab$^{-1}$ integrated luminosity is proposed. This novel collider, operating at collision energies of e.g. 20-200 GeV, 50-1000 GeV and 100-3000 GeV, would be able to probe charged lepton flavor violation and measure Higgs boson properties precisely. The collision of an electron and muon beam leads to less physics background compared with either an electron-electron or a muon-muon collider, since electron-muon interactions proceed mostly through higher order vector boson fusion and vector boson scattering processes. The asymmetric collision profile results in collision products that are boosted towards the electron beam side, which can be exploited to reduce beam-induced background from the muon beam to a large extent. With this in mind, one can imagine a lepton collider complex, starting by colliding order 10 GeV electron and muon beams for the first time in history to probe charged lepton flavor violation, then upgraded to a collider with 50-100 GeV electron and 1-3 TeV muon beams to measure Higgs properties and search for new physics, and finally transformed into a TeV scale muon-muon collider. The cost should vary from order 100 million to a few billion dollars, corresponding to the different stages, which makes the funding situation more practical.
|
high energy physics phenomenology
|
Semidefinite programming (SDP) relaxations have been intensively used for solving discrete quadratic optimization problems, in particular in the binary case. For the general non-convex integer case with box constraints, the branch-and-bound algorithm Q-MIST has been proposed by Buchheim and Wiegele (Math Program 141(1--2):435--452, 2013), which is based on an extension of the well-known SDP-relaxation for max-cut. For solving the resulting SDPs, Q-MIST uses an off-the-shelf interior point algorithm. In this paper, we present a tailored coordinate ascent algorithm for solving the dual problems of these SDPs. Building on related ideas of Dong (SIAM J Optim 26(3):1962--1985, 2016), it exploits the particular structure of the SDPs, most importantly a small rank of the constraint matrices. The latter allows both an exact line search and a fast incremental update of the inverse matrices involved, so that the entire algorithm can be implemented to run in quadratic time per iteration. Moreover, we describe how to extend this approach to a certain two-dimensional coordinate update. Finally, we explain how to include arbitrary linear constraints into this framework, and evaluate our algorithm experimentally.
|
mathematics
|
I studied the non-equilibrium response of an initial N\'{e}el state under time evolution with the Kitaev honeycomb model. With isotropic interactions ($J_x = J_y = J_z$) the system quickly loses its antiferromagnetic order and crosses over into a steady state valence bond solid, which can be inferred from the long-range dimer correlations. There is no signature of a dynamical phase transition. Upon including anisotropy ($J_x = J_y \neq J_z$), an exponentially long prethermal regime appears with persistent magnetization oscillations whose period derives from an effective toric code.
|
condensed matter
|
Efficient Latin hypercube designs (LHDs), including maximin distance LHDs, maximum projection LHDs and orthogonal LHDs, are widely used in computer experiments. It is challenging to construct such designs with flexible sizes, especially for large ones. In the current literature, various algebraic methods and search algorithms have been proposed for identifying efficient LHDs, each having its own pros and cons. In this paper, we review, summarize and compare some currently popular methods, aiming to provide guidance for experimenters on which method should be used in practice. Using the R package we developed, which integrates and improves various algebraic and search methods, many of the designs found in this paper are better than the existing ones. They are easy to use for practitioners and can serve as benchmarks for future developments on LHDs.
|
statistics
|
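The abstract above compares construction methods without spelling them out; as a point of reference, the sketch below implements a naive random-search baseline for the maximin-distance criterion in numpy. The run size, factor count and trial budget are arbitrary, and the paper's R package and algebraic constructions are not reproduced here:

```python
import numpy as np

def random_lhd(n, k, rng):
    """An n-run, k-factor Latin hypercube: each column is a random permutation of 1..n."""
    return np.column_stack([rng.permutation(n) + 1 for _ in range(k)])

def min_pairwise_dist(design):
    d = design[:, None, :] - design[None, :, :]
    dist = np.sqrt((d ** 2).sum(-1))
    return dist[np.triu_indices(len(design), k=1)].min()

def maximin_lhd(n, k, trials=2000, seed=0):
    """Keep the best of many random LHDs under the maximin-distance criterion."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(trials):
        cand = random_lhd(n, k, rng)
        score = min_pairwise_dist(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

design, score = maximin_lhd(n=10, k=3)
print(design, score)
```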
Inspired by the methodology used for classical cryptographic hardware, we consider the use of attack ratings in the context of QKD security evaluation. To illustrate the relevance of this approach, we conduct an experimental vulnerability assessment of CV-QKD against saturation attacks, for two different attack strategies. The first strategy relies on inducing detector saturation by performing a large coherent displacement. This strategy is experimentally challenging and therefore translates into a high attack rating. We also propose and experimentally demonstrate a second attack strategy that simply consists in saturating the detector with an external laser. The low rating we obtain indicates that this attack constitutes a primary threat for practical CV-QKD systems. These results highlight the benefits of combining theoretical security considerations with vulnerability analysis based on attack ratings, in order to guide the design and engineering of practical QKD systems towards the highest possible security standards.
|
quantum physics
|
The Fast Tracker (FTK) is an ATLAS trigger upgrade built for full event, low-latency, high-rate tracking. The FTK core, made of 9U VME boards, performs the most demanding computational task. The Associative Memory Board Serial Link Processor (AMB) and the Auxiliary card (AUX), plugged on the front and back sides of the same VME slot, constitute the Processing Unit (PU), which finds tracks using hits from 8 layers of the inner detector. The PU works in pipeline with the Second Stage Board (SSB), which finds 12-layer tracks by adding extra hits to the identified tracks. In the designed configuration, 16 PUs and 4 SSBs are installed in a VME crate. The high power-consumption of the AMB, AUX and SSB (respectively of about 250 W, 70 W and 160 W per board) required the development of a custom cooling system. Even though the expected power consumption for each VME crate of the FTK system is high compared to a common VME setup, the 8 FTK core crates will use $\approx$ 60 kW, which is just a fraction of the power and the space needed for a CPU farm performing the same task. We report on the integration of 32 PUs and 8 SSBs inside the FTK system, on the infrastructures needed to run and cool them, and on the tests performed to verify the system processing rate and the temperature stability at a safe value.
|
physics
|
Phishing is a well-known cybersecurity attack that has rapidly increased in recent years. It poses legitimate risks to businesses, government agencies, and all users due to sensitive data breaches, subsequent financial and productivity losses, and social and personal inconvenience. Often, these attacks use social engineering techniques to deceive end-users, indicating the importance of user-focused studies to help prevent future attacks. We provide a detailed overview of phishing research that has focused on users by conducting a systematic literature review of peer-reviewed academic papers published in ACM Digital Library. Although published work on phishing appears in this data set as early as 2004, we found that of the total number of papers on phishing (N = 367) only 13.9% (n = 51) focus on users by employing user study methodologies such as interviews, surveys, and in-lab studies. Even within this small subset of papers, we note a striking lack of attention to reporting important information about methods and participants (e.g., the number and nature of participants), along with crucial recruitment biases in some of the research.
|
computer science
|
We study the complexity of approximating the Wasserstein barycenter of $m$ discrete measures, or histograms of size $n$, by contrasting two alternative approaches, both using entropic regularization. The first approach is based on the Iterative Bregman Projections (IBP) algorithm, for which our novel analysis gives a complexity bound proportional to $\frac{mn^2}{\varepsilon^2}$ to approximate the original non-regularized barycenter. Using an alternative accelerated-gradient-descent-based approach, we obtain a complexity proportional to $\frac{mn^{2.5}}{\varepsilon}$. As a byproduct, we show that the regularization parameter in both approaches has to be proportional to $\varepsilon$, which causes instability of both algorithms when the desired accuracy is high. To overcome this issue, we propose a novel proximal-IBP algorithm, which can be seen as a proximal gradient method that uses IBP at each iteration to make a proximal step. We also consider the question of scalability of these algorithms using approaches from distributed optimization and show that the first algorithm can be implemented in a centralized distributed setting (master/slave), while the second one is amenable to a more general decentralized distributed setting with an arbitrary network topology.
|
mathematics
|
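A minimal numpy sketch of the plain (non-accelerated, non-proximal) Iterative Bregman Projections iteration for the entropically regularized barycenter of $m$ histograms on a shared grid; the cost matrix, regularization strength and iteration count below are illustrative assumptions rather than the paper's tuned choices:

```python
import numpy as np

def ibp_barycenter(P, C, gamma=0.05, iters=500, weights=None):
    """P: (n, m) array of m histograms (columns sum to 1); C: (n, n) cost matrix."""
    n, m = P.shape
    w = np.full(m, 1.0 / m) if weights is None else weights
    K = np.exp(-C / gamma)             # Gibbs kernel of the regularized transport
    V = np.ones((n, m))
    for _ in range(iters):
        U = P / (K @ V)                # project onto the marginal constraints
        logq = (np.log(K.T @ U) * w).sum(axis=1)
        q = np.exp(logq)               # weighted geometric mean = barycenter iterate
        V = q[:, None] / (K.T @ U)
    return q

# Toy example: barycenter of two histograms on a 1-D grid
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
p1 = np.exp(-(x - 0.25) ** 2 / 0.01); p1 /= p1.sum()
p2 = np.exp(-(x - 0.75) ** 2 / 0.01); p2 /= p2.sum()
print(ibp_barycenter(np.column_stack([p1, p2]), C).round(3))
```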
We present here the computation of electrical and thermal conductivity by solving the Boltzmann transport equation in the relaxation time approximation. We use the $q$-generalized Boltzmann distribution function to incorporate the effects of non-extensivity. The behaviour of these quantities with changing temperature and baryochemical potential has been studied as the system slowly moves towards thermodynamic equilibrium. We have estimated the Lorenz number at NICA, FAIR and the top RHIC energies and studied it as a function of temperature, baryochemical potential and the non-extensive parameter, $q$. We have observed that the Wiedemann-Franz law is violated for a non-extensive hadronic phase as well as for an equilibrated hadron gas at high temperatures.
|
high energy physics phenomenology
|
We revisit thermal Majorana dark matter from the viewpoint of minimal effective field theory. In this framework, analytic results for dark matter annihilation into standard model particles are derived. The dark matter parameter space subject to the latest LUX, PandaX-II and Xenon-1T limits is presented in a model-independent way. Applications to the singlet-doublet model and the MSSM are presented.
|
high energy physics phenomenology
|
Almost 30 years have passed since the successful detection of supernova neutrinos from SN 1987A. In the last decades, remarkable progress has been made in neutrino detection techniques, through which it may be possible to detect neutrinos from a new source: pre-supernova (pre-SN) neutrinos. They are emitted from a massive star prior to core bounce. Because neutrinos escape from the core freely, they carry information about the stellar physics directly. Pre-SN neutrinos may play an important role in verifying our understanding of stellar evolution for massive stars. Observations of pre-SN neutrinos, moreover, may serve as an alarm a few days in advance of a supernova explosion if the progenitor is located in our vicinity, enabling us to observe the next galactic supernova. In this review, we summarize the current status of pre-SN neutrino studies from both theoretical and observational points of view.
|
astrophysics
|
In this paper we consider Iwahori Whittaker functions on $n$-fold metaplectic covers $\widetilde{G}$ of $\mathbf{G}(F)$ with $\mathbf{G}$ a split reductive group over a non-archimedean local field $F$. For every element $\phi$ of a basis of Iwahori Whittaker functions, and for every $g\in\widetilde{G}$, we evaluate $\phi(g)$ by recurrence relations over the Weyl group using "vector Demazure-Whittaker operators." Specializing to the case of $\mathbf{G} = \mathbf{GL}_r$, we exhibit a solvable lattice model whose partition function equals $\phi(g)$. These models are of a new type associated with the quantum affine super group $U_q(\widehat{\mathfrak{gl}}(r|n))$. The recurrence relations on the representation theory side then correspond to solutions to Yang-Baxter equations for the lattice models. Remarkably, there is a bijection between the boundary data specifying the partition function and the data determining all values of the Whittaker functions.
|
mathematics
|
This paper investigates best-worst choice probabilities (picking the best and the worst alternative from an offered set). It is shown that non-negativity of best-worst Block-Marschak polynomials is necessary and sufficient for the existence of a random utility representation. The representation theorem is obtained by extending proof techniques employed by Falmagne (1978) for a corresponding result on best choices (picking the best alternative from an offered set).
|
mathematics
|
This paper explores the road to vastly improving the broadband connectivity in future 6G wireless systems. Different categories of use cases are considered, with peak data rates up to 1 Tbps. Several categories of enablers at the infrastructure, spectrum, and protocol/algorithmic levels are required to realize the intended broadband connectivity goals in 6G. At the infrastructure level, we consider ultra-massive MIMO technology (possibly implemented using holographic radio), intelligent reflecting surfaces, user-centric cell-free networking, integrated access and backhaul, and integrated space and terrestrial networks. At the spectrum level, the network must seamlessly utilize sub-6 GHz bands for coverage and spatial multiplexing of many devices, while higher bands will be mainly used for pushing the peak rates of point-to-point links. Finally, at the protocol/algorithmic level, the enablers include improved coding, modulation, and waveforms to achieve lower latency, higher reliability, and reduced complexity.
|
electrical engineering and systems science
|
We investigate systematically the quark-hadron mixed phase in dense stellar matter, and its influence on compact star structures. The properties of quark matter and hadronic matter are fixed based on various model predictions. Besides adopting constant values, the surface tension $\Sigma$ for the quark-hadron interface is estimated with the multiple reflection expansion method and the equivparticle model. To fix the structures of quark-hadron pasta phases, a continuous dimensionality of the structure is adopted as proposed by Ravenhall, Pethick, and Wilson. The corresponding properties of hybrid stars are then obtained and confronted with pulsar observations. It is found that the correlation between radius and tidal deformability in traditional neutron stars is preserved in hybrid stars. For those permitted by pulsar observations, in almost all cases the quark phase persists inside the most massive compact stars. The quark-hadron interface plays an important role in hybrid star structures once quark matter emerges. The surface tension $\Sigma$ estimated with various methods increases with density, which predicts stiffer EOSs for the quark-hadron mixed phase and increases the maximum mass of hybrid stars. The EOSs of hybrid star matter are well constrained at densities $n\lesssim 0.8$ fm${}^{-3}$, while larger uncertainty is expected at higher densities.
|
high energy physics phenomenology
|
Applying probability-related knowledge to accurately explore and exploit the inherent uncertainty of wind power output is one of the key issues that needs to be solved urgently in the development of the smart grid. This letter develops an analytical probabilistic expression for modeling the sum of spatially dependent wind farm power outputs by introducing the unit impulse function, copulas, and a Gaussian mixture model. A comparative Monte Carlo sampling study is given to illustrate the validity of the proposed model.
|
electrical engineering and systems science
|
A class of Labeled Random Finite Set filters known as the delta-Generalized Labeled Multi-Bernoulli (dGLMB) filter represents the filtering density as a set of weighted hypotheses, with each hypothesis consisting of a set of labeled tracks, which are in turn pairs of a track label and a track (kinematic) density. Upon update with a batch of measurements, each hypothesis gives rise to many child hypotheses, and therefore for any practical application, truncation has to be performed and compute budget has to be utilized efficiently. We have adopted a factored filtering density through the use of a novel Merge/Split algorithm: When some factors become coupled through new measurements that gate with them, they are merged into one factor by forming "product hypotheses." The merged factor can subsequently be split into two factors, one gating with the measurements while the other not, if the "joint probability reconstruction error" is within a given tolerance and therefore independence between the two factors can be considered to hold true. A key to the algorithm is the exploitation of "diminishing influence" of old measurements on the current state, so that a kinematic density is indexed by a sequence of most recently incorporated measurement IDs. With such indexing, the problem is discretized, and factorization of the dGLMB density is carried out through marginalization that "combines terms" to have a reduction in the total number of hypotheses. Factors that have become "empty" are deleted. Thus, the Merge/Split algorithm adaptively creates and maintains significant factors within a compute budget.
|
electrical engineering and systems science
|
Tensor networks have emerged as promising tools for machine learning, inspired by their widespread use as variational ansatzes in quantum many-body physics. It is well known that the success of a given tensor network ansatz depends in part on how well it can reproduce the underlying entanglement structure of the target state, with different network designs favoring different scaling patterns. We demonstrate here how a related correlation analysis can be applied to tensor network machine learning, and explore whether classical data possess correlation scaling patterns similar to those found in quantum states. We utilize mutual information as a natural analog to entanglement for classical data, and show that it can serve as a lower-bound on the network entanglement needed for probabilistic classification. We then develop a logistic regression algorithm to estimate the mutual information between bipartitions of data features, and verify its accuracy on a set of Gaussian distributions designed to mimic different correlation patterns. Using this algorithm, we characterize the scaling patterns in the MNIST and Tiny Images datasets, and find clear evidence of boundary-law scaling in the latter.
|
quantum physics
|
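The abstract's logistic-regression estimator of mutual information is not fully specified there; the sketch below illustrates one standard classifier-based construction under the density-ratio interpretation: train a classifier to separate joint samples from samples with one feature block shuffled, then average the resulting log-odds over joint samples. The interaction feature and the synthetic Gaussian data are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(XA, XB):
    # Include pairwise products so a linear classifier can detect correlations
    return np.hstack([XA, XB, XA * XB])

def mi_estimate(XA, XB, rng):
    """Estimate I(A;B) in nats by classifying joint vs. product-of-marginals samples."""
    n = len(XA)
    XB_shuf = XB[rng.permutation(n)]              # break the A-B dependence
    X = np.vstack([features(XA, XB),              # label 1: joint samples
                   features(XA, XB_shuf)])        # label 0: independent samples
    y = np.r_[np.ones(n), np.zeros(n)]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # With equal class sizes, the log-odds on joint samples estimate
    # log dP_joint / d(P_A x P_B); their average is a mutual-information estimate.
    return clf.decision_function(features(XA, XB)).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(5000, 1))
XA = z + 0.5 * rng.normal(size=(5000, 1))         # two correlated feature blocks
XB = z + 0.5 * rng.normal(size=(5000, 1))
print(mi_estimate(XA, XB, rng))                   # positive for dependent blocks
```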
Energetic particle effects in magnetic confinement fusion devices are commonly studied by hybrid kinetic-fluid simulation codes whose underlying continuum evolution equations often lack the correct energy balance. While two different kinetic-fluid coupling options are available (current-coupling and pressure-coupling), this paper applies the Euler-Poincar\'e variational approach to formulate a new conservative hybrid model in the pressure-coupling scheme. In our case the kinetics of the energetic particles are described by guiding center theory. The interplay between the Lagrangian fluid paths with phase space particle trajectories reflects an intricate variational structure which can be approached by letting the 4-dimensional guiding center trajectories evolve in the full 6-dimensional phase space. Then, the redundant perpendicular velocity is integrated out to recover a four-dimensional description. A second equivalent variational approach is also reported, which involves the use of phase space Lagrangians. Not only do these variational structures confer on the new model a correct energy balance, but also they produce a cross-helicity invariant which is lost in the other pressure-coupling schemes reported in the literature.
|
physics
|
Inspired by the topological sign-flip definition of the Amplituhedron, we introduce similar, but distinct, positive geometries relevant for one-loop scattering amplitudes in planar $\mathcal{N}=4$ super Yang-Mills theory. The simplest geometries are those with the maximal number of sign flips, and turn out to be associated with chiral octagons previously studied in the context of infrared (IR) finite, pure and dual conformal invariant local integrals. Our result bridges two different themes of the modern amplitudes program: positive geometry and Feynman integrals.
|
high energy physics theory
|
To extract the maximum information about the object from a series of binary samples in ghost imaging applications, we propose and demonstrate a framework for optimizing the performance of ghost imaging with binary sampling to approach the results obtained without binarization. The method is based on maximizing the information content of the signal arm detection, by formulating and solving the appropriate parameter estimation problem - finding the binarization threshold that would yield the reconstructed image with optimal Fisher information properties. Applying the 1-bit quantized Poisson statistics to a ghost-imaging model with pseudo-thermal light, we derive the fundamental limit, i.e., the Cramer-Rao lower bound, as the benchmark for the evaluation of the accuracy of the estimator. Our theoretical model and experimental results suggest that, with the optimal binarization threshold, coincident with the statistical mean of all bucket samples, and a large number of measurements, the performance of binary sampling GI can approach that of the ordinary one without binarization.
|
physics
|
We revisit the interplay between superconductivity and quantum criticality when thermal effects from virtual static bosons are included. These contributions, which arise from an effective theory compactified on the thermal circle, strongly affect field-theoretic predictions even at small temperatures. We argue that they are ubiquitous in a wide variety of models of non-Fermi liquid behavior, and generically produce a parametric suppression of superconducting instabilities. We apply these ideas to non-Fermi liquids in $d=2$ space dimensions, obtained by coupling a Fermi surface to a Landau-damped soft boson. Extending previous methods developed for $d=3-\epsilon$ dimensions, we determine the dynamics and phase diagram. It features a naked quantum critical point, separated by a continuous infinite order transition from a superconducting phase with strong non-Fermi liquid corrections. We also highlight the relevance of these effects for (numerical) experiments on non-Fermi liquids.
|
condensed matter
|
In this paper we present cosmological solutions of Double Field Theory in the supergravity frame and in the winding frame which are related via T-duality. In particular, we show that the solutions can be viewed without the need of complexifying the cosmological scale factor.
|
high energy physics theory
|
A low energy linear seesaw mechanism, responsible for the generation of the tiny active neutrino masses, is implemented in the extended 3-3-1 model with two scalar triplets and right handed Majorana neutrinos, where the gauge symmetry is supplemented by the $A_4$ flavor discrete group and other auxiliary cyclic symmetries, whose spontaneous breaking produces the observed pattern of SM charged fermion masses and fermionic mixing parameters. Our model is consistent with the low energy SM fermion flavor data. Some phenomenological aspects such as the $Z^\prime$ production at a proton-proton collider and the lepton flavor violating decay of the SM-like Higgs boson are discussed. The scalar potential of the model is analyzed in detail and the SM-like Higgs boson is identified.
|
high energy physics phenomenology
|
We prove two principal results. Firstly, we characterise Maass forms in terms of functional equations for Dirichlet series twisted by primitive characters. The key point is that the twists are allowed to be meromorphic. This weakened analytic assumption applies in the context of our second theorem, which shows that the quotient of the symmetric square L-function of a Maass newform and the Riemann zeta function has infinitely many poles.
|
mathematics
|
I first discuss the main motivations for Tony Skyrme's highly original program (1958-62) of making fermionic nucleons out of bosonic pion fields, as described in his Cosener's House talk in 1984. These include a dislike of point-like elementary particles, which he blamed for infinite renormalization, and a preference for extended objects distinguished by what we would now call topological quantum numbers. In this he was strongly influenced by William Thomson (Lord Kelvin), who was so impressed by Helmholtz's proof of the conservation of circulation ("Wirbelbewegung") in fluid vortices that he developed an entire theory of atoms as knotted vortices in the ether fluid. Skyrme liked mechanical models, as did Kelvin, and he grew up fascinated by the ingenuity of Kelvin's machine for predicting tides, an example of which stood in his grandfather's house. This seems to have been connected to his strong preference for bosonic fields, which have a classical limit, over fermionic fields which do not. I then sketch the progress of Skyrme's ideas in the series of six papers in the years 1958-62, which passed largely unnoticed at the time. I emphasize his remarkable intuition that the kink solution of the classical Sine-Gordon equation would be a fermion when quantized, and the novelty of his identification of the Skyrmion winding number with baryon number in the three-dimensional case. I end by briefly describing how Skyrme's work was dramatically related to QCD in 1983-4.
|
physics
|
Unobserved confounding presents a major threat to the validity of causal inference from observational studies. In this paper, we introduce a novel framework that leverages the information in multiple parallel outcomes for identification and estimation of causal effects. Under a conditional independence structure among multiple parallel outcomes, we achieve nonparametric identification with at least three parallel outcomes. We further show that under a set of linear structural equation models, causal inference is possible with two parallel outcomes. We develop accompanying estimating procedures and evaluate their finite sample performance through simulation studies and a data application studying the causal effect of the tau protein level on various types of behavioral deficits.
|
statistics
|
In this paper, we propose a more refined video-segment-based Mobile Edge Computing (MEC) enhanced cache update strategy, which takes into account the client's playback status and transmission state, the MEC cache capacity and the popularity of each segment, to improve the quality of experience (QoE) of clients and the utilization of the MEC cache. In each cache update period, the segments which cannot bring significant QoE improvement will be deleted and, meanwhile, more suitable representations of the segments will be cached instead. First, we divide the MEC cache space into three parts based on segment popularity and segment importance level. In addition, the size of different cache parts can be transformed to each other to further exploit the MEC cache space. For different parts, the corresponding deletion strategy and caching strategy are formulated, where the deletion strategy combines the requested times of a segment and the transmission capability of clients, and the caching strategy utilizes the playback state of clients to calculate the caching priority. Furthermore, we formulate the cache update problem to maximize the utility function of clients subject to the constraints of MEC cache size and transmission capacity. The branch and bound method is employed to obtain the optimal solution. Simulation results show that our proposed algorithm can improve system throughput and the hit ratio of video segments, and at the same time decrease playback frozen time and system backhaul traffic compared with other existing algorithms.
|
electrical engineering and systems science
|
Algorithms and other formal models that incorporate human values like fairness have grown increasingly popular in computer science. In response to sociotechnical challenges in the use of these models, designers and researchers have taken widely divergent positions on the use of formal models of human values: encouraging their use, moving away from them, or ignoring the normative consequences altogether. In this paper, we seek to resolve these divergent positions by identifying the main conceptual limits of formal modeling, and develop four reflexive values (value fidelity, accuracy, value legibility, and value contestation) vital for incorporating human values into formal models. We then provide a methodology for reflexively designing formal models incorporating human values without ignoring their societal implications.
|
computer science
|
Autonomous vehicles require fleet-wide data collection for continuous algorithm development and validation. The Smart Black Box (SBB) intelligent event data recorder has been proposed as a system for prioritized high-bandwidth data capture. This paper extends the SBB by applying anomaly detection and action detection methods for generalized event-of-interest (EOI) detection. An updated SBB pipeline is proposed for the real-time capture of driving video data. A video dataset is constructed to evaluate the SBB on real-world data for the first time. SBB performance is assessed by comparing the compression of normal and anomalous data and by comparing our prioritized data recording with a FIFO strategy. Results show that SBB data compression can increase the anomalous-to-normal memory ratio by ~25%, while the prioritized recording strategy increases the anomalous-to-normal count ratio when compared to a FIFO strategy. We compare the real-world dataset SBB results to a baseline SBB given ground-truth anomaly labels and conclude that improved general EOI detection methods will greatly improve SBB performance.
|
electrical engineering and systems science
|
We consider mixed three-point correlation functions of the supercurrent and flavour current in three-dimensional $1 \leq \mathcal{N} \leq 4$ superconformal field theories. Our method is based on the decomposition of the relevant tensors into irreducible components to guarantee that all possible tensor structures are systematically taken into account. We show that only parity even structures appear in the correlation functions. In addition to the previous results obtained in arXiv:1503.04961, it follows that supersymmetry forbids parity odd structures in three-point functions involving the supercurrent and flavour current multiplets.
|
high energy physics theory
|
We extend the analysis, initiated in arXiv:1901.01269, of the thermodynamic stability of four-dimensional asymptotically flat hairy black holes by considering a general class of exact solutions in Einstein-Maxwell-dilaton theory with a non-trivial dilaton potential. We find that, regardless of the values of the parameters of the theory, there always exists a sub-class of hairy black holes that are thermodynamically stable and have a well-defined extremal limit. This generic feature that makes the equilibrium configurations locally stable should be related to the properties of the dilaton potential, which decays towards spatial infinity but behaves as a box close to the horizon. We prove that these thermodynamically stable solutions are also dynamically stable under spherically symmetric perturbations.
|
high energy physics theory
|
Radio detection of air showers in the current era has progressed immensely to effectively extract the properties of these air showers. Primary cosmic rays with energies of hundreds of PeV have been successfully measured with the method of radio detection. There are also attempts to observe high-energy neutrinos with this technique. Current radio experiments measuring cosmic-ray air showers mostly operate in the frequency range of 30-80 MHz. An optimization of the frequency band of operation can be done for maximizing the signal-to-noise ratio that can be achieved by an array of radio antennas at the South Pole, operated along with IceTop. Such an array can improve the reconstruction of air showers performed with IceTop. The prospect of using such an optimized radio array for measuring gamma rays of PeV energies from the Galactic Center is discussed.
|
astrophysics
|
The domain of allowed von Neumann entropies of a holographic field theory carves out a polyhedral cone -- the holographic entropy cone -- in entropy space. Such polyhedral cones are characterized by their extreme rays. For an arbitrary number of parties, it is known that the so-called perfect tensors are extreme rays. In this work, we constrain the form of the remaining extreme rays by showing that they correspond to geometries with vanishing mutual information between any two parties, ensuring the absence of Bell pair type entanglement between them. This is tantamount to proving that besides subadditivity, all non-redundant holographic entropy inequalities are superbalanced, i.e. not only do UV divergences cancel in the inequality itself (assuming smooth entangling surfaces), but also in the purification thereof.
|
high energy physics theory
|
The regularity of pulsar emissions becomes apparent once we reference the pulses' times of arrivals to the inertial rest frame of the solar system. It follows that errors in the determination of Earth's position with respect to the solar-system barycenter can appear as a time-correlated bias in pulsar-timing residual time series, affecting the searches for low-frequency gravitational waves performed with pulsar timing arrays. Indeed, recent array datasets yield different gravitational-wave background upper limits and detection statistics when analyzed with different solar-system ephemerides. Crucially, the ephemerides do not generally provide usable error representations. In this article we describe the motivation, construction, and application of a physical model of solar-system ephemeris uncertainties, which focuses on the degrees of freedom (Jupiter's orbital elements) most relevant to gravitational-wave searches with pulsar timing arrays. This model, BayesEphem, was used to derive ephemeris-robust results in NANOGrav's 11-yr stochastic-background search, and it provides a foundation for future searches by NANOGrav and other consortia. The analysis and simulations reported here suggest that ephemeris modeling reduces the gravitational-wave sensitivity of the 11-yr dataset; and that this degeneracy will vanish with improved ephemerides and with the longer pulsar timing datasets that will become available in the near future.
|
astrophysics
|
We study the quantum dissipative Duffing oscillator across a range of system sizes and environmental couplings under varying semiclassical approximations. Using spatial (based on Kullback-Leibler distances between phase-space attractors) and temporal (Lyapunov exponent-based) complexity metrics, we isolate the effect of the environment on quantum-classical differences. Moreover, we quantify the system sizes where quantum dynamics cannot be simulated using semiclassical or noise-added classical approximations. Remarkably, we find that a parametrically invariant meta-attractor emerges at a specific length scale and noise-added classical models deviate strongly from quantum dynamics below this scale. Our findings also generalize the previous surprising result that classically regular orbits can have the greatest quantum-classical differences in the semiclassical regime. In particular, we show that the dynamical growth of quantum-classical differences is not determined by the degree of classical chaos.
|
quantum physics
|
In this paper we present a method to learn word embeddings that are resilient to misspellings. Existing word embeddings have limited applicability to malformed texts, which contain a non-negligible amount of out-of-vocabulary words. We propose a method combining FastText with subwords and a supervised task of learning misspelling patterns. In our method, misspellings of each word are embedded close to their correct variants. We train these embeddings on a new dataset we are releasing publicly. Finally, we experimentally show the advantages of this approach on both intrinsic and extrinsic NLP tasks using public test sets.
|
computer science
|
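The sketch below shows plain subword-based FastText from gensim, which already places misspellings near their correct variants via shared character n-grams; the paper's additional supervised misspelling-pattern task and its released dataset are not reproduced here, and the toy corpus is an assumption:

```python
from gensim.models import FastText

# Tiny toy corpus; a real setup would also use supervised misspelling pairs
sentences = [["the", "language", "model", "reads", "text"],
             ["the", "langauge", "model", "reads", "txet"],   # misspelled variants
             ["embeddings", "for", "misspelled", "words"]] * 200

# Character n-grams (min_n..max_n) let misspellings share subword vectors
model = FastText(sentences, vector_size=50, window=3, min_count=1,
                 min_n=3, max_n=6, epochs=10)

# Misspellings (even out-of-vocabulary ones) get vectors from their n-grams,
# so they land close to the correctly spelled word
print(model.wv.similarity("language", "langauge"))
print(model.wv.similarity("language", "embeddings"))
```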
Data of $^{12}$CO/$^{13}$CO/C$^{18}$O $J=1\to0$ emission toward the Galactic plane region of $l=35^\circ$ to $45^\circ$ and $b= -5^\circ$ to $+5^\circ$ are available with the Milky Way Imaging Scroll Painting (MWISP) project. Using the data, we found a giant molecular filament (GMF) around $l\approx38\sim42^\circ$, $b\approx-3.5\sim0^\circ$, $V_{LSR} \approx 27 \sim 40$ km~s$^{-1}$, named the GMF MWISP G041-01. At a distance of 1.7 kpc, the GMF is about 160 pc long. With a median excitation temperature of about 7.5 K and a median column density of about $10^{21}$ cm$^{-2}$, this GMF is very cold and very diffuse compared to known GMFs. Using the morphology in the data cube, the GMF is divided into four components, among which three show filamentary structure. Masses of the components are $ 10^3 \sim 10^4 M_\odot$, with a total mass for the whole filament of about $7\times10^4 M_\odot$ from the LTE method. $^{13}$CO cores inside each component are searched for. Virial parameters are about 2.5 for these cores and have a power-law index of -0.34 against the mass. The mass fraction of dense cores traced by $^{13}$CO to the diffuse clouds traced by $^{12}$CO is about 7% for all components of the GMF. We found signatures of a possible large scale filament-filament collision in the GMF.
|
astrophysics
|
We explore the possibility that the Fast Radio Bursts (FRBs) are powered by magnetic reconnection in magnetars, triggered by Axion Quark Nugget (AQN) dark matter. In this model, the magnetic reconnection is ignited by the shock wave which develops when the nuggets' Mach number $M \gg 1$. These shock waves generate very strong and very short impulses expressed in terms of pressure $\Delta p/p\sim M^2$ and temperature $\Delta T/T\sim M^2$ in the vicinity of (would be) magnetic reconnection area. We find that the proposed mechanism produces a coherent emission which is consistent with current data, in particular the FRB energy requirements, the observed energy distribution, the frequency range and the burst duration. Our model allows us to propose additional tests which future data will be able to challenge.
|
astrophysics
|
DNA has the potential to realize a controllable liquid-liquid phase separation (LLPS) system, because the design of its base sequences results in programmable interactions. Here, we have developed a novel DNA-based LLPS system which enables us to create 'DNA droplets' and to control their dynamic behaviour by designing sequences of the DNA nanostructure. We were able to change the phase separation temperature required for the formation of DNA droplets by designing the sequences. In addition, the fusion, fission, and formation of Janus-shaped droplets were controlled by sequence design and enzymatic reactions. Furthermore, modifications of proteins with sequence-designed DNAs allowed for their capture into specific droplets. Overall, our results provide a new platform for designing the phase behaviour of macromolecular structures, and pave the way for new applications of sequence-designed DNA in the creation of cell-mimicries, synthetic membraneless organelles, and artificial molecular systems.
|
condensed matter
|
In spinor Bose-Einstein condensates, spin-changing collisions are a remarkable proxy to coherently realize macroscopic many-body quantum states. These processes have been, e.g., exploited to generate entanglement, to study dynamical quantum phase transitions, and proposed for realizing nematic phases in atomic condensates. In the same systems dressed by Raman beams, the coupling between spin and momentum induces a spin dependence in the scattering processes taking place in the gas. Here we show that, at weak couplings, such modulation of the collisions leads to an effective Hamiltonian which is equivalent to the one of an artificial spinor gas with spin-changing collisions that are tunable with the Raman intensity. By exploiting this dressed-basis description, we propose a robust protocol to coherently drive the spin-orbit coupled condensate into the ferromagnetic stripe phase via crossing an excited-state quantum phase transition of the effective low-energy model.
|
condensed matter
|
We propose model-plant mismatch learning offset-free model predictive control (MPC), which learns and applies the intrinsic model-plant mismatch, to effectively exploit the advantages of model-based and data-driven control strategies and overcome the limitations of each approach. In this study, the model-plant mismatch map on steady-state manifold in the controlled variable space is approximated via a general regression neural network from the steady-state data for each setpoint. Though the learned model-plant mismatch map can provide the information at the equilibrium point (i.e., setpoint), it cannot provide model-plant mismatch information during the transient state. Moreover, the intrinsic model-plant mismatch can vary due to system characteristics changes during operation. Therefore, we additionally apply a supplementary disturbance variable which is updated from the disturbance estimator based on the nominal offset-free MPC scheme. Then, the combined disturbance signal is applied to the target problem and finite-horizon optimal control problem of offset-free MPC to improve the prediction accuracy and closed-loop performance of the controller. By this, we can exploit both the learned model-plant mismatch information and the stabilizing property of the nominal disturbance estimator approach. The closed-loop simulation results demonstrate that the developed scheme can properly learn the intrinsic model-plant mismatch and efficiently improve the model-plant mismatch compensating performance in offset-free MPC. Moreover, we examine the robust asymptotic stability of the developed offset-free MPC scheme, which is known to be difficult to analyze in nominal offset-free MPC, by exploiting the learned model-plant mismatch information.
|
electrical engineering and systems science
|
Continuous-wave (cw) squeezed states of light have applications in sensing, metrology and secure communication. In recent decades their efficient generation has been based on parametric down-conversion, which requires pumping by externally generated pump light of twice the optical frequency. Currently, there is immense effort in miniaturizing squeezed-light sources for chip-integration. Designs that require just a single input wavelength are favored since they offer an easier realization. Here we report the first observation of cw squeezed states generated by self-phase modulation caused by subsequent up and down conversions. The wavelengths of input light and of balanced homodyne detection are identical, and 1550 nm in our case. At sideband frequencies around 1.075 GHz, a nonclassical noise reduction of (2.4 +/- 0.1) dB is observed. The setup uses a second-order nonlinear crystal, but no externally generated light of twice the frequency. Our experiment is not miniaturized, but might open a route towards simplified chip-integrated realizations.
|
quantum physics
|
The aim of this work is to discuss and explore some generalized aspects of the generation of photon mass respecting gauge symmetry. We introduce the generalized Stueckelberg and Higgs gauge theories and present the classical and quantum frameworks related to them. We construct the quantum theory by writing the transition amplitude in the Faddeev-Senjanovic formalism and put it in a covariant form by the Faddeev-Popov method. We also analyze the independence of the transition amplitude of the gauge parameters via BRST symmetry. Even in the generalized context, the Stueckelberg structure influences the quantization process of the Higgs theory in 't Hooft's fashion, in which an intimate relationship between the Stueckelberg compensating field and the Goldstone boson arises.
|
high energy physics theory
|
During World War II the German army used tanks to devastating advantage. The Allies needed accurate estimates of their tank production and deployment. They used two approaches to find these values: spies, and statistics. This note describes the statistical approach. Assuming the tanks are labeled consecutively starting at 1, if we observe $k$ serial numbers from an unknown number $N$ of tanks, with the maximum observed value $m$, then the best estimate for $N$ is $m(1 + 1/k) - 1$. This is now known as the German Tank Problem, and is a terrific example of the applicability of mathematics and statistics in the real world. The first part of the paper reproduces known results, specifically deriving this estimate and comparing its effectiveness to that of the spies. The second part presents a result we have not found in print elsewhere, the generalization to the case where the smallest value is not necessarily 1. We emphasize in detail why we are able to obtain such clean, closed-form expressions for the estimates, and conclude with an appendix highlighting how to use this problem to teach regression and how statistics can help us find functional relationships.
|
statistics
|
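A minimal simulation of the estimator described above, checking that $\hat{N} = m(1 + 1/k) - 1$ is close to unbiased; the population size, sample size and trial count are arbitrary illustrative choices:

```python
import random

def tank_estimate(serials):
    """German Tank estimator: m(1 + 1/k) - 1, with m the sample maximum."""
    m, k = max(serials), len(serials)
    return m * (1 + 1 / k) - 1

def simulate(N=1000, k=15, trials=20000, seed=1):
    rng = random.Random(seed)
    est = [tank_estimate(rng.sample(range(1, N + 1), k)) for _ in range(trials)]
    return sum(est) / trials

print(simulate())   # average estimate should be close to the true N = 1000
```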
We study two Bayesian (Reference Intrinsic and Jeffreys prior) and two frequentist (MLE and PWM) approaches to calibrating the Pareto and related distributions. Three of these approaches are compared in a simulation study, and all four are used to investigate how much equity risk capital banks subject to Basel II banking regulations must hold. The Reference Intrinsic approach, which is invariant under one-to-one transformations of the data and parameter, performs better when fitting a generalised Pareto distribution to data simulated from a Pareto distribution and is competitive in the case study on equity capital requirements.
|
statistics
|
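As a small illustration of the frequentist side of the comparison above, the sketch below fits a generalised Pareto distribution by maximum likelihood with scipy and reads off a tail quantile; this is a generic MLE fit on simulated stand-in data, not the paper's Reference Intrinsic, Jeffreys or PWM procedures:

```python
from scipy import stats

# Simulate Pareto-type exceedances (shape xi = 0.5) as a stand-in for loss data
data = stats.genpareto.rvs(c=0.5, scale=1.0, size=2000, random_state=0)

# MLE fit of the generalised Pareto distribution (location fixed at 0)
xi_hat, loc_hat, scale_hat = stats.genpareto.fit(data, floc=0)
print(f"xi = {xi_hat:.3f}, scale = {scale_hat:.3f}")

# A tail-based, risk-capital-style quantity: the 99.9% quantile of the fitted tail
print("q_99.9 =", stats.genpareto.ppf(0.999, c=xi_hat, loc=loc_hat, scale=scale_hat))
```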
Physical systems characterized by stick-slip dynamics often display avalanches. Regardless of the diversity of their microscopic structure, these systems are governed by a power-law distribution of avalanche size and duration. Here we focus on the interevent times between avalanches and show that, unlike their distributions of size and duration, the interevent time distributions are able to distinguish different mechanical states of the system, characterized by different volume fractions or confining pressures. We use experiments on granular systems and numerical simulations of emulsions to show that systems having the same probability distribution for avalanche size and duration can have different interevent time distributions. Remarkably, for large packing ratios, these interevent time distributions look similar to those for earthquakes and are indirect evidence of large space-time correlations in the system. Our results therefore indicate that interevent time statistics are more informative to characterize the dynamics of avalanches.
|
condensed matter
|
We propose a consistent approach to the definition of electric, magnetic, and toroidal multipole moments. Electric and magnetic fields are split into potential, vortex, and radiative terms, with the latter ones dropped off in the quasistatic approximation. The potential part of the electric field, the vortex parts of the magnetic field and vector potential contain gradients of scalar functions. Formally introducing magnetic and toroidal analogs of the electric charge, we apply multipole expansions for those scalars. Closed-form expressions are derived in an arbitrary order for electric, magnetic, and toroidal multipoles, which constitute a full system for expansions of the electromagnetic field.
|
physics
|
High-Tc cuprate, A3C60 and other unconventional superconductors (SCs) have very high Tc with respect to the energy scale of superconducting (SC) charges inferred from the superfluid density (SFD). The observed linear dependence of Tc on the SFD can hardly be expected in BCS SCs while being reminiscent of Bose Einstein Condensation of pre-formed pairs. As additional non-BCS like behaviors, responses similar to those in the bulk SC states have been observed at temperatures (T) well above Tc in the vortex-like Nernst effect, diamagnetic susceptibility, and transient optical conductivity in photo-excitation (PEX) measurements. Here we propose a picture based on equilibrium and transient SFD to understand these unconventional behaviors. This picture assumes: (1) Dynamic SC responses in the Nernst and PEX measurements emerge at the formation of the local phase coherence (LPC) among wave functions of pre-formed pairs. (2) Its onset T (TLPC) is distinct from and lower than the pair-formation pseudo-gap T (Tstr). (3) The bulk SC Tc is reduced from TLPC, due to the competition between the SC and antiferromagnetic (AF) order. (4) The magnetic resonance mode (MRM) controls Tc in the SC-AF competition. (5) The transient optical responses can be attributed to a change of the balance between the competing SC and AF orders caused by PEX. The assumptions (1) and (2) explain the relationship between Tc and the transient SFD in PEX studies and equilibrium SFD in Nernst effect. (3) and (4) are inferred from the linear dependence of Tc on the MRM energy. (4) and (5) are consistent with the behaviors of the 400 cm-1 optical mode in equilibrium and PEX studies, whose intensity represents the MRM intensity. Unlike previous phase-fluctuation pictures which expect dynamic responses between Tstr and Tc, the present picture involving competing order indicates that dynamic SC responses are seen between TLPC and Tc.
|
condensed matter
|
Breast cancer is the most common cancer and is the leading cause of cancer death among women worldwide. Detection of breast cancer, while it is still small and confined to the breast, provides the best chance of effective treatment. Computer Aided Detection (CAD) systems that detect cancer from mammograms will help in reducing the human errors that lead to missing breast carcinoma. The literature is rich with scientific papers on methods of CAD design, yet with no complete system architecture to deploy those methods. On the other hand, commercial CADs are developed and deployed only on vendors' mammography machines with no public access. This paper presents a complete CAD; it is complete since it combines, on one hand, the rigor of algorithm design and assessment (method), and, on the other hand, the implementation and deployment of a system architecture for public accessibility (system). (1) We develop a novel algorithm for image enhancement so that mammograms acquired from any digital mammography machine look qualitatively of the same clarity to radiologists' inspection and are quantitatively standardized for the detection algorithms. (2) We develop novel algorithms for mass and microcalcification detection with accuracy superior to both literature results and the majority of approved commercial systems. (3) We design, implement, and deploy a system architecture that is computationally effective to allow for deploying these algorithms to the cloud for public access.
|
electrical engineering and systems science
|
Let ${\mathfrak M}=({\mathcal M},\rho)$ be a metric space and let $X$ be a Banach space. Let $F$ be a set-valued mapping from ${\mathcal M}$ into the family ${\mathcal K}_m(X)$ of all compact convex subsets of $X$ of dimension at most $m$. The main result in our recent joint paper with Charles Fefferman (which is referred to as a "Finiteness Principle for Lipschitz selections") provides efficient conditions for the existence of a Lipschitz selection of $F$, i.e., a Lipschitz mapping $f:{\mathcal M}\to X$ such that $f(x)\in F(x)$ for every $x\in{\mathcal M}$. We give new alternative proofs of this result in two special cases. When $m=2$ we prove it for $X={\bf R}^{2}$, and when $m=1$ we prove it for all choices of $X$. Both of these proofs make use of a simple reiteration formula for the "core" of a set-valued mapping $F$, i.e., for a mapping $G:{\mathcal M}\to{\mathcal K}_m(X)$ which is Lipschitz with respect to the Hausdorff distance, and such that $G(x)\subset F(x)$ for all $x\in{\mathcal M}$. We also present several constructive criteria for the existence of Lipschitz selections of set-valued mappings from ${\mathcal M}$ into the family of all closed half-planes in ${\bf R}^{2}$.
|
mathematics
|
Since the discovery of cosmic acceleration, modified gravity theories have played an important role in modern cosmology. In particular, the well-known F(R) theories reached great popularity, motivated by their easier formalism and by the prospect of finding a final theory of gravity for the dark scenarios. In the present work, we study some generalizations of F(R) and F(T) gravity theories. At the beginning, we briefly review the formalism of such theories. Then, we consider one of their generalizations, the so-called F(R,T) theory. The point-like Lagrangian is explicitly presented. Based on this Lagrangian, the field equations of F(R,T) gravity are found. For the specific model $F(R,T)=\mu R+\nu T,$ the corresponding exact solutions are derived. Furthermore, we consider the physical quantities associated with such solutions and find how, for some values of the parameters, the expansion of our universe can be accelerated without introducing any dark component.
|
physics
|
The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay -- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.
|
physics
|
The sparse canonical correlation analysis (SCCA) is a bi-multivariate association model that finds sparse linear combinations of two sets of variables that are maximally correlated with each other. In addition to the standard SCCA model, a simplified SCCA criterion, which maximizes the cross-covariance between a pair of canonical variables instead of their cross-correlation, is widely used in the literature due to its computational simplicity. However, the behaviors/properties of the solutions of these two models remain unknown in theory. In this paper, we analyze the grouping effect of the standard and simplified SCCA models in variable selection. In high-dimensional settings, the variables often form groups with high within-group correlation and low between-group correlation. Our theoretical analysis shows that for grouped variable selection, the simplified SCCA jointly selects or deselects a group of variables together, while the standard SCCA randomly selects a few dominant variables from each relevant group of correlated variables. Empirical results on synthetic data and real imaging genetics data verify the finding of our theoretical analysis.
|
statistics
|
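As a rough illustration of the simplified criterion discussed above (maximizing cross-covariance under sparsity constraints), here is a minimal alternating soft-thresholding sketch in the spirit of penalized matrix decomposition. It is a generic sketch under assumed penalties and normalizations, not the authors' algorithm or analysis.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def simplified_scca(X, Y, lam_u=0.1, lam_v=0.1, n_iter=100, seed=0):
    """Alternating updates for sparse u, v maximizing cov(Xu, Yv) (simplified SCCA).
    X and Y are assumed to have column-centered variables."""
    rng = np.random.default_rng(seed)
    C = X.T @ Y / X.shape[0]              # cross-covariance matrix
    v = rng.standard_normal(Y.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = soft_threshold(C @ v, lam_u)
        u /= np.linalg.norm(u) + 1e-12
        v = soft_threshold(C.T @ u, lam_v)
        v /= np.linalg.norm(v) + 1e-12
    return u, v
```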
A detailed alias frequency formula and the effect of alias sampling on the calculation of the MHD mode number are derived. It is discovered that the absolute MHD mode number/structure does not change under alias sampling. This discovery can help us determine the structure of high-frequency MHD modes with low-frequency-sampled diagnostics even when the Nyquist-Shannon sampling theorem is no longer valid.
|
physics
|
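The paper derives a detailed alias frequency formula; for context, the standard folding relation that aliasing obeys for a mode at true frequency $f$ sampled at rate $f_s$ is (our notation, not necessarily the paper's):

```latex
f_{\mathrm{alias}} \;=\; \left| f - n\, f_s \right|, \qquad
n = \operatorname{round}\!\left(\frac{f}{f_s}\right), \qquad
0 \le f_{\mathrm{alias}} \le \frac{f_s}{2}.
```

The abstract's key point is that the absolute mode number/structure is unchanged under this folding, which is what allows slowly sampled diagnostics to identify high-frequency modes.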
Despite recent advances in computer vision based on various convolutional architectures, video understanding remains an important challenge. In this work, we present and discuss a top solution for the large-scale video classification (labeling) problem introduced as a Kaggle competition based on the YouTube-8M dataset. We show and compare different approaches to preprocessing, data augmentation, model architectures, and model combination. Our final model is based on a large ensemble of video- and frame-level models but fits into rather limiting hardware constraints. We apply an approach based on knowledge distillation to deal with noisy labels in the original dataset and the recently developed mixup technique to improve the basic models.
|
computer science
|
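The mixup technique mentioned above is a published augmentation that forms convex combinations of pairs of inputs and their labels with a Beta-distributed mixing weight. A minimal sketch follows; the α value and array shapes are assumptions for illustration.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """mixup: convex combinations of pairs of examples and their one-hot labels."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient
    perm = rng.permutation(len(x))        # random pairing within the batch
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y + (1 - lam) * y[perm]
    return x_mixed, y_mixed
```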
In this paper, we explore a dark sector scenario with a gauged $SU(2)_R$ and a global $U(1)_X \times \mathbb{Z}_2$, where the continuous symmetries are spontaneously broken to a global $U(1)_D$. We show that in various regions of the parameter space we can have two, or three dark matter candidates, where these dark matter particles are either a Dirac fermion, a dark gauge boson, or a complex scalar. The phenomenological implications of this scenario are vast and interesting. We identify the parameter space that is still viable after taking into account the constraints from various experiments. We, also, discuss how this scenario can explain the recent observation by DAMPE in the electron-positron spectrum. Furthermore, we comment on the neutrino mass generation through non-renormalizable interactions between the standard model and the dark sector.
|
high energy physics phenomenology
|
Detecting aligned 3D keypoints is essential under many scenarios such as object tracking, shape retrieval and robotics. However, it is generally hard to prepare a high-quality dataset for all types of objects due to the ambiguity of keypoint itself. Meanwhile, current unsupervised detectors are unable to generate aligned keypoints with good coverage. In this paper, we propose an unsupervised aligned keypoint detector, Skeleton Merger, which utilizes skeletons to reconstruct objects. It is based on an Autoencoder architecture. The encoder proposes keypoints and predicts activation strengths of edges between keypoints. The decoder performs uniform sampling on the skeleton and refines it into small point clouds with pointwise offsets. Then the activation strengths are applied and the sub-clouds are merged. Composite Chamfer Distance (CCD) is proposed as a distance between the input point cloud and the reconstruction composed of sub-clouds masked by activation strengths. We demonstrate that Skeleton Merger is capable of detecting semantically-rich salient keypoints with good alignment, and shows comparable performance to supervised methods on the KeypointNet dataset. It is also shown that the detector is robust to noise and subsampling. Our code is available at https://github.com/eliphatfs/SkeletonMerger.
|
computer science
|
We study chiral rings of 4d $\mathcal{N}=1$ supersymmetric gauge theories via the notion of K-stability. We show that when using Hilbert series to perform the computations of Futaki invariants, it is not enough to only include the test symmetry information in the former's denominator. We discuss a way to modify the numerator so that K-stability can be correctly determined, and a rescaling method is also applied to simplify the calculations involving test configurations. All of these are illustrated with a host of examples, by considering vacuum moduli spaces of various theories. Using Gr\"obner basis and plethystic techniques, many non-complete intersections can also be addressed, thus expanding the list of known theories in the literature.
|
high energy physics theory
|
Recently, a new structure $Y(4626)$ was reported by the Belle Collaboration in the process $e^+e^-\to D_s^+D_{s1}(2536)^-$. In this work, we propose an assignment of the $Y(4626)$ as a ${D}^*_s\bar{D}_{s1}(2536)$ molecular state, which decays into the $D_s^+D_{s1}(2536)^-$ channel through a coupling between the ${D}^*_s\bar{D}_{s1}(2536)$ and ${D}_s\bar{D}_{s1}(2536)$ channels. With the help of heavy quark symmetry, the potential of the interaction ${D}^*_s\bar{D}_{s1}(2536)-{D}_s\bar{D}_{s1}(2536)$ is constructed within the one-boson-exchange model and inserted into the quasipotential Bethe-Salpeter equation. The pole of the obtained scattering amplitude is searched for in the complex plane, which corresponds to a molecular state from the interaction ${D}^*_s\bar{D}_{s1}(2536)-{D}_s\bar{D}_{s1}(2536)$. The results suggest that a pole is produced near the ${D}^*_s\bar{D}_{s1}(2536)$ threshold, which exhibits as a peak in the invariant mass spectrum of the ${D}_s\bar{D}_{s1}(2536)$ channel at about 4626 MeV. It obviously favors the $Y(4626)$ as a ${D}^*_s\bar{D}_{s1}(2536)$ molecular state. In the same model, other molecular states from the interaction ${D}^*_s\bar{D}_{s1}(2536)-{D}_s\bar{D}_{s1}(2536)$ are also predicted, which can be checked in future experiments.
|
high energy physics phenomenology
|
We report the results from a new, highly sensitive ($\Delta T_{mb} \sim 3 $mK) survey for thermal OH emission at 1665 and 1667 MHz over a dense, 9 x 9-pixel grid covering a $1\deg$ x $1\deg$ patch of sky in the direction of $l = 105\deg, b = +2.50\deg$ towards the Perseus spiral arm of our Galaxy. We compare our Green Bank Telescope (GBT) 1667 MHz OH results with archival CO J=1-0 observations from the Five College Radio Astronomy Observatory (FCRAO) Outer Galaxy Survey within the velocity range of the Perseus Arm at these galactic coordinates. Out of the 81 statistically-independent pointings in our survey area, 86% show detectable OH emission at 1667 MHz, and 19% of them show detectable CO emission. We explore the possible physical conditions of the observed features using a set of diffuse molecular cloud models. In the context of these models, both OH and CO disappear at current sensitivity limits below an A$_{\rm v}$ of 0.2, but the CO emission does not appear until the volume density exceeds 100-200 cm$^{-3}$. These results demonstrate that a combination of low column density A$_{\rm v}$ and low volume density $n_{H}$ can explain the lack of CO emission along sight lines exhibiting OH emission. The 18-cm OH main lines, with their low critical density of $n^{*}$ $ \sim 1 $ cm$^{-3}$, are collisionally excited over a large fraction of the quiescent galactic environment and, for observations of sufficient sensitivity, provide an optically-thin radio tracer for diffuse H$_2$.
|
astrophysics
|
We consider the problem of training a deep orthogonal linear network, which consists of a product of orthogonal matrices, with no non-linearity in-between. We show that training the weights with Riemannian gradient descent is equivalent to training the whole factorization by gradient descent. This means that there is no effect of overparametrization and implicit bias at all in this setting: training such a deep, overparametrized, network is perfectly equivalent to training a one-layer shallow network.
|
statistics
|
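For readers unfamiliar with the Riemannian side of the statement above, here is a minimal numpy sketch of one Riemannian gradient-descent step for a single orthogonal factor: project the Euclidean gradient onto the tangent space of the orthogonal group and retract with a QR decomposition. The step size, retraction choice, and the treatment of the full product of factors are assumptions, not the paper's construction.

```python
import numpy as np

def riemannian_gd_step(W, euclid_grad, lr=0.1):
    """One Riemannian gradient step on the orthogonal group O(n)."""
    A = W.T @ euclid_grad
    riem_grad = W @ (A - A.T) / 2             # project gradient onto tangent space at W
    Q, R = np.linalg.qr(W - lr * riem_grad)   # QR retraction back onto O(n)
    return Q * np.sign(np.diag(R))            # fix column signs for a canonical factor
```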
We show that if the image of a Legendrian submanifold under a contact homeomorphism (i.e. a homeomorphism that is a $C^0$-limit of contactomorphisms) is smooth then it is Legendrian, assuming only positive local lower bounds on the conformal factors of the approximating contactomorphisms. More generally the analogous result holds for coisotropic submanifolds in the sense of arXiv:1306.6367. This is a contact version of the Humili\`ere-Leclercq-Seyfaddini coisotropic rigidity theorem in $C^0$ symplectic geometry, and the proof adapts the author's recent re-proof of that result in arXiv:1912.13043 based on a notion of local rigidity of points on locally closed subsets. We also provide two different flavors of examples showing that a contact homeomorphism can map a submanifold that is transverse to the contact structure to one that is smooth and tangent to the contact structure at a point.
|
mathematics
|
There are several challenges for search and rescue robots: mobility, perception, autonomy, and communication. Inspired by the DARPA Subterranean (SubT) Challenge, we propose an autonomous blimp robot, which has the advantages of low power consumption and collision tolerance compared to other aerial vehicles like drones. This is important for search and rescue tasks that usually last for one or more hours. However, the constrained underground passages limit the size of the blimp envelope and its payload, making the proposed system resource-constrained. Therefore, careful design consideration is needed to build a blimp system with on-board artifact search and SLAM. In order to reach long-term operation, a failure-aware algorithm is needed that requires only minimal communication with a human supervisor, who maintains situational awareness and sends control signals to the blimp when needed.
|
computer science
|
A (positive definite and non-classic integral) quadratic form is called strongly $s$-regular if it satisfies a strong regularity property on the number of representations of squares of integers. In this article, we prove that for any integer $k \ge 2$, there are only finitely many isometry classes of strongly $s$-regular quadratic forms with rank $k$ if the minimum of the nonzero squares that are represented by them is fixed.
|
mathematics
|
We present an extension of the JHUGen and MELA framework, which includes an event generator and library for the matrix element analysis. It enables simulation, optimal discrimination, reweighting techniques, and analysis of a bosonic resonance and the triple and quartic gauge boson interactions with the most general anomalous couplings. The new features, which become especially relevant at the current stage of LHC data taking, are the simulation of gluon fusion and vector boson fusion in the off-shell region, associated $ZH$ production at NLO QCD including the $gg$ initial state, and the simulation of a second spin-zero resonance. We also quote translations of the anomalous coupling measurements into constraints on dimension-six operators of an effective field theory. Some of the new features are illustrated with projections for experimental measurements with the full LHC and HL-LHC datasets.
|
high energy physics phenomenology
|
Quantum fluctuations give rise to Casimir forces between two parallel conducting plates, the magnitude of which increases monotonically as the separation decreases. By introducing nanoscale gratings to the surfaces, recent advances have opened opportunities for controlling the Casimir force in complex geometries. Here, we measure the Casimir force between two rectangular gratings in regimes not accessible before. Using an on-chip detection platform, we achieve accurate alignment between the two gratings so that they interpenetrate as the separation is reduced. Just before interpenetration occurs, the measured Casimir force is found to have a geometry dependence that is much stronger than previous experiments, with deviations from the proximity force approximation reaching a factor of ~500. After the gratings interpenetrate each other, the Casimir force becomes non-zero and independent of displacement. This work shows that the presence of gratings can strongly modify the Casimir force to control the interaction between nanomechanical components.
|
quantum physics
|
An input-output model of a two-level quantum system in the Heisenberg picture is of bilinear form with constant system matrices, which allows the introduction of the concepts of controllability and observability in analogy with those of quantum linear systems. By means of the notions of controllability and observability, coordinate transformations, which are rotation matrices, can be constructed explicitly that transform an input-output model to a new one. The new input-output model enables us to investigate many interesting properties of the two-level quantum system, such as steady-state solutions to the Lindblad master equation, quantum decoherence-free (DF) subspaces, quantum non-demolition (QND) variables, and the realization of quantum back-action evading (BAE) measurements. The physical system in (Wang, J. \& Wiseman, H. M. (2001), Feedback-stabilization of an arbitrary pure state of a two-level atom, Physical Review A 64(6), 063810) is re-studied to illustrate the results presented in this paper.
|
quantum physics
|
We consider rational double point singularities (RDPs) that are non-taut, which means that the isomorphism class is not uniquely determined from the dual graph of the minimal resolution. Such RDPs exist in characteristic $2,3,5$. We compute the actions of Frobenius, and other inseparable morphisms, on $W_n$-valued local cohomology groups of RDPs. Then we consider RDP K3 surfaces admitting non-taut RDPs. We show that the height of the K3 surface, which is also defined in terms of the Frobenius action on $W_n$-valued cohomology groups, is related to the isomorphism class of the RDP.
|
mathematics
|
We introduce a new family of finite posets which we call 2-chains. These first arose in the study of 0-Hecke algebras, but they admit a variety of different characterisations. We give these characterisations, prove that they are equivalent and derive some numerical results concerning 2-chains.
|
mathematics
|
Mammography is the primary imaging modality used for early detection and diagnosis of breast cancer. X-ray mammogram analysis mainly refers to the localization of suspicious regions of interest followed by segmentation, towards further lesion classification into benign versus malignant. Among diverse types of breast abnormalities, masses are the most important clinical findings of breast carcinomas. However, manually segmenting breast masses from native mammograms is time-consuming and error-prone. Therefore, an integrated computer-aided diagnosis system is required to assist clinicians for automatic and precise breast mass delineation. In this work, we present a two-stage multi-scale pipeline that provides accurate mass contours from high-resolution full mammograms. First, we propose an extended deep detector integrating a multi-scale fusion strategy for automated mass localization. Second, a convolutional encoder-decoder network using nested and dense skip connections is employed to fine-delineate candidate masses. Unlike most previous studies based on segmentation from regions, our framework handles mass segmentation from native full mammograms without any user intervention. Trained on the INbreast and DDSM-CBIS public datasets, the pipeline achieves an overall average Dice of 80.44% on INbreast test images, outperforming the state of the art. Our system shows promising accuracy as an automatic full-image mass segmentation system. Extensive experiments reveal robustness against the diversity of size, shape and appearance of breast masses, towards better interaction-free computer-aided diagnosis.
|
electrical engineering and systems science
|
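The segmentation quality quoted above is an average Dice score; for reference, the metric for binary masks is simply twice the intersection over the sum of the mask sizes. This is the generic definition, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice overlap 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred_mask).astype(bool)
    gt = np.asarray(gt_mask).astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```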
Spin transport phenomena underpin an extensive range of spintronic effects. In particular spin transport across interfaces occurs in most device concepts, but is so far poorly understood. As interface properties strongly impact spin transport, one needs to characterize and correlate them to the fabrication method. Here we investigate pure spin current transport across interfaces and connect this with imaging of the interfaces. We study the detection of pure spin currents via the inverse spin Hall effect in Pt and the related spin current absorption by Pt in Py-Cu-Pt lateral spin valves. Depending on the fabrication process, we either find a large (inverse) spin Hall effect signal and low spin absorption by Pt or vice versa. We explain these counter-intuitive results by the fabrication induced varying quality of the Cu/Pt interfaces, which is directly revealed via a special scanning electron microscopy technique for interface imaging and correlated to the spin transport.
|
condensed matter
|
The investigation of two-level-state (TLS) loss in dielectric materials and interfaces remains at the forefront of materials research in superconducting quantum circuits. We demonstrate a method of TLS loss extraction of a thin film dielectric by measuring a lumped element resonator fabricated from a superconductor-dielectric-superconductor trilayer. We extract the dielectric loss by formulating a circuit model for a lumped element resonator with TLS loss and then fitting to this model using measurements from a set of three resonator designs: a coplanar waveguide resonator, a lumped element resonator with an interdigitated capacitor, and a lumped element resonator with a parallel plate capacitor that includes the dielectric thin film of interest. Unlike other methods, this allows accurate measurement of materials with TLS loss lower than $10^{-6}$. We demonstrate this method by extracting a TLS loss of $1.02 \times 10^{-3}$ for sputtered $\mathrm{Al_2O_3}$ using a set of samples fabricated from an $\mathrm{Al/Al_2O_3/Al}$ trilayer. We observe a difference of 11$\%$ between extracted loss of the trilayer with and without the implementation of this method.
|
quantum physics
|
At the heart of recent progress in AdS/CFT is the question of subregion duality, or entanglement wedge reconstruction: which part(s) of the boundary CFT are dual to a given subregion of the bulk? This question can be answered by appealing to the quantum error correcting properties of holography, and it was recently shown that robust bulk (entanglement wedge) reconstruction can be achieved using a universal recovery channel known as the twirled Petz map. In short, one can use the twirled Petz map to recover bulk data from a subset of the boundary. However, this map involves an averaging procedure over bulk and boundary modular time, and hence it can be somewhat intractable to evaluate in practice. We show that a much simpler channel, the Petz map, is sufficient for entanglement wedge reconstruction for any code space of fixed finite dimension - no twirling is required. Moreover, the error in the reconstruction will always be non-perturbatively small. From a quantum information perspective, we prove a general theorem extending the use of the Petz map as a general-purpose recovery channel to subsystem and operator algebra quantum error correction.
|
high energy physics theory
|
We study the domain walls in hot $4$-D $SU(N)$ super Yang-Mills theory and QCD(adj), with $n_f$ Weyl flavors. We find that the $k$-wall worldvolume theory is $2$-D QCD with gauge group $SU(N-k)\times SU(k) \times U(1)$ and Dirac fermions charged under $U(1)$ and transforming in the bi-fundamental representation of the nonabelian factors. We show that the DW theory has a $1$-form $\mathbb Z_{N}^{(1)}$ center symmetry and a $0$-form $\mathbb Z_{2Nn_f}^{d\chi}$ discrete chiral symmetry, with a mixed 't Hooft anomaly consistent with bulk/wall anomaly inflow. We argue that $\mathbb Z_{N}^{(1)}$ is broken on the wall, and hence, Wilson loops obey the perimeter law. The breaking of the worldvolume center symmetry implies that bulk $p$-strings can end on the wall, a phenomenon first discovered using string-theoretic constructions. We invoke $2$-D bosonization and gauged Wess-Zumino-Witten models to suggest that $\mathbb Z_{2Nn_f}^{d\chi}$ is also broken in the IR, which implies that the $0$-form/$1$-form mixed 't Hooft anomaly in the gapped $k$-wall theory is saturated by a topological quantum field theory. We also find interesting parallels between the physics of high-temperature domain walls studied here and domain walls between chiral symmetry breaking vacua in the zero temperature phase of the theory (studied earlier in the semiclassically calculable small spatial circle regime), arising from the similar mode of saturation of the relevant 't Hooft anomalies.
|
high energy physics theory
|
We study rational curves on general Fano hypersurfaces in projective space, mostly by degenerating the hypersurface along with its ambient projective space to reducible varieties. We prove results on existence of low-degree rational curves with balanced normal bundle, and reprove some results on irreducibility of spaces of rational curves of low degree.
|
mathematics
|
The cluster multipole (CMP) expansion for magnetic structures provides a scheme to systematically generate candidate magnetic structures specifically including noncollinear magnetic configurations adapted to the crystal symmetry of a given material. A comparison with the experimental data collected on MAGNDATA shows that the most stable magnetic configurations in nature are linear combinations of only few CMPs. Furthermore, a high-throughput calculation for all candidate magnetic structures is performed in the framework of spin-density functional theory (SDFT). We benchmark the predictive power of CMP+SDFT with $2935$ calculations, which show that (i) the CMP expansion administers an exhaustive list of candidate magnetic structures, (ii) CMP+SDFT can narrow down the possible magnetic configurations to a handful of computed configurations, and (iii) SDFT reproduces the experimental magnetic configurations with an accuracy of $\pm0.5\,\mu_\text{B}$. For a subset the impact of on-site Coulomb repulsion $U$ is investigated by means of $1545$ CMP+SDFT+U calculations revealing no further improvement on the predictive power.
|
condensed matter
|
Under-representation of certain populations, based on gender, race/ethnicity, and age, in data collection for predictive modeling may yield less-accurate predictions for the under-represented groups. Recently, this issue of fairness in predictions has attracted significant attention, as data-driven models are increasingly utilized to perform crucial decision-making tasks. Methods to achieve fairness in the machine learning literature typically build a single prediction model subject to some fairness criteria in a manner that encourages fair prediction performances for all groups. These approaches have two major limitations: i) fairness is often achieved by compromising accuracy for some groups; ii) the underlying relationship between dependent and independent variables may not be the same across groups. We propose a Joint Fairness Model (JFM) approach for binary outcomes that estimates group-specific classifiers using a joint modeling objective function that incorporates fairness criteria for prediction. We introduce an Accelerated Smoothing Proximal Gradient Algorithm to solve the convex objective function, and demonstrate the properties of the proposed JFM estimates. Next, we present the key asymptotic properties of the JFM parameter estimates. We examine the efficacy of the JFM approach in achieving prediction performance and parity, in comparison with the Single Fairness Model, the group-separate model, and the group-ignorant model, through extensive simulations. Finally, we demonstrate the utility of the JFM method in the motivating example to obtain fair risk predictions for under-represented older patients diagnosed with coronavirus disease 2019 (COVID-19).
|
statistics
|
The cosmological constant and the Higgs mass seem unnaturally small and anthropically selected. We show that both can be efficiently scanned in Quantum Field Theories with a large enough number of vacua controllable thanks to approximated $\mathbb{Z}_2$ symmetries (even for Coleman-Weinberg potentials). We find that vacuum decay in a landscape implies weaker bounds than previously estimated. Special vacua where one light scalar is accidentally light avoid catastrophic vacuum decay if its self-cubic is absent. This is what happens for the Higgs doublet, thanks to gauge invariance. Yukawa couplings can be efficiently scanned, as suggested by anthropic boundaries on light quark masses. Finally, we suggest that the lack of predictivity of landscapes can be mitigated if their probability distributions are non-Gaussian (possibly even fractal).
|
high energy physics theory
|
This study compares the performances of two sampling-based strategies for the simultaneous estimation of the first- and total-order variance-based sensitivity indices (a.k.a. Sobol' indices). The first strategy was introduced by [8] and is the current approach employed by practitioners. The second one was only recently introduced by the authors of the present article. They both rely on different estimators of first- and total-order Sobol' indices. The asymptotic normal variances of the two sets of estimators are established and their accuracies are compared theoretically and numerically. The results show that the new strategy outperforms the current one. Keywords: global sensitivity analysis, variance-based sensitivity indices, first-order Sobol' index, total-order Sobol' index, Monte Carlo estimate, asymptotic normality.
|
statistics
|
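For context, one widely used pair of pick-freeze Monte Carlo estimators for the first- and total-order indices (a Saltelli-type estimator for the first order and a Jansen-type estimator for the total order) is sketched below. The two strategies compared in the abstract differ in how such estimators are constructed and evaluated, so this is orientation only, not a reproduction of either strategy.

```python
import numpy as np

def sobol_first_and_total(f, d, n=100_000, seed=0):
    """Saltelli/Jansen-type Monte Carlo estimates of first- and total-order indices
    for a model f acting row-wise on an (n, d) array of inputs in [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    first, total = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                   # A with column i taken from B
        fABi = f(ABi)
        first[i] = np.mean(fB * (fABi - fA)) / var        # Saltelli-type estimator
        total[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen estimator
    return first, total

# Example with the Ishigami test function mapped from [0,1]^3 to [-pi, pi]^3.
def ishigami(X):
    Xs = -np.pi + 2 * np.pi * X
    return np.sin(Xs[:, 0]) + 7 * np.sin(Xs[:, 1]) ** 2 \
        + 0.1 * Xs[:, 2] ** 4 * np.sin(Xs[:, 0])

print(sobol_first_and_total(ishigami, d=3))
```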
People deploy top-down, goal-directed attention to accomplish tasks, such as finding lost keys. By tuning the visual system to relevant information sources, object recognition can become more efficient (a benefit) and more biased toward the target (a potential cost). Motivated by selective attention in categorisation models, we developed a goal-directed attention mechanism that can process naturalistic (photographic) stimuli. Our attention mechanism can be incorporated into any existing deep convolutional neural network (DCNNs). The processing stages in DCNNs have been related to ventral visual stream. In that light, our attentional mechanism incorporates top-down influences from prefrontal cortex (PFC) to support goal-directed behaviour. Akin to how attention weights in categorisation models warp representational spaces, we introduce a layer of attention weights to the mid-level of a DCNN that amplify or attenuate activity to further a goal. We evaluated the attentional mechanism using photographic stimuli, varying the attentional target. We found that increasing goal-directed attention has benefits (increasing hit rates) and costs (increasing false alarm rates). At a moderate level, attention improves sensitivity (i.e., increases $d^\prime$) at only a moderate increase in bias for tasks involving standard images, blended images, and natural adversarial images chosen to fool DCNNs. These results suggest that goal-directed attention can reconfigure general-purpose DCNNs to better suit the current task goal, much like PFC modulates activity along the ventral stream. In addition to being more parsimonious and brain consistent, the mid-level attention approach performed better than a standard machine learning approach for transfer learning, namely retraining the final network layer to accommodate the new task.
|
computer science
|
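A minimal sketch of the kind of mid-level attention layer described above: one learnable multiplicative weight per channel applied to an intermediate feature map of a frozen backbone. The module boundaries, initialization, and non-negativity constraint are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MidLayerAttention(nn.Module):
    """Goal-directed channel weights inserted at a mid-level of a frozen DCNN."""

    def __init__(self, features_before, features_after, n_channels):
        super().__init__()
        self.before = features_before   # frozen early conv blocks
        self.after = features_after     # frozen later blocks + classifier head
        # One learnable multiplicative weight per channel, initialised to 1 (no attention).
        self.attn = nn.Parameter(torch.ones(1, n_channels, 1, 1))
        for p in list(self.before.parameters()) + list(self.after.parameters()):
            p.requires_grad = False

    def forward(self, x):
        h = self.before(x)
        h = h * torch.relu(self.attn)   # amplify or attenuate channels for the goal
        return self.after(h)
```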
In $F(4)$ gauged supergravity in six dimensions, we study supersymmetric $AdS_6$ black holes with various horizon geometries. We find a new $AdS_2\,\times\,\Sigma_{\mathfrak{g}_1}\times\Sigma_{\mathfrak{g}_2}$ horizon solution with $\mathfrak{g}_1>1$ and $\mathfrak{g}_2>1$, and present the black hole solution numerically. The full black hole is an interpolating geometry between the asymptotically $AdS_6$ boundary and the $AdS_2\,\times\,\Sigma_{\mathfrak{g}_1}\times\Sigma_{\mathfrak{g}_2}$ horizon. We calculate the Bekenstein-Hawking entropy of the black hole and find a match with the recently calculated topologically twisted index of 5d $USp(2N)$ gauge theory on $\Sigma_{\mathfrak{g}_1}\times\Sigma_{\mathfrak{g}_2}\times{S}^1$ in the large $N$ limit. We also find black hole horizons of K\"ahler four-cycles in Calabi-Yau fourfolds and on Cayley four-cycles in $Spin(7)$ manifolds.
|
high energy physics theory
|
An atom-cavity system consists of an atom or group of atoms inside a cavity. When an atom in the cavity is stimulated by a laser pump, it is affected by the atom-field interaction and exhibits quasi-random-walk behavior. This can change the entanglement and trace distance of the system. In this work, the changes in the entanglement and trace distance of the system are considered in two different cases, namely the random-walk and non-random-walk cases. The system under study is a two-level atom in an electrodynamic cavity, described by the Jaynes-Cummings model, which is stimulated by longitudinal and transverse laser pumps. The results show that, in the random-walk case, the changes in the amount of entanglement exhibit the phenomenon of sudden death and birth of entanglement, and the rate of change is increased. Compared with the non-random-walk case, this rate increases and decreases more quickly in the random-walk case. The maximum amount of entanglement is the same in both cases, but the minimum in the random-walk case is lower than in the non-random-walk case.
|
quantum physics
|
Cavity-embedded quantum emitters show strong modifications of free space radiation properties such as an enhanced decay known as the Purcell effect. The central parameter is the cooperativity $C$, the ratio of the square of the coherent cavity coupling strength over the product of cavity and emitter decay rates. For a single emitter, $C$ is independent of the transition dipole moment and dictated by geometric cavity properties such as finesse and mode waist. In a recent work [Phys. Rev. Lett. 119, 093601 (2017)] we have shown that collective excitations in ensembles of dipole-dipole coupled quantum emitters show a disentanglement between the coherent coupling to the cavity mode and spontaneous free space decay. This leads to a strong enhancement of the cavity cooperativity around certain collective subradiant antiresonances. Here, we present a quantum Langevin equations approach aimed at providing results beyond the classical coupled dipoles model. We show that the subradiantly enhanced cooperativity imprints its effects onto the cavity output field quantum correlations while also strongly increasing the cavity-emitter system's collective Kerr nonlinear effect.
|
quantum physics
|
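The single-emitter cooperativity defined verbally above can be written, up to a convention-dependent numerical prefactor, as

```latex
C \;=\; \frac{g^{2}}{\kappa\,\gamma},
```

where $g$ is the coherent emitter-cavity coupling, $\kappa$ the cavity decay rate, and $\gamma$ the emitter's free-space decay rate. Since both $g^{2}$ and $\gamma$ scale with the squared transition dipole moment, their ratio depends only on geometric cavity properties such as finesse and mode waist, which is the independence noted in the abstract.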
Visual Place Recognition (VPR) is the ability to correctly recall a previously visited place using visual information under environmental, viewpoint and appearance changes. An emerging trend in VPR is the use of sequence-based filtering methods on top of single-frame-based place matching techniques for route-based navigation. The combination leads to varying levels of potential place matching performance boosts at increased computational costs. This raises a number of interesting research questions: How does the performance boost (due to sequential filtering) vary along the entire spectrum of single-frame-based matching methods? How does sequence matching length affect the performance curve? Which specific combinations provide a good trade-off between performance and computation? However, there is a lack of previous work looking at these important questions, and most of the sequence-based filtering work to date has been applied without a systematic approach. To bridge this research gap, this paper conducts an in-depth investigation of the relationship between the performance of single-frame-based place matching techniques and the use of sequence-based filtering on top of those methods. It analyzes individual trade-offs, properties and limitations for different combinations of single-frame-based and sequential techniques. A number of state-of-the-art VPR methods and widely used public datasets are utilized to present the findings, which contain a number of meaningful insights for the VPR community.
|
computer science
|
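To make the "sequence-based filtering on top of single-frame matching" concrete, here is a minimal sketch that sums single-frame similarity scores along diagonally aligned windows (a constant-velocity, SeqSLAM-like assumption). The methods benchmarked in the paper may use different aggregation schemes; the names and the similarity-matrix input are assumptions.

```python
import numpy as np

def sequence_match(similarity, seq_len=5):
    """Aggregate an (n_query x n_ref) single-frame similarity matrix over diagonally
    aligned windows of length seq_len; return the best reference index per query."""
    n_q, n_r = similarity.shape
    best = np.full(n_q, -1)
    for q in range(seq_len - 1, n_q):
        scores = np.full(n_r, -np.inf)
        for r in range(seq_len - 1, n_r):
            # Sum scores along the diagonal window ending at (q, r).
            scores[r] = sum(similarity[q - k, r - k] for k in range(seq_len))
        best[q] = int(np.argmax(scores))
    return best
```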
HOMFLY polynomials are one of the major knot invariants being actively studied. They are difficult to compute in the general case but can be far more easily expressed in certain specific cases. In this paper, we examine two particular knots, as well as one more general infinite class of knots. From our calculations, we see some apparent patterns in the polynomials for the knots $9_{35}$ and $9_{46}$, and in particular their $F$-factors. These properties are of a form that seems conducive to finding a general formula for them, which would yield a general formula for the HOMFLY polynomials of the two knots. Motivated by these observations, we demonstrate and conjecture some properties both of the $F$-factors and HOMFLY polynomials of these knots and of the more general class that contains them, namely pretzel knots with 3 odd parameters. We make the first steps toward a matrix-less general formula for the HOMFLY polynomials of these knots.
|
mathematics
|
This study proposes two straightforward yet effective approaches to reduce the computational time required for training a recurrent neural network (RNN) for time-series modeling using multi-time-scale time-series data as input. One approach provides coarse and fine temporal resolutions of the input time-series to the RNN in parallel. The other concatenates the coarse and fine temporal resolutions of the input time-series data over time before considering them as the input to the RNN. In both approaches, first, finer temporal resolution data are utilized to learn the fine temporal scale behavior of the target data. Next, coarser temporal resolution data are expected to capture long-duration dependencies between the input and target variables. The proposed approaches were implemented for hourly rainfall-runoff modeling at a snow-dominated watershed by employing a long short-term memory (LSTM) network, which is a newer type of RNN. Subsequently, the daily and hourly meteorological data were utilized as the input, and hourly flow discharge was considered as the target data. The results confirm that both of the proposed approaches can significantly reduce the computational time for training the RNN (up to 32.4 times). Furthermore, one of the proposed approaches improves the estimation accuracy.
|
physics
|
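A minimal PyTorch-style sketch of the first ("parallel") approach: one LSTM branch per temporal resolution, with the final hidden states concatenated before the output layer. The layer sizes and the single-output head are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class ParallelTimescaleLSTM(nn.Module):
    """Parallel coarse/fine input branches whose final hidden states are concatenated."""

    def __init__(self, n_fine_feat, n_coarse_feat, hidden=64):
        super().__init__()
        self.fine = nn.LSTM(n_fine_feat, hidden, batch_first=True)      # e.g. hourly input
        self.coarse = nn.LSTM(n_coarse_feat, hidden, batch_first=True)  # e.g. daily input
        self.head = nn.Linear(2 * hidden, 1)                            # e.g. hourly discharge

    def forward(self, x_fine, x_coarse):
        _, (h_fine, _) = self.fine(x_fine)
        _, (h_coarse, _) = self.coarse(x_coarse)
        return self.head(torch.cat([h_fine[-1], h_coarse[-1]], dim=-1))
```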
Manufacturing has been changing from a mainly in-house effort to a distributed style in order to meet new challenges owing to globalization of markets and worldwide competition. Distributed simulation provides an attractive solution for constructing cross-enterprise simulations to evaluate the viability of proposed distributed manufacturing enterprises. However, due to its complexity and high cost, distributed simulation has failed to gain wide acceptance from industrial users. The main objective of this paper is to address these issues and present a new structured approach to implement distributed simulation with cost-effective and easily implementable tools. A simplified approach for model partitioning for distributed simulation is also included in the proposed approach. The implementation of distributed manufacturing simulation is illustrated with Arena, Microsoft Message Queue (MSMQ) and Visual Basic for Applications (VBA).
|
computer science
|
Up to equivalence, this paper classifies all the irreducible unitary representations with non-zero Dirac cohomology for the following simple real exceptional Lie groups: ${\rm EI}=E_{6(6)}, {\rm EIV}=E_{6(-26)}, {\rm FI}=F_{4(4)}, {\rm FII}=F_{4(-20)}$. Along the way, we find an irreducible unitary representation of $F_{4(4)}$ whose Dirac index vanishes, while its Dirac cohomology is non-zero. This disproves a conjecture raised in 2015 asserting that there should be no cancellation between the even part and the odd part of the Dirac cohomology.
|
mathematics
|
We present a novel algorithm for scheduling the observations of time-domain imaging surveys. Our Integer Linear Programming approach optimizes an observing plan for an entire night by assigning targets to temporal blocks, enabling strict control of the number of exposures obtained per field and minimizing filter changes. A subsequent optimization step minimizes slew times between each observation. Our optimization metric self-consistently weights contributions from time-varying airmass, seeing, and sky brightness to maximize the transient discovery rate. We describe the implementation of this algorithm on the surveys of the Zwicky Transient Facility and present its on-sky performance.
|
astrophysics
|
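A toy sketch of the core Integer Linear Programming assignment step described above, using the PuLP library: binary variables assign fields to temporal blocks to maximize a precomputed observing metric. The production scheduler additionally enforces per-field exposure counts, groups filters to minimize changes, and runs a separate slew-minimization pass; all names here are illustrative.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def schedule_night(fields, blocks, metric):
    """Assign at most one field per temporal block, maximizing the total metric.
    `metric[(f, b)]` is a precomputed value of observing field f in block b
    (e.g. a weighted combination of airmass, seeing, and sky brightness)."""
    prob = LpProblem("night_schedule", LpMaximize)
    x = LpVariable.dicts("obs", [(f, b) for f in fields for b in blocks], cat=LpBinary)
    prob += lpSum(metric[f, b] * x[f, b] for f in fields for b in blocks)
    for b in blocks:                      # each block holds at most one field
        prob += lpSum(x[f, b] for f in fields) <= 1
    for f in fields:                      # each field observed at most once (toy constraint)
        prob += lpSum(x[f, b] for b in blocks) <= 1
    prob.solve()
    return [(f, b) for f in fields for b in blocks if x[f, b].value() == 1]
```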
Machine learning models are a powerful theoretical tool for analyzing data from quantum simulators, in which results of experiments are sets of snapshots of many-body states. Recently, they have been successfully applied to distinguish between snapshots that can not be identified using traditional one and two point correlation functions. Thus far, the complexity of these models has inhibited new physical insights from this approach. Here, using a novel set of nonlinearities we develop a network architecture that discovers features in the data which are directly interpretable in terms of physical observables. In particular, our network can be understood as uncovering high-order correlators which significantly differ between the data studied. We demonstrate this new architecture on sets of simulated snapshots produced by two candidate theories approximating the doped Fermi-Hubbard model, which is realized in state-of-the art quantum gas microscopy experiments. From the trained networks, we uncover that the key distinguishing features are fourth-order spin-charge correlators, providing a means to compare experimental data to theoretical predictions. Our approach lends itself well to the construction of simple, end-to-end interpretable architectures and is applicable to arbitrary lattice data, thus paving the way for new physical insights from machine learning studies of experimental as well as numerical data.
|
condensed matter
|
Deep learning has achieved great success in natural image classification. To overcome data scarcity in computational pathology, recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis, aiming to build effective pathology image diagnosis models. Since the transferability of knowledge heavily depends on the similarity of the original and target tasks, significant differences in image content and statistics between pathology images and natural images raise the questions: how much knowledge is transferable? Is the transferred information equally contributed by pre-trained layers? To answer these questions, this paper proposes a framework to quantify the knowledge gained by a particular layer, conducts an empirical investigation of pathology-image-centered transfer learning, and reports some interesting observations. Particularly, compared to the performance baseline obtained by a random-weight model, though the transferability of off-the-shelf representations from deep layers heavily depends on the specific pathology image sets, the general representations generated by early layers do convey transferred knowledge in various image classification applications. The observations in this study encourage further investigation of specific metrics and tools to quantify the effectiveness and feasibility of transfer learning in future work.
|
electrical engineering and systems science
|