text
|
label
|
---|---|
Crowd work has the potential of helping the financial recovery of regions traditionally plagued by a lack of economic opportunities, e.g., rural areas. However, we currently have limited information about the challenges facing crowd workers from rural and super rural areas as they struggle to make a living through crowd work sites. This paper examines the challenges and advantages of rural and super rural Amazon Mechanical Turk (MTurk) crowd workers and contrasts them with those of workers from urban areas. Based on a survey of 421 crowd workers from differing geographic regions in the U.S., we identified how, across regions, people struggled with being onboarded into crowd work. We uncovered that despite the inequalities and barriers, rural workers tended to be thriving more in micro-tasking than their urban counterparts. We also identified cultural traits, relating to the time dimension and individualism, that offer us insight into crowd workers and the qualities necessary for them to succeed on gig platforms. We finish by providing design implications based on our findings to create more inclusive crowd work platforms and tools.
|
computer science
|
The Web is a ubiquitous economic, educational, and collaborative space. However, it also serves as a haven for personal information harvesting. Existing decentralised Web-based ecosystems, such as Solid, aim to combat personal data exploitation on the Web by enabling individuals to manage their data in the personal data store of their choice. Since personal data in these decentralised ecosystems are distributed across many sources, there is a need for techniques to support efficient privacy-preserving query execution over personal data stores. Towards this end, in this position paper we present a framework for efficient privacy-preserving federated querying, and highlight open research challenges and opportunities. The overarching goal is to provide a means to position future research into privacy-preserving querying within decentralised environments.
|
computer science
|
The famous van der Waerden theorem states that if $\mathbb{N}$ is partitioned into finitely many cells, then one of them will contain arithmetic progressions of arbitrary length. The theorem also has a polynomial version. In this article we prove the near-zero version of the polynomial van der Waerden theorem, which was previously unsolved, using combinatorics.
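For orientation, the classical polynomial strengthening due to Bergelson and Leibman reads as follows; this is a standard statement of the theorem, and the near-zero variant proved in the article refines it:

```latex
\textbf{Polynomial van der Waerden (Bergelson--Leibman).}
Let $p_1,\dots,p_k\in\mathbb{Z}[x]$ with $p_i(0)=0$ for all $i$.
Then for any finite partition $\mathbb{N}=C_1\cup\dots\cup C_r$ there exist
$a,n\in\mathbb{N}$ and a cell $C_j$ such that
\[
  a,\; a+p_1(n),\; \dots,\; a+p_k(n) \in C_j .
\]
```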
|
mathematics
|
In this article, we construct the color singlet-singlet-singlet interpolating current with $I\left(J^P\right)=\frac{3}{2}\left(1^-\right)$ to study the $D\bar{D}^*K$ system through the QCD sum rules approach. In the calculations, we consider the contributions of the vacuum condensates up to dimension 16 and employ the formula $\mu=\sqrt{M_{X/Y/Z}^{2}-\left(2{\mathbb{M}}_{c}\right)^{2}}$ to choose the optimal energy scale of the QCD spectral density. The numerical result $M_Z=4.71_{-0.11}^{+0.19}\,\rm{GeV}$ indicates that there exists a resonance state $Z$ lying above the $D\bar{D}^*K$ threshold to saturate the QCD sum rules. This resonance state $Z$ may be found in the future by focusing on the $J/\psi \pi K$ channel of the decay $B\longrightarrow J/\psi \pi \pi K$.
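As a quick numerical illustration of the energy-scale formula, here is a minimal sketch; the effective charm mass value used below is an assumed, illustrative input, not a number quoted in this abstract.

```python
from math import sqrt

M_Z = 4.71  # central value of the predicted resonance mass, in GeV
M_c = 1.85  # assumed effective charm-quark mass in GeV (illustrative input)

# optimal energy scale of the QCD spectral density:
# mu = sqrt(M^2 - (2*M_c)^2)
mu = sqrt(M_Z**2 - (2 * M_c) ** 2)
print(f"mu ~ {mu:.2f} GeV")  # ~2.91 GeV for these inputs
```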
|
high energy physics phenomenology
|
We analyze Bosonic, Heterotic, and Type II string theories compactified on a generic torus with constant moduli. By computing the Hamiltonian giving the interaction between massive string excitations and $U(1)$ gauge fields arising from the graviton and Kalb-Ramond field upon compactification, we derive a general formula for such couplings that turns out to be universal in all these theories. We also confirm our result by explicitly evaluating the relevant string three-point amplitudes. From this expression, we determine the gyromagnetic ratio $g$ of massive string states coupled to both gauge fields. For a generic mixed-symmetry state, there is one gyromagnetic coupling associated with each row of the corresponding Young tableau. For all states having zero Kaluza-Klein or winding charges, the value of $g$ turns out to be $1$. We also explicitly consider totally symmetric and mixed-symmetry states (having two rows in the Young diagram) associated with the first Regge trajectory and obtain their corresponding $g$ values.
|
high energy physics theory
|
The Weibull distribution is one of the most used tools in reliability analysis. In this paper, assuming a Bayesian approach, we propose necessary and sufficient conditions to verify when improper priors lead to proper posteriors for the parameters of the Weibull distribution in the presence of complete or right-censored data. Additionally, we propose sufficient conditions to verify whether the obtained posterior moments are finite. These results can be achieved by checking the behavior of the improper priors, and we apply them to different objective priors to illustrate the usefulness of the new results. As an application of our theorem, we prove that even if the improper prior leads to a proper posterior, the posterior mean and other higher moments of the scale parameter are not finite and, therefore, should not be used.
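For concreteness, the distribution in question, written in the common shape-scale parametrization with shape $\beta > 0$ and scale $\lambda > 0$ (the paper's own notation may differ):

```latex
f(x \mid \beta, \lambda) \;=\; \frac{\beta}{\lambda}\left(\frac{x}{\lambda}\right)^{\beta-1}
\exp\!\left[-\left(\frac{x}{\lambda}\right)^{\beta}\right], \qquad x > 0 .
```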
|
mathematics
|
We present a modelling framework for the investigation of prototype-based classifiers in non-stationary environments. Specifically, we study Learning Vector Quantization (LVQ) systems trained from a stream of high-dimensional, clustered data. We consider standard winner-takes-all updates known as LVQ1. Statistical properties of the input data change on the time scale defined by the training process. We apply analytical methods borrowed from statistical physics which have been used earlier for the exact description of learning in stationary environments. The suggested framework facilitates the computation of learning curves in the presence of virtual and real concept drift. Here we focus on time-dependent class bias in the training data. First results demonstrate that, while basic LVQ algorithms are suitable for training in non-stationary environments, weight decay as an explicit mechanism of forgetting does not improve the performance under the considered drift processes.
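A minimal sketch of the LVQ1 winner-takes-all update studied here (illustrative NumPy; the learning rate and data handling are assumptions, not the paper's exact training protocol):

```python
import numpy as np

def lvq1_step(W, c, x, y, eta=0.01):
    """One LVQ1 update on prototypes W (rows) with class labels c.

    The winning (closest) prototype is attracted to example x if the
    labels match and repelled otherwise -- the standard LVQ1 rule.
    """
    j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner-takes-all
    sign = 1.0 if c[j] == y else -1.0
    W[j] += sign * eta * (x - W[j])
    return W
```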
|
computer science
|
This paper formulates a framework for the analysis and distributed control of interconnected systems from the behavioural perspective. The discussion is carried out from the viewpoint of set theory and the results are completely representation-free. The core of a dynamical system can be represented as the set of all trajectories admissible through the system, and interconnections are interpreted as constraints on the choice of trajectories. We develop a structure in which the interconnected behaviour can be directly built from the behaviours of the subsystems in an explicit way without any presumed forms of representation. We show that the interconnected behaviour can also be fully obtained from local observations of the subsystems. Furthermore, we develop the necessary and sufficient conditions for the existence of distributed controller behaviours and their explicit construction. Due to the entirely representation-free nature of this framework, it unites various representations and descriptions of features of dynamical systems (e.g. models, dissipativity, data, etc.) as behaviours, allowing for the formation of a unified platform for the analysis and distributed control of interconnected systems.
|
electrical engineering and systems science
|
We study the phenomenology of a model that addresses the neutrino mass, dark matter, and the generation of the electroweak scale in a single framework. Electroweak symmetry breaking is realized via the Coleman-Weinberg mechanism in a classically scale invariant theory, while the neutrino mass is generated radiatively through interactions with dark matter in a typically scotogenic manner. The model introduces a scalar triplet and singlet and a vector-like fermion doublet that carry odd $Z_2$ parity, and an even-parity scalar singlet that helps preserve classical scale invariance. We sample over the parameter space by taking into account various experimental constraints from the dark matter relic density and direct detection, direct scalar searches, the neutrino mass, and charged lepton flavor violating decays. We then examine, via detailed simulations, possible signatures at the LHC for some benchmark points of the free parameters. We find that the future high-luminosity LHC will have significant potential for detecting new physics signals in the dilepton channel.
|
high energy physics phenomenology
|
When a quantum particle is launched with a finite velocity in a disordered potential, it may surprisingly come back to its initial position at long times and remain there forever. This phenomenon, dubbed ``quantum boomerang effect'', was introduced in [Phys. Rev. A 99, 023629 (2019)]. Interactions between particles, treated within the mean-field approximation, are shown to partially destroy the boomerang effect: the center of mass of the wave packet makes a U-turn, but does not completely come back to its initial position. We show that this phenomenon can be quantitatively interpreted using a single parameter, the average interaction energy.
|
condensed matter
|
Nonlocality is an interesting topic in quantum physics and is usually mediated by some unique quantum states. Here we investigate a Weyl semimetal slab and find an exotic nonlocal correlation effect when placing two potential wells merely on the top and bottom surfaces. This correlation arises from the peculiar Weyl orbit in Weyl semimetals and is a consequence of the bulk-boundary correspondence in topological band theory. A giant nonlocal transport signal and a body breakdown by Weyl fermions are further uncovered, which can serve as signatures for verifying this nonlocal correlation effect experimentally. Our results add a new member to the nonlocality family and have potential applications for designing new electric devices with novel functions.
|
condensed matter
|
Spin defect centers with long quantum coherence times ($T_2$) are key solid-state platforms for a variety of quantum applications. Recently, cluster correlation expansion (CCE) techniques have emerged as a powerful tool to simulate the $T_2$ of defect electron spins in these solid-state systems with good accuracy. Here, based on CCE, we uncover an algebraic expression for $T_2$ generalized for host compounds with dilute nuclear spin baths, which enables a quantitative and comprehensive materials exploration with a near instantaneous estimate of the coherence. We investigate more than 12,000 host compounds at natural isotopic abundance, and find that silicon carbide (SiC), a prominent widegap semiconductor for quantum applications, possesses the longest coherence times among widegap non-chalcogenides. In addition, more than 700 chalcogenides are shown to possess a longer $T_2$ than SiC. We suggest new potential host compounds with promisingly long $T_2$ up to 47 ms, and pave the way to explore unprecedented functional materials for quantum applications.
|
quantum physics
|
Community electricity storage systems for multiple applications promise benefits over household electricity storage systems. More economical flexibility options such as demand response and sector coupling might reduce the market size for storage facilities. This paper assesses the economic performance of community electricity storage systems by taking competitive flexibility options into account. For this purpose, an actor-related, scenario-based optimization framework is applied. The results are in line with the literature and show that community storage systems are economically more efficient than household storage systems. Relative storage capacity reductions of community storage systems over household storage systems are possible, as the demand and generation profiles are balanced out among end users. On average, storage capacity reductions of 9% per household are possible in the base case, resulting in lower specific investments. The simultaneous application of demand-side flexibility options such as sector coupling and demand response enables a further reduction of the community storage size by up to 23%. At the same time, the competition between flexibility options leads to smaller benefits regarding the community storage flexibility potential, which reduces the market viability for these applications. In the worst case, the cannibalization effects reach up to 38% between the flexibility measures. The losses of the flexibility benefits outweigh the savings of the capacity reduction, whereby sector coupling constitutes a far greater influencing factor than demand response. Overall, in consideration of the stated cost trends, the economies of scale, and the reduction possibilities, a profitable community storage model might be reached between 2025 and 2035. Future work should focus on the analysis of policy frameworks.
|
electrical engineering and systems science
|
In this work, we explore various relevant aspects of Smoothed Particle Hydrodynamics (SPH) as applied to Burgers' equation. The stability, precision, and efficiency of the algorithm are investigated in terms of different implementations. In particular, we argue that the boundary condition plays an essential role in the stability of the numerical implementation. Moreover, the issue is shown to be closely associated with the initial particle distribution and the interpolation scheme. Among others, we introduce an interpolation scheme termed the symmetrized finite particle method. The main advantage of the scheme is that its implementation does not involve any derivative of the kernel function. Concerning the equation of motion, the calculations are carried out using two distinct scenarios where the particles are chosen to be either stationary or dynamically evolved. The obtained results are compared with those obtained using the standard finite difference method for spatial derivatives. Our numerical results indicate subtle differences between different schemes regarding the choice of boundary condition. In particular, a novel type of instability is observed where the regular distribution is compromised as the particles start to traverse each other. Implications and further discussions of the present study are also addressed.
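For reference, a minimal sketch of the standard finite-difference baseline mentioned above, applied to the viscous Burgers equation $u_t + u u_x = \nu u_{xx}$ on a periodic domain (grid, viscosity, time step, and initial condition are illustrative assumptions):

```python
import numpy as np

N, nu, dt, steps = 200, 0.01, 1e-4, 5000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)  # assumed initial condition

for _ in range(steps):
    ux_b = (u - np.roll(u, 1)) / dx          # backward difference
    ux_f = (np.roll(u, -1) - u) / dx         # forward difference
    ux = np.where(u > 0, ux_b, ux_f)         # first-order upwind advection
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central diffusion
    u = u + dt * (-u * ux + nu * uxx)        # explicit Euler step
```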
|
physics
|
Realization of the anomalous refraction effects predicted by Huygens' metasurfaces (HMS) has required tedious and time-consuming trial-and-error numerical full-wave computations. It is shown herein that these requirements can be alleviated for transverse magnetic (TM) propagation by a periodic dielectric-based HMS consisting of an electrically thick array of cascaded Fabry-P\'erot etalons. This "Fabry-P\'erot HMS" (FP-HMS) is easily designed to mimic the local scattering coefficients of a standard zero-thickness HMS (ZT-HMS), which, according to homogenization theory, should result in the desired anomalous refraction. To probe the characteristics of this practical FP-HMS, a method based on Floquet-Bloch (FB) analysis is derived for predicting the fields scattered from it for arbitrary angles of incidence. This method produces simple closed-form solutions for the FB wave amplitudes, and the resulting fields are shown to agree well with full-wave simulations. These predictions and full-wave simulations verify the applicability of homogenization and the scattering properties of zero-thickness HMSs to thick structures. They also verify the proposed semi-analytical microscopic design procedure for such structures, offering an effective alternative path to the implementation of theoretically envisioned intricate field-manipulating devices.
|
physics
|
The CRESST (Cryogenic Rare Event Search with Superconducting Thermometers) dark matter search experiment aims for the detection of dark matter particles via elastic scattering off nuclei in $\mathrm{CaWO_4}$ crystals. To understand the CRESST electromagnetic background due to the bulk contamination in the employed materials, a model based on Monte Carlo simulations was developed using the Geant4 simulation toolkit. The results of the simulation are applied to the TUM40 detector module of CRESST-II phase 2. We are able to explain up to $(68 \pm 16)\,\mathrm{\%}$ of the electromagnetic background in the energy range between $1\,\mathrm{keV}$ and $40\,\mathrm{keV}$.
|
astrophysics
|
We derive sublinear-time quantum algorithms for computing the Nash equilibrium of two-player zero-sum games, based on efficient Gibbs sampling methods. We are able to achieve speed-ups for both dense and sparse payoff matrices at the cost of a mildly increased dependence on the additive error compared to classical algorithms. In particular, we can find $\varepsilon$-approximate Nash equilibrium strategies in complexity $\tilde{O}(\sqrt{n+m}/\varepsilon^3)$ and $\tilde{O}(\sqrt{s}/\varepsilon^{3.5})$ respectively, where $n\times m$ is the size of the matrix describing the game and $s$ is its sparsity. Our algorithms use the LP formulation of the problem and apply techniques developed in recent works on quantum SDP-solvers. We also show how to reduce general LP-solving to zero-sum games, resulting in quantum LP-solvers that have complexities $\tilde{O}(\sqrt{n+m}\gamma^3)$ and $\tilde{O}(\sqrt{s}\gamma^{3.5})$ for the dense and sparse access models respectively, where $\gamma$ is the relevant "scale-invariant" precision parameter.
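The LP formulation that the quantum algorithms accelerate can be made concrete; below is a sketch of the standard max-min LP for the row player, solved classically with SciPy (this is the classical baseline, not the quantum procedure):

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_lp(A):
    """Value and optimal mixed strategy of the row player for payoff matrix A."""
    n, m = A.shape
    c = np.r_[np.zeros(n), -1.0]                  # variables (x, v); maximize v
    A_ub = np.c_[-A.T, np.ones(m)]                # v <= (A^T x)_j for every column j
    b_ub = np.zeros(m)
    A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)  # sum_i x_i = 1
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds)
    return res.x[:n], -res.fun

# matching pennies: value 0, uniform strategy
x, v = zero_sum_lp(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```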
|
quantum physics
|
We present a new potential barrier that exhibits the phenomenon of superradiance, in which the reflection coefficient $R$ is greater than one. We calculate the transmission and reflection coefficients for three different regions. The results are compared with those obtained for the hyperbolic tangent potential barrier and the step potential barrier. We also present the solution of the Klein-Gordon equation with the Lambert-W potential barrier in terms of the confluent Heun functions.
|
quantum physics
|
In this thesis we consider four-dimensional N=2 superconformal field theories in the presence of line defects such as Wilson loops. In this setup, using supersymmetric localization, we compute many observables, such as the vacuum expectation value of the Wilson loop, correlation functions with chiral operators, and the radiation emitted by the charged particle moving along the Wilson loop trajectory. These results are achieved by exploiting the localized matrix model on a four-sphere, and checked against a perturbative computation in flat space in the N=1 superspace formalism, up to four loops in perturbation theory.
|
high energy physics theory
|
Improper ferroelectrics are described by two order parameters: a primary one, driving a transition to long-range distortive, magnetic or otherwise non-electric order, and the electric polarization, which is induced by the primary order parameter as a secondary, complementary effect. Using low-temperature scanning probe microscopy, we show that improper ferroelectric domains in YMnO$_3$ can be locally switched by electric-field poling. However, subsequent temperature changes restore the as-grown domain structure as determined by the primary lattice distortion. The backswitching is explained by uncompensated bound charges occurring at the newly written domain walls due to the lack of mobile screening charges at low temperature. Thus, the polarization of improper ferroelectrics is in many ways subject to the same electrostatics as in their proper counterparts, yet complemented by additional functionalities arising from the primary order parameter. Tailoring the complex interplay between the primary order parameter, polarization, and electrostatics is therefore likely to result in novel functionalities specific to improper ferroelectrics.
|
condensed matter
|
High-precision manipulation of multi-qubit quantum systems requires strictly clocked and synchronized multi-channel control signals. However, practical Arbitrary Waveform Generators (AWGs) always suffer from random signal jitters and channel latencies that induce non-negligible state or gate operation errors. In this paper, we analyze the average gate error caused by clock noises, from which an estimation formula is derived for quantifying the control robustness against clock noises. This measure is then employed for finding robust controls via a homotopic optimization algorithm. We also introduce our recently proposed stochastic optimization algorithm, b-GRAPE, for training robust controls via randomly generated clock noise samples. Numerical simulations on a two-qubit example demonstrate that both algorithms can greatly improve the control robustness against clock noises. The homotopic algorithm converges much faster than the b-GRAPE algorithm, but the latter can achieve more robust controls against clock noises.
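The average-gate-error measure can be illustrated with the standard closed form for unitary errors, $\bar{F} = (|\mathrm{Tr}(U^\dagger V)|^2 + d)/(d(d+1))$; the small over-rotation below is an assumed stand-in for a clock-induced timing error, not the paper's noise model:

```python
import numpy as np

def average_gate_error(U, V):
    # 1 - average gate fidelity between target U and realized V (both unitary)
    d = U.shape[0]
    F = (abs(np.trace(U.conj().T @ V)) ** 2 + d) / (d * (d + 1))
    return 1.0 - F

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
rx = lambda a: np.cos(a / 2) * I2 - 1j * np.sin(a / 2) * X

# ideal pi-pulse vs. one stretched by a small timing jitter (illustrative)
print(average_gate_error(rx(np.pi), rx(np.pi + 0.02)))
```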
|
quantum physics
|
In the present work we grow an anodic TiO$_2$ nanotube layer with a tube diameter of ~500 nm and an open tube mouth. We use this morphology in dye-sensitized solar cells (DSSCs) and show that these tubes allow the construction of hybrid hierarchical photoanode structures of nanotubes with a defined and wall-conformal TiO$_2$ nanoparticle decoration. At the same time, the large diameter allows the successful establishment of an additional (insulating) blocking layer of SiO$_2$ or Al$_2$O$_3$. We show that this combination of hierarchical structure and blocking layer significantly enhances the solar cell efficiency by suppressing recombination reactions. In such a DSSC structure, the solar cell efficiency under back-side AM1.5 illumination is enhanced from 5% for the neat tubes to 7%.
|
physics
|
The holographic recipe for the calculation of decay constants is revisited. Starting from the holographic 2-point function and using the fact that normalizable bulk modes scale as $z^{\Delta-S}$, with $S$ the spin, we obtain a consistent expression that depends on the value of the mode at the boundary, not on its derivative. We apply our decay constant expression to other AdS/QCD (static and dynamic) models, proving its consistency. We also demonstrate that our approach is equivalent to the usual holographic prescription.
|
high energy physics theory
|
An irradiation campaign was conducted to provide guidance in the selection of materials and components for the radiation hardening of LED lights for use in CERN accelerator tunnels. This work describes the effects of gamma-rays on commercial-grade borosilicate, fused quartz, polymethylmethacrylate, and polycarbonate samples up to doses of 100 kGy, to qualify their use as optical materials in rad-hard LED-based luminaires. In addition, a Si bridge rectifier and a SiC Junction Barrier Schottky diode for use in power supplies of rad-hard LED lighting systems are tested using 24 GeV/c protons. The physical degradation mechanisms are discussed for each element.
|
physics
|
We introduce and study two natural generalizations of the Connected Vertex Cover (VC) problem: the $p$-Edge-Connected and $p$-Vertex-Connected VC problems (where $p \geq 2$ is a fixed integer). Like Connected VC, both new VC problems are FPT, but do not admit a polynomial kernel unless $NP \subseteq coNP/poly$, which is highly unlikely. We prove, however, that both problems admit time-efficient polynomial-sized approximate kernelization schemes. We obtain an $O(2^{O(pk)}n^{O(1)})$-time algorithm for the $p$-Edge-Connected VC and an $O(2^{O(k^2)}n^{O(1)})$-time algorithm for the $p$-Vertex-Connected VC. Finally, we describe a $2(p+1)$-approximation algorithm for the $p$-Edge-Connected VC. The proofs for the new VC problems require more sophisticated arguments than for Connected VC. In particular, for the approximation algorithm we use Gomory-Hu trees, and for the approximate kernels we use a result on small-size spanning $p$-vertex/edge-connected subgraphs of a $p$-vertex/edge-connected graph obtained independently by Nishizeki and Poljak (1994) and Nagamochi and Ibaraki (1992).
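For background, the classic maximal-matching 2-approximation for plain Vertex Cover that these connected variants generalize can be sketched in a few lines (this is the textbook baseline, not the paper's $2(p+1)$-approximation, which additionally relies on Gomory-Hu trees):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching; taking both endpoints gives a 2-approximation."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))  # edge joins the matching
            cover.update((u, v))    # both endpoints enter the cover
    return cover

# path 1-2-3-4: returns {1, 2, 3, 4}, at most twice the optimum {2, 3}
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```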
|
computer science
|
We present new sets of nuclear parton distribution functions (nPDFs) at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO). Our analyses are based on deeply inelastic scattering data with charged-lepton and neutrino beams on nuclear targets. In addition, a set of proton baseline PDFs is fitted within the same framework with the same theoretical assumptions. The results of this global QCD analysis are compared to existing nPDF sets and to the fitted cross sections. Also, the uncertainties resulting from the limited constraining power of the included experimental data are presented. The published work is based on an open-source tool, xFitter, which has been modified to be applicable also for a nuclear PDF analysis. The required extensions of the code are discussed as well.
|
high energy physics phenomenology
|
For the visualization of quantum states, the approach based on Wigner functions can be very effective. Homodyne detection has been extensively used to obtain the density matrix, Wigner functions, and tomographic reconstructions of optical fields for many thermal, coherent or squeezed states. Here, we use time-domain optical homodyne tomography for the quantum state recognition and reconstruction of the femtosecond optical field from a nonequilibrium superradiant coherent electron-hole state formed in a semiconductor laser structure. We observe severe deviations from the Poissonian statistics of the photons associated with the coherent laser state when the transformation from lasing to superradiance occurs. The reconstructed Wigner functions show large areas of negative values, a characteristic sign of non-classicality, demonstrating the quantum nature of the generated superradiant emission. The photon number distribution and Wigner function of the superradiant state are very similar to those of the displaced Fock state.
|
quantum physics
|
Matching and retrieving previously translated segments from a Translation Memory is the key functionality in Translation Memory systems. However, this matching and retrieving process is still limited to algorithms based on edit distance, which we have identified as a major drawback of Translation Memory systems. In this paper we introduce sentence encoders to improve the matching and retrieving process in Translation Memory systems, an effective and efficient solution to replace edit-distance-based algorithms.
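A minimal sketch of the two retrieval styles being compared; the trigram-hashing `embed` below is a stand-in for a real pretrained sentence encoder, and the whole setup is an illustrative assumption rather than the paper's system:

```python
import numpy as np
from difflib import SequenceMatcher

def edit_similarity(a, b):
    # edit-distance-style matching used by classical TM systems
    return SequenceMatcher(None, a, b).ratio()

def embed(sentence, dim=256):
    # stand-in encoder (hashed character trigrams); a real system would
    # call a pretrained sentence encoder here instead
    v = np.zeros(dim)
    for i in range(len(sentence) - 2):
        v[hash(sentence[i:i + 3]) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-12)

def retrieve(query, tm_segments):
    # return the TM segment with the highest cosine similarity to the query
    q = embed(query)
    sims = [float(embed(s) @ q) for s in tm_segments]
    best = int(np.argmax(sims))
    return tm_segments[best], sims[best]
```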
|
computer science
|
Wave shape (e.g. wave skewness and asymmetry) impacts sediment transport, remote sensing and ship safety. Previous work showed that wind affects wave shape in intermediate and deep water. Here, we investigate the effect of wind on wave shape in shallow water through a wind-induced surface pressure for different wind speeds and directions, to provide the first theoretical description of wind-induced shape changes. A multiple-scale analysis of long waves propagating over a shallow, flat bottom and forced by a Jeffreys-type surface pressure yields a forward or backward Korteweg-de Vries (KdV)-Burgers equation for the wave profile, depending on the wind direction. The evolution of a symmetric, solitary-wave initial condition is calculated numerically. The resulting wave grows (decays) for onshore (offshore) wind and becomes asymmetric, with the rear face showing the largest shape changes. The wave profile's deviation from a reference solitary wave is primarily a bound wave and a trailing, dispersive, decaying tail. The onshore wind increases the wave's energy and skewness with time while decreasing the wave's asymmetry, with the opposite holding for offshore wind. The corresponding wind speeds are shown to be physically realistic, and the shape changes are explained as slow growth followed by rapid evolution according to the unforced KdV equation.
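Schematically, the model equation obtained from such a multiple-scale analysis is of KdV-Burgers type (the coefficients below are generic placeholders, not the paper's derived values):

```latex
u_t + c\,u_x + \alpha\,u\,u_x + \beta\,u_{xxx} = \nu\,u_{xx},
```

where the sign of the wind-induced Burgers term $\nu u_{xx}$, and hence growth versus decay, is set by the wind direction (onshore versus offshore).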
|
physics
|
Central venous catheters (CVCs) are commonly used in critical care settings for monitoring body functions and administering medications. They are often described in radiology reports by referring to their presence, identity and placement. In this paper, we address the problem of automatic detection of their presence and identity through automated segmentation using deep learning networks and classification based on their intersection with previously learned shape priors from clinician annotations of CVCs. The results not only outperform existing methods of catheter detection, achieving 85.2% accuracy at 91.6% precision, but also enable high-precision (95.2%) classification of catheter types on a large dataset of over 10,000 chest X-rays, presenting a robust and practical solution to this problem.
|
electrical engineering and systems science
|
Physically based rendering is a discipline in computer graphics which aims at reproducing certain light and material appearances that occur in the real world. Complex scenes can be difficult for rendering algorithms to compute. This paper introduces a new comprehensive test database of scenes that treat different light setups in conjunction with diverse materials, and discusses its design principles. Much research is focused on the development of new algorithms that can deal with difficult light conditions and materials efficiently. This database delivers a comprehensive foundation for evaluating existing and newly developed rendering techniques. A final evaluation compares the results of different rendering algorithms on all scenes.
|
computer science
|
This paper presents a novel virus propagation model using NetLogo. The model allows agents to move across multiple sites using different routes. Routes can be configured, enabled for mobility and (un)locked down independently. Similarly, locations can also be (un)locked down independently. Agents can get infected, propagate their infections to others, can take precautions against infection and also subsequently recover from infection. This model contains certain features that are not present in existing models. The model may be used for educational and research purposes, and the code is made available as open source. This model may also provide a broader framework for more detailed simulations. The results presented are only to demonstrate the model functionalities and do not serve any other purpose.
|
computer science
|
We use particle-in-cell (PIC) simulations and simple analytic models to investigate the laser-plasma interaction known as ponderomotive steepening. When normally incident laser light reflects at the critical surface of a plasma, the resulting standing electromagnetic wave modifies the electron density profile via the ponderomotive force, which creates peaks in the electron density separated by approximately half of the laser wavelength. What is less well studied is how this charge imbalance accelerates ions towards the electron density peaks, modifying the ion density profile of the plasma. Idealized PIC simulations with an extended underdense plasma shelf are used to isolate the dynamics of ion density peak growth for a 42 fs pulse from an 800 nm laser with an intensity of 10$^{18}$ W cm$^{-2}$. These simulations exhibit sustained longitudinal electric fields of 200 GV m$^{-1}$, which produce counter-streaming populations of ions reaching a few keV in energy. We compare these simulations to theoretical models, and we explore how the ion energy depends on factors such as the plasma density and the laser wavelength, pulse duration, and intensity. We also provide relations for the strength of the longitudinal electric fields and an approximate timescale for the density peaks to develop. These conclusions may be useful for investigating the phenomenon of ponderomotive steepening as advances in laser technology allow shorter and more intense pulses to be produced at various wavelengths. We also discuss the parallels with other work studying the interference from two counter-propagating laser pulses.
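The half-wavelength spacing quoted above follows directly from the standing-wave intensity pattern at the critical surface:

```latex
|E(x)|^2 \propto \cos^2(kx) \;\Longrightarrow\;
\text{peak spacing} = \frac{\pi}{k} = \frac{\lambda}{2}
= 400\ \text{nm for } \lambda = 800\ \text{nm}.
```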
|
physics
|
The cosmic time dependencies of $G$, $\alpha$, $\hbar$ and of Standard Model parameters like the Higgs vev and elementary particle masses are studied in the framework of a new dark energy interpretation. Due to the associated time variation of rulers, many effects turn out to be invisible. However, a rather large time dependence is claimed to arise in association with dark energy measurements, and smaller ones in connection with the Standard Model.
|
high energy physics phenomenology
|
We carry out a detailed study of the spectral and timing properties of the stellar-mass black hole candidate XTE J1752-223 during its 2009-10 outburst using RXTE PCA data in the $2.5-25$ keV energy range. Low-frequency quasi-periodic oscillations (LFQPOs) are seen in the power density spectrum (PDS). The spectral analysis is done using two types of models: one is the combined disk blackbody plus power-law model and the other is the transonic-flow-solution-based Two Component Advective Flow (TCAF) model. RXTE PCA was non-operational during 2009 Nov. 16 to 2010 Jan. 18, so we study the light curve profiles and the evolution of hardness ratios using MAXI GSC and Swift BAT data. Based on the evolution of the temporal and spectral properties of the source during its 2009-10 outburst, we find that the object evolved through the following spectral states: hard, hard-intermediate, and soft-intermediate/soft. From the TCAF model fitted spectral analysis, we also estimate the probable mass of the black hole to be in the range of $8.1-11.9$ $M_\odot$, and more precisely, the mass appears to be $10\pm1.9~M_\odot$.
|
astrophysics
|
With the growth of content on social media networks, enterprises and service providers have become interested in identifying the questions of their customers. Tracking these questions becomes very challenging as the volume of text grows in proportion to the increase of Arabic users, making it very difficult to track them manually. By automatically identifying the questions seeking answers on social media networks and defining their category, we can answer them automatically by finding an existing answer or even routing them to those responsible for answering them in customer service. This will save time and effort, enhance customer feedback, and improve the business. In this paper, we have implemented a binary classifier to classify Arabic text as either a question seeking an answer or not. We have added emotion-based features to the state-of-the-art features. An experimental evaluation has been conducted and shows that these emotional features have improved the accuracy of the classifier.
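A minimal sketch of such a binary classifier with lexical features augmented by an emotion-lexicon count; the tiny lexicon and the specific feature choices here are illustrative assumptions, not the paper's feature set:

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

EMOTION_WORDS = {"غاضب", "سعيد", "حزين", "محبط"}  # assumed tiny emotion lexicon

def emotional_features(texts):
    # one extra feature per text: count of emotion-lexicon tokens
    return np.array([[sum(w in EMOTION_WORDS for w in t.split())] for t in texts])

def fit(texts, labels):  # labels: 1 = question seeking an answer, 0 = not
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    X = hstack([vec.fit_transform(texts), emotional_features(texts)])
    return vec, LogisticRegression(max_iter=1000).fit(X, labels)
```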
|
computer science
|
This paper studies multiplicative inflation: the complementary scaling of the state covariance in the ensemble Kalman filter (EnKF). Firstly, error sources in the EnKF are catalogued and discussed in relation to inflation; nonlinearity is given particular attention as a source of sampling error. In response, the "finite-size" refinement known as the EnKF-N is re-derived via a Gaussian scale mixture, again demonstrating how it yields adaptive inflation. Existing methods for adaptive inflation estimation are reviewed, and several insights are gained from a comparative analysis. One such adaptive inflation method is selected to complement the EnKF-N to make a hybrid that is suitable for contexts where model error is present and imperfectly parameterized. Benchmarks are obtained from experiments with the two-scale Lorenz model and its slow-scale truncation. The proposed hybrid EnKF-N method of adaptive inflation is found to yield systematic accuracy improvements in comparison with the existing methods, albeit to a moderate degree.
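The core operation, multiplicative inflation of the ensemble covariance, is worth stating explicitly (a sketch; `lam` plays the role of the inflation factor that the adaptive schemes estimate):

```python
import numpy as np

def inflate(E, lam):
    """Scale ensemble anomalies so the sample covariance grows by factor lam.

    E: (N, d) array of N ensemble members; lam >= 1.
    """
    mean = E.mean(axis=0)
    return mean + np.sqrt(lam) * (E - mean)

E = np.random.randn(20, 3)
E_infl = inflate(E, 1.05)  # np.cov(E_infl.T) ~ 1.05 * np.cov(E.T)
```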
|
physics
|
We present the first optical identification and confirmation of a sample of supernova remnants (SNRs) in the nearby galaxy NGC 3344. Using high spectral and spatial resolution data, obtained with the CFHT imaging Fourier transform spectrograph SITELLE, we identified about 2200 emission-line regions, many of which are HII regions, diffuse ionized gas regions, and also SNRs. Considering the stellar population and the diffuse ionized gas background, which are quite important in NGC 3344, we have selected 129 SNR candidates based on four criteria for regions where the emission-line flux ratio [S II]/H$\alpha$ $\ge$ 0.4. Emission lines of [O II]$\lambda$3727, H$\beta$, [O III]$\lambda\lambda$4959,5007, H$\alpha$, [N II]$\lambda\lambda$6548,6583, and [S II]$\lambda\lambda$6716,6731 have been measured to study the ionized gas properties of the SNR candidates. We adopted a self-consistent spectroscopic analysis, based on Sabbadin plots and BPT diagrams, to confirm the shock-heated nature of the ionization mechanism in the candidate sample. With this analysis, we end up with 42 Confirmed SNRs, 45 Probable SNRs, and 42 Less likely SNRs. Using shock models, the Confirmed SNRs seem to have a metallicity ranging between LMC and 2$\times$solar. We looked for correlations between the size of the Confirmed SNRs and their emission-line ratios, their galaxy environment, and their galactocentric distance: we see a trend for a metallicity gradient among the SNR population, along with some evolutionary effects.
|
astrophysics
|
We analyze the statistical characteristics of the quasi-nonequilibrium two-dimensional electron-hole plasma in graphene layers (GLs) and graphene bilayers (GBLs) and evaluate their heat capacity. The heat capacity of weakly pumped intrinsic or weakly doped GLs, normalized by the Boltzmann constant, is equal to $c_{GL} \simeq 6.58$. With varying carrier temperature the intrinsic GBL carrier heat capacity $c_{GBL}$ changes from $c_{GBL} \simeq 2.37$ at $T \lesssim 300$~K to $c_{GBL} \simeq 6.58$ at elevated temperatures. These values are markedly different from the heat capacity of classical two-dimensional carriers with $c = 1$. The obtained results can be useful for the optimization of different GL- and GBL-based high-speed devices.
|
condensed matter
|
In the first part of this paper we will work out a close and so far unnoticed correspondence between the swampland approach in quantum gravity and geometric flow equations in general relativity, most notably the Ricci flow. We conjecture that following the gradient flow towards a fixed point, which is at infinite distance in the space of background metrics, is accompanied by an infinite tower of states in quantum gravity. In the case of the Ricci flow, this conjecture is in accordance with the generalized distance and AdS distance conjectures, which were recently discussed in the literature, but it should also hold for more general background spaces. We argue that the entropy functionals of gradient flows provide a useful definition of the generalized distance in the space of background fields. In particular we give evidence that for the Ricci flow the distance $\Delta$ can be defined in terms of the mean scalar curvature of the manifold, $\Delta\sim\log \bar R$. For a more general gradient flow, the distance functional also depends on the string coupling constant. In the second part of the paper we will apply the generalized distance conjecture to gravity theories with higher curvature interactions, like higher derivative $R^2$ and $W^2$ terms. We will show that going to the weak coupling limit of the higher derivative terms corresponds to the infinite distance limit in metric space, and hence this limit must be accompanied by an infinite tower of light states. For the case of the $R^2$ or $W^2$ couplings, this limit corresponds to the limit of a small cosmological constant or, respectively, to a light additional spin-two field in gravity. In general we see that the limit of small higher curvature couplings belongs to the swampland in quantum gravity, just like the limit of a small $U(1)$ gauge coupling belongs to the swampland as well.
|
high energy physics theory
|
Nowadays, deep neural networks are widely used in mission-critical systems such as healthcare, self-driving vehicles, and the military, which have a direct impact on human lives. However, the black-box nature of deep neural networks challenges their use in mission-critical applications, raising ethical and judicial concerns and inducing a lack of trust. Explainable Artificial Intelligence (XAI) is a field of Artificial Intelligence (AI) that promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions. In addition to providing a holistic view of the current XAI landscape in deep learning, this paper provides mathematical summaries of seminal work. We start by proposing a taxonomy and categorizing the XAI techniques based on their scope of explanations, the methodology behind the algorithms, and the explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models. We then describe the main principles used in XAI research and present the historical timeline of landmark studies in XAI from 2007 to 2020. After explaining each category of algorithms and approaches in detail, we evaluate the explanation maps generated by eight XAI algorithms on image data, discuss the limitations of this approach, and provide potential future directions to improve XAI evaluation.
|
computer science
|
Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.
|
computer science
|
A method to determine the kinetic freeze-out temperature in heavy-ion collisions from measured yields of short-lived resonances is presented. The resonance production is treated in the framework of a thermal model with an evolution between the chemical and kinetic freeze-outs. The yields of many short-lived resonances are suppressed at $T = T_{\rm kin} < T_{\rm ch}$. We determine the values of $T_{\rm kin}$ and $T_{\rm ch}$ for various centralities in Pb--Pb collisions at $\sqrt{s_{_{NN}}} = 2.76$ TeV by fitting the abundances of both the stable hadrons and the short-lived resonances such as $\rho^0$ and $\text{K}^{*0}$, which were measured by the ALICE collaboration. This allows one to extract the kinetic freeze-out temperature from the measured hadron and resonance yields alone, independent of assumptions about the flow velocity profile and the freeze-out hypersurface. The extracted $T_{\rm ch}$ values exhibit a moderate multiplicity dependence, whereas $T_{\rm kin}$ drops from $T_{\rm kin} \simeq T_{\rm ch} \simeq 155$ MeV in peripheral collisions to $T_{\rm kin} \simeq 110$ MeV in 0-20% central collisions. Predictions for other short-lived resonances are presented. A potential (non-)observation of a suppressed $f_0(980)$ meson yield will make it possible to constrain the lifetime of that meson.
|
high energy physics phenomenology
|
Entangling gates for electron spins in semiconductor quantum dots are generally based on exchange, a short-ranged interaction that requires wavefunction overlap. Coherent spin-photon coupling raises the prospect of using photons as long-distance interconnects for spin qubits. Realizing a key milestone for spin-based quantum information processing, we demonstrate microwave-mediated spin-spin interactions between two electrons that are physically separated by more than 4 mm. Coherent spin-photon coupling is demonstrated for each individual spin using microwave transmission spectroscopy. An enhanced vacuum Rabi splitting is observed when both spins are tuned into resonance with the cavity, indicative of a coherent spin-spin interaction. Our results demonstrate that microwave-frequency photons can be used as a resource to generate long-range two-qubit gates between spatially separated spins.
|
condensed matter
|
Following the scanning methods of arXiv:1910.04530, we systematically construct, for the first time, the $N=1$ supersymmetric $SU(12)_C\times SU(2)_L\times SU(2)_R$ models, $SU(4)_C\times SU(6)_L\times SU(2)_R$ models, and $SU(4)_C\times SU(2)_L\times SU(6)_R$ models from the Type IIA orientifolds on $\bf T^6/(\mathbb Z_2\times \mathbb Z_2)$ with intersecting D6-branes. These gauge symmetries can be broken down to the Pati-Salam gauge symmetry $SU(4)_C\times SU(2)_L \times SU(2)_R$ via three $SU(12)_C/SU(6)_L/SU(6)_R$ adjoint representation Higgs fields, and further down to the Standard Model (SM) via D-brane splitting and the Higgs mechanism. Also, we obtain three families of SM fermions, and have the left-handed and right-handed three-family SM fermion unification in the $SU(12)_C\times SU(2)_L\times SU(2)_R$ models, the left-handed three-family SM fermion unification in the $SU(4)_C\times SU(6)_L\times SU(2)_R$ models, and the right-handed three-family SM fermion unification in the $SU(4)_C\times SU(2)_L\times SU(6)_R$ models. Moreover, the $SU(4)_C\times SU(6)_L\times SU(2)_R$ models and $SU(4)_C\times SU(2)_L\times SU(6)_R$ models are related by the exchange of the left and right gauge symmetries, as well as by a variation of type II T-duality. Furthermore, the hidden sector contains $USp(n)$ branes, which are parallel to the orientifold planes or their $Z_2$ images and might break the supersymmetry via gaugino condensation.
|
high energy physics theory
|
Triple-GEM detectors are a well-known technology in high energy physics. In order to have a complete understanding of their behavior, in parallel with beam tests, a Monte Carlo code has to be developed to simulate their response to the passage of particles. The software must take into account all the physical processes involved, from the primary ionization up to the signal formation, e.g., the avalanche multiplication and the effect of diffusion on the electrons. In the case of gas detectors, existing software such as Garfield already performs a very detailed simulation but is CPU-time consuming. A description of a reliable but faster simulation is presented here: it uses a parametric description of the variables of interest obtained from suitable preliminary Garfield simulations and tuned to the test beam data. It can reproduce the real values of the charge measured by the strips, needed to reconstruct the position with the Charge Centroid method. In addition, particular attention was paid to the simulation of the timing information, which permits the application of the micro-Time Projection Chamber position reconstruction as well, for the first time on a triple-GEM. A comparison between simulated and experimental values of some sentinel variables in different conditions of magnetic field, high voltage settings and incident angle will be shown.
|
physics
|
Self-dual structures, whose dual counterparts are themselves, possess a unique hidden symmetry beyond the description of classical spatial symmetry groups. Here we propose a strategy based on a nematic monolayer of attractive half-cylindrical colloids to self-assemble these exotic structures. This system can be seen as a 2D system of semi-disks. By using Monte Carlo simulations, we discover two isostatic self-dual crystals, i.e., a previously unreported crystal with pmg symmetry and the twisted Kagome crystal. For the pmg crystal approaching the critical point, we find the double degeneracy of the phononic spectrum at the self-dual point, and the merging of two tilted Weyl nodes into one \emph{critically-tilted} Dirac node. The latter is `accidentally' located on the high-symmetry line. The formation of this unconventional Dirac node is due to the emergence of the critical flat bands at the self-dual point, which are linear combinations of \emph{finite-frequency} floppy modes. These modes can be understood as mechanically-coupled self-dual rhomb chains vibrating in some unique uncoupled ways. Our work paves the way for designing and fabricating self-dual materials with exotic mechanical or phononic properties.
|
condensed matter
|
We study cosmological models involving a single real scalar field that has an equation of state parameter which evolves with cosmic time. We highlight some common parametrizations for the equation of state as a function of redshift in the context of twinlike theories. The procedure is used to introduce different models that have the same acceleration parameter, with the very same energy densities and pressure in flat spacetime.
|
astrophysics
|
We propose an investigation of the Northcott, Bogomolov and Lehmer properties for special values of $L$-functions. We first introduce an axiomatic approach to these three properties. We then focus on the Northcott property for special values of $L$-functions. We prove that such a property holds for the special value at zero of Dedekind zeta functions of number fields. In the case of $L$-functions of pure motives, we prove a Northcott property for special values located to the left of the critical strip, assuming the validity of the functional equation.
|
mathematics
|
Given a gauged linear sigma model (GLSM) $\mathcal{T}_{X}$ realizing a projective variety $X$ in one of its phases, i.e. its quantum K\"ahler moduli space has a maximally unipotent point, we propose an \emph{extended} GLSM $\mathcal{T}_{\mathcal{X}}$ realizing the homological projective dual category $\mathcal{C}$ to $D^{b}Coh(X)$ as the category of B-branes of the Higgs branch of one of its phases. In most cases, the models $\mathcal{T}_{X}$ and $\mathcal{T}_{\mathcal{X}}$ are anomalous and the analysis of their Coulomb and mixed Coulomb-Higgs branches gives information on the semiorthogonal/Lefschetz decompositions of $\mathcal{C}$ and $D^{b}Coh(X)$. We also study the models $\mathcal{T}_{X_{L}}$ and $\mathcal{T}_{\mathcal{X}_{L}}$ that correspond to the homological projective duality of linear sections $X_{L}$ of $X$. This explains why, in many cases, two phases of a GLSM are related by homological projective duality. We study mostly abelian examples: linear and Veronese embeddings of $\mathbb{P}^{n}$ and Fano complete intersections in $\mathbb{P}^{n}$. In such cases, we are able to reproduce known results as well as produce some new conjectures. In addition, we comment on the construction of the HPD to a nonabelian GLSM for the Pl\"ucker embedding of the Grassmannian $G(k,N)$.
|
high energy physics theory
|
The strong coupling between intense laser fields and valence electrons in molecules causes a distortion of the potential energy hypersurfaces which determine the motion of nuclei in a molecule and influences possible reaction pathways. The coupling strength varies with the angle between the light electric field and valence orbital, and thereby adds another dimension to the effective molecular potential energy surface, allowing for the emergence of light-induced conical intersections. Here, we demonstrate in theory and experiment that the full complexity of such light-induced potential energy surfaces can be uncovered. In H$_2^+$, the simplest of molecules, we observe a strongly modulated angular distribution of protons which has escaped prior observation. These modulations directly result from ultrafast dynamics on the light-induced molecular potentials and can be modified by varying the amplitude, duration and phase of the mid-infrared dressing field. This opens new opportunities for manipulating the dissociation of small molecules using strong laser fields.
|
physics
|
It is known that the anomalous Chern-Simons (CS) coupling of the O$p$-plane is not consistent with the T-duality transformations. Compatibility of this coupling with T-duality requires the inclusion of couplings involving one R-R field strength. In this paper we find such couplings at order $\alpha'^2$. By requiring the R-R and NS-NS gauge invariances, we first find all independent couplings at order $\alpha'^2$. There are $1,\, 6,\, 28,\, 20,\, 19,\, 2$ couplings corresponding to the R-R field strengths $F^{(p-4)}$, $F^{(p-2)}$, $F^{(p)}$, $F^{(p+2)}$, $F^{(p+4)}$ and $F^{(p+6)}$, respectively. We then impose the T-duality constraint on these couplings and on the CS coupling $C^{(p-3)}\wedge R\wedge R$ at order $\alpha'^2$ to fix their corresponding coefficients. The T-duality constraint fixes all coefficients in terms of the CS coefficient. They are fully consistent with the partial couplings that have already been found in the literature by the S-matrix method.
|
high energy physics theory
|
Applying security as a lifecycle practice is becoming increasingly important to combat targeted attacks in safety-critical systems. Among others, there are two significant challenges in this area: (1) the need for models that can characterize a realistic system in the absence of an implementation and (2) an automated way to associate attack vector information, that is, historical data, to such system models. We propose the cybersecurity body of knowledge (CYBOK), which takes in sufficiently characteristic models of systems and acts as a search engine for potential attack vectors. CYBOK is fundamentally an algorithmic approach to vulnerability exploration, which is a significant extension to the body of knowledge it builds upon. By using CYBOK, security analysts and system designers can work together to assess the overall security posture of systems early in their lifecycle, during major design decisions and before final product designs, consequently assisting in applying security earlier and throughout the system lifecycle.
|
electrical engineering and systems science
|
In this study, we analyze index modulation (IM) based on circularly-shifted chirps (CSCs) for dual-function radar & communication (DFRC) systems. We develop a maximum likelihood (ML) range estimator that considers multiple scatterers. To improve the correlation properties of the transmitted waveform and the estimation accuracy, we propose index separation (IS), which separates the CSCs apart in time. We theoretically show that the separation can be large under certain conditions without losing spectral efficiency (SE). Our numerical results show that the IS-combined ML and linear minimum mean square error (LMMSE)-based estimators can provide approximately 3 dB signal-to-noise ratio (SNR) gain in some cases while improving the estimation accuracy substantially without causing any bit-error ratio (BER) degradation at the communication receiver.
|
electrical engineering and systems science
|
We propose to detect the interference effect between the atmospheric-scale and solar-scale waves of neutrino oscillation, one of the key consequences of the three-generation structure of leptons. In vacuum, we show that there is a natural and general way of decomposing the oscillation amplitude into these two oscillation modes. The nature of the interference is cleanest in the $\bar{\nu}_e$ disappearance channel since it is free from the CP-phase $\delta$. We find that the upcoming JUNO experiment offers an ideal setting to observe this interference with more than $4\,\sigma$ significance, even under conservative assumptions about the systematic uncertainties.
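For reference, the textbook three-flavor vacuum survival probability in this channel, whose last bracket superposes the two atmospheric-scale frequencies and is indeed independent of the CP phase $\delta$ (this is the standard expression, not necessarily the paper's own decomposition):

```latex
P(\bar{\nu}_e \to \bar{\nu}_e) = 1
 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21}
 - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31}
 + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right),
\qquad \Delta_{ij} \equiv \frac{\Delta m^2_{ij}\,L}{4E}.
```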
|
high energy physics phenomenology
|
Supernovae are among the most powerful and influential explosions in the universe. They are also ideal multi-messenger laboratories to study extreme astrophysics. However, many fundamental properties of supernovae related to their diverse progenitor systems and explosion mechanisms remain poorly constrained. Here we outline how late-time spectroscopic observations obtained during the nebular phase (several months to years after explosion), made possible with the next generation of Extremely Large Telescopes, will facilitate transformational science opportunities and rapidly accelerate the community towards our goal of achieving a complete understanding of supernova explosions. We highlight specific examples of how complementary GMT and TMT instrumentation will enable high fidelity spectroscopy from which the line profiles and luminosities of elements tracing mass loss and ejecta can be used to extract kinematic and chemical information with unprecedented detail, for hundreds of objects. This will provide uniquely powerful constraints on the evolutionary phases stars may experience approaching a supernova explosion; the subsequent explosion dynamics; their nucleosynthesis yields; and the formation of compact objects that may act as central engines.
|
astrophysics
|
We propose a natural stretching function for DNS of wall-bounded flows, which blends uniform near-wall spacing with uniform resolution in terms of Kolmogorov units in the outer wall layer. Numerical simulations of pipe flow are used to deduce the optimal values of the blending parameter and of the wall grid spacing, which guarantee accuracy and computational efficiency as a result of maximizing the allowed time step. Conclusions are supported by DNS carried out at a Reynolds number sufficiently high that a nearly logarithmic layer in the mean velocity profile is present. Given a target Reynolds number, we provide a definite prescription for the number of grid points and the grid clustering needed to achieve accurate results with optimal exploitation of resources.
|
physics
|
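A minimal, hypothetical sketch of the grid-generation idea in the DNS abstract above: march from the wall with a spacing that blends a constant near-wall value with a term proportional to a Kolmogorov-scale estimate, $\eta^+ \sim (\kappa y^+)^{1/4}$. The blend form, the constants, and the function name are illustrative assumptions, not the authors' actual stretching function.

```python
import numpy as np

def wall_grid(re_tau, dy_wall=0.8, alpha=1.5, kappa=0.41):
    """March from the wall (y+ = 0) to the centreline (y+ = re_tau) with a
    spacing that blends a uniform near-wall value with a term proportional
    to a Kolmogorov-scale estimate, eta+ ~ (kappa * y+)**0.25."""
    y = [0.0]
    while y[-1] < re_tau:
        dy = dy_wall + alpha * (kappa * y[-1]) ** 0.25  # blended spacing
        y.append(y[-1] + dy)
    return np.array(y) / re_tau  # wall-normal coordinate in outer units

y = wall_grid(re_tau=1000.0)
print(f"{y.size} points; near-wall spacings (wall units):",
      np.diff(y[:4]) * 1000.0)
```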
Giovannini's parton branching equation is integrated numerically using the 4th-order Runge-Kutta method. Using a simple hadronisation model, a charged-hadron multiplicity distribution is obtained. This model is then fitted to various experimental data up to the TeV scale to study how the Giovannini parameters vary with collision energy and type. The model is able to describe hadronic collisions up to the TeV scale and reveals the emergence of gluonic activity as the centre-of-mass energy increases. A prediction is made for $\sqrt{s}$ = 14 TeV.
|
high energy physics phenomenology
|
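As a hedged illustration of the numerical machinery in the parton-branching abstract above, the sketch below integrates a pure-birth (Yule) master equation with the classical 4th-order Runge-Kutta method. The pure-birth rates and the value of LAM are stand-ins, not Giovannini's actual branching equation or its fitted parameters.

```python
import numpy as np

LAM = 0.5    # illustrative branching rate, not a fitted Giovannini parameter
N_MAX = 200  # truncation of the multiplicity ladder

def birth_rhs(t, p):
    """Pure-birth master equation: dP_n/dt = LAM*(n-1)*P_{n-1} - LAM*n*P_n."""
    n = np.arange(p.size)
    dp = -LAM * n * p
    dp[1:] += LAM * n[:-1] * p[:-1]
    return dp

def rk4_step(f, t, p, h):
    """One classical 4th-order Runge-Kutta step for dp/dt = f(t, p)."""
    k1 = f(t, p)
    k2 = f(t + h / 2, p + h / 2 * k1)
    k3 = f(t + h / 2, p + h / 2 * k2)
    k4 = f(t + h, p + h * k3)
    return p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.zeros(N_MAX)
p[1] = 1.0  # start from a single parton
t, h = 0.0, 0.01
for _ in range(500):  # integrate to t = 5
    p = rk4_step(birth_rhs, t, p, h)
    t += h
print("mean multiplicity:", (np.arange(N_MAX) * p).sum())  # ~ exp(LAM * t)
```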
Recent theoretical work indicates that the neutrino radiation in core-collapse supernovae may be susceptible to flavor instabilities that set in far behind the shock, grow extremely rapidly, and have the potential to profoundly affect supernova dynamics and composition. Here we analyze the nonlinear collective oscillations that are prefigured by these instabilities. We demonstrate that a zero-crossing in $n_{\nu_e} - n_{\bar{\nu}_e}$ as a function of propagation angle is not sufficient to generate instability. Our analysis accounts for this fact and allows us to formulate complementary criteria. Using Fornax simulation data, we show that fast collective oscillations qualitatively depend on how forward-peaked the neutrino angular distributions are.
|
high energy physics phenomenology
|
Mechanisms for non-invasive targeted drug delivery using microbubbles and ultrasound have attracted growing interest. Microbubbles can be loaded with a therapeutic payload and tracked via ultrasound imaging to selectively release their payload at ultrasound-targeted locations. In this study, an ultrasonic trapping method is proposed for simultaneously imaging and controlling the location of microbubbles (MBs) in flow by using acoustic radiation force. Targeted drug delivery methods are expected to benefit from the use of the ultrasonic trap, since trapping increases the MB concentration at a desired location in the human body. The ultrasonic trap was generated using the UARP II ultrasound research system and a linear array transducer. The trap was designed asymmetrically to produce a weaker radiation force at the inlet of the trap to further facilitate microbubble entrance. A pulse sequence was generated that can switch between a long-duration trapping waveform and a short-duration imaging waveform. High-frame-rate plane wave imaging at 1 kHz was chosen for monitoring trapped microbubbles. The working principle of the ultrasonic trap was explained and demonstrated in an ultrasound phantom by injecting SonoVue microbubbles flowing at an 80 mL/min flow rate in a 3.5 mm diameter vessel.
|
physics
|
We combine conditioning techniques with sparse grid quadrature rules to develop a computationally efficient method to approximate marginal, but not necessarily univariate, posterior quantities, yielding approximate Bayesian inference via Sparse grid Quadrature Evaluation (BISQuE) for hierarchical models. BISQuE reformulates posterior quantities as weighted integrals of conditional quantities, such as densities and expectations. Sparse grid quadrature rules allow computationally efficient approximation of high dimensional integrals, which appear in hierarchical models with many hyperparameters. BISQuE reduces computational effort relative to standard, Markov chain Monte Carlo methods by at least two orders of magnitude on several applied and illustrative models. We also briefly discuss using BISQuE to apply Integrated Nested Laplace Approximations (INLA) to models with more hyperparameters than is currently practical.
|
statistics
|
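A one-dimensional toy version of the conditioning-plus-quadrature idea behind BISQuE, under assumed densities: the marginal density of x in the hierarchy x | v ~ N(0, v), v ~ Gamma(3, 1) is written as a weighted integral of conditional densities and approximated with a Gauss-Legendre rule. The model and the rule are illustrative only; in the paper, sparse grids play this role when there are many hyperparameters.

```python
import numpy as np
from scipy import stats

# Toy hierarchy: x | v ~ N(0, v), v ~ Gamma(shape=3, scale=1).
# The marginal density f(x) = \int N(x | 0, v) p(v) dv is rewritten as a
# quadrature-weighted mixture of conditional densities.
nodes, weights = np.polynomial.legendre.leggauss(20)
a, b = 1e-3, 15.0                          # integration range for v
v = 0.5 * (b - a) * nodes + 0.5 * (b + a)  # map nodes from [-1, 1] to [a, b]
w = 0.5 * (b - a) * weights * stats.gamma.pdf(v, a=3)

def marginal_density(x):
    # weighted sum of conditional normal densities at the quadrature nodes
    return np.sum(w * stats.norm.pdf(x, loc=0.0, scale=np.sqrt(v)))

print(marginal_density(1.0))
```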
Solid-phase epitaxy (SPE) and other three-dimensional epitaxial crystallization processes pose challenging structural and chemical characterization problems. The concentration of defects, the spatial distribution of elastic strain, and the chemical state of ions each vary with nanoscale characteristic length scales and depend sensitively on the gas environment and elastic boundary conditions during growth. The lateral or three-dimensional propagation of crystalline interfaces in SPE has nanoscale or submicron characteristic distances during typical crystallization times. An in situ synchrotron hard x-ray instrument allows these features to be studied during deposition and crystallization using diffraction, resonant scattering, nanobeam and coherent diffraction imaging, and reflectivity. The instrument incorporates a compact deposition system allowing the use of short-working-distance x-ray focusing optics. Layers are deposited using radio-frequency magnetron sputtering and evaporation sources. The deposition system provides control of the gas atmosphere and sample temperature. The sample is positioned using a stable mechanical design to minimize vibration and drift and employs precise translation stages to enable nanobeam experiments. Results of in situ x-ray characterization of the amorphous thin film deposition process for a SrTiO3/BaTiO3 multilayer illustrate the implementation of this instrument.
|
condensed matter
|
Fully kinetic simulations of magnetic nozzle acceleration are conducted to investigate the axial momentum gains of ions and electrons due to the electrostatic and Lorentz forces. Axial momentum gains per ion and electron are directly calculated from the kinetics of charged particles, indicating that electrons in the magnetic nozzle obtain net axial momentum from the Lorentz force even though they are decelerated by the electrostatic force. Whereas ions are also accelerated by the electrostatic force, the axial momentum gain of electrons increases significantly with increasing magnetic field strength and becomes dominant in the magnetic nozzle. In addition, it is clearly shown that the axial momentum gain of electrons is due to the conversion of electron momentum from the radial to the axial direction, resulting in a significant increase in the thrust and the exhaust velocity.
|
physics
|
The objective of this study is to correlate the scaling factor of the Standard Model (SM)-like Higgs boson with the cross-section ratio of the process $e^+ e^- \rightarrow hhf\overline{f}$, where $f\neq t$, normalized to SM predictions, in the type-I Two Higgs Doublet Model. All calculations have been performed at $\sqrt{s}=500$ GeV and $1 \leq \tan{\beta} \leq 30$ for the masses $m_H = m_A = m_{H^\pm} = 300$ GeV and $m_H=300$ GeV, $m_A=m_{H^\pm}=500$ GeV. The working scenario departs from the alignment limit, taking $s_{\beta-\alpha}= 0.98$ and $s_{\beta-\alpha}= 0.99,\, 0.995$, which gives an enhancement in the cross section, in places a few times greater than the predictions of the SM, due to resonant effects of the additional heavy neutral Higgs bosons. This shows that the enhancement in the cross section occurs on leaving the alignment limit, $s_{\beta-\alpha}=1$, at which all the Higgs couplings to vector bosons and fermions take the same values as in the SM at tree level. A larger enhancement factor is obtained at $s_{\beta-\alpha}= 0.98$ than at $s_{\beta-\alpha}= 0.99,\, 0.995$. Furthermore, a decrease in the enhancement factor is observed for the case $m_H=300$ GeV, $m_A=m_{H^\pm}=500$ GeV. The behavior of the scaling factor with $\tan{\beta}$ is also studied, showing that for large values of $\tan\beta$ the scaling factor approaches $s_{\beta-\alpha}$. Finally, a convincing correlation is achieved by taking into account the experimental and theoretical constraints, e.g., perturbative unitarity, vacuum stability, and electroweak oblique parameters.
|
high energy physics phenomenology
|
We give a new proof of the fact that finite bipartite graphs cannot be axiomatized by finitely many first-order sentences among FINITE graphs. (This fact is a consequence of a general theorem proved by L. Ham and M. Jackson, and the counterpart of this fact for all bipartite graphs in the class of ALL graphs is a well-known consequence of the compactness theorem.) Also, to exemplify that our method is applicable in various fields of mathematics, we prove that neither finite simple groups, nor the ordered sets of join-irreducible congruences of slim semimodular lattices can be described by finitely many axioms in the class of FINITE structures. Since a 2007 result of G. Gr\"atzer and E. Knapp, slim semimodular lattices have constituted the most intensively studied part of lattice theory and they have already led to results even in group theory and geometry. In addition to the non-axiomatizability results mentioned above, we present a new property, called Decomposable Cyclic Elements Property, of the congruence lattices of slim semimodular lattices.
|
mathematics
|
UAV communications based on an antenna array entail a beam tracking technology for reliable link acquisition. Unlike conventional cellular communication, beam tracking in UAV communication must address new issues such as mobility and abrupt channel disconnection caused by the UAV's perturbation. To deal with these issues, we propose a beam tracking scheme based on an extended Kalman filter (EKF) using a monopulse signal, which provides (1) higher robustness, by offering a reliable link in the estimated spatial direction, and (2) lower complexity compared with existing beam tracking schemes. We point out the limitations of using a beamformed signal as the measurement model for a Kalman filter (KF) based scheme and instead utilize the monopulse signal as a more plausible model. For the performance evaluation, we derive the upper bound of the mean square error for spatial angle estimation of the UAV and confirm that the proposed scheme is stable with a certain bounded error. We also show through various simulations that the proposed scheme can efficiently track the UAV and detect beam disconnection in every time frame using a beamformed signal.
|
electrical engineering and systems science
|
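A minimal sketch of the EKF tracking loop described in the abstract above, assuming a locally linear monopulse-ratio measurement z = K_M * (theta_true - theta_beam) + noise, with the beam steered at the current estimate. The slope K_M and both noise levels are made-up values, and the paper's actual measurement model may differ.

```python
import numpy as np

K_M = 2.0              # assumed monopulse slope (made-up value)
Q_N, R_N = 1e-6, 1e-2  # process / measurement noise variances

def ekf_step(theta_hat, P, z, theta_beam):
    """One EKF cycle for a random-walk angle state with a (locally linear)
    monopulse-ratio measurement z = K_M * (theta - theta_beam) + noise."""
    P = P + Q_N                                 # predict (F = 1)
    H = K_M                                     # measurement Jacobian
    innov = z - K_M * (theta_hat - theta_beam)  # innovation
    K = P * H / (H * P * H + R_N)               # Kalman gain
    return theta_hat + K * innov, (1 - K * H) * P

rng = np.random.default_rng(0)
theta_true, theta_hat, P = 0.10, 0.0, 1.0
for _ in range(200):
    theta_true += 1e-3 * rng.standard_normal()  # UAV perturbation
    # beam is steered at the current estimate, so theta_beam = theta_hat
    z = K_M * (theta_true - theta_hat) + np.sqrt(R_N) * rng.standard_normal()
    theta_hat, P = ekf_step(theta_hat, P, z, theta_hat)
print("true:", theta_true, "estimate:", theta_hat)
```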
Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English. This paper presents the Langsmith editor, which assists inexperienced, non-native researchers in writing English papers, especially in the natural language processing (NLP) field. Our system can suggest fluent, academic-style sentences to writers based on their rough, incomplete phrases or sentences. The system also encourages interaction between human writers and the computerized revision system. The experimental results demonstrated that Langsmith helps non-native English-speaking students write papers in English. The system is available at https://emnlp-demo.editor.langsmith.co.jp/.
|
computer science
|
In high-dimensions, the prior tails can have a significant effect on both posterior computation and asymptotic concentration rates. To achieve optimal rates while keeping the posterior computations relatively simple, an empirical Bayes approach has recently been proposed, featuring thin-tailed conjugate priors with data-driven centers. While conjugate priors ease some of the computational burden, Markov chain Monte Carlo methods are still needed, which can be expensive when dimension is high. In this paper, we develop a variational approximation to the empirical Bayes posterior that is fast to compute and retains the optimal concentration rate properties of the original. In simulations, our method is shown to have superior performance compared to existing variational approximations in the literature across a wide range of high-dimensional settings.
|
statistics
|
We classify (0,2) Landau-Ginzburg theories that can flow to compact IR fixed points with equal left and right central charges strictly bounded by 3. Our result is a (0,2) generalization of the ADE classification of (2,2) Landau-Ginzburg theories that flow to N=2 minimal models. Unitarity requires the right-moving supersymmetric sector to fall into the standard N=2 minimal model representations, but the left-moving sector need not have supersymmetry. The Landau-Ginzburg realizations provide a simple way to compute the chiral algebra and other characteristics of these fixed points. While our results pertain to isolated superconformal theories, tensor products lead to (0,2) superconformal theories with higher central charge, and the Landau-Ginzburg realization provides a model for a class of marginal and relevant deformations of such theories.
|
high energy physics theory
|
In order to compute fast approximations to the singular value decompositions (SVD) of very large matrices, randomized sketching algorithms have become a leading approach. However, a key practical difficulty of sketching an SVD is that the user does not know how far the sketched singular vectors/values are from the exact ones. Indeed, the user may be forced to rely on analytical worst-case error bounds, which do not account for the unique structure of a given problem. As a result, the lack of tools for error estimation often leads to much more computation than is really necessary. To overcome these challenges, this paper develops a fully data-driven bootstrap method that numerically estimates the actual error of sketched singular vectors/values. In particular, this allows the user to inspect the quality of a rough initial sketched SVD, and then adaptively predict how much extra work is needed to reach a given error tolerance. Furthermore, the method is computationally inexpensive, because it operates only on sketched objects, and it requires no passes over the full matrix being factored. Lastly, the method is supported by theoretical guarantees and a very encouraging set of experimental results.
|
statistics
|
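A small sketch of the bootstrap idea for sketched-SVD error estimation: compute the singular values of a Gaussian row sketch, then resample rows of the sketch itself (no further passes over the full matrix) and use the spread of the recomputed singular values as an error estimate. The matrix sizes, sketch type, and error metric are chosen for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 30))  # the (tall) matrix to be factored
m = 500                              # sketch size

# Gaussian row sketch: one pass over A, then work only with SA.
S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
SA = S @ A
sv_sketch = np.linalg.svd(SA, compute_uv=False)

# Bootstrap: resample rows of the sketch itself and use the spread of the
# recomputed singular values as a data-driven error estimate.
B = 100
errs = np.empty(B)
for b in range(B):
    idx = rng.integers(0, m, size=m)
    sv_b = np.linalg.svd(SA[idx], compute_uv=False)
    errs[b] = np.max(np.abs(sv_b - sv_sketch))
print("estimated 90% error bound on singular values:", np.quantile(errs, 0.9))
```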
The Near-infrared Sky Brightness Monitor (NIRBM) aims to measure the middle infrared sky background in Antarctica. The NIRBM mainly consists of an InGaAs detector, a chopper, a reflector, a cooler and a black body. The reflector can rotate to scan the sky with a field of view ranging from 0° to 180°. Electromechanical control and weak-signal readout functions are accomplished by the same circuit, whose core chip is an STM32F407VG microcontroller. Considering that the environment in Antarctica is harsh for humans, a multi-level remote control software system is designed and implemented. A set of EPICS IOCs is developed to control each hardware module independently via serial port communication with the STM32 microcontroller. The Tornado web framework and PyEpics are introduced as a combination, where PyEpics is used to monitor or change the EPICS Process Variables, functioning as a client for the EPICS framework. Tornado is responsible for the specific operation process of inter-device collaboration and exposes a set of interfaces for users to call. Considering the high delay and low bandwidth of the network environment, the Tornado back end is designed as a master-and-agent architecture to improve the domestic user experience. The master node is deployed in Antarctica while multiple agent nodes can be deployed domestically. The master and agent nodes communicate with each other through the WebSocket protocol to exchange the latest information so that bandwidth is saved. The GUI is implemented as a single-page application based on the Vue framework, which communicates with Tornado through WebSocket and AJAX requests. The web page integrates device control, data curve drawing, alarm display, automatic observation and other functions.
|
astrophysics
|
The large and growing library of measurements from the Large Hadron Collider has significant power to constrain extensions of the Standard Model. We consider such constraints on a well-motivated model involving a gauged and spontaneously-broken $B-L$ symmetry, within the Contur framework. The model contains an extra Higgs boson, a gauge boson, and right-handed neutrinos with Majorana masses. This new particle content implies a varied phenomenology highly dependent on the parameters of the model, very well-suited to a general study of this kind. We find that existing LHC measurements significantly constrain the model in interesting regions of parameter space. Other regions remain open, some of which are within reach of future LHC data.
|
high energy physics phenomenology
|
Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called \emph{Maxmin Q-learning}, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.
|
computer science
|
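A tabular sketch of the Maxmin Q-learning update as described in the abstract above: keep N action-value estimators, act greedily with respect to their elementwise minimum, and bootstrap targets from that minimum while updating one randomly chosen estimator. The hyperparameter values are placeholders.

```python
import numpy as np

class MaxminQ:
    """Tabular Maxmin Q-learning: keep n_est estimators and bootstrap
    targets from their elementwise minimum (n_est = 1 recovers Q-learning;
    larger n_est pushes the estimation bias downward)."""

    def __init__(self, n_states, n_actions, n_est=4, lr=0.1, gamma=0.99):
        self.Q = np.zeros((n_est, n_states, n_actions))
        self.lr, self.gamma = lr, gamma
        self.rng = np.random.default_rng(0)

    def q_min(self, s):
        return self.Q[:, s, :].min(axis=0)  # elementwise min over estimators

    def act(self, s, eps=0.1):
        if self.rng.random() < eps:
            return int(self.rng.integers(self.Q.shape[2]))
        return int(np.argmax(self.q_min(s)))

    def update(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * self.q_min(s_next).max()
        i = self.rng.integers(self.Q.shape[0])  # update one random estimator
        self.Q[i, s, a] += self.lr * (target - self.Q[i, s, a])
```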
Future mm-wave and sub-mm space missions will employ large arrays of multiplexed Transition Edge Sensor (TES) bolometers. Such instruments must contend with the high flux of cosmic rays beyond our atmosphere that induce "glitches" in bolometer data, which posed a challenge to data analysis from the Planck bolometers. Future instruments will face the additional challenges of shared substrate wafers and multiplexed readout wiring. In this work we explore the susceptibility of modern TES arrays to the cosmic ray environment of space using two data sets: the 2015 long-duration balloon flight of the SPIDER cosmic microwave background polarimeter, and a laboratory exposure of SPIDER flight hardware to radioactive sources. We find manageable glitch rates and short glitch durations, leading to minimal effect on SPIDER analysis. We constrain energy propagation within the substrate through a study of multi-detector coincidences, and give a preliminary look at pulse shapes in laboratory data.
|
astrophysics
|
The tennis racket effect is a geometric phenomenon which occurs in the free rotation of a three-dimensional rigid body. In a complex phase space, we show that this effect originates from a pole of a Riemann surface and can be viewed as a result of the Picard-Lefschetz formula. We prove that a perfect twist of the racket is achieved in the limit of an ideal asymmetric object. We give upper and lower bounds on the twist defect for any rigid body, which reveals the robustness of the effect. A similar approach describes the Dzhanibekov effect, in which a wing nut spinning around its central axis suddenly makes a half-turn flip around a perpendicular axis, and the Monster flip, an almost impossible skateboard trick.
|
physics
|
A remark on the article "Gauge invariant three-point amplitudes in higher spin gauge theory" by Edna Cheung and collaborators.
|
high energy physics theory
|
A quantum collision model (CM), also known as a repeated interactions model, can be built from the standard microscopic framework where a system S is coupled to a white-noise bosonic bath under the rotating wave approximation, which typically results in Markovian dynamics. Here, we discuss how to generalize the CM construction to the case of frequency-dependent system-bath coupling, which defines a class of colored-noise baths. This leads to an intrinsically non-Markovian CM, where each ancilla (bath subunit) collides repeatedly with S at different steps. We discuss the illustrative example of an atom in front of a mirror in the regime of non-negligible retardation times.
|
quantum physics
|
We compute scalar products and norms of Bethe vectors in the massless sector of AdS_3 integrable superstring theories, by exploiting the general difference form of the S-matrix of massless excitations in the pure Ramond-Ramond case, and the difference form valid only in the BMN limit in the mixed-flux case. We obtain determinant-like formulas for the scalar products, generalising a procedure developed in previous literature for standard R-matrices to the present non-conventional situation. We verify our expressions against explicit calculations using Bethe vectors for chains of small length, and perform some computer tests of the exact formulas as far as numerical accuracy allows. This should be the first step towards the derivation of integrable form factors and correlation functions for the AdS_3 S-matrix theory.
|
high energy physics theory
|
The use of artificial neural networks as models of chaotic dynamics has been rapidly expanding, but the theoretical understanding of how neural networks learn chaos remains lacking. Here, we employ a geometric perspective to show that neural networks can efficaciously model chaotic dynamics by themselves becoming structurally chaotic. First, we confirm the efficacy of neural networks in emulating chaos by showing that parsimonious neural networks trained only on few data points suffice to reconstruct strange attractors, extrapolate outside training data boundaries, and accurately predict local divergence rates. Second, we show that the trained network's map comprises a series of geometric stretching, rotation, and compression operations. These geometric operations indicate topological mixing and chaos, explaining why neural networks are naturally suitable to emulate chaotic dynamics.
|
computer science
|
Steady state, spherically symmetric accretion flows are well understood in terms of the Bondi solution. Spherical symmetry, however, is necessarily an idealized approximation to reality. Here we explore the consequences of deviations away from spherical symmetry, first through a simple analytic model to motivate the physical processes involved, and then through hydrodynamical, numerical simulations of an ideal fluid accreting onto a Newtonian gravitating object. Specifically, we consider axisymmetric, large-scale, small amplitude deviations in the density field such that the equatorial plane is over dense as compared to the polar regions. We find that the resulting polar density gradient dramatically alters the Bondi result and gives rise to steady state solutions presenting bipolar outflows. As the density contrast increases, more and more material is ejected from the system, attaining speeds larger than the local escape velocities for even modest density contrasts. Interestingly, interior to the outflow region, the flow tends locally towards the Bondi solution, with a resulting total mass accretion rate through the inner boundary "choking" at a value very close to the corresponding Bondi one. Thus, the numerical experiments performed suggest the appearance of a maximum achievable accretion rate, with any extra material being ejected, even for very small departures from spherical symmetry.
|
astrophysics
|
In this paper we calculate the incremental system production cost associated with a measure of locational power injection uncertainty that can be interpreted as a locational price for tracking power fluctuations. This Locational Price of Variability (LPV) can be used to allocate charges for regulation reserves by location, and hence, can also be used to value distributed energy storage employed to mitigate such fluctuations. We consider policy changes that could enable the implementation of the LPV.
|
electrical engineering and systems science
|
We have experimentally investigated the bouncing behavior and damping performance of a container partially filled with granular chains, namely a chain-filled damper. The motion of the chain-filled damper, recorded with a particle tracing technique, demonstrates that the granular chains can efficiently absorb the collisional energy of the damper. We extract both the restitution coefficient of the first collision and the total flight time to characterize the dissipation ability of the damper. Two containers and three types of granular chains, differing in size, stiffness and restitution coefficient, are used to examine the experimental results. We find that the restitution coefficient of the first collision of a single-chain-filled damper tends linearly to zero with increasing chain length, and we obtain the minimum filling mass required to bring the container to rest at the first collision (no rebound). When a strong impact occurs, the collisional absorption efficiency of a chain-filled damper is superior to that of a monodisperse-particle-filled damper. Furthermore, the longer the chains, the better the dissipative effect.
|
condensed matter
|
The hot-hand theory posits that an athlete who has performed well in the recent past will perform better in the present. We use multilevel logistic regression to test this theory for National Hockey League playoff goaltenders, controlling for a variety of shot-related and game-related characteristics. Our data consists of 48,431 shots for 93 goaltenders in the 2008-2016 playoffs. Using a wide range of shot-based and time-based windows to quantify recent save performance, we consistently find that good recent save performance has a negative effect on the next-shot save probability, which contradicts the hot-hand theory.
|
statistics
|
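A hedged stand-in for the regression in the hot-hand abstract above: a pooled (not multilevel) logistic regression on synthetic shot data, with a recent-save-fraction covariate. The covariates, effect sizes, and data below are invented for illustration only.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the shot-level data: save ~ Bernoulli(p), with
# shot distance and the save fraction over the previous 25 shots as
# covariates. The effect sizes below are invented for illustration.
rng = np.random.default_rng(2)
n = 5000
distance = rng.uniform(5.0, 60.0, n)  # feet from the net
recent = rng.beta(30, 3, n)           # recent save fraction
lin = 1.5 + 0.03 * distance - 0.8 * (recent - recent.mean())
save = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

X = sm.add_constant(np.column_stack([distance, recent]))
fit = sm.Logit(save, X).fit(disp=0)
print(fit.params)  # a negative 'recent' coefficient echoes the paper's finding
```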
Planning in stochastic and partially observable environments is a central issue in artificial intelligence. One commonly used technique for solving such a problem is to first construct an accurate model. Although some recent approaches have been proposed for learning optimal behaviour under model uncertainty, prior knowledge about the environment is still needed to guarantee the performance of the proposed algorithms. Exploiting the benefits of the Predictive State Representations (PSRs) approach for state representation and model prediction, in this paper we introduce an approach for planning from scratch, where an offline PSR model is first learned and then combined with online Monte-Carlo tree search for planning under model uncertainty. By comparing with the state-of-the-art approach to planning with model uncertainty, we demonstrate the effectiveness of the proposed approach, along with a proof of its convergence. The effectiveness and scalability of our proposed approach are also tested on the RockSample problem, which is infeasible for the state-of-the-art BA-POMDP based approaches.
|
computer science
|
In this paper we study the separation between two complexity measures: the degree of a Boolean function as a polynomial over the reals and its block sensitivity. We show that the separation between these two measures can be improved from $ d^2(f) \geq bs(f) $, established by Tal, to $ d^2(f) \geq (\sqrt{10} - 2)bs(f) $. As a corollary, we show that separations between some other complexity measures are not tight as well; for instance, we can improve the recent sensitivity conjecture result by Huang to $s^4(f) \geq (\sqrt{10} - 2)bs(f)$. Our techniques are based on the paper by Nisan and Szegedy and include a more detailed analysis of a symmetrization polynomial. In our next result we show the same type of improvement in the separation between the approximate degree of a Boolean function and its block sensitivity: we show that $deg_{1/3}^2(f) \geq \sqrt{6/101}\, bs(f)$, improving the previous result by Nisan and Szegedy, $ deg_{1/3}(f) \geq \sqrt{bs(f)/6} $. In addition, we construct an example showing that the gap between the constants in the lower bound and in the known upper bound is less than $0.2$. In our last result we study the properties of a conjectured fully sensitive function on 10 variables of degree 4, whose existence would improve the largest known gap between these two measures. We prove that there is only one univariate polynomial that can be achieved by symmetrization of this function, using a combination of interpolation and linear programming techniques.
|
computer science
|
High contact resistance is one of the primary concerns for electronic device applications of two-dimensional (2D) layered semiconductors. Here, we explore the enhanced carrier transport through metal-semiconductor interfaces in WS2 field effect transistors (FETs) by introducing a typical transition metal, Cu, with two different doping strategies: (i) a "generalized" Cu doping by using randomly distributed Cu atoms along the channel and (ii) a "localized" Cu doping by adapting an ultrathin Cu layer at the metal-semiconductor interface. Compared to the pristine WS2 FETs, both the generalized Cu atomic dopant and localized Cu contact decoration can provide a Schottky-to-Ohmic contact transition owing to the reduced contact resistances by 1 - 3 orders of magnitude, and consequently elevate electron mobilities by 5 - 7 times higher. Our work demonstrates that the introduction of transition metal can be an efficient and reliable technique to enhance the carrier transport and device performance in 2D TMD FETs.
|
condensed matter
|
Topological quantum phases underpin many concepts of modern physics. While the existence of disorder-immune topological edge states of electrons usually requires magnetic fields, direct effects of magnetic field on light are very weak. As a result, demonstrations of topological states of photons employ synthetic fields engineered in special complex structures or external time-dependent modulations. Here, we reveal that the quantum Hall phase with topological edge states, spectral Landau levels and Hofstadter butterfly can emerge in a simple quantum system, where topological order arises solely from interactions without any fine-tuning. Such systems, arrays of two-level atoms (qubits) coupled to light being described by the classical Dicke model, have recently been realized in experiments with cold atoms and superconducting qubits. We believe that our finding will open new horizons in several disciplines including quantum physics, many-body physics, and nonlinear topological photonics, and it will set an important reference point for experiments on qubit arrays and quantum simulators.
|
quantum physics
|
While observations have suggested that power-law electron energy spectra are a common outcome of strong energy release during magnetic reconnection, e.g., in solar flares, kinetic simulations have not been able to provide definite evidence of power-laws in energy spectra of non-relativistic reconnection. By means of 3D large-scale fully kinetic simulations, we study the formation of power-law electron energy spectra in non-relativistic low-$\beta$ reconnection. We find that both the global spectrum integrated over the entire domain and local spectra within individual regions of the reconnection layer have power-law tails with a spectral index $p \sim 4$ in the 3D simulation, which persist throughout the non-linear reconnection phase until saturation. In contrast, the spectrum in the 2D simulation rapidly evolves and quickly becomes soft. We show that 3D effects such as self-generated turbulence and chaotic magnetic field lines enable the transport of high-energy electrons across the reconnection layer and allow them to access several main acceleration regions. This leads to a sustained and nearly constant acceleration rate for electrons at different energies. We construct a model that explains the observed power-law spectral index in terms of the dynamical balance between particle acceleration and escape from main acceleration regions, which are defined based upon a threshold for the curvature drift acceleration term. This result could be important for explaining the formation of power-law energy spectrum in solar flares.
|
astrophysics
|
We revisit the residual symmetries that survive orbifold projections and find additional transformations that have been overlooked in the past. Some of these transformations are outer automorphisms of the downstairs continuous symmetry group. Examples of these transformations include the left-right parity of the Pati-Salam model and its left-right symmetric subgroup.
|
high energy physics phenomenology
|
We introduce topological invariants for critical bosonic and fermionic chains. More generally, the symmetry properties of operators in the low-energy conformal field theory (CFT) provide discrete invariants, establishing the notion of symmetry-enriched quantum criticality. For nonlocal operators, these invariants are topological and imply the presence of localized edge modes. Depending on the symmetry, the finite-size splitting of this topological degeneracy can be exponential or algebraic in system size. An example of the former is given by tuning the spin-1 Heisenberg chain to an Ising phase. An example of the latter arises between the gapped Ising and cluster phases: this symmetry-enriched Ising CFT has an edge mode with finite-size splitting $\sim 1/L^{14}$. More generally, our formalism unifies various examples previously studied in the literature. Similar to gapped symmetry-protected topological phases, a given CFT can split into several distinct symmetry-enriched CFTs. This raises the question of classification, to which we give a partial answer---including a complete characterization of symmetry-enriched Ising CFTs.
|
condensed matter
|
This paper proposes a framework designed to generate a closed-loop walking engine for a humanoid robot. In particular, the core of this framework is an abstract dynamics model composed of two masses that represent the lower and upper body of a humanoid robot. Moreover, according to the proposed dynamics model, the low-level controller is formulated as a Linear-Quadratic-Gaussian (LQG) controller that is able to robustly track the desired trajectories. Besides, this framework is fully parametric, which allows using an optimization algorithm to find the optimum parameters. To examine the performance of the proposed framework, a set of simulations using a simulated Nao robot in the RoboCup 3D simulation environment has been carried out. Simulation results show that the proposed framework is capable of providing fast and reliable omnidirectional walking. After optimizing the parameters using a genetic algorithm (GA), the maximum forward walking velocity achieved was $80.5\,cm/s$.
|
computer science
|
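A minimal sketch of the low-level control idea in the walking-engine abstract above: a discrete-time LQR gain (the control half of LQG; the Kalman estimator is omitted) computed for a generic double integrator rather than the paper's two-mass model. The weights and time step are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete-time double integrator, x = [position error, velocity error].
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([100.0, 1.0])  # illustrative state weights
R = np.array([[0.1]])      # illustrative control weight

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQR feedback gain

x = np.array([0.1, 0.0])  # initial tracking error
for _ in range(300):
    u = -K @ x            # state feedback
    x = A @ x + (B @ u).ravel()
print("final tracking error:", x)
```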
We present ALMA band-7 data of the [CII] $\lambda157.74\,\mu{\rm m}$ emission line and underlying far-infrared (FIR) continuum for twelve luminous quasars at $z \simeq 4.8$, powered by fast-growing supermassive black holes (SMBHs). Our total sample consists of eighteen quasars, twelve of which are presented here for the first time. The new sources consist of six Herschel/SPIRE-detected systems, which we define as "FIR-bright" sources, and six Herschel/SPIRE-undetected systems, which we define as "FIR-faint" sources. We determine dust masses for the quasar hosts of $M_{dust} \le 0.2-25.0\times 10^8 M_{\odot}$, implying ISM gas masses comparable to the dynamical masses derived from the [CII] kinematics. It is found that on average the MgII line is blueshifted by $\sim 500\,{\rm km\,s}^{-1}$ with respect to the [CII] emission line, which is also observed when complementing our observations with data from the literature. We find that all of our "FIR-bright" subsample and most of the "FIR-faint" objects lie above the main sequence of star-forming galaxies at $z \sim 5$. We detect companion sub-millimeter galaxies (SMGs) for two sources, both FIR-faint, with a range of projected distances of $\sim20-60$ kpc and with typical velocity shifts of $\left|\Delta v\right| \lesssim200\,{\rm km\,s}^{-1}$ from the quasar hosts. Of our total sample of eighteen quasars, 5/18 are found to have dust-obscured star-forming companions.
|
astrophysics
|
The origin and the evolution of the universe are concealed in the evanescent diffuse extragalactic background radiation (DEBRA). To reveal these signals, the development of innovative ultra-sensitive bolometers operating in the gigahertz band is required. Here, we review the design and experimental realization of two bias-current-tunable sensors based on one dimensional fully superconducting Josephson junctions: the nanoscale transition edge sensor (nano-TES) and the Josephson escape sensor (JES). In particular, we cover the theoretical basis of the sensors operation, the device fabrication, their experimental electronic and thermal characterization, and the deduced detection performance. Indeed, the nano-TES promises a state-of-the-art noise equivalent power (NEP) of about $5 \times 10^{-20}$ W$/\sqrt{\text{Hz}}$, while the JES is expected to show an unprecedented NEP of the order of $10^{-25}$ W$/\sqrt{\text{Hz}}$. Therefore, the nano-TES and JES are strong candidates to push radio astronomy to the next level.
|
condensed matter
|
Cyber-Physical Energy Systems, due to their multi-domain nature, require a holistic validation methodology, which may involve the integration of assets and expertise from various research infrastructures. In this paper, the integration of Supervisory Control and Data Acquisition services into cross-infrastructure experiments is proposed. The method requires a high degree of interoperability among the participating partners and can be applied to extend the capacity as well as the degree of realism of advanced validation methods such as co-simulation, remote hardware-in-the-loop or hybrid simulation. The proposed method is applied to a case study of multi-agent system based control for an islanded microgrid, where real devices from one platform are integrated into the real-time simulation and control platform of a distant infrastructure, in a holistic experimental implementation.
|
electrical engineering and systems science
|
The rising availability of digital traces provides fertile ground for new solutions to both new and old problems in cities. Even though a massive data set analyzed with Data Science methods may provide a powerful solution to a problem, its adoption by relevant stakeholders is not guaranteed, due to adoption blockers such as a lack of interpretability and transparency. In this context, this paper proposes a preliminary methodology toward bridging two disciplines, Data Science and Transportation, to solve urban problems with methods that are suitable for adoption. The methodology is defined by four steps in which people from both disciplines go from algorithm and model definition to the building of a potentially adoptable solution. As a case study, we describe how this methodology was applied to define a model that infers commuting trips, with mode of transportation, from mobile phone data.
|
computer science
|
This paper is concerned with the approximation of blow-up phenomena in nonlinear parabolic problems. We consider the equation $u_t = u_{xx} + |u|^p - b(x)|u_x|^q$ in a bounded domain and study the behavior of the semidiscrete problem. Under some assumptions we show existence and uniqueness of the semidiscrete solution, show that it blows up in finite time, and prove the convergence of the semidiscrete scheme. Finally, we give an approximation of the blow-up rate and the blow-up time of the semidiscrete solution.
|
mathematics
|
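A semidiscrete (method-of-lines) toy for the blow-up problem above, dropping the damping term $b(x)|u_x|^q$ for brevity and using an explicit scheme whose time step shrinks with the solution's size. The initial data, exponent, and stopping threshold are illustrative choices, not the paper's.

```python
import numpy as np

# Method of lines for u_t = u_xx + |u|^p on (0, 1), zero Dirichlet data.
p, N = 3.0, 51
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u = 10.0 * np.sin(np.pi * x)  # large initial data, so the solution blows up
t = 0.0

# Explicit Euler with a time step that shrinks as the solution grows,
# respecting both the diffusive limit and the reaction stiffness.
while np.max(np.abs(u)) < 1e5:
    dt = 0.25 * h**2 / (1.0 + np.max(np.abs(u)) ** (p - 1.0))
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    u = u + dt * (lap + np.abs(u) ** p)
    u[0] = u[-1] = 0.0
    t += dt
print("numerical blow-up time estimate:", t)
```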
Mathematical epidemiological models have broad uses, including both qualitative and quantitative applications. With the increasing availability of data, large-scale quantitative disease spread models can nowadays be formulated. Such models have great potential, e.g., in risk assessments in public health. Their main challenge is model parameterization given surveillance data, a problem which often limits their practical usage. We offer a solution to this problem by developing a Bayesian methodology suitable for epidemiological models driven by network data. The greatest difficulty in obtaining a concentrated parameter posterior is the quality of surveillance data; disease measurements are often scarce and carry little information about the parameters. The often overlooked problem of the model's identifiability therefore needs to be addressed, and we do so using a hierarchy of increasingly realistic known-truth experiments. Our proposed Bayesian approach performs convincingly across all our synthetic tests. From pathogen measurements of shiga toxin-producing Escherichia coli O157 in Swedish cattle, we are able to produce an accurate first-principles statistical model confronted with data. Within this model we explore the potential of a Bayesian public health framework by assessing the efficiency of disease detection and intervention scenarios.
|
statistics
|
Artificial magnetism at optical frequencies can be realized in metamaterials composed of periodic arrays of subwavelength elements, also called "meta-atoms". Optically-induced magnetic moments can be arranged in both unstaggered structures, naturally associated with ferromagnetic (FM) order, or staggered structures, linked correspondingly to antiferromagnetic (AFM) order. Here we demonstrate that such magnetic dipole orders of the lattices of meta-atoms can appear in low-symmetry Mie-resonant metasurfaces where each asymmetric dielectric (non-magnetic) meta-atom supports a localized trapped mode. We reveal that these all-dielectric resonant metasurfaces possess not only strong optical magnetic response but also they demonstrate a significant polarization rotation of the propagating electromagnetic waves at both FM and AFM resonances. We confirm these findings experimentally by measuring directly the spectral characteristics of different modes excited in all-dielectric metasurfaces, and mapping near-field patterns of the electromagnetic fields at the microwave frequencies.
|
physics
|
The occurrence of digits 1 through 9 as the leftmost nonzero digit of numbers from real-world sources is distributed unevenly according to an empirical law, known as Benford's law or the first digit law. It remains obscure why a variety of data sets generated by quite different dynamics obey this particular law. We perform a study of Benford's law by applying the Laplace transform, and find that the logarithmic Laplace spectrum of the digital indicator function can be approximately taken as a constant. This particular constant, being exactly the Benford term, explains the prevalence of Benford's law. Slight variations from the Benford term lead to deviations from Benford's law for distributions that oscillate violently in the inverse Laplace space. We prove that the whole family of completely monotonic distributions satisfies Benford's law within a small bound. Our study suggests that Benford's law originates from the way that we write numbers, and thus should be taken as basic mathematical knowledge.
|
statistics
|
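The first digit law itself is P(d) = log10(1 + 1/d); the snippet below checks it against a heavy-tailed sample. Log-normal data spanning many decades are a standard Benford-compliant source; that choice is an assumption of this demo, not a claim from the paper.

```python
import numpy as np

# Benford's law: P(d) = log10(1 + 1/d) for leading digit d = 1..9.
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=4.0, size=100_000)  # spans many decades
first = np.array([int(f"{v:e}"[0]) for v in x])       # leading nonzero digit
emp = np.bincount(first, minlength=10)[1:10] / first.size

for d in range(1, 10):
    print(d, f"law={benford[d - 1]:.3f}", f"empirical={emp[d - 1]:.3f}")
```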
Compressed sensing in MRI enables high subsampling factors while maintaining diagnostic image quality. This technique enables shortened scan durations and/or improved image resolution. Further, compressed sensing can increase the diagnostic information and value from each scan performed. Overall, compressed sensing has significant clinical impact in improving the diagnostic quality and patient experience for imaging exams. However, a number of challenges exist when moving compressed sensing from research to the clinic. These challenges include hand-crafted image priors, sensitive tuning parameters, and long reconstruction times. Data-driven learning provides a solution to address these challenges. As a result, compressed sensing can have greater clinical impact. In this tutorial, we review the compressed sensing formulation and outline the steps needed to transform it into a deep learning framework. Supplementary open-source code in Python is used to demonstrate this approach with open databases. Further, we discuss considerations in applying data-driven compressed sensing in the clinical setting.
|
electrical engineering and systems science
|
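Before the deep learning reformulation the tutorial above describes, the classical compressed sensing baseline is iterative soft thresholding (ISTA) for min_x 0.5*||A x - y||^2 + lam*||x||_1; the sketch below is that baseline on synthetic data, with all sizes and lam chosen for illustration (unrolling these iterations into a network is one common deep learning route).

```python
import numpy as np

# ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 on synthetic data.
rng = np.random.default_rng(0)
n, m, k = 400, 120, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement operator
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-term gradient
x = np.zeros(n)
for _ in range(500):
    x = x - (A.T @ (A @ x - y)) / L                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```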