Dataset schema (column : dtype : value range):
  title                : string : length 7 to 239
  abstract             : string : length 7 to 2.76k
  cs                   : int64  : 0 or 1
  phy                  : int64  : 0 or 1
  math                 : int64  : 0 or 1
  stat                 : int64  : 0 or 1
  quantitative biology : int64  : 0 or 1
  quantitative finance : int64  : 0 or 1

Each record below gives a paper title, its abstract, and a line with the six binary topic labels.
Informed Sub-Sampling MCMC: Approximate Bayesian Inference for Large Datasets
This paper introduces a framework for speeding up Bayesian inference conducted in the presence of large datasets. We design a Markov chain whose transition kernel uses an unknown, fixed-size fraction of the available data that is randomly refreshed throughout the algorithm. Inspired by the Approximate Bayesian Computation (ABC) literature, the subsampling process is guided by fidelity to the observed data, as measured by summary statistics. The resulting algorithm, Informed Sub-Sampling MCMC (ISS-MCMC), is a generic and flexible approach which, contrary to existing scalable methodologies, preserves the simplicity of the Metropolis-Hastings algorithm. Even though exactness is lost, i.e. the chain distribution only approximates the posterior, we study and quantify this bias theoretically and show on a diverse set of examples that the method yields excellent performance when the computational budget is limited. We also show that setting the summary statistics to the maximum likelihood estimator, if it is available and cheap to compute, is supported by theoretical arguments.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
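To make the idea above concrete, here is a minimal toy sketch of a Metropolis-Hastings sampler whose fixed-size subsample is randomly refreshed, with refreshes guided by how well a summary statistic (here simply the mean) matches the full data. The Gaussian model, all constants, and the ABC-style acceptance kernel are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: infer the mean of a large Gaussian sample under a flat prior.
data = rng.normal(loc=3.0, scale=1.0, size=100_000)
full_stat = data.mean()          # summary statistic of the full dataset
n_sub = 500                      # fixed subsample size

def log_lik(theta, x):           # Gaussian log-likelihood up to a constant
    return -0.5 * np.sum((x - theta) ** 2)

idx = rng.choice(data.size, n_sub, replace=False)
theta, chain = 0.0, []
for it in range(5000):
    # (1) Metropolis-Hastings step on theta, using only the current subsample.
    prop = theta + 0.05 * rng.normal()
    if np.log(rng.uniform()) < log_lik(prop, data[idx]) - log_lik(theta, data[idx]):
        theta = prop
    # (2) Informed refresh: swap one subsample point (duplicates ignored for
    # simplicity), favouring subsamples whose summary statistic stays close
    # to the full-data statistic.
    new_idx = idx.copy()
    new_idx[rng.integers(n_sub)] = rng.integers(data.size)
    old_gap = abs(data[idx].mean() - full_stat)
    new_gap = abs(data[new_idx].mean() - full_stat)
    if np.log(rng.uniform()) < (old_gap - new_gap) / 0.01:   # ABC-style kernel
        idx = new_idx
    chain.append(theta)

print("posterior mean estimate:", np.mean(chain[1000:]))
```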
Mathematical Analysis of Anthropogenic Signatures: The Great Deceleration
Distributions of anthropogenic signatures (impacts and activities) are mathematically analysed. The aim is to understand the Anthropocene and to see whether anthropogenic signatures could be used to determine its beginning. A total of 23 signatures were analysed and results are presented in 31 diagrams. Some of these signatures contain indistinguishable natural components but most of them are of purely anthropogenic origin. Great care was taken to identify abrupt accelerations, which could be used to determine the beginning of the Anthropocene. Results of the analysis can be summarised in three conclusions. 1. Anthropogenic signatures cannot be used to determine the beginning of the Anthropocene. 2. There was no abrupt Great Acceleration around 1950 or around any other time. 3. Anthropogenic signatures are characterised by the Great Deceleration in the second half of the 20th century. The second half of the 20th century does not mark the beginning of the Anthropocene but most likely the beginning of the end of the strong anthropogenic impacts, maybe even the beginning of a transition to a sustainable future. The Anthropocene is a unique stage in human experience but it has no clearly marked beginning and it is probably not a new geological epoch.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Efficient Regret Minimization in Non-Convex Games
We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Spectral Methods for Immunization of Large Networks
Given a network of nodes, minimizing the spread of a contagion using a limited budget is a well-studied problem with applications in network security, viral marketing, social networks, and public health. In real graphs, a virus may infect a node, which in turn infects its neighboring nodes, and this may trigger an epidemic over the whole graph. The goal thus is to select the best k nodes (the budget constraint) to be immunized (vaccinated, screened, filtered) so that the remaining graph is less prone to the epidemic. It is known that the problem is, in all practical models, computationally intractable even for moderate-sized graphs. In this paper we employ ideas from spectral graph theory to define the relevance and importance of nodes. Using novel graph-theoretic techniques, we then design an efficient approximation algorithm to immunize the graph. Theoretical guarantees on the running time of our algorithm show that it is more efficient than any other known solution in the literature. We test the performance of our algorithm on several real-world graphs. Experiments show that our algorithm scales well for large graphs and outperforms state-of-the-art algorithms both in quality (containment of the epidemic) and efficiency (runtime and space complexity).
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
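The abstract does not spell out its algorithm, so the following sketch uses a standard spectral heuristic for this problem as a stand-in: repeatedly remove the node with the largest leading-eigenvector score, which tends to lower the spectral radius of the adjacency matrix, the quantity that governs the epidemic threshold. The graph and budget are illustrative.

```python
import networkx as nx
import numpy as np

def immunize_top_spectral(G, k):
    """Greedy spectral heuristic: repeatedly remove the node with the largest
    leading-eigenvector score; this tends to reduce the spectral radius of
    the adjacency matrix, which governs the epidemic threshold."""
    G = G.copy()
    removed = []
    for _ in range(k):
        centrality = nx.eigenvector_centrality_numpy(G)
        v = max(centrality, key=centrality.get)
        removed.append(v)
        G.remove_node(v)
    return removed, G

G = nx.barabasi_albert_graph(300, 3, seed=1)
rho = lambda H: max(abs(np.linalg.eigvals(nx.to_numpy_array(H))))
nodes, G_imm = immunize_top_spectral(G, k=10)
print(f"spectral radius: {rho(G):.2f} -> {rho(G_imm):.2f}")
```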
Long-time existence of nonlinear inhomogeneous compressible elastic waves
In this paper, we consider nonlinear inhomogeneous compressible elastic waves in three spatial dimensions when the density is a small disturbance around a constant state. In the homogeneous case, almost global existence was established by Klainerman-Sideris [1996, CPAM], and global existence was proved independently by Agemi [2000, Invent. Math.] and Sideris [1996, Invent. Math.; 2000, Ann. Math.]. Here we establish the corresponding almost global and global existence theory in the inhomogeneous case.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Deep Convolutional Denoising of Low-Light Images
The Poisson distribution is used for modeling noise in photon-limited imaging. While canonical examples include relatively exotic types of sensing such as spectral imaging or astronomy, the problem is relevant to regular photography now more than ever due to the booming market for mobile cameras. The restricted form factor limits the amount of absorbed light, so computational post-processing is called for. In this paper, we make use of the powerful framework of deep convolutional neural networks for Poisson denoising. We demonstrate how, by training the same network with images having a specific peak value, our denoiser outperforms the previous state-of-the-art by a large margin both visually and quantitatively. Being flexible and data-driven, our solution obviates the heavy ad hoc engineering used in previous methods and is an order of magnitude faster. We further show that by adding a reasonable prior on the class of the image being processed, another significant boost in performance is achieved.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Towards a fractal cohomology: Spectra of Polya--Hilbert operators, regularized determinants and Riemann zeros
Emil Artin defined a zeta function for algebraic curves over finite fields and made a conjecture about them analogous to the famous Riemann hypothesis. This and other conjectures about these zeta functions would come to be called the Weil conjectures, which were proved by Weil for curves and later, by Deligne for varieties over finite fields. Much work was done in the search for a proof of these conjectures, including the development in algebraic geometry of a Weil cohomology theory for these varieties, which uses the Frobenius operator on a finite field. The zeta function is then expressed as a determinant, allowing the properties of the function to relate to those of the operator. The search for a suitable cohomology theory and associated operator to prove the Riemann hypothesis is still on. In this paper, we study the properties of the derivative operator $D = \frac{d}{dz}$ on a particular weighted Bergman space of entire functions. The operator $D$ can be naturally viewed as the `infinitesimal shift of the complex plane'. Furthermore, this operator is meant to be the replacement for the Frobenius operator in the general case and is used to construct an operator associated to any suitable meromorphic function. We then show that the meromorphic function can be recovered by using a regularized determinant involving the above operator. This is illustrated in some important special cases: rational functions, zeta functions of curves over finite fields, the Riemann zeta function, and culminating in a quantized version of the Hadamard factorization theorem that applies to any entire function of finite order. Our construction is motivated in part by [23] on the infinitesimal shift of the real line, as well as by earlier work of Deninger [10] on cohomology in number theory and a conjectural `fractal cohomology theory' envisioned in [25] and [28].
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
F-index of graphs based on four operations related to the lexicographic product
The forgotten topological index, or F-index, of a graph is defined as the sum of the cubes of the degrees of all the vertices of the graph. In this paper we study the F-index under four operations related to the lexicographic product of graphs, which were introduced by Sarala et al. [D. Sarala, H. Deng, S.K. Ayyaswamy and S. Balachandran, The Zagreb indices of graphs based on four new operations related to the lexicographic product, \textit{Applied Mathematics and Computation}, 309 (2017) 156--169.].
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
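The F-index definition above is easy to state in code. A minimal sketch using networkx; the product example echoes the paper's lexicographic-product theme, and the values in the comments can be verified by hand.

```python
import networkx as nx

def f_index(G):
    """Forgotten topological index: sum of the cubes of all vertex degrees."""
    return sum(d ** 3 for _, d in G.degree())

# Sanity check on the path P4 (degrees 1, 2, 2, 1): F = 1 + 8 + 8 + 1 = 18.
print(f_index(nx.path_graph(4)))                     # -> 18

# F-index of a lexicographic product (every vertex has degree 5 here).
GH = nx.lexicographic_product(nx.cycle_graph(3), nx.path_graph(2))
print(f_index(GH))                                   # -> 6 * 5**3 = 750
```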
Deleting vertices to graphs of bounded genus
We show that a problem of deleting a minimum number of vertices from a graph to obtain a graph embeddable on a surface of a given Euler genus is solvable in time $2^{C_g \cdot k^2 \log k} n^{O(1)}$, where $k$ is the size of the deletion set, $C_g$ is a constant depending on the Euler genus $g$ of the target surface, and $n$ is the size of the input graph. On the way to this result, we develop an algorithm solving the problem in question in time $2^{O((t+g) \log (t+g))} n$, given a tree decomposition of the input graph of width $t$. The results generalize previous algorithms for the surface being a sphere by Marx and Schlotter [Algorithmica 2012], Kawarabayashi [FOCS 2009], and Jansen, Lokshtanov, and Saurabh [SODA 2014].
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Axion dark matter search using the storage ring EDM method
We propose using the storage ring EDM method to search for the axion dark matter induced EDM oscillation in nucleons. The method uses a combination of B and E-fields to produce a resonance between the $g-2$ spin precession frequency and the background axion field oscillation to greatly enhance sensitivity to it. An axion frequency range from $10^{-9}$ Hz to 100 MHz can in principle be scanned with high sensitivity, corresponding to an $f_a$ range of $10^{13} $ GeV $\leq f_a \leq 10^{30}$ GeV, the breakdown scale of the global symmetry generating the axion or axion like particles (ALPs).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Time crystal platform: from quasi-crystal structures in time to systems with exotic interactions
Time crystals are quantum many-body systems which, due to interactions between particles, are able to spontaneously self-organize their motion in a periodic way in time by analogy with the formation of crystalline structures in space in condensed matter physics. In solid state physics properties of space crystals are often investigated with the help of external potentials that are spatially periodic and reflect various crystalline structures. A similar approach can be applied for time crystals, as periodically driven systems constitute counterparts of spatially periodic systems, but in the time domain. Here we show that condensed matter problems ranging from single particles in potentials of quasi-crystal structure to many-body systems with exotic long-range interactions can be realized in the time domain with an appropriate periodic driving. Moreover, it is possible to create molecules where atoms are bound together due to destructive interference if the atomic scattering length is modulated in time.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast and accurate classification of echocardiograms using deep learning
Echocardiography is essential to modern cardiology. However, human interpretation limits high-throughput analysis, keeping echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training and 20 percent reserved for validation and testing on never-before-seen echocardiograms. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent, versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep-learning-assisted echocardiographic interpretation.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Anomaly Detection in Multivariate Non-stationary Time Series for Automatic DBMS Diagnosis
Anomaly detection in database management systems (DBMSs) is difficult because of the increasing number of statistics (stat) and event metrics in big data systems. In this paper, I propose an automatic DBMS diagnosis system that detects anomaly periods with abnormal DB stat metrics and finds causal events within those periods. Reconstruction error from a deep autoencoder and a statistical process control approach are applied to detect time periods with anomalies. Related events are found using time series similarity measures between events and abnormal stat metrics. After training the deep autoencoder with DBMS metric data, the efficacy of anomaly detection is investigated on other DBMSs containing anomalies. Experimental results show the effectiveness of the proposed model, especially the batch temporal normalization layer. The proposed model is used for publishing automatic DBMS diagnosis reports in order to guide DBMS configuration and SQL tuning.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
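A minimal sketch of the detection pipeline described above, with a linear autoencoder (top principal components) standing in for the deep autoencoder and a mean-plus-three-sigma control limit standing in for the statistical process control step; the synthetic data and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DB metrics: 2000 time steps x 10 stat metrics, anomaly injected late.
X = rng.normal(size=(2000, 10))
X[1500:1520] += 4.0

# "Train" a linear autoencoder (top-3 principal components) on the clean
# head of the series; the paper uses a deep autoencoder at this step.
train = X[:1000]
mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
W = Vt[:3]                                    # 3-dimensional bottleneck

recon = (X - mu) @ W.T @ W + mu               # encode, then decode
err = np.linalg.norm(X - recon, axis=1)       # reconstruction error per step

# Statistical process control: flag steps whose error exceeds the
# training mean plus three standard deviations.
thresh = err[:1000].mean() + 3 * err[:1000].std()
print("flagged steps:", np.where(err > thresh)[0])
```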
On the complexity of non-orientable Seifert fibre spaces
In this paper we deal with Seifert fibre spaces, which are compact 3-manifolds admitting a foliation by circles. We give a combinatorial description for these manifolds in all the possible cases: orientable, non-orientable, closed, with boundary. Moreover, we compute a potentially sharp upper bound for their complexity in terms of the invariants of the combinatorial description, extending to the non-orientable case results by Fominykh and Wiest for the orientable case with boundary and by Martelli and Petronio for the closed orientable case.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
MIMO-UFMC Transceiver Schemes for Millimeter Wave Wireless Communications
UFMC modulation is among the most-considered solutions for the realization of beyond-OFDM air interfaces for future wireless networks. This paper focuses on the design and analysis of a UFMC transceiver equipped with multiple antennas and operating at millimeter wave carrier frequencies. The paper provides the full mathematical model of a MIMO-UFMC transceiver, taking into account the presence of hybrid analog/digital beamformers at both ends of the communication link. Then, several detection structures are proposed, both for the case of single-packet isolated transmission and for the case of multiple-packet continuous transmission. In the latter situation, the paper also considers the case in which no guard time is inserted between adjacent packets, trading off an increased level of interference for higher spectral efficiency. At the analysis stage, the considered detection structures and transmission schemes are compared in terms of bit error rate, root mean square error, and system throughput. The numerical results show that the proposed transceiver algorithms are effective and that the linear MMSE data detector is capable of managing well the increased interference brought by the removal of guard times between consecutive packets, thus yielding throughput gains of about 10-13%. The effect of phase noise at the receiver is also numerically assessed, and it is shown that the recursive implementation of the linear MMSE exhibits some degree of robustness against this disturbance.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Modeling human intuitions about liquid flow with particle-based simulation
Humans can easily describe, imagine, and, crucially, predict a wide variety of behaviors of liquids--splashing, squirting, gushing, sloshing, soaking, dripping, draining, trickling, pooling, and pouring--despite tremendous variability in their material and dynamical properties. Here we propose and test a computational model of how people perceive and predict these liquid dynamics, based on coarse approximate simulations of fluids as collections of interacting particles. Our model is analogous to a "game engine in the head", drawing on techniques for interactive simulations (as in video games) that optimize for efficiency and natural appearance rather than physical accuracy. In two behavioral experiments, we found that the model accurately captured people's predictions about how liquids flow among complex solid obstacles, and was significantly better than two alternatives based on simple heuristics and deep neural networks. Our model was also able to explain how people's predictions varied as a function of the liquids' properties (e.g., viscosity and stickiness). Together, the model and empirical results extend the recent proposal that human physical scene understanding for the dynamics of rigid, solid objects can be supported by approximate probabilistic simulation, to the more complex and unexplored domain of fluid dynamics.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Comment on "Spin-Orbit Coupling Induced Gap in Graphene on Pt(111) with Intercalated Pb Monolayer"
Recently, a paper by Klimovskikh et al. was published presenting experimental and theoretical analysis of the graphene/Pb/Pt(111) system. The authors investigate the crystallographic and electronic structure of this graphene-based system by means of LEED, ARPES, and spin-resolved PES of the graphene $\pi$ states in the vicinity of the Dirac point. They demonstrate that an energy gap of approx. 200 meV is opened in the spectral function of graphene directly at the Dirac point and that a spin splitting of 100 meV is detected for the upper part of the Dirac cone. On the basis of spin-resolved photoelectron spectroscopy measurements of the region around the gap, the authors claim that these splittings are of spin-orbit nature and that the observed spin structure confirms the observation of the quantum spin Hall state in graphene proposed in earlier theoretical works. Here we show that a careful systematic analysis of the experimental data presented in that manuscript is needed and that their interpretation requires more critical consideration before such conclusions can be drawn. Our analysis demonstrates that the proposed effects and interpretations are questionable and require further, more careful experiments.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stochastic Non-convex Ordinal Embedding with Stabilized Barzilai-Borwein Step Size
Learning representations from relative similarity comparisons, often called ordinal embedding, has gained rising attention in recent years. Most of the existing methods are batch methods designed mainly around convex optimization, e.g., the projected gradient descent method. However, they are generally time-consuming because a singular value decomposition (SVD) is commonly required at each update, especially when the data size is very large. To overcome this challenge, we propose a stochastic algorithm called SVRG-SBB, which has the following features: (a) it is SVD-free via dropping convexity, with good scalability through the use of a stochastic algorithm, namely stochastic variance reduced gradient (SVRG); and (b) it adapts the step size by introducing a new stabilized Barzilai-Borwein (SBB) method, as the original version designed for convex problems might fail for the considered stochastic \textit{non-convex} optimization problem. Moreover, we show that the proposed algorithm converges to a stationary point at a rate $\mathcal{O}(\frac{1}{T})$ in our setting, where $T$ is the total number of iterations. Numerous simulations and real-world data experiments show the effectiveness of the proposed algorithm in comparison with state-of-the-art methods; in particular, it attains much lower computational cost with good prediction performance.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
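A compact sketch of an SVRG loop with a Barzilai-Borwein step size, on a least-squares toy problem so the result is easy to check (the paper's ordinal embedding objective is non-convex, but the loop has the same shape). The epsilon guard in the denominator is our reading of the stabilization; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares toy problem (convex, for checkability).
A = rng.normal(size=(500, 20))
b = A @ rng.normal(size=20) + 0.01 * rng.normal(size=500)

def grad_i(x, i):            # stochastic gradient of 0.5 * (a_i x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / len(b)

x = np.zeros(20)
eta, x_prev, g_prev = 1e-3, None, None
for epoch in range(30):
    g = full_grad(x)
    if x_prev is not None:
        # Stabilized Barzilai-Borwein step size: BB ratio with an epsilon
        # guard in the denominator against blow-up.
        s, y = x - x_prev, g - g_prev
        eta = (s @ s) / (len(b) * abs(s @ y) + 1e-10)
    x_prev, g_prev = x.copy(), g.copy()
    x_snap, w = x.copy(), x.copy()
    for _ in range(500):     # inner SVRG loop, variance-reduced gradients
        i = rng.integers(len(b))
        w -= eta * (grad_i(w, i) - grad_i(x_snap, i) + g)
    x = w
print("residual:", np.linalg.norm(A @ x - b))
```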
The Mass-Metallicity Relation revisited with CALIFA
We present an updated version of the mass-metallicity relation (MZR) using integral field spectroscopy data obtained from 734 galaxies observed by the CALIFA survey. These unparalleled spatially resolved spectroscopic data allow us to determine the metallicity at the same physical scale ($\mathrm{R_{e}}$) for different calibrators. We obtain MZ relations with similar shapes for all calibrators, once the scale factors among them are taken into account. Based on the analysis of the residuals of the best-fit relation, we do not find any significant secondary relation of the MZR with either the star formation rate (SFR) or the specific SFR for any of the calibrators used in this study. However, we do see a hint of an (s)SFR-dependent deviation of the MZ relation at low masses (M$<$10$^{9.5}$M$_\odot$), where our sample is not complete. We are thus unable to confirm the results by Mannucci et al. (2010), although we cannot exclude that this is due to differences in the analysed datasets. In contrast, our results are inconsistent with the results by Lara-Lopez et al. (2010), and we can exclude the presence of a SFR-Mass-Oxygen abundance Fundamental Plane. These results agree with previous findings suggesting that either (1) the secondary relation with the SFR could be induced by an aperture effect in single-fiber/aperture spectroscopic surveys, (2) it could be related to a local effect confined to the central regions of galaxies, or (3) it is restricted to the low-mass regime, or a combination of the three effects.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
(p,q)-webs of DIM representations, 5d N=1 instanton partition functions and qq-characters
Instanton partition functions of $\mathcal{N}=1$ 5d Super Yang-Mills reduced on $S^1$ can be engineered in type IIB string theory from the $(p,q)$-branes web diagram. To this diagram is superimposed a web of representations of the Ding-Iohara-Miki (DIM) algebra that acts on the partition function. In this correspondence, each segment is associated to a representation, and the (topological string) vertex is identified with the intertwiner operator constructed by Awata, Feigin and Shiraishi. We define a new intertwiner acting on the representation spaces of levels $(1,n)\otimes(0,m)\to(1,n+m)$, thereby generalizing to higher rank $m$ the original construction. It allows us to use a folded version of the usual $(p,q)$-web diagram, bringing great simplifications to actual computations. As a result, the characterization of Gaiotto states and vertical intertwiners, previously obtained by some of the authors, is uplifted to operator relations acting in the Fock space of horizontal representations. We further develop a method to build qq-characters of linear quivers based on the horizontal action of DIM elements. While fundamental qq-characters can be built using the coproduct, higher ones require the introduction of a (quantum) Weyl reflection acting on tensor products of DIM generators.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Self-Motion of the 3-PPPS Parallel Robot with Delta-Shaped Base
This paper presents the kinematic analysis of the 3-PPPS parallel robot with an equilateral mobile platform and an equilateral-shaped base. As for the other 3-PPPS robots studied in the literature, it is proved that the parallel singularities depend only on the orientation of the end-effector. Quaternion parameters are used to represent the singularity surfaces. The study of the direct kinematic model shows that this robot admits a self-motion of the Cardanic type, which explains why the direct kinematic model admits an infinite number of solutions at the "home" position in the center of the workspace, a property that had never been studied until now.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Scaling-Up Reasoning and Advanced Analytics on BigData
BigDatalog is an extension of Datalog that achieves performance and scalability on both Apache Spark and multicore systems, to the point that its graph analytics outperform those written in GraphX. Looking back, we see how this realizes the ambitious goal pursued by deductive database researchers beginning forty years ago: combining the rigor and power of logic in expressing queries and reasoning with the performance and scalability by which relational databases managed Big Data. This goal led to Datalog, which is based on Horn clauses like Prolog but employs implementation techniques, such as semi-naive fixpoint and magic sets, that extend the bottom-up computation model of relational systems, and thus obtain the performance and scalability that relational systems had achieved, as far back as the 80s, using data parallelization on shared-nothing architectures. But this goal proved difficult to achieve because of major issues (i) at the language level and (ii) at the system level. The paper describes how (i) was addressed by simple rules under which the fixpoint semantics extends to programs using count, sum and extrema in recursion, and (ii) was tamed by parallel compilation techniques that achieve scalability on multicore systems and Apache Spark. This paper is under consideration for acceptance in Theory and Practice of Logic Programming (TPLP).
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
ALMA Observations of the Gravitational Lens SDP.9
We present long-baseline ALMA observations of the strong gravitational lens H-ATLAS J090740.0-004200 (SDP.9), which consists of an elliptical galaxy at $z_{\mathrm{L}}=0.6129$ lensing a background submillimeter galaxy into two extended arcs. The data include Band 6 continuum observations, as well as CO $J$=6$-$5 molecular line observations, from which we measure an updated source redshift of $z_{\mathrm{S}}=1.5747$. The image morphology in the ALMA data is different from that of the HST data, indicating a spatial offset between the stellar, gas, and dust components of the source galaxy. We model the lens as an elliptical power-law density profile with external shear, using a combination of archival HST data and conjugate points identified in the ALMA data. Our best model has an Einstein radius of $\theta_{\mathrm{E}}=0.66''\pm0.01''$ and a slightly steeper than isothermal mass profile slope. We search for the central image of the lens, which can be used to constrain the inner mass distribution of the lens galaxy, including the central supermassive black hole, but do not detect it in the integrated CO image at a 3$\sigma$ rms level of 0.0471 Jy km s$^{-1}$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Noether numbers and the Davenport constants of the groups of order less than 32
The computation of the Noether numbers of all groups of order less than thirty-two is completed. It turns out that for these groups in non-modular characteristic the Noether number is attained on a multiplicity free representation, it is strictly monotone on subgroups and factor groups, and it does not depend on the characteristic. Algorithms are developed and used to determine the small and large Davenport constants of these groups. For each of these groups the Noether number is greater than the small Davenport constant, whereas the first example of a group whose Noether number exceeds the large Davenport constant is found, answering partially a question posed by Geroldinger and Grynkiewicz.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Multistability and coexisting soliton combs in ring resonators: the Lugiato-Lefever approach
We report that the Lugiato-Lefever equation describing frequency comb generation in ring resonators with localized pump and loss terms also describes simultaneous nonlinear resonances leading to the multistability of nonlinear modes and to coexisting solitons associated with spectrally distinct frequency combs.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Numerical assessment of the percolation threshold using complement networks
Models of percolation processes on networks currently assume locally tree-like structures at low densities, and are derived exactly only in the thermodynamic limit. Finite-size effects and the presence of short loops in real systems, however, cause a deviation between the empirical percolation threshold $p_c$ and its model-predicted value $\pi_c$. Here we show the existence of an empirical linear relation between $p_c$ and $\pi_c$ across a large number of real and model networks. Such a putatively universal relation can then be used to correct the estimated value of $\pi_c$. We further show how to obtain a more precise relation using the concept of the complement graph, by investigating the connection between the percolation threshold of a network, $p_c$, and that of its complement, $\bar{p}_c$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
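A small simulation sketch of the quantities discussed above: the empirical bond-percolation threshold of a graph and of its complement, compared against one standard model prediction, $\pi_c = 1/\lambda_{\max}$ (the quenched mean-field estimate). The graph size and the giant-component criterion are illustrative assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

def giant_fraction(G, p, trials=10):
    """Mean relative size of the largest component under bond percolation:
    each edge is kept independently with probability p."""
    out = []
    for _ in range(trials):
        H = nx.Graph()
        H.add_nodes_from(G)
        H.add_edges_from(e for e in G.edges if rng.random() < p)
        out.append(len(max(nx.connected_components(H), key=len)) / len(G))
    return np.mean(out)

def p_c(G, target=0.05):
    """Crude empirical threshold: smallest p whose giant component
    exceeds a `target` fraction of the nodes."""
    return next((p for p in np.linspace(0, 1, 41)
                 if giant_fraction(G, p) > target), 1.0)

G = nx.erdos_renyi_graph(200, 0.03, seed=1)
lam = max(np.linalg.eigvalsh(nx.to_numpy_array(G)))
print("empirical p_c(G)       ~", p_c(G))
print("predicted pi_c = 1/lam ~", 1 / lam)
print("empirical p_c(Gbar)    ~", p_c(nx.complement(G)))
```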
Planar graphs as L-intersection or L-contact graphs
L-intersection graphs are the graphs that have a representation as intersection graphs of axis-parallel L shapes in the plane. A subfamily of these graphs are the {L, |, --}-contact graphs, which are the contact graphs of axis-parallel L, |, and -- shapes in the plane. We prove here two results that were conjectured by Chaplick and Ueckerdt in 2013. We show that planar graphs are L-intersection graphs, and that triangle-free planar graphs are {L, |, --}-contact graphs. These results are obtained by a new and simple decomposition technique for 4-connected triangulations. Our results also provide a much simpler proof of the known fact that planar graphs are segment intersection graphs.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An EPTAS for Scheduling on Unrelated Machines of Few Different Types
In the classical problem of scheduling on unrelated parallel machines, a set of jobs has to be assigned to a set of machines. The jobs have machine-dependent processing times, and the goal is to minimize the makespan, that is, the maximum machine load. It is well known that this problem is NP-hard and does not allow polynomial time approximation algorithms with approximation guarantees smaller than $1.5$ unless P$=$NP. We consider the case in which there is only a constant number $K$ of machine types. Two machines have the same type if all jobs have the same processing time on them. This variant of the problem is strongly NP-hard already for $K=1$. We present an efficient polynomial time approximation scheme (EPTAS) for the problem, that is, for any $\varepsilon > 0$ an assignment with makespan at most $(1+\varepsilon)$ times the optimum can be found in time polynomial in the input length, with an exponent independent of $1/\varepsilon$. In particular, we achieve a running time of $2^{\mathcal{O}(K\log(K) \frac{1}{\varepsilon}\log^4 \frac{1}{\varepsilon})}+\mathrm{poly}(|I|)$, where $|I|$ denotes the input length. Furthermore, we study three other problem variants and present an EPTAS for each of them: the Santa Claus problem, where the minimum machine load has to be maximized; the case of scheduling on unrelated parallel machines with a constant number of uniform types, where machines of the same type behave like uniformly related machines; and the multidimensional vector scheduling variant of the problem, where both the dimension and the number of machine types are constant. For the Santa Claus problem we achieve the same running time. The results are achieved using mixed integer linear programming and rounding techniques.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
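The following tiny brute-force sketch illustrates the machine-types structure that the EPTAS exploits: a job has one processing time per type, not per machine. Exhaustive search is of course only feasible at toy scale and merely stands in for the scheme described above; the instance is illustrative.

```python
import itertools

def makespan(jobs, machine_types, assignment):
    """Max load; jobs[j][t] is the processing time of job j on a type-t machine."""
    loads = [0.0] * len(machine_types)
    for job, m in enumerate(assignment):
        loads[m] += jobs[job][machine_types[m]]
    return max(loads)

machine_types = [0, 0, 1, 1]                    # 4 machines, K = 2 types
jobs = [(2, 4), (3, 1), (2, 2), (5, 2), (1, 6), (4, 3)]
best = min(itertools.product(range(4), repeat=len(jobs)),
           key=lambda a: makespan(jobs, machine_types, a))
print("assignment:", best, "makespan:", makespan(jobs, machine_types, best))
```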
An extensive impurity-scattering study on the pairing symmetry of monolayer FeSe films on SrTiO3
Determining the pairing symmetry in monolayer FeSe films on SrTiO3 is a prerequisite for understanding the high superconducting transition temperature in this system, which has attracted intense theoretical and experimental study but remains controversial. Here, by introducing several types of point defects in monolayer FeSe films, we conduct a systematic investigation of the impurity-induced electronic states by spatially resolved scanning tunneling spectroscopy. Ranging from surface adsorption and chemical substitution to intrinsic structural modification, these defects generate a variety of scattering strengths, offering new insights into the pairing symmetry.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast Depth Imaging Denoising with the Temporal Correlation of Photons
This paper proposes a novel method to filter out the false alarms of a LiDAR system by using the temporal correlation of target-reflected photons. Because of inevitable noise, due to background light and the dark counts of the detector, depth imaging with a LiDAR system suffers from large estimation errors. Our method combines the Poisson statistical model with the different distributions of signal and noise along the time axis. By selecting a proper threshold, our method can effectively filter out the false alarms of the system and use the ToFs of detected signal photons to rebuild the depth image of the scene. The experimental results reveal that our method can quickly distinguish the distance between two close objects, which would otherwise be confused due to high background noise, and acquire an accurate depth image of the scene. Our method does not increase the complexity of the system and is useful in power-limited depth imaging.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
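A one-pixel numerical sketch of the thresholding idea: under the Poisson noise model, a time-of-flight bin is kept only if its photon count is implausible under background alone. The rates, pulse counts, and the 8-sigma margin are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-pixel ToF histogram: uniform background plus a time-correlated
# signal peak at the true depth bin.
n_bins, n_pulses, true_bin = 1000, 200, 420
counts = rng.poisson(0.002 * n_pulses, size=n_bins)      # background counts
counts[true_bin] += rng.poisson(0.05 * n_pulses)         # signal photons

# Keep only bins whose count is implausible under the Poisson background
# model alone (mean + 8 * sqrt(mean) as a conservative margin).
lam = 0.002 * n_pulses
signal_bins = np.where(counts > lam + 8 * np.sqrt(lam))[0]
print("estimated depth bin(s):", signal_bins)            # expect [420]
```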
From Propositional Logic to Plausible Reasoning: A Uniqueness Theorem
We consider the question of extending propositional logic to a logic of plausible reasoning, and posit four requirements that any such extension should satisfy. Each is a requirement that some property of classical propositional logic be preserved in the extended logic; as such, the requirements are simpler and less problematic than those used in Cox's Theorem and its variants. As with Cox's Theorem, our requirements imply that the extended logic must be isomorphic to (finite-set) probability theory. We also obtain specific numerical values for the probabilities, recovering the classical definition of probability as a theorem, with truth assignments that satisfy the premise playing the role of the "possible cases."
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
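The recovered classical definition of probability is easy to state in code: with truth assignments as the "possible cases", P(A | X) is the fraction of models of the premise X that also satisfy A. A minimal enumeration sketch (the particular formulas are illustrative):

```python
from itertools import product

atoms = ("a", "b", "c")

def models(formula):
    """Truth assignments (the 'possible cases') that satisfy the formula."""
    return [w for w in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, w)))]

premise = lambda w: w["a"] or w["b"]     # X = a OR b
query = lambda w: w["a"]                 # A = a

# Classical definition recovered as a theorem: P(A | X) = #(A and X) / #(X).
p = len(models(lambda w: query(w) and premise(w))) / len(models(premise))
print(p)                                 # 4/6 = 0.666...
```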
Discovering objects and their relations from entangled scene representations
Our world can be succinctly and compactly described as structured scenes of objects and relations. A typical room, for example, contains salient objects such as tables, chairs and books, and these objects typically relate to each other by their underlying causes and semantics. This gives rise to correlated features, such as position, function and shape. Humans exploit knowledge of objects and their relations for learning a wide spectrum of tasks, and more generally when learning the structure underlying observed data. In this work, we introduce relation networks (RNs) - a general purpose neural network architecture for object-relation reasoning. We show that RNs are capable of learning object relations from scene description data. Furthermore, we show that RNs can act as a bottleneck that induces the factorization of objects from entangled scene description inputs, and from distributed deep representations of scene images provided by a variational autoencoder. The model can also be used in conjunction with differentiable memory mechanisms for implicit relation discovery in one-shot learning tasks. Our results suggest that relation networks are a potentially powerful architecture for solving a variety of problems that require object relation reasoning.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Hall effect spintronics for gas detection
We present the concept of magnetic gas detection by the Extraordinary Hall effect (EHE). The technique is compatible with existing conductometric gas detection technologies and allows simultaneous measurement of two independent parameters: resistivity and magnetization affected by the target gas. The feasibility of the approach is demonstrated by detecting low-concentration hydrogen using thin CoPd films as the sensor material. The Hall effect sensitivity of the optimized samples exceeds 240% per 10^4 ppm at hydrogen concentrations below 0.5% in a hydrogen/nitrogen atmosphere, which is more than two orders of magnitude higher than the sensitivity of conductance detection.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
An evolutionary strategy for DeltaE - E identification
In this article we present an automatic method for the charge and mass identification of charged nuclear fragments produced in heavy ion collisions at intermediate energies. The algorithm combines a generative model of the DeltaE - E relation with a Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES). CMA-ES is a stochastic, derivative-free method employed to search the parameter space of the model by means of a fitness function. The article describes the details of the method along with the results of an application to simulated labeled data.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
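A toy sketch of the fitness-driven loop described above, fitting a crude generative DeltaE-E law with a bare-bones evolution strategy. A full CMA-ES would additionally adapt a covariance matrix (e.g., via the `cma` package); the model form, constants, and the 1/5-style step-size rule here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative DeltaE-E data: DeltaE ~ a / E**b (a crude Bethe-like law;
# the paper's generative model is richer than this stand-in).
E = rng.uniform(10, 100, size=200)
dE = 500.0 / E**0.8 + rng.normal(scale=0.2, size=E.size)

def fitness(p):                      # mean squared error of the model fit
    a, b = p
    return np.mean((dE - a / E**b) ** 2)

# Minimal (1+lambda) evolution strategy with multiplicative step adaptation;
# CMA-ES would also adapt the full covariance of the search distribution.
x, sigma = np.array([100.0, 0.5]), 1.0
for gen in range(300):
    offspring = x + sigma * rng.normal(size=(10, 2))
    best = min(offspring, key=fitness)
    if fitness(best) < fitness(x):
        x, sigma = best, sigma * 1.2     # success: keep and widen the search
    else:
        sigma *= 0.85                    # failure: narrow the search
print("fitted (a, b):", x)
```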
Contrastive Training for Models of Information Cascades
This paper proposes a model of information cascades as directed spanning trees (DSTs) over observed documents. In addition, we propose a contrastive training procedure that exploits partial temporal ordering of node infections in lieu of labeled training links. This combination of model and unsupervised training makes it possible to improve on models that use infection times alone and to exploit arbitrary features of the nodes and of the text content of messages in information cascades. With only basic node and time lag features similar to previous models, the DST model achieves performance with unsupervised training comparable to strong baselines on a blog network inference task. Unsupervised training with additional content features achieves significantly better results, reaching half the accuracy of a fully supervised model.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Structure of the Broad-Line Region In Active Galactic Nuclei. II. Dynamical Modeling of Data from the AGN10 Reverberation Mapping Campaign
We present inferences on the geometry and kinematics of the broad-Hbeta line-emitting region in four active galactic nuclei monitored as a part of the fall 2010 reverberation mapping campaign at MDM Observatory led by the Ohio State University. From modeling the continuum variability and response in emission-line profile changes as a function of time, we infer the geometry of the Hbeta- emitting broad line regions to be thick disks that are close to face-on to the observer with kinematics that are well-described by either elliptical orbits or inflowing gas. We measure the black hole mass to be log (MBH) = 7.25 (+/-0.10) for Mrk 335, 7.86 (+0.20, -0.17) for Mrk 1501, 7.84 (+0.14, -0.19) for 3C 120, and 6.92 (+0.24, -0.23) for PG 2130+099. These black hole mass measurements are not based on a particular assumed value of the virial scale factor f, allowing us to compute individual f factors for each target. Our results nearly double the number of targets that have been modeled in this manner, and investigate the properties of a more diverse sample by including previously modeled objects. We measure an average scale factor f in the entire sample to be log10(f) = 0.54 +/- 0.17 when the line dispersion is used to characterize the line width, which is consistent with values derived using the normalization of the MBH-sigma relation. We find that the scale factor f for individual targets is likely correlated with the black hole mass, inclination angle, and opening angle of the broad line region but we do not find any correlation with the luminosity.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Unified Framework for Long Range and Cold Start Forecasting of Seasonal Profiles in Time Series
Providing long-range forecasts is a fundamental challenge in time series modeling, which is only compounded by the challenge of having to form such forecasts when a time series has never previously been observed. The latter challenge is the time series version of the cold-start problem seen in recommender systems which, to our knowledge, has not been addressed in previous work. A similar problem occurs when a long range forecast is required after only observing a small number of time points --- a warm start forecast. With these aims in mind, we focus on forecasting seasonal profiles---or baseline demand---for periods on the order of a year in three cases: the long range case with multiple previously observed seasonal profiles, the cold start case with no previous observed seasonal profiles, and the warm start case with only a single partially observed profile. Classical time series approaches that perform iterated step-ahead forecasts based on previous observations struggle to provide accurate long range predictions; in settings with little to no observed data, such approaches are simply not applicable. Instead, we present a straightforward framework which combines ideas from high-dimensional regression and matrix factorization on a carefully constructed data matrix. Key to our formulation and resulting performance is leveraging (1) repeated patterns over fixed periods of time and across series, and (2) metadata associated with the individual series; without this additional data, the cold-start/warm-start problems are nearly impossible to solve. We demonstrate that our framework can accurately forecast an array of seasonal profiles on multiple large scale datasets.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
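A minimal sketch of the framework's two ingredients on synthetic data: a low-rank factorization of the profile matrix captures seasonal shapes repeated across series, and a regression from metadata to factor loadings enables a cold-start forecast for a never-observed series. All dimensions and the generating model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 50 series x 52 weekly points; each profile mixes two
# seasonal shapes with weights driven by 3 metadata features.
n, T, d = 50, 52, 3
t = np.arange(T)
shapes = np.stack([np.sin(2 * np.pi * t / T), np.cos(4 * np.pi * t / T)])
meta = rng.normal(size=(n, d))
B_true = rng.normal(size=(d, 2))
Y = meta @ B_true @ shapes + 0.1 * rng.normal(size=(n, T))

# Fit: a low-rank factorization Y ~ U V captures shared seasonal shapes ...
U, s, V = np.linalg.svd(Y, full_matrices=False)
U, V = U[:, :2] * s[:2], V[:2]
# ... and a metadata -> loadings regression enables cold-start forecasts.
B_hat, *_ = np.linalg.lstsq(meta, U, rcond=None)

x_new = rng.normal(size=d)          # metadata of a never-observed series
forecast = x_new @ B_hat @ V        # its forecast full-year profile
print(forecast[:5])
```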
A Multi-Ringed, Modestly-Inclined Protoplanetary Disk around AA Tau
AA Tau is the archetype for a class of stars with a peculiar periodic photometric variability thought to be related to a warped inner disk structure with a nearly edge-on viewing geometry. We present high resolution ($\sim$0.2") ALMA observations of the 0.87 and 1.3~mm dust continuum emission from the disk around AA Tau. These data reveal an evenly spaced three-ringed emission structure, with distinct peaks at 0.34", 0.66", and 0.99", all viewed at a modest inclination of 59.1$^{\circ}\pm$0.3$^{\circ}$ (decidedly not edge-on). In addition to this ringed substructure, we find non-axisymmetric features including a `bridge' of emission that connects opposite sides of the innermost ring. We speculate on the nature of this `bridge' in light of accompanying observations of HCO$^+$ and $^{13}$CO (J=3--2) line emission. The HCO$^+$ emission is bright interior to the innermost dust ring, with a projected velocity field that appears rotated with respect to the resolved disk geometry, indicating the presence of a warp or inward radial flow. We suggest that the continuum bridge and HCO$^+$ line kinematics could originate from gap-crossing accretion streams, which may be responsible for the long-duration dimming of optical light from AA Tau.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Verifying the Medical Specialty from User Profile of Online Community for Health-Related Advices
This paper describes methods for verifying the medical specialty claimed in a user profile of an online community for health-related advice. To avoid critical situations arising from the proliferation of unverified and inaccurate information in medical online communities, it is necessary to develop a comprehensive software solution for verifying the medical specialty of users of such communities. An algorithm for forming the information profile of a medical online community user is designed. The scheme of a system for deriving indicators of a user's professional specialization from a training sample is presented. A method for forming the user information profile of an online community for health-related advice by computer-linguistic analysis of its information content is suggested. A system of indicators based on a training sample of users of medical online communities is formed. A matrix of medical specialty indicators and a method for determining the weight coefficients of these indicators are investigated. The proposed method of verifying the medical specialty from a user profile is tested in an online medical community.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stress-Based Navigation for Microscopic Robots in Viscous Fluids
Objects moving in fluids experience patterns of stress on their surfaces determined by their motion and the geometry of nearby boundaries. Fish and underwater robots can use these patterns for navigation. This paper extends this stress-based navigation to microscopic robots in tiny vessels, where robots can exploit the physics of fluids at low Reynolds number. This applies, for instance, in vessels with sizes and flow speeds comparable to those of capillaries in biological tissues. We describe how a robot can use simple computations to estimate its motion, orientation and distance to nearby vessel walls from fluid-induced stresses on its surface. Numerically evaluating these estimates for a variety of vessel sizes and robot positions shows they are most accurate when robots are close to vessel walls.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The $(-β)$-shift and associated Zeta Function
Given a real number $\beta > 1$, we study the associated $(-\beta)$-shift introduced by S. Ito and T. Sadahiro. We compare some aspects of the $(-\beta)$-shift with those of the $\beta$-shift. When the expansion in base $-\beta$ of $-\frac{\beta}{\beta+1}$ is periodic with odd period, or when $\beta$ is strictly less than the golden ratio, the $(-\beta)$-shift as defined by S. Ito and T. Sadahiro cannot be coded because its language is not transitive. This intransitivity of words explains the existence of gaps in the interval. We observe that an intransitive word appears in the $(-\beta)$-expansion of a real number taken in a gap. Furthermore, we determine the zeta function $\zeta_{-\beta}$ of the $(-\beta)$-transformation and the associated lap-counting function $L_{T_{-\beta}}$. These two functions are related by $\zeta_{-\beta}=(1-z^2)L_{T_{-\beta}}$. We observe some similarities with the zeta function of the $\beta$-transformation. The function $\zeta_{-\beta}$ is meromorphic in the unit disk, is holomorphic in the open disk $\{z : |z| < \frac{1}{\beta}\}$, has a simple pole at $\frac{1}{\beta}$, and has no other singularities $z$ with $|z| = \frac{1}{\beta}$. We also note an influence of the gaps ($\beta$ less than the golden ratio) on the zeta function: in the factors of the denominator of $\zeta_{-\beta}$, the coefficients count the words generating gaps.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Configurational forces in electronic structure calculations using Kohn-Sham density functional theory
We derive the expressions for configurational forces in Kohn-Sham density functional theory, which correspond to the generalized variational force computed as the derivative of the Kohn-Sham energy functional with respect to the position of a material point $\textbf{x}$. These configurational forces, which result from the inner variations of the Kohn-Sham energy functional, provide a unified framework to compute atomic forces as well as the stress tensor for geometry optimization. Importantly, owing to the variational nature of the formulation, these configurational forces inherently account for the Pulay corrections. The formulation presented in this work treats both pseudopotential and all-electron calculations in a single framework, and employs a local variational real-space formulation of Kohn-Sham DFT expressed in terms of non-orthogonal wavefunctions that is amenable to reduced-order scaling techniques. We demonstrate the accuracy and performance of the proposed configurational force approach on benchmark all-electron and pseudopotential calculations conducted using higher-order finite-element discretization. To this end, we examine the rates of convergence of the finite-element discretization in the computed forces and stresses for various materials systems, and further verify the accuracy by finite-differencing the energy. Wherever applicable, we also compare the forces and stresses with those obtained from Kohn-Sham DFT calculations employing a plane-wave basis (pseudopotential calculations) and a Gaussian basis (all-electron calculations). Finally, we verify the accuracy of the forces on large materials systems involving a metallic aluminum nanocluster containing 666 atoms and an alkane chain containing 902 atoms, where the Kohn-Sham electronic ground state is computed using a reduced-order scaling subspace projection technique (P. Motamarri and V. Gavini, Phys. Rev. B 90, 115127).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Utilizing Bluetooth and Adaptive Signal Control Data for Urban Arterials Safety Analysis
Real-time safety analysis has become a hot research topic as it can more accurately reveal the relationships between real-time traffic characteristics and crash occurrence, and the results can be applied to improve active traffic management systems and enhance safety performance. Most previous studies have focused on freeways and seldom on arterials. This study attempts to examine the relationship between crash occurrence and real-time traffic and weather characteristics based on four urban arterials in Central Florida. Considering the substantial difference between interrupted urban arterials and access-controlled freeways, adaptive signal phasing data were introduced in addition to the traditional traffic data. Bayesian conditional logistic models were developed by incorporating the Bluetooth, adaptive signal control, and weather data, which were extracted for a period of 20 minutes (four 5-minute intervals) before the time of crash occurrence. Model comparison results indicated that the model based on the 5-10 minute interval dataset performs best. It revealed that average speed, upstream left-turn volume, downstream green ratio, and a rainy indicator have significant effects on crash occurrence. Furthermore, both Bayesian random parameters logistic and Bayesian random parameters conditional logistic models were developed for comparison with the Bayesian conditional logistic model, and the Bayesian random parameters conditional logistic model was found to have the best model performance in terms of AUC and DIC values. These results are important for real-time safety applications in the context of Integrated Active Traffic Management.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Nernst current from the conformal anomaly in Dirac and Weyl semimetals
We show that a conformal anomaly in Weyl/Dirac semimetals generates a bulk electric current perpendicular to a temperature gradient and the direction of a background magnetic field. The associated conductivity of this novel contribution to the Nernst effect is fixed by a beta function associated with the electric charge renormalization in the material.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Employing both Gender and Emotion Cues to Enhance Speaker Identification Performance in Emotional Talking Environments
Speaker recognition performance in emotional talking environments is not as high as in neutral talking environments. This work focuses on proposing, implementing, and evaluating a new approach to enhance performance in emotional talking environments. The proposed approach identifies the unknown speaker using both his/her gender and emotion cues. Both Hidden Markov Models (HMMs) and Suprasegmental Hidden Markov Models (SPHMMs) are used as classifiers in this work. The approach has been tested on our collected emotional speech database, which is composed of six emotions. The results show that speaker identification performance based on both gender and emotion cues is higher than that based on gender cues only, emotion cues only, and neither gender nor emotion cues by 7.22%, 4.45%, and 19.56%, respectively. This work also shows that the optimum speaker identification performance occurs when the classifiers are completely biased towards suprasegmental models, with no contribution from acoustic models, in emotional talking environments. The achieved average speaker identification performance based on the proposed approach falls within 2.35% of that obtained in subjective evaluation by human judges.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Thermal Pressure in Diffuse H2 Gas Measured by Herschel [C II] Emission and FUSE UV H2 Absorption
UV absorption studies with FUSE have observed H2 molecular gas in translucent and diffuse clouds. Observations of the 158 micron [C II] fine structure line with Herschel also trace the same H2 molecular gas, in emission. We present [C II] observations along 27 lines of sight (LOSs) towards target stars, of which 25 have FUSE H2 UV absorption. We detect [C II] emission features in all but one target LOS. For three target LOSs, which are close to the Galactic plane, we also present position-velocity maps of [C II] emission observed by HIFI in on-the-fly spectral line mapping. We use the velocity-resolved [C II] spectra towards the target LOSs observed by FUSE to identify [C II] velocity components associated with the H2 clouds. We analyze the observed velocity-integrated [C II] spectral line intensities in terms of the densities and thermal pressures in the H2 gas, using the H2 column densities and temperatures measured by the UV absorption data. We present the H2 gas densities and thermal pressures for 26 target LOSs and from the [C II] intensities derive a mean thermal pressure in the range 6100 to 7700 K cm^-3 in diffuse H2 clouds. We discuss the thermal pressures and densities towards 14 targets, comparing them to results obtained using the UV absorption data for two other tracers, CI and CO.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Dictionary Approach to Identifying Transient RFI
As radio telescopes become more sensitive, the damaging effects of radio frequency interference (RFI) become more apparent. Near radio telescope arrays, RFI sources are often easily removed or replaced; the challenge lies in identifying them. Transient (impulsive) RFI is particularly difficult to identify. We propose a novel dictionary-based approach to transient RFI identification. RFI events are treated as sequences of sub-events, drawn from particular labelled classes. We demonstrate an automated method of extracting and labelling sub-events using a dataset of transient RFI. A dictionary of labels may be used in conjunction with hidden Markov models to identify the sources of RFI events reliably. We attain improved classification accuracy over traditional approaches such as SVMs or a naïve kNN classifier. Finally, we investigate why transient RFI is difficult to classify. We show that cluster separation in the principal components domain is influenced by the mains supply phase for certain sources.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
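A small sketch of the dictionary-plus-HMM classification step using the hmmlearn package: one Gaussian HMM is fit per labelled source class, and a new event is attributed to the class with the highest log-likelihood. The synthetic event generator and model sizes are illustrative assumptions, not the paper's feature pipeline.

```python
import numpy as np
from hmmlearn import hmm            # pip install hmmlearn

rng = np.random.default_rng(0)

def make_events(offset, n=30, length=40):
    """Toy 1-D feature sequences for RFI events from one source class."""
    return [offset + 0.1 * np.cumsum(rng.normal(size=(length, 1)), axis=0)
            for _ in range(n)]

classes = {"source_A": make_events(0.0), "source_B": make_events(2.0)}

# Fit one HMM per labelled source class (the 'dictionary' of sub-event
# sequences); attribute a new event to the class of highest likelihood.
models = {}
for name, seqs in classes.items():
    m = hmm.GaussianHMM(n_components=3, n_iter=50, random_state=0)
    m.fit(np.concatenate(seqs), [len(s) for s in seqs])
    models[name] = m

test = make_events(2.0, n=1)[0]
print(max(models, key=lambda k: models[k].score(test)))   # expect source_B
```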
Eternal inflation and the quantum birth of cosmic structure
We consider the eternal inflation scenario of the slow-roll/chaotic type with the additional element of an objective collapse of the wave function. The incorporation of this new agent to the traditional inflationary setting might represent a possible solution to the quantum measurement problem during inflation, a subject that has not reached a consensus among the community. Specifically, it could provide an explanation for the generation of the primordial anisotropies and inhomogeneities, starting from a perfectly symmetric background and invoking symmetric dynamics. We adopt the continuous spontaneous localization model, in the context of inflation, as the dynamical reduction mechanism that generates the primordial inhomogeneities. Furthermore, when enforcing the objective reduction mechanism, the condition for eternal inflation can be bypassed. In particular, the collapse mechanism incites the wave function, corresponding to the inflaton, to localize itself around the zero mode of the field. Then the zero mode will evolve essentially unperturbed, driving inflation to an end in any region of the Universe where inflation occurred. Also, our approach achieves a primordial spectrum with an amplitude and shape consistent with the one that best fits the observational data.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Exact time dependence of causal correlations and nonequilibrium density matrices in holographic systems
We present the first exact calculations of the time dependence of causal correlations in driven nonequilibrium states in (2+1)-dimensional systems using holography. Comparing exact results with those obtained from simple prototype geometries that are parametrized only by a time dependent temperature, we find that the universal slowly varying features are controlled just by the pump duration and the initial and final temperatures only. We provide numerical evidence that the locations of the event and apparent horizons in the dual geometries can be deduced from the nonequilibrium causal correlations without any prior knowledge of the dual gravity theory.
0
1
0
0
0
0
Discrete-time construction of nonequilibrium path integrals on the Konstantinov-Perel' time contour
Rigorous nonequilibrium actions for the many-body problem are usually derived by means of path integrals combined with a discrete temporal mesh on the Schwinger-Keldysh time contour. The latter suffers from a fundamental limitation: the initial state on this contour cannot be arbitrary, but necessarily needs to be described by a non-interacting density matrix, while interactions are switched on adiabatically. The Konstantinov-Perel' contour overcomes these and other limitations, allowing generic initial-state preparations. In this Article, we apply the technique of the discrete temporal mesh to rigorously build the nonequilibrium path integral on the Konstantinov-Perel' time contour.
0
1
0
0
0
0
Estimation of Component Reliability in Coherent Systems
The first step in statistical reliability studies of coherent systems is the estimation of the reliability of each system component. For parallel and series systems the literature is abundant, but this appears to be the first paper to treat the general case of component inference in coherent systems. The failure time model considered here is the three-parameter Weibull distribution. Furthermore, neither independent nor identically distributed failure times are required. The proposed model is general in the sense that it can be used for any coherent system, from the simplest to the most complex structures. It can accommodate all kinds of censored data, including interval-censored data. An important property obtained for the Weibull model is that the posterior distributions are proper, even under non-informative priors. The excellent performance of the model is illustrated through several simulations. As a real example, boys' first use of marijuana is considered to show the efficiency of the solution even when the data are censored.
0
0
0
1
0
0
Cell-Probe Lower Bounds from Online Communication Complexity
In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece-by-piece, our model presents one of the players, Bob, his input piece-by-piece, and has the players Alice and Bob cooperate to compute a result each time before the next piece is revealed to Bob. This model has a closer and more natural correspondence to dynamic data structures than classic communication models do, and hence presents a new perspective on data structures. We first present a tight lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a stronger lower bound. We then apply the online communication model to prove data structure lower bounds for two dynamic data structure problems: the Group Range problem and the Dynamic Connectivity problem for forests. Both of the problems admit a worst case $O(\log n)$-time data structure. Using online communication complexity, we prove a tight cell-probe lower bound for each: spending $o(\log n)$ (even amortized) time per operation results in at best an $\exp(-\delta^2 n)$ probability of correctly answering a $(1/2+\delta)$-fraction of the $n$ queries.
1
0
0
0
0
0
On Machine Learning and Structure for Mobile Robots
Due to recent advances in compute, data, and models, the role of learning in autonomous systems has expanded significantly, rendering new applications possible for the first time. While some of the most significant benefits are obtained in the perception modules of the software stack, other aspects continue to rely on known manual procedures based on prior knowledge of geometry, dynamics, kinematics, etc. Nonetheless, learning gains relevance in these modules when data collection and curation become easier than manual rule design. Building on a coarse and broad survey of current research, the final sections aim to provide insights into future potential and challenges, as well as the necessity of structure in current practical applications.
1
0
0
1
0
0
On the degree of incompleteness of an incomplete financial market
In order to find a way of measuring the degree of incompleteness of an incomplete financial market, the rank of the vector price process of the traded assets and the dimension of the associated acceptance set are introduced. We show that they are equal and state a variety of consequences.
0
0
0
0
0
1
Sparsity information and regularization in the horseshoe and other shrinkage priors
The horseshoe prior has proven to be a noteworthy alternative for sparse Bayesian estimation, but has previously suffered from two problems. First, there has been no systematic way of specifying a prior for the global shrinkage hyperparameter based on the prior information about the degree of sparsity in the parameter vector. Second, the horseshoe prior has the undesired property that there is no possibility of specifying separately information about sparsity and the amount of regularization for the largest coefficients, which can be problematic with weakly identified parameters, such as the logistic regression coefficients in the case of data separation. This paper proposes solutions to both of these problems. We introduce a concept of effective number of nonzero parameters, show an intuitive way of formulating the prior for the global hyperparameter based on the sparsity assumptions, and argue that the previous default choices are dubious based on their tendency to favor solutions with more unshrunk parameters than we typically expect a priori. Moreover, we introduce a generalization to the horseshoe prior, called the regularized horseshoe, that allows us to specify a minimum level of regularization to the largest values. We show that the new prior can be considered as the continuous counterpart of the spike-and-slab prior with a finite slab width, whereas the original horseshoe resembles the spike-and-slab with an infinitely wide slab. Numerical experiments on synthetic and real world data illustrate the benefit of both of these theoretical advances.
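The regularized-horseshoe construction described above can be written down directly. The sketch below draws prior samples following the stated construction (half-Cauchy local and global scales, a slab of width c); the numerical values of tau0 and c are illustrative choices, not recommended defaults.

```python
# A minimal sketch of drawing from the regularized horseshoe prior described
# above; tau0 and c below are illustrative values, not defaults.
import numpy as np

rng = np.random.default_rng(0)
D, tau0, c = 50, 0.1, 2.0             # number of coefficients, global scale, slab width

tau = tau0 * np.abs(rng.standard_cauchy())                   # global shrinkage, half-Cauchy
lam = np.abs(rng.standard_cauchy(D))                         # local shrinkage, half-Cauchy
lam_reg = np.sqrt(c**2 * lam**2 / (c**2 + tau**2 * lam**2))  # regularized local scale
beta = rng.normal(0.0, tau * lam_reg)                        # coefficients

# Large lam_j: tau * lam_reg -> c, so big signals feel a Gaussian slab N(0, c^2).
# Small lam_j: lam_reg ~ lam, recovering ordinary horseshoe shrinkage.
print(beta[:5])
```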
0
0
0
1
0
0
Learning an internal representation of the end-effector configuration space
Current machine learning techniques proposed to automatically discover a robot kinematics usually rely on a priori information about the robot's structure, sensors properties or end-effector position. This paper proposes a method to estimate a certain aspect of the forward kinematics model with no such information. An internal representation of the end-effector configuration is generated from unstructured proprioceptive and exteroceptive data flow under very limited assumptions. A mapping from the proprioceptive space to this representational space can then be used to control the robot.
1
0
0
0
0
0
Correlation plots of the Siberian radioheliograph
The Siberian Solar Radio Telescope is now being upgraded. The upgrade is aimed at providing aperture synthesis imaging in the 4-8 GHz frequency range, instead of the single-frequency direct imaging based on the Earth's rotation. The first phase of the upgrade is a 48-antenna array, the Siberian Radioheliograph. One type of radioheliograph data product is the correlation plot. Evaluated from the covariance of two-level signals, these plots are sums of complex correlations obtained for different antenna pairs. Bearing in mind that the correlation of signals from an antenna pair is related to a spatial frequency, we can say that each value of the plot is an integral over a spatial spectrum. The limits of integration are defined by the task: only high spatial frequencies are integrated to obtain the dynamics of compact sources, while the whole spectrum is integrated to reach maximum sensitivity. We show that the covariance of two-level variables is, up to the Van Vleck correction, the correlation coefficient of these variables.
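The closing statement has a compact numerical illustration: for jointly Gaussian signals, the mean product of the sign-clipped (two-level) signals recovers the underlying correlation coefficient once the Van Vleck correction rho = sin((pi/2) r) is applied. The check below is ours, with arbitrary parameters, not the instrument's processing pipeline.

```python
# Numerical check (our illustration): the covariance of two-level
# (sign-clipped) signals recovers the underlying correlation coefficient
# after the Van Vleck correction rho = sin(pi/2 * r_clipped).
import numpy as np

rng = np.random.default_rng(1)
rho_true, n = 0.3, 200_000
x = rng.standard_normal(n)
y = rho_true * x + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

r_clipped = np.mean(np.sign(x) * np.sign(y))   # two-level covariance
rho_corrected = np.sin(np.pi / 2 * r_clipped)  # Van Vleck correction

print(r_clipped, rho_corrected)  # rho_corrected ~ 0.3
```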
0
1
0
0
0
0
Anomalous Thermal Expansion, Negative Linear Compressibility and High-Pressure Phase Transition in ZnAu2(CN)4: Neutron Inelastic Scattering and Lattice Dynamics Studies
We present temperature dependent inelastic neutron scattering measurements, accompanied by ab-initio calculations of phonon spectra and elastic properties as a function of pressure, to understand the anharmonicity of phonons and to study the mechanism of the negative thermal expansion and negative linear compressibility behaviour of ZnAu2(CN)4. The mechanism is identified in terms of specific anharmonic modes that involve bending of the Zn(CN)4-Au-Zn(CN)4 linkage. The high-pressure phase transition at about 2 GPa is also investigated and found to be related to the softening of a phonon mode at the L-point of the Brillouin zone boundary and its coupling with a zone-centre phonon and an M-point phonon in the ambient pressure phase. Although the phase transition is primarily driven by an L-point soft phonon mode, which usually leads to a second order transition with a 2 x 2 x 2 supercell, in the present case the structure is close to an elastic instability, which leads to a weakly first order transition.
0
1
0
0
0
0
Comparing Computing Platforms for Deep Learning on a Humanoid Robot
The goal of this study is to test two different computing platforms with respect to their suitability for running deep networks as part of a humanoid robot software system. One of the platforms is the CPU-centered Intel NUC7i7BNH and the other is a NVIDIA Jetson TX2 system that puts more emphasis on GPU processing. The experiments addressed a number of benchmarking tasks including pedestrian detection using deep neural networks. Some of the results were unexpected but demonstrate that both platforms exhibit advantages and disadvantages when computational performance and the electrical power requirements of such a system are taken into account.
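For concreteness, benchmarking of this kind typically reduces to timing repeated forward passes after a warm-up. Below is a minimal sketch (ours, with a placeholder model and input size, not the study's benchmark suite); it assumes a recent torchvision where the constructor takes weights=None.

```python
# Minimal inference-throughput harness: warm up, then time repeated forward
# passes of a placeholder network. Model choice and input size are arbitrary.
import time
import torch
import torchvision.models as models

net = models.resnet18(weights=None).eval()   # assumes torchvision >= 0.13 API
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(5):                       # warm-up iterations
        net(x)
    n, t0 = 50, time.perf_counter()
    for _ in range(n):
        net(x)
    dt = time.perf_counter() - t0
print(f"{n / dt:.1f} inferences/s")
```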
1
0
0
0
0
0
DLBI: Deep learning guided Bayesian inference for structure reconstruction of super-resolution fluorescence microscopy
Super-resolution fluorescence microscopy, with a resolution beyond the diffraction limit of light, has become an indispensable tool to directly visualize biological structures in living cells at a nanometer-scale resolution. Despite advances in high-density super-resolution fluorescent techniques, existing methods still have bottlenecks, including extremely long execution time, artificial thinning and thickening of structures, and lack of ability to capture latent structures. Here we propose a novel deep learning guided Bayesian inference approach, DLBI, for the time-series analysis of high-density fluorescent images. Our method combines the strength of deep learning and statistical inference, where deep learning captures the underlying distribution of the fluorophores that are consistent with the observed time-series fluorescent images by exploring local features and correlations along the time axis, and statistical inference further refines the ultrastructure extracted by deep learning and endows the final image with physical meaning. Comprehensive experimental results on both real and simulated datasets demonstrate that our method provides more accurate and realistic local-patch and large-field reconstruction than the state-of-the-art method, the 3B analysis, while being more than two orders of magnitude faster. The main program is available at this https URL
0
0
0
1
0
0
A Universal Ordinary Differential Equation
An astonishing fact was established by Lee A. Rubel (1981): there exists a fixed non-trivial fourth-order polynomial differential algebraic equation (DAE) such that for any positive continuous function $\varphi$ on the reals, and for any positive continuous function $\epsilon(t)$, it has a $\mathcal{C}^\infty$ solution with $| y(t) - \varphi(t) | < \epsilon(t)$ for all $t$. Lee A. Rubel provided an explicit example of such a polynomial DAE. Other examples of universal DAEs have later been proposed by other authors. However, Rubel's DAE \emph{never} has a unique solution, even with a finite number of conditions of the form $y^{(k_i)}(a_i)=b_i$. The question of whether one can require the solution that approximates $\varphi$ to be the unique solution for given initial data is a well-known open problem [Rubel 1981, page 2], [Boshernitzan 1986, Conjecture 6.2]. In this article, we solve it and show that Rubel's statement holds for polynomial ordinary differential equations (ODEs); since polynomial ODEs have a unique solution for given initial data, this positively answers Rubel's open problem. More precisely, we show that there exists a \textbf{fixed} polynomial ODE such that for any $\varphi$ and $\epsilon(t)$ there exists some initial condition that yields a solution that is $\epsilon$-close to $\varphi$ at all times. In particular, the solution to the ODE is necessarily analytic, and we show that the initial condition is computable from the target function and error function.
1
0
1
0
0
0
Sun/Moon photometer for the Cherenkov Telescope Array - first results
Determination of the energy and flux of gamma photons by the Imaging Atmospheric Cherenkov Technique is strongly dependent on the optical properties of the atmosphere. Therefore, atmospheric monitoring during future observations of the Cherenkov Telescope Array (CTA), as well as the anticipated long-term monitoring to characterize the overall properties and annual variation of atmospheric conditions, is very important. Several instruments are already installed at the CTA sites for long-term monitoring of atmospheric conditions. One of them is a Sun/Moon photometer CE318-T, installed at the Southern CTA site. Since the photometer is installed at a place with very stable atmospheric conditions, it can also be used for characterization of its performance and testing of new methods of aerosol optical depth (AOD) retrieval, cloud-screening and calibration. In this work, we describe our calibration method for nocturnal measurements and the modification of cloud-screening for purposes of nocturnal AOD retrieval. We applied these methods to two months of observations and present the distribution of AODs in four photometric passbands together with their uncertainties.
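The core of such retrievals is the Beer-Lambert law V = V0 exp(-m tau) together with a Langley plot for calibration. The sketch below is our illustration with synthetic numbers, not the CE318-T pipeline: fitting ln V against airmass calibrates V0, and the slope gives the total optical depth; the aerosol part then follows after subtracting Rayleigh and trace-gas contributions.

```python
# Langley calibration and optical-depth retrieval sketch (synthetic data):
# ln V is linear in airmass m, the intercept calibrates V0, the slope is -tau.
import numpy as np

m = np.linspace(2, 6, 20)                  # airmass values over a clear period
tau_true, V0_true = 0.08, 1500.0
noise = 1 + 0.01 * np.random.default_rng(2).standard_normal(20)
V = V0_true * np.exp(-m * tau_true) * noise

slope, intercept = np.polyfit(m, np.log(V), 1)
V0 = np.exp(intercept)                     # Langley-calibrated top-of-atmosphere signal
tau = -slope                               # retrieved total optical depth

print(V0, tau)                             # close to 1500 and 0.08
```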
0
1
0
0
0
0
A Connectome Based Hexagonal Lattice Convolutional Network Model of the Drosophila Visual System
What can we learn from a connectome? We constructed a simplified model of the first two stages of the fly visual system, the lamina and medulla. The resulting hexagonal lattice convolutional network was trained using backpropagation through time to perform object tracking in natural scene videos. Networks initialized with weights from connectome reconstructions automatically discovered well-known orientation and direction selectivity properties in T4 neurons and their inputs, while networks initialized at random did not. Our work is the first demonstration that knowledge of the connectome can enable in silico predictions of the functional properties of individual neurons in a circuit, leading to an understanding of circuit function from structure alone.
0
0
0
0
1
0
Improved Kernels and Algorithms for Claw and Diamond Free Edge Deletion Based on Refined Observations
In the {claw, diamond}-free edge deletion problem, we are given a graph $G$ and an integer $k>0$, and the question is whether there are at most $k$ edges whose deletion results in a graph without claws and diamonds as induced subgraphs. Based on some refined observations, we propose a kernel of $O(k^3)$ vertices and $O(k^4)$ edges, significantly improving the previous kernel of $O(k^{12})$ vertices and $O(k^{24})$ edges. In addition, we derive an $O^*(3.792^k)$-time algorithm for the {claw, diamond}-free edge deletion problem.
1
0
0
0
0
0
Distributed Edge Caching Scheme Considering the Tradeoff Between the Diversity and Redundancy of Cached Content
Caching popular contents at the edge of cellular networks has been proposed to reduce the load, and hence the cost, of backhaul links. Deciding which files to cache and where to cache them is a key problem. In this paper, we propose a distributed caching scheme considering the tradeoff between the diversity and redundancy of base stations' cached contents. Is it better to cache the same or different contents in different base stations? To find out, we formulate an optimal redundancy caching problem. Our goal is to minimize the total transmission cost of the network, including the cost within the radio access network (RAN) and the cost incurred by transmission to the core network via backhaul links. The optimal redundancy ratio under a given system configuration is obtained with an adapted particle swarm optimization (PSO) algorithm. We analyze the impact of important system parameters through Monte-Carlo simulation. Results show that the optimal redundancy ratio is mainly influenced by two parameters: the backhaul to RAN unit cost ratio and the steepness of the file popularity distribution. The total cost can be reduced by up to 54% at a given unit cost ratio of backhaul to RAN when the optimal redundancy ratio is selected. Under a typical file request pattern, the reduction can be up to 57%.
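To make the optimization step concrete, the sketch below runs a one-dimensional particle swarm over the redundancy ratio. The cost function is a toy stand-in for the paper's RAN-plus-backhaul cost model, and all PSO constants are generic textbook choices, not the paper's adapted algorithm.

```python
# Illustrative 1-D PSO search for the redundancy ratio r in [0, 1].
import numpy as np

def total_cost(r, backhaul_ratio=10.0, steepness=0.8):
    # Hypothetical convex trade-off, standing in for the paper's cost model:
    # more redundancy lowers backhaul misses but wastes RAN cache diversity.
    return backhaul_ratio * (1 - r) ** 2 + steepness / (0.05 + r)

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 20); v = np.zeros(20)
pbest, pbest_f = x.copy(), total_cost(x)
for _ in range(100):
    g = pbest[np.argmin(pbest_f)]                     # global best position
    v = 0.7 * v + 1.5 * rng.random(20) * (pbest - x) + 1.5 * rng.random(20) * (g - x)
    x = np.clip(x + v, 0, 1)
    f = total_cost(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
print(pbest[np.argmin(pbest_f)])                      # optimal ratio for the toy cost
```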
1
0
0
0
0
0
Orbital Graphs
We introduce orbital graphs and discuss some of their basic properties. Then we focus on their usefulness for search algorithms for permutation groups, including finding the intersection of groups and the stabilizer of sets in a group.
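For readers who want to experiment, an orbital graph is easy to construct directly: its edge set is the orbit of a seed pair under the group action. A minimal sketch (ours; 0-based points, permutations as lookup tuples):

```python
# Build an orbital graph: the edge set is the orbit of the seed pair
# (alpha, beta) under the permutation group generated by gens.
def orbital_graph(gens, alpha, beta):
    edges = {(alpha, beta)}
    frontier = [(alpha, beta)]
    while frontier:
        a, b = frontier.pop()
        for g in gens:                 # g maps point i to g[i]
            img = (g[a], g[b])
            if img not in edges:
                edges.add(img)
                frontier.append(img)
    return edges

# Generator of the cyclic group C4 acting on {0, 1, 2, 3}.
c4 = [(1, 2, 3, 0)]
print(sorted(orbital_graph(c4, 0, 1)))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```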
1
0
1
0
0
0
On the existence of harmonic $\mathbf{Z}_2$ spinors
We prove the existence of singular harmonic ${\bf Z}_2$ spinors on $3$-manifolds with $b_1 > 1$. The proof relies on a wall-crossing formula for solutions to the Seiberg-Witten equation with two spinors. The existence of singular harmonic ${\bf Z}_2$ spinors and the shape of our wall-crossing formula shed new light on recent observations made by Joyce regarding Donaldson and Segal's proposal for counting $G_2$-instantons.
0
0
1
0
0
0
Penalty-based spatial smoothing and outlier detection for childhood obesity surveillance from electronic health records
Childhood obesity is associated with increased morbidity and mortality in adulthood, leading to substantial healthcare costs. There is an urgent need to promote early prevention and develop an accompanying surveillance system. In this paper, we make use of electronic health records (EHRs) and construct a penalized multi-level generalized linear model. The model provides regular trend and outlier information simultaneously, both of which may be useful for raising public awareness and facilitating targeted intervention. Our strategy is to decompose the regional contribution in the model into smooth and sparse signals, where the characteristics of the signals are encouraged by a combination of fusion and sparsity penalties imposed on the likelihood function. In addition, we introduce a weighting scheme to account for the missingness and potential non-representativeness of the EHR data. We propose a novel alternating minimization algorithm, which is computationally efficient, easy to implement, and guaranteed to converge. Simulations show that the proposed method has superior performance compared with traditional counterparts. Finally, we apply our method to the University of Wisconsin Population Health Information Exchange database.
0
0
0
1
0
0
Some simple rules for estimating reproduction numbers in the presence of reservoir exposure or imported cases
The basic reproduction number ($R_0$) is a threshold parameter for disease extinction or survival in isolated populations. However no human population is fully isolated from other human or animal populations. We use compartmental models to derive simple rules for the basic reproduction number for populations with local person-to-person transmission and exposure from some other source: either a reservoir exposure or imported cases. We introduce the idea of a reservoir-driven or importation-driven disease: diseases that would become extinct in the population of interest without reservoir exposure or imported cases (since $R_0<1$), but nevertheless may be sufficiently transmissible that many or most infections are acquired from humans in that population. We show that in the simplest case, $R_0<1$ if and only if the proportion of infections acquired from the external source exceeds the disease prevalence and explore how population heterogeneity and the interactions of multiple strains affect this rule. We apply these rules in two case studies of Clostridium difficile infection and colonisation: C. difficile in the hospital setting accounting for imported cases, and C. difficile in the general human population accounting for exposure to animal reservoirs. We demonstrate that even the hospital-adapted, highly transmissible NAP1/RT027 strain of C. difficile had a reproduction number <1 in a landmark study of hospitalised patients and therefore was sustained by colonised and infected admissions to the study hospital. We argue that C. difficile should be considered reservoir-driven if as little as 13.0% of transmission can be attributed to animal reservoirs.
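The simplest version of the stated rule can be verified in a few lines. The derivation below is our reconstruction for an SIS-type model with a constant reservoir force of infection $\varepsilon$; the paper's models are more general.

```latex
% SIS model with person-to-person rate \beta, recovery rate \gamma
% (so R_0 = \beta/\gamma), reservoir force of infection \varepsilon,
% equilibrium prevalence p, and external fraction of infections f:
\begin{align*}
(\beta p + \varepsilon)(1-p) &= \gamma p
  && \text{(equilibrium: incidence equals recovery)}\\
f = \frac{\varepsilon}{\beta p + \varepsilon}
  &= 1 - \frac{\beta p (1-p)}{\gamma p} = 1 - R_0 (1-p),\\
f > p \;&\Longleftrightarrow\; 1-p > R_0(1-p)
  \;\Longleftrightarrow\; R_0 < 1.
\end{align*}
```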
0
0
0
0
1
0
Self-Assembled Monolayer Piezoelectrics: Electric-Field Driven Conformational Changes
We demonstrate that an applied electric field causes piezoelectric distortion across single molecular monolayers of oligopeptides. We deposited self-assembled monolayers ~1.5 nm high onto smooth gold surfaces. These monolayers exhibit strong piezoelectric response that varies linearly with applied bias (1-3V), measured using piezoresponse force microscopy (PFM). The response is markedly greater than control experiments with rigid alkanethiols and correlates with surface spectroscopy and theoretical predictions of conformational change from applied electric fields. Unlike existing piezoelectric oxides, our peptide monolayers are intrinsically flexible, easily fabricated, aligned and patterned without poling.
0
1
0
0
0
0
Matrix divisors on Riemann surfaces and Lax operator algebras
Matrix divisors were introduced in the work of A. Weil (1938), which is considered a starting point of the theory of holomorphic vector bundles on Riemann surfaces. In this theory matrix divisors play a role similar to that of usual divisors in the theory of line bundles. Moreover, they provide explicit coordinates (Tyurin parameters) on an open subset of the moduli space of stable vector bundles. These coordinates turned out to be helpful in the integration of soliton equations. We would like to draw attention to one more relationship between matrix divisors of vector G-bundles (where G is a complex semi-simple Lie group) and the theory of integrable systems, namely the relationship with Lax operator algebras. The result we obtain can be briefly formulated as follows: the moduli space of matrix divisors with certain discrete invariants and fixed support is a homogeneous space. Its tangent space at the unit is naturally isomorphic to the quotient space of M-operators by L-operators, both spaces essentially defined by the same invariants (the result goes back to Krichever, 2001). We give one more description of the same space in terms of root systems.
0
0
1
0
0
0
Observation of pseudogap in MgB2
The pseudogap phase in superconductors continues to be an outstanding puzzle that differentiates unconventional superconductors from conventional (BCS) ones. Employing high resolution photoemission spectroscopy on a highly dense conventional superconductor, MgB2, we discover an interesting scenario. While the spectral evolution close to the Fermi energy is commensurate with BCS descriptions as expected, the spectra over a wider energy range reveal the emergence of a pseudogap well above the superconducting transition temperature, indicating an apparent departure from the BCS scenario. The energy scale of the pseudogap is comparable to the energy of the E2g phonon mode responsible for superconductivity in MgB2, and the pseudogap can be attributed to the effect of electron-phonon coupling on the electronic structure. These results reveal a scenario in which the superconducting gap emerges within an electron-phonon coupling induced pseudogap.
0
1
0
0
0
0
Critical fields and fluctuations determined from specific heat and magnetoresistance in the same nanogram SmFeAs(O,F) single crystal
Through a direct comparison of specific heat and magnetoresistance we critically assess the nature of superconducting fluctuations in the same nanogram crystal of SmFeAs(O,F). We show that although the superconducting fluctuation contribution to conductivity scales well within the 2D-LLL scheme, its predictions contrast with the inherently 3D nature of SmFeAs(O,F) in the vicinity of T_c. Furthermore, the transition seen in specific heat cannot be satisfactorily described by either the LLL or the XY scaling. Additionally, by comparing the Hc2 slopes obtained from the entropy conservation construction (-19.5 T/K for H||ab and -2.9 T/K for H||c), we have validated the analysis of the fluctuation contribution to conductivity as a reasonable method for estimating the Hc2 slope.
0
1
0
0
0
0
Determining the vortex tilt relative to a superconductor surface
It is of interest to determine the exit angle of a vortex from a superconducting surface, since this affects the intervortex interactions and their consequences. Two ways to determine this angle are to image the vortex magnetic fields above the surface, or the vortex core shape at the surface. In this work we evaluate the field h(x, y, z) above a flat superconducting surface (the x, y plane) and the currents J(x, y) at that surface for a straight vortex tilted relative to the surface normal, for both the isotropic and anisotropic cases. In principle, these results can be used to determine the vortex exit tilt angle from analyses of magnetic field imaging or density of states data.
0
1
0
0
0
0
Onset of nonlinear structures due to eigenmode destabilization in tokamak plasmas
A general methodology is proposed to differentiate the likelihood of energetic-particle-driven instabilities producing frequency chirping or fixed-frequency oscillations. The method employs numerically calculated eigenstructures and multiple resonance surfaces of a given mode in the presence of energetic ion drag and stochasticity (due to collisions and micro-turbulence). Toroidicity-induced, reversed-shear and beta-induced Alfven-acoustic eigenmodes are used as examples. Waves measured in experiments are characterized, and compatibility is found between the criterion's predictions and the observation, or lack thereof, of chirping Alfvenic modes in different tokamaks. It is found that stochastic diffusion due to micro-turbulence can be the dominant energetic particle detuning mechanism near the resonances in many plasma experiments, and that its strength is key to whether chirping solutions are likely to arise. The proposed criterion constitutes a useful predictive tool for assessing whether the transport of fast ion losses in fusion devices will be dominated by convective or diffusive processes.
0
1
0
0
0
0
Self-consistent semi-analytic models of the first stars
We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter halos from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernova winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at $z \geq 20$. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual halos) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.
0
1
0
0
0
0
SoK: Taxonomy and Challenges of Out-of-Band Signal Injection Attacks and Defenses
Research on how hardware imperfections impact security has primarily focused on side-channel leakage mechanisms produced by power consumption, electromagnetic emanations, acoustic vibrations, and optical emissions. However, with the proliferation of sensors in security-critical devices, the impact of attacks on sensor-to-microcontroller and microcontroller-to-actuator interfaces using the same channels is starting to become more than an academic curiosity. These out-of-band signal injection attacks target connections which transform physical quantities to analog properties and fundamentally cannot be authenticated, posing previously unexplored security risks. This paper contains the first survey of such out-of-band signal injection attacks, with a focus on unifying their terminology, and identifying commonalities in their causes and effects. The taxonomy presented contains a chronological, evolutionary, and thematic view of out-of-band signal injection attacks which highlights the cross-influences that exist and underscores the need for a common language irrespective of the method of injection. By placing attack and defense mechanisms in the wider context of their dual counterparts of side-channel leakage and electromagnetic interference, our paper identifies common threads and gaps that can help guide and inform future research. Overall, the ever-increasing reliance on sensors embedded in everyday commodity devices necessitates that a stronger focus be placed on improving the security of such systems against out-of-band signal injection attacks.
1
0
0
0
0
0
Identifying Harm Events in Clinical Care through Medical Narratives
Preventable medical errors are estimated to be among the leading causes of injury and death in the United States. To prevent such errors, healthcare systems have implemented patient safety and incident reporting systems. These systems enable clinicians to report unsafe conditions and cases where patients have been harmed due to errors in medical care. These reports are natural-language narratives and, while they provide detailed information about the situation, it is non-trivial to perform large-scale analysis to identify common causes of errors and harm to patients. In this work, we present a method based on attentive convolutional and recurrent networks for identifying harm events in patient care and categorizing the harm based on its severity level. We demonstrate that our method significantly improves over existing methods in identifying harm in clinical care.
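A minimal PyTorch sketch of an attentive convolutional-recurrent classifier of the kind described is given below; the layer sizes, the additive attention form and the four severity classes are our illustrative choices, not the paper's exact architecture.

```python
# Attentive CNN + GRU text classifier sketch for harm-event narratives.
import torch
import torch.nn as nn

class HarmClassifier(nn.Module):
    def __init__(self, vocab=5000, emb=64, conv=64, hid=64, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv, kernel_size=3, padding=1)
        self.gru = nn.GRU(conv, hid, batch_first=True)
        self.att = nn.Linear(hid, 1)          # additive attention scores
        self.out = nn.Linear(hid, n_classes)  # severity levels

    def forward(self, tokens):                # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)  # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        h, _ = self.gru(x)                    # (batch, seq_len, hid)
        w = torch.softmax(self.att(h).squeeze(-1), dim=1)
        ctx = (w.unsqueeze(-1) * h).sum(dim=1)
        return self.out(ctx)

logits = HarmClassifier()(torch.randint(0, 5000, (2, 120)))
print(logits.shape)  # torch.Size([2, 4])
```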
1
0
0
0
0
0
Deep Self-Paced Learning for Person Re-Identification
Person re-identification (Re-ID) usually suffers from noisy samples with background clutter and mutual occlusion, which make it extremely difficult to distinguish different individuals across disjoint camera views. In this paper, we propose a novel deep self-paced learning (DSPL) algorithm to alleviate this problem, in which we apply a self-paced constraint and symmetric regularization to guide the relative distance metric in training the deep neural network, so as to learn stable and discriminative features for person Re-ID. Firstly, we propose a soft polynomial regularizer term which derives adaptive weights for samples based on both the training loss and the model age. As a result, high-confidence fidelity samples are emphasized and low-confidence noisy samples are suppressed in the early stages of the training process. Such a learning regime is naturally implemented under a self-paced learning (SPL) framework, in which sample weights are adaptively updated based on both model age and sample loss using an alternating optimization method. Secondly, we introduce a symmetric regularizer term to revise the asymmetric gradient back-propagation derived from the relative distance metric, so as to simultaneously minimize the intra-class distance and maximize the inter-class distance in each triplet unit. Finally, we build a part-based deep neural network, in which the features of different body parts are first discriminatively learned in the lower convolutional layers and then fused in the higher fully connected layers. Experiments on several benchmark datasets demonstrate the superior performance of our method compared with state-of-the-art approaches.
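The self-paced weighting idea can be illustrated with a generic soft regime: sample weights shrink with training loss, and the pace parameter grows with model age so harder samples enter later. The polynomial form below is one standard choice from the SPL literature and may differ from the paper's exact regularizer.

```python
# Generic soft self-paced weighting: easy (low-loss) samples dominate early,
# harder samples enter as the pace parameter lam grows with model age.
import numpy as np

def sp_weights(losses, lam, t=2.0):
    """Polynomial soft weights: w = (1 - loss/lam)^(1/(t-1)) if loss < lam, else 0."""
    return np.clip(1.0 - losses / lam, 0.0, None) ** (1.0 / (t - 1.0))

losses = np.array([0.1, 0.5, 1.2, 3.0])
for epoch, lam in enumerate([0.8, 1.5, 4.0]):   # lam grows with model age
    print(epoch, sp_weights(losses, lam).round(2))
```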
1
0
0
0
0
0
Moments and Cumulants of The Two-Stage Mann-Whitney Statistic
This paper illustrates how to calculate the moments and cumulants of the two-stage Mann-Whitney statistic. These results may be used to calculate the asymptotic critical values of the two-stage Mann-Whitney test. A large number of detailed derivations are presented.
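For orientation, the classical single-stage case already shows the flavour of such moment calculations: under the null hypothesis, E[U] = n1 n2 / 2 and Var[U] = n1 n2 (n1 + n2 + 1) / 12. The quick simulation below (ours; it does not implement the two-stage statistic) confirms both formulas.

```python
# Simulation check of the first two null moments of the classical
# Mann-Whitney U statistic.
import numpy as np

rng = np.random.default_rng(4)
n1, n2, reps = 8, 10, 20_000
u = np.empty(reps)
for k in range(reps):
    x, y = rng.standard_normal(n1), rng.standard_normal(n2)
    u[k] = np.sum(x[:, None] > y[None, :])   # count of (x_i > y_j) pairs

print(u.mean(), n1 * n2 / 2)                      # ~40
print(u.var(), n1 * n2 * (n1 + n2 + 1) / 12)      # ~126.67
```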
0
0
0
1
0
0
OGLE Cepheids and RR Lyrae Stars in the Milky Way
We present new large samples of Galactic Cepheids and RR Lyrae stars from the OGLE Galaxy Variability Survey.
0
1
0
0
0
0
Closed-form approximations in derivatives pricing: The Kristensen-Mele approach
Kristensen and Mele (2011) developed a new approach to obtain closed-form approximations to continuous-time derivatives pricing models. The approach uses a power series expansion of the pricing bias between an intractable model and some known auxiliary model. Since the resulting approximation formula is in closed form, it is straightforward to obtain approximations of the greeks. In this thesis I introduce Kristensen and Mele's method and apply it to a variety of stochastic volatility models for European-style options, as well as a model for commodity futures. The focus of this thesis is the effect of different model choices and model parameter values on the numerical stability of Kristensen and Mele's approximation.
0
0
0
0
0
1
On the magnitude function of domains in Euclidean space
We study Leinster's notion of magnitude for a compact metric space. For a smooth, compact domain $X\subset \mathbb{R}^{2m-1}$, we find geometric significance in the function $\mathcal{M}_X(R) = \mathrm{mag}(R\cdot X)$. The function $\mathcal{M}_X$ extends from the positive half-line to a meromorphic function in the complex plane. Its poles are generalized scattering resonances. In the semiclassical limit $R \to \infty$, $\mathcal{M}_X$ admits an asymptotic expansion. The three leading terms of $\mathcal{M}_X$ at $R=+\infty$ are proportional to the volume, surface area and integral of the mean curvature. In particular, for convex $X$ the leading terms are proportional to the intrinsic volumes, and we obtain an asymptotic variant of the convex magnitude conjecture by Leinster and Willerton, with corrected coefficients.
0
0
1
0
0
0
Minimum edge cuts of distance-regular and strongly regular digraphs
In this paper, we show that the edge connectivity of a distance-regular digraph $\Gamma$ with valency $k$ is $k$ and that, for $k>2$, any minimum edge cut of $\Gamma$ is the set of all edges going into (or coming out of) a single vertex. Moreover, we show that the same result holds for strongly regular digraphs. These results extend the corresponding known results for the undirected case, with quite different proofs.
0
0
1
0
0
0
Quality Enhancement by Weighted Rank Aggregation of Crowd Opinion
Expertise of annotators plays a major role in crowdsourcing-based opinion aggregation models. In such frameworks, the accuracy and bias of annotators are occasionally taken as important features, and annotator priorities are assigned based on them. But instead of relying on a single feature, multiple features can be considered and separate rankings produced to judge the annotators properly. Finally, those rankings can be aggregated with proper weighting, with the aim of producing a better ground truth prediction. Here, we propose a novel weighted rank aggregation method, and its efficacy with respect to other existing approaches is shown on an artificial dataset. The effectiveness of weighted rank aggregation in enhancing quality prediction is also shown by applying it to an Amazon Mechanical Turk (AMT) dataset.
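A weighted Borda count is one concrete way to aggregate feature-wise rankings of annotators; the sketch below is a generic illustration with invented annotator names and weights, not the paper's proposed aggregation rule.

```python
# Weighted Borda count over several feature-wise rankings of annotators.
def weighted_borda(rankings, weights):
    """rankings: list of lists, each a ranking of annotator ids (best first)."""
    n = len(rankings[0])
    score = {}
    for rank, w in zip(rankings, weights):
        for pos, annotator in enumerate(rank):
            score[annotator] = score.get(annotator, 0.0) + w * (n - pos)
    return sorted(score, key=score.get, reverse=True)

by_accuracy = ["ann3", "ann1", "ann2"]   # ranking produced by one feature
by_low_bias = ["ann1", "ann3", "ann2"]   # ranking produced by another
print(weighted_borda([by_accuracy, by_low_bias], weights=[0.7, 0.3]))
```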
1
0
0
0
0
0
Heterogeneous elastic plates with in-plane modulation of the target curvature and applications to thin gel sheets
We rigorously derive a Kirchhoff plate theory, via $\Gamma$-convergence, from a three-dimensional model that describes the finite elasticity of an elastically heterogeneous, thin sheet. The heterogeneity in the elastic properties of the material results in a spontaneous strain that depends on both the thickness and the planar variables $x'$. At the same time, the spontaneous strain is $h$-close to the identity, where $h$ is the small parameter quantifying the thickness. The 2D Kirchhoff limiting model is constrained to the set of isometric immersions of the mid-plane of the plate into $\mathbb{R}^3$, with a corresponding energy that penalizes deviations of the curvature tensor associated with a deformation from an $x'$-dependent target curvature tensor. A discussion of the 2D minimizers is provided in the case where the target curvature tensor is piecewise constant. Finally, we apply the derived plate theory to the modeling of swelling-induced shape changes in heterogeneous thin gel sheets.
0
1
1
0
0
0
An educational distributed Cosmic Ray detector network based on ArduSiPM
The advent of microcontrollers with sufficient CPU power and with analog and digital peripherals makes it possible to design a complete particle detector, with its acquisition system, around a single microcontroller chip. The existence of a worldwide data infrastructure such as the internet allows for devising a distributed network of cheap detectors capable of processing and sending data or responding to settings commands. The internet infrastructure makes it possible to distribute absolute time, with a precision of a few milliseconds, to simple devices separated by distances from a few meters to thousands of kilometres. It is thus possible to create a crowdsourced citizen-science experiment that uses small scintillation-based particle detectors to monitor high-energy cosmic rays and the radiation environment.
0
1
0
0
0
0
Irreducible characters with bounded root Artin conductor
In this work, we prove that the growth of the Artin conductor is at most exponential in the degree of the character.
0
0
1
0
0
0
Mapping Web Pages by Internet Protocol (IP) addresses: Analyzing Spatial and Temporal Characteristics of Web Search Engine Results
Internet Protocol (IP) addresses are frequently used as a method of locating web users by researchers in several different fields. However, there are competing reports concerning the accuracy of those locations, and little research has been done manually comparing IP geolocation databases with the geographic information on web pages. This paper categorizes web pages from the Yahoo search engine into twelve categories, ranging from 'Blog' and 'News' to 'Education' and 'Governmental'. We then manually compared the mailing or street address of each web page's content creator with the geolocation result for the given IP address. We also introduce a cartographic design method, creating kernel density maps to visualize the information landscape of web pages associated with specific keywords.
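The kernel density mapping step can be sketched in a few lines: estimate a smooth density over the geolocated page coordinates for one keyword and evaluate it on a lat/lon grid for contouring. The coordinates below are made up.

```python
# Kernel density estimate over geolocated web-page coordinates (toy data).
import numpy as np
from scipy.stats import gaussian_kde

lon = np.array([-122.4, -122.3, -73.9, -74.0, -74.1, -87.6])
lat = np.array([37.8, 37.7, 40.7, 40.8, 40.7, 41.9])

kde = gaussian_kde(np.vstack([lon, lat]))
gx, gy = np.meshgrid(np.linspace(-125, -70, 100), np.linspace(35, 45, 100))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(density.max())   # feed 'density' to a contour/heat map for the final map
```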
1
0
0
0
0
0
A generalisation of Kani-Rosen decomposition theorem for Jacobian varieties
In this short paper we generalise a theorem due to Kani and Rosen on the decomposition of Jacobian varieties of Riemann surfaces with group action. This generalisation extends the set of Jacobians for which it is possible to obtain an isogeny decomposition in which all the factors are Jacobians.
0
0
1
0
0
0
Optimal group testing designs for estimating prevalence with uncertain testing errors
We construct optimal designs for group testing experiments where the goal is to estimate the prevalence of a trait by using a test with uncertain sensitivity and specificity. Using optimal design theory for approximate designs, we show that the most efficient design for simultaneously estimating the prevalence, sensitivity and specificity requires three different group sizes with equal frequencies. However, if estimating prevalence as accurately as possible is the only focus, the optimal strategy is to have three group sizes with unequal frequencies. On the basis of a chlamydia study in the U.S.A., we compare performances of competing designs and provide insights into how the unknown sensitivity and specificity of the test affect the performance of the prevalence estimator. We demonstrate that the locally D- and Ds-optimal designs proposed have high efficiencies even when the prespecified values of the parameters are moderately misspecified.
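The estimation problem underlying these designs is easy to state concretely: with sensitivity Se and specificity Sp, a pooled group of size s tests positive with probability pi(s) = Se (1 - (1-p)^s) + (1 - Sp)(1-p)^s. The sketch below (our toy numbers, with three group sizes as in the optimal design) recovers the prevalence by maximum likelihood on a grid.

```python
# Prevalence MLE from group tests with imperfect sensitivity/specificity.
import numpy as np

Se, Sp = 0.95, 0.98
groups = [(5, 40, 11), (20, 40, 27), (60, 40, 36)]   # (size, n groups, n positive)

p_grid = np.linspace(1e-4, 0.2, 2000)

def loglik(p):
    ll = 0.0
    for s, n, y in groups:
        pi = Se * (1 - (1 - p) ** s) + (1 - Sp) * (1 - p) ** s
        ll += y * np.log(pi) + (n - y) * np.log(1 - pi)
    return ll

print(p_grid[np.argmax(loglik(p_grid))])   # maximum-likelihood prevalence, ~0.05
```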
0
0
1
1
0
0
Charge transfer driven emergent phenomena in oxide heterostructures
Complex oxides exhibit many intriguing phenomena, including metal-insulator transition, ferroelectricity/multiferroicity, colossal magnetoresistance and high transition temperature superconductivity. Advances in epitaxial thin film growth techniques enable us to combine different complex oxides with atomic precision and form an oxide heterostructure. Recent theoretical and experimental work has shown that charge transfer across oxide interfaces generally occurs and leads to a great diversity of emergent interfacial properties which are not exhibited by bulk constituents. In this report, we review mechanisms and physical consequence of charge transfer across interfaces in oxide heterostructures. Both theoretical proposals and experimental measurements of various oxide heterostructures are discussed and compared. We also review the theoretical methods that are used to calculate charge transfer across oxide interfaces and discuss the success and challenges in theory. Finally, we present a summary and perspectives for future research.
0
1
0
0
0
0
Duality of Graphical Models and Tensor Networks
In this article we show the duality between tensor networks and undirected graphical models with discrete variables. We study tensor networks on hypergraphs, which we call tensor hypernetworks. We show that the tensor hypernetwork on a hypergraph exactly corresponds to the graphical model given by the dual hypergraph. We translate various notions under duality. For example, marginalization in a graphical model is dual to contraction in the tensor network. Algorithms also translate under duality. We show that belief propagation corresponds to a known algorithm for tensor network contraction. This article is a reminder that the research areas of graphical models and tensor networks can benefit from interaction.
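A concrete instance of the stated duality: marginalizing a variable of a factorized distribution is exactly a tensor contraction. The two-factor example below (arbitrary factor values) writes the marginalization as an einsum contraction.

```python
# Marginalization in a graphical model as tensor contraction.
import numpy as np

phi_xy = np.array([[1.0, 2.0], [0.5, 1.5]])   # factor over (x, y)
phi_yz = np.array([[2.0, 1.0], [1.0, 3.0]])   # factor over (y, z)

# Unnormalized p(x, z) = sum_y phi_xy[x, y] * phi_yz[y, z]:
p_xz = np.einsum("xy,yz->xz", phi_xy, phi_yz)  # contracting the shared index y
print(p_xz / p_xz.sum())                       # normalized joint over (x, z)
```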
1
0
1
1
0
0
A further generalization of the Emden-Fowler equation
A generalization of the Emden-Fowler equation is presented and its solutions are investigated. The paper is devoted to the asymptotic behavior of these solutions. The procedure is entirely based on a previous paper by the author.
0
0
1
0
0
0
Data Interpolations in Deep Generative Models under Non-Simply-Connected Manifold Topology
Exploiting the remarkable ability of deep generative models to learn the data-manifold structure, some recent studies have proposed a geometric data interpolation method based on geodesic curves on the learned data-manifold. However, this interpolation method often gives poor results due to a topological difference between the model and the dataset. The model defines a family of simply-connected manifolds, whereas the dataset generally contains disconnected regions or holes that make it non-simply-connected. To compensate for this difference, we propose a novel density regularizer that makes the interpolation path circumvent the holes indicated by low probability density. We confirm that our method gives consistently better interpolation results in experiments with real-world image datasets.
1
0
0
1
0
0
Effect of Surfaces on Amyloid Fibril Formation
Using atomic force microscopy (AFM) we investigated the interaction of the amyloid beta (Ab) (1-42) peptide with chemically modified surfaces in order to better understand the mechanism of amyloid toxicity, which involves interaction of amyloid with cell membrane surfaces. We compared the structure and density of Ab fibrils on positively and negatively charged as well as hydrophobic chemically modified surfaces under physiologically relevant conditions.
0
1
0
0
0
0
Lie Transform Based Polynomial Neural Networks for Dynamical Systems Simulation and Identification
In this article, we discuss the architecture of a polynomial neural network that corresponds to the matrix representation of the Lie transform. The matrix form of the Lie transform is an approximation of the general solution of a nonlinear system of ordinary differential equations. Thus, it can be used for simulation and modeling tasks. On the other hand, one can identify a dynamical system from time series data simply by optimizing the coefficient matrices of the Lie transform. Representing the approach as a polynomial neural network integrates the strengths of both neural networks and traditional model-based methods for investigating dynamical systems. We provide a theoretical explanation of learning dynamical systems from time series with the proposed method, and demonstrate it in several applications, namely modeling and identification for both well-known systems such as the Lotka-Volterra equations and more complicated examples from retail, biochemistry, and accelerator physics.
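The identification route mentioned above can be sketched directly: a truncated Lie/Taylor map is a polynomial map x_{t+1} = W P(x_t), so W can be recovered from a trajectory by least squares. The quadratic toy system below is our invention, not one of the paper's applications.

```python
# Identify the coefficient matrix of a polynomial (truncated Lie) map from a
# trajectory by least squares; exact recovery in the noiseless case.
import numpy as np

def features(x):                       # second-order polynomial features
    x1, x2 = x
    return np.array([x1, x2, x1 * x1, x1 * x2, x2 * x2])

W_true = np.array([[0.90, 0.10, 0.0, -0.02, 0.0],   # a stable toy quadratic map
                   [-0.10, 0.90, 0.0, 0.02, 0.0]])

xs = [np.array([1.0, 0.5])]
for _ in range(80):                    # generate a trajectory
    xs.append(W_true @ features(xs[-1]))

P = np.array([features(x) for x in xs[:-1]])        # (T, 5)
X_next = np.array(xs[1:])                           # (T, 2)
W_fit, *_ = np.linalg.lstsq(P, X_next, rcond=None)  # solves P @ W = X_next
print(np.abs(W_fit.T - W_true).max())               # ~0 up to numerics
```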
1
0
0
0
0
0
Large deformations of the Tracy-Widom distribution I. Non-oscillatory asymptotics
We analyze the left-tail asymptotics of deformed Tracy-Widom distribution functions describing the fluctuations of the largest eigenvalue in invariant random matrix ensembles after removing each soft edge eigenvalue independently with probability $1-\gamma\in[0,1]$. As $\gamma$ varies, a transition from Tracy-Widom statistics ($\gamma=1$) to classical Weibull statistics ($\gamma=0$) was observed in the physics literature by Bohigas, de Carvalho, and Pato \cite{BohigasCP:2009}. We provide a description of this transition by rigorously computing the leading-order left-tail asymptotics of the thinned GOE, GUE and GSE Tracy-Widom distributions. In this paper, we obtain the asymptotic behavior in the non-oscillatory region with $\gamma\in[0,1)$ fixed (for the GOE, GUE, and GSE distributions) and $\gamma\uparrow 1$ at a controlled rate (for the GUE distribution). This is the first step in an ongoing program to completely describe the transition between Tracy-Widom and Weibull statistics. As a corollary to our results, we obtain a new total-integral formula involving the Ablowitz-Segur solution to the second Painlevé equation.
0
1
1
0
0
0
Too Trivial To Test? An Inverse View on Defect Prediction to Identify Methods with Low Fault Risk
Background. Test resources are usually limited and therefore it is often not possible to completely test an application before a release. To cope with the problem of scarce resources, development teams can apply defect prediction to identify fault-prone code regions. However, defect prediction tends to have low precision in cross-project prediction scenarios. Aims. We take an inverse view on defect prediction and aim to identify methods that can be deferred when testing because they contain hardly any faults, their code being "trivial". We expect that the characteristics of such methods are project-independent, so that our approach could improve cross-project predictions. Method. We compute code metrics and apply association rule mining to create rules for identifying methods with low fault risk. We conduct an empirical study to assess our approach with six Java open-source projects containing precise fault data at the method level. Results. Our results show that inverse defect prediction can identify approx. 32-44% of the methods of a project as having a low fault risk; on average, they are about six times less likely to contain a fault than other methods. In cross-project predictions with larger, more diversified training sets, identified methods are even eleven times less likely to contain a fault. Conclusions. Inverse defect prediction supports the efficient allocation of test resources by identifying methods that can be treated with less priority in testing activities, and is well applicable in cross-project prediction scenarios.
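The kind of rule such mining produces is easy to picture; the sketch below applies an invented "trivial method" rule over per-method code metrics (the thresholds are illustrative, not the rules learned in the study).

```python
# Illustrative low-fault-risk rule over per-method code metrics.
def low_fault_risk(metrics):
    """metrics: dict of per-method code metrics."""
    return (metrics["sloc"] <= 3              # trivially short method
            and metrics["cyclomatic"] == 1    # straight-line code
            and metrics["max_nesting"] == 0)  # no branching or loops

methods = [
    {"name": "getId",   "sloc": 1,  "cyclomatic": 1, "max_nesting": 0},
    {"name": "process", "sloc": 42, "cyclomatic": 9, "max_nesting": 3},
]
deferred = [m["name"] for m in methods if low_fault_risk(m)]
print(deferred)   # methods to treat with lower testing priority
```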
1
0
0
0
0
0
Dynamical phase transitions in sampling complexity
We make the case for studying the complexity of approximately simulating (sampling) quantum systems for reasons beyond that of quantum computational supremacy, such as diagnosing phase transitions. We consider the sampling complexity as a function of time $t$ due to evolution generated by spatially local quadratic bosonic Hamiltonians. We obtain an upper bound on the scaling of $t$ with the number of bosons $n$ for which approximate sampling is classically efficient. We also obtain a lower bound on the scaling of $t$ with $n$ for which any instance of the boson sampling problem reduces to this problem and hence implies that the problem is hard, assuming the conjectures of Aaronson and Arkhipov [Proc. 43rd Annu. ACM Symp. Theory Comput. STOC '11]. This establishes a dynamical phase transition in sampling complexity. Further, we show that systems in the Anderson-localized phase are always easy to sample from at arbitrarily long times. We view these results in the light of classifying phases of physical systems based on parameters in the Hamiltonian. In doing so, we combine ideas from mathematical physics and computational complexity to gain insight into the behavior of condensed matter, atomic, molecular and optical systems.
1
1
0
0
0
0