Dataset columns:
- title: string (length 7 to 239)
- abstract: string (length 7 to 2.76k)
- cs: int64 (0 or 1)
- phy: int64 (0 or 1)
- math: int64 (0 or 1)
- stat: int64 (0 or 1)
- quantitative biology: int64 (0 or 1)
- quantitative finance: int64 (0 or 1)

Each record below lists a paper title, its abstract, and a "Labels:" line giving the binary topic flags.
Stability analysis of a system coupled to a heat equation
As a first approach to the study of systems coupling finite- and infinite-dimensional natures, this article addresses the stability of a system of ordinary differential equations coupled with a classic heat equation using a Lyapunov functional technique. Inspired by recent developments in the area of time-delay systems, a new methodology for studying the stability of such a class of distributed parameter systems is presented here. The idea is to use a polynomial approximation of the infinite-dimensional state of the heat equation in order to build an enriched energy functional. A well-known and efficient integral inequality (the Bessel inequality) then allows us to obtain stability conditions expressed in terms of linear matrix inequalities. We finally test our approach on academic examples in order to illustrate the efficiency of our theoretical results.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
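To make the role of the Bessel inequality in the abstract above concrete, here is the form it typically takes with Legendre polynomials (our notation, not taken from the paper): for any $z \in L^2(0,1)$ and any approximation order $N$,

$$\int_0^1 z(x)^2\,dx \;\geq\; \sum_{k=0}^{N} (2k+1) \left( \int_0^1 z(x)\,L_k(x)\,dx \right)^2,$$

where $L_k$ is the $k$-th Legendre polynomial shifted to $(0,1)$, so that $\int_0^1 L_k^2\,dx = 1/(2k+1)$. Lower-bounding the energy functional by the projection coefficients of the infinite-dimensional state in this way is what produces conditions checkable as linear matrix inequalities.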
Corral Framework: Trustworthy and Fully Functional Data Intensive Parallel Astronomical Pipelines
Data processing pipelines represent an important slice of the astronomical software library: they comprise chains of processes that transform raw data into valuable information via data reduction and analysis. In this work we present Corral, a Python framework for astronomical pipeline generation. Corral features a Model-View-Controller design pattern on top of an SQL relational database capable of handling custom data models, processing stages, and communication alerts, and it also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller pattern provides a separation of concerns between the user logic and the data models, while delivering multi-processing and distributed computing capabilities. Corral represents an improvement over commonly found data processing pipelines in Astronomy: the design pattern relieves the programmer from dealing with processing-flow and parallelization issues, allowing them to focus on the specific algorithms needed for the successive data transformations, while at the same time providing a broad measure of quality over the created pipeline. Corral and working examples of pipelines that use it are available to the community at this https URL.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Radial orbit instability in systems of highly eccentric orbits: Antonov problem reviewed
Stationary stellar systems with radially elongated orbits are subject to radial orbit instability -- an important phenomenon that structures galaxies. Antonov (1973) presented a formal proof of the instability for spherical systems in the limit of purely radial orbits. However, such spheres have highly inhomogeneous density distributions with a singularity $\sim 1/r^2$, resulting in an inconsistency in the proof. The proof can be refined by considering an orbital distribution that is close to purely radial, but not entirely radial, which makes it possible to avoid the central singularity. For this purpose we employ non-singular analogues of generalised polytropes, recently elaborated in our work, to derive and solve new integral equations adapted to the calculation of unstable eigenmodes in systems with nearly radial orbits. In addition, we establish a link between our approach and Antonov's and uncover the meaning of the infinite entities in the purely radial case. Maximum growth rates tend to infinity as the system becomes more and more radially anisotropic. The instability takes place for both even and odd spherical harmonics, with all unstable modes developing rapidly, i.e. having eigenfrequencies comparable to or greater than typical orbital frequencies. This invalidates the orbital approximation for systems in which all orbits are very close to purely radial.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Short-range wakefields generated in the blowout regime of plasma-wakefield acceleration
In the past, the calculation of wakefields generated by an electron bunch propagating in a plasma has been carried out in the linear approximation, where the plasma perturbation can be assumed small and the plasma equations of motion linearized. This approximation breaks down in the blowout regime, where a high-density electron driver expels plasma electrons from its path and creates a cavity devoid of electrons in its wake. In this paper, we develop a technique that allows us to calculate the short-range longitudinal and transverse wakes generated by a witness bunch being accelerated inside the cavity. Our results can be used for studies of beam loading and the hosing instability of the witness bunch in PWFA and LWFA.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Scaled Nuclear Norm Minimization for Low-Rank Tensor Completion
Minimizing the nuclear norm of a matrix has been shown to be very efficient in reconstructing a low-rank sampled matrix. Likewise, minimizing the sum of the nuclear norms of the matricizations of a tensor has been shown to be very efficient in recovering a low-Tucker-rank sampled tensor. In this paper, we propose to recover a low-TT-rank sampled tensor by minimizing a weighted sum of the nuclear norms of the unfoldings of the tensor. We provide numerical results showing that our proposed method requires significantly fewer samples to recover the original tensor in comparison with simply minimizing the sum of nuclear norms, since the structure of the unfoldings in the TT tensor model is fundamentally different from that of the matricizations in the Tucker tensor model.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
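As a rough illustration of the objective described in the abstract above (a sketch under our own conventions; the choice of weights and the completion solver are the paper's contribution and are not reproduced here), the weighted sum of nuclear norms over the TT unfoldings of a tensor can be computed as:

    import numpy as np

    def tt_weighted_nuclear_norms(X, weights=None):
        """Weighted sum of nuclear norms of the TT unfoldings X_[k],
        where X_[k] reshapes the first k modes into rows (k = 1..d-1)."""
        d = X.ndim
        if weights is None:
            weights = np.ones(d - 1)  # hypothetical default weights
        total = 0.0
        for k in range(1, d):
            rows = int(np.prod(X.shape[:k]))
            unfolding = X.reshape(rows, -1)
            total += weights[k - 1] * np.linalg.norm(unfolding, ord='nuc')
        return total

A completion method along these lines would minimize this objective subject to agreement with the observed entries.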
Stability and Transparency Analysis of a Bilateral Teleoperation in the Presence of Data Loss
This paper presents a novel approach to stability and transparency analysis of bilateral teleoperation in the presence of data loss in the communication media. A new model for data loss is proposed based on a set of periodic continuous pulses and its finite series representation. The passivity of the overall system is shown using the wave-variable approach together with the newly defined model for data loss. Simulation results are presented to show the effectiveness of the proposed approach.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantum eigenstate tomography with qubit tunneling spectroscopy
Measurement of the energy eigenvalues (spectrum) of a multi-qubit system has recently become possible by qubit tunneling spectroscopy (QTS). In the standard QTS experiments, an incoherent probe qubit is strongly coupled to one of the qubits of the system in such a way that its incoherent tunneling rate provides information about the energy eigenvalues of the original (source) system. In this paper, we generalize QTS by coupling the probe qubit to many source qubits. We show that by properly choosing the couplings, one can perform projective measurements of the source system energy eigenstates in an arbitrary basis, thus performing quantum eigenstate tomography. As a practical example of a limited tomography, we apply our scheme to probe the eigenstates of a kink in a frustrated transverse Ising chain.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Binary hermitian forms and optimal embeddings
Fix a quadratic order over the ring of integers. An embedding of the quadratic order into a quaternionic order naturally gives an integral binary hermitian form over the quadratic order. We show that, in certain cases, this correspondence is a discriminant preserving bijection between the isomorphism classes of embeddings and integral binary hermitian forms.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An Improved Training Procedure for Neural Autoregressive Data Completion
Neural autoregressive models are explicit density estimators that achieve state-of-the-art likelihoods for generative modeling. The D-dimensional data distribution is factorized into an autoregressive product of one-dimensional conditional distributions according to the chain rule. Data completion is a more involved task than data generation: the model must infer missing variables for any partially observed input vector. Previous work introduced an order-agnostic training procedure for data completion with autoregressive models. Missing variables in any partially observed input vector can be imputed efficiently by choosing an ordering where observed dimensions precede unobserved ones and by computing the autoregressive product in this order. In this paper, we provide evidence that the order-agnostic (OA) training procedure is suboptimal for data completion. We propose an alternative procedure (OA++) that reaches better performance in fewer computations. It can handle all data completion queries while training fewer one-dimensional conditional distributions than the OA procedure. In addition, these one-dimensional conditional distributions are trained proportionally to their expected usage at inference time, reducing overfitting. Finally, our OA++ procedure can exploit prior knowledge about the distribution of inference completion queries, as opposed to OA. We support these claims with quantitative experiments on standard datasets used to evaluate autoregressive generative models.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Stochastic Formulation of the Resolution of Identity: Application to Second Order Møller-Plesset Perturbation Theory
A stochastic orbital approach to the resolution of identity (RI) approximation for 4-index 2-electron electron repulsion integrals (ERIs) is presented. The stochastic RI-ERIs are then applied to Møller-Plesset perturbation theory (MP2) utilizing a multiple stochastic orbital approach. The introduction of multiple stochastic orbitals results in an $N^3$ scaling for both the stochastic RI-ERIs and stochastic RI-MP2. We demonstrate that this method exhibits a small prefactor and an observed scaling of $N^{2.4}$ for a range of water clusters, already outperforming MP2 for clusters with as few as 21 water molecules.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Automatic Mapping of NES Games with Mappy
Game maps are useful for human players, general-game-playing agents, and data-driven procedural content generation. These maps are generally made by hand-assembling manually-created screenshots of game levels. Besides being tedious and error-prone, this approach requires additional effort for each new game and level to be mapped. The results can still be hard for humans or computational systems to make use of, privileging visual appearance over semantic information. We describe a software system, Mappy, that produces a good approximation of a linked map of rooms given a Nintendo Entertainment System game program and a sequence of button inputs exploring its world. In addition to visual maps, Mappy outputs grids of tiles (and how they change over time), positions of non-tile objects, clusters of similar rooms that might in fact be the same room, and a set of links between these rooms. We believe this is a necessary step towards developing larger corpora of high-quality semantically-annotated maps for PCG via machine learning and other applications.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Proportional Closeness Estimation of Probability of Contamination Under Group Testing
The paper focuses on the problem of estimating the probability $p$ that an individual sample is contaminated, under group testing. The precision of the estimator is given by the probability of proportional closeness, a concept defined in the Introduction. Two-stage and sequential sampling procedures are characterized. An adaptive procedure is examined.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
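For background on the setting in the abstract above (our illustration of the classical fixed-sample case, not the paper's two-stage or sequential procedures): if $X$ out of $n$ pooled groups of size $k$ test positive, a group tests negative exactly when all $k$ of its samples are clean, which happens with probability $(1-p)^k$, so the standard estimator is

$$\hat{p} \;=\; 1 - \left(1 - \frac{X}{n}\right)^{1/k}.$$

Proportional closeness then concerns the probability that $\hat{p}$ falls within a specified proportion of the true $p$.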
Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research
Conventional text classification models make a bag-of-words assumption, reducing text to word occurrence counts per document. Recent algorithms such as word2vec are capable of learning semantic meaning and similarity between words in an entirely unsupervised manner using a contextual window, and do so much faster than previous methods. Each word is projected into a vector space such that words with similar meanings, such as "strong" and "powerful", are projected into the same general region of that space. Open questions about these embeddings include their utility across classification tasks and the optimal properties and source of documents for constructing broadly functional embeddings. In this work, we demonstrate the usefulness of pre-trained embeddings for classification in our task, and we show that custom word embeddings, built in-domain and for the tasks at hand, can improve performance over word embeddings learnt on more general data such as news articles or Wikipedia.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
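A minimal sketch of how such embeddings are trained and queried with the gensim library (the corpus and all parameters here are placeholders, not the ones used in the paper):

    from gensim.models import Word2Vec

    # Placeholder corpus: one tokenized document per list.
    corpus = [
        ["the", "treatment", "showed", "strong", "efficacy"],
        ["a", "powerful", "effect", "was", "observed"],
    ]

    # Skip-gram model with a 5-word context window.
    model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

    # Nearest words in the embedding space (meaningful only on a real corpus).
    print(model.wv.most_similar("strong", topn=3))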
The AKARI IRC asteroid flux catalogue: updated diameters and albedos
The AKARI IRC all-sky survey provided more than twenty thousand thermal infrared observations of over five thousand asteroids. Diameters and albedos were obtained by fitting an empirically calibrated version of the standard thermal model to these data. Following the publication of the flux catalogue in October 2016, our aim here is to present the AKARI IRC all-sky survey data and discuss valuable scientific applications in the study of small-body physical properties. As an example, we update the catalogue of asteroid diameters and albedos based on AKARI using the near-Earth asteroid thermal model (NEATM). We fit the NEATM to derive asteroid diameters and, whenever possible, infrared beaming parameters. We obtained a total of 8097 diameters and albedos for 5170 asteroids, and we fitted the beaming parameter for almost two thousand of them. When it was not possible to fit the beaming parameter, we used a straight-line fit to our sample's beaming-parameter-versus-phase-angle plot to set the default value for each fit individually, instead of using a single average value. Our diameters agree with stellar-occultation-based diameters well within the accuracy expected for the model. They also match the previous AKARI-based catalogue at phase angles lower than 50 degrees, but we find a systematic deviation at higher phase angles, at which near-Earth and Mars-crossing asteroids were observed. The AKARI IRC all-sky survey provides observations at different observation geometries, rotational coverages and aspect angles. For example, by comparing in more detail a few asteroids for which dimensions were derived from occultations, we discuss how the multiple observations per object may already provide three-dimensional information about elongated objects, even based on an idealised model like the NEATM.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Finite-sample Bernstein-von Mises theorems for functionals and spectral projectors of the covariance matrix
We demonstrate that the influence of the prior on the posterior distribution of the covariance matrix vanishes as the sample size grows. The assumptions on the prior are explicit and mild. The results are valid for a finite sample and admit a dimension $p$ growing with the sample size $n$. We exploit this fact to derive a finite-sample Bernstein-von Mises theorem for functionals of the covariance matrix (e.g. eigenvalues) and to find the posterior distribution of the Frobenius distance between the spectral projector and the empirical spectral projector. This can be useful for constructing sharp confidence sets for the true value of the functional or for the true spectral projector.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Low-temperature behavior of the multicomponent Widom-Rowlinson model on finite square lattices
We consider the multicomponent Widom-Rowlinson model with Metropolis dynamics, which describes the evolution of a particle system where $M$ different types of particles interact subject to certain hard-core constraints. Focusing on the scenario where the spatial structure is modeled by finite square lattices, we study the asymptotic behavior of this interacting particle system in the low-temperature regime, analyzing the tunneling times between its $M$ maximum-occupancy configurations and the mixing time of the corresponding Markov chain. In particular, we develop a novel combinatorial method that, exploiting geometrical properties of the Widom-Rowlinson configurations on finite square lattices, leads to the identification of the timescale at which transitions between maximum-occupancy configurations occur, and shows how this timescale depends on the chosen boundary conditions and the square lattice dimensions.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Stream Graphs and Link Streams for the Modeling of Interactions over Time
Graph theory provides a language for studying the structure of relations, and it is often used to study interactions over time too. However, it poorly captures the jointly temporal and structural nature of interactions, which calls for a dedicated formalism. In this paper, we generalize graph concepts to cope with both aspects in a consistent way. We start with elementary concepts like density, clusters, or paths, and derive from them more advanced concepts like cliques, degrees, clustering coefficients, or connected components. We obtain a language for dealing directly with interactions over time, similar to the language provided by graphs for dealing with relations. This formalism is self-consistent: the usual relations between different concepts are preserved. It is also consistent with graph theory: graph concepts are special cases of the ones we introduce. This makes it easy to generalize higher-level objects such as quotient graphs, line graphs, k-cores, and centralities. The paper also considers discrete versus continuous time assumptions, instantaneous links, and extensions to more complex cases.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Supermetric Search
Metric search is concerned with the efficient evaluation of queries in metric spaces. In general, a large space of objects is arranged in such a way that, when a further object is presented as a query, those objects most similar to the query can be found efficiently. Most mechanisms rely upon the triangle inequality property of the metric governing the space. The triangle inequality property is equivalent to a finite embedding property, which states that any three points of the space can be isometrically embedded in two-dimensional Euclidean space. In this paper, we examine a class of semimetric spaces which are finitely four-embeddable in three-dimensional Euclidean space. In mathematics this property has been extensively studied and is generally known as the four-point property. All spaces with the four-point property are metric spaces, but they also have some stronger geometric guarantees. We coin the term supermetric space for them because, in terms of metric search, they are significantly more tractable. Supermetric spaces include all those governed by Euclidean, Cosine, Jensen-Shannon and Triangular distances, and are thus commonly used within many domains. In previous work we have given a generic mathematical basis for the supermetric property and shown how it can improve indexing performance for a given exact search structure. Here we present a full investigation into its use within a variety of different hyperplane partition indexing structures, and go on to show more of its flexibility by examining a search structure whose partition and exclusion conditions are tailored, at each node, to suit the individual reference points and data set present there. Among the results given, we show a new best performance for exact search using a well-known benchmark.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
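A sketch of why the four-point property pays off in hyperplane partitioning, following our reading of the authors' earlier work (notation ours). A query q with search radius r has distances d1 = d(q, p1) and d2 = d(q, p2) to two reference points at distance D = d(p1, p2) from each other:

    def metric_exclusion(d1, d2, r):
        # With the triangle inequality alone, the partition on the far
        # side of the bisector can be pruned when |d1 - d2| / 2 > r.
        return abs(d1 - d2) / 2.0 > r

    def supermetric_exclusion(d1, d2, D, r):
        # Under the four-point property, q, p1 and p2 embed isometrically
        # in the plane, where the distance from q to the perpendicular
        # bisector of p1-p2 is |d1^2 - d2^2| / (2 D): a tighter bound,
        # so strictly more of the index can be excluded per query.
        return abs(d1**2 - d2**2) / (2.0 * D) > r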
Liouville-type theorems with finite Morse index for the Δ_λ-Laplace operator
In this paper we study solutions, possibly unbounded and sign-changing, of the following problem: $-\Delta_{\lambda} u = |x|_{\lambda}^{a} |u|^{p-1}u$ in $\mathbb{R}^n$, with $n \geq 1$, $p > 1$ and $a \geq 0$, where $\Delta_{\lambda}$ is a strongly degenerate elliptic operator, the functions $\lambda = (\lambda_1, \ldots, \lambda_k) : \mathbb{R}^n \rightarrow \mathbb{R}^k$ satisfy certain conditions, and $|\cdot|_{\lambda}$ is the homogeneous norm associated to the $\Delta_{\lambda}$-Laplacian. We prove various Liouville-type theorems for smooth solutions under the assumption that they are stable or stable outside a compact set of $\mathbb{R}^n$. First, we establish standard integral estimates via the stability property to derive nonexistence results for stable solutions. Next, by means of the Pohozaev identity, we deduce a Liouville-type theorem for solutions that are stable outside a compact set.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Performance Impact of Base Station Antenna Heights in Dense Cellular Networks
In this paper, we present a new and significant theoretical discovery. If the absolute height difference between the base station (BS) antenna and the user equipment (UE) antenna is larger than zero, then the network performance, in terms of both the coverage probability and the area spectral efficiency (ASE), will continuously decrease toward zero as the BS density increases in ultra-dense (UD) small cell networks (SCNs). Such findings are completely different from the conclusions in existing works, both quantitatively and qualitatively. In particular, this performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th-generation (5G) era: network operators may invest large amounts of money in deploying more network infrastructure only to obtain even less network capacity. Our results reveal that one way to address this issue is to lower the SCN BS antenna height to the UE antenna height. However, this requires a fundamentally new approach to BS architecture and deployment, which we also explore in this paper.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An information-theoretic approach for selecting arms in clinical trials
The question of selecting the "best" among different choices is a common problem in statistics. In drug development, our motivating setting, the question becomes, for example: which dose gives a pre-specified risk of toxicity, or which treatment gives the best response rate. Motivated by a recent development in the theory of weighted information measures, we propose an experimental design based on a simple and intuitive criterion which governs arm selection in experiments with multinomial outcomes. The criterion leads to accurate arm selection without any parametric or monotonicity assumption. The asymptotic properties of the design are studied for different allocation rules, and the small-sample behaviour is evaluated in simulations in the context of Phase I and Phase II clinical trials with binary endpoints. We compare the proposed design to currently used alternatives and discuss its practical implementation.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
A watershed-based algorithm to segment and classify cells in fluorescence microscopy images
Imaging assays of cellular function, especially those using fluorescent stains, are ubiquitous in the biological and medical sciences. Despite advances in computer vision, such images are often analyzed using only manual or rudimentary automated processes. Watershed-based segmentation is an effective technique for identifying objects in images; it outperforms commonly used image analysis methods, but requires familiarity with computer-vision techniques to be applied successfully. In this report, we present and implement a watershed-based image analysis and classification algorithm in a GUI, enabling a broad set of users to easily understand the algorithm and adjust the parameters to their specific needs. As an example, we implement this algorithm to find and classify cells in a complex imaging assay for mitochondrial function. In a second example, we demonstrate a workflow using manual comparisons and receiver operating characteristics to optimize the algorithm parameters for finding live and dead cells in a standard viability assay. Overall, this watershed-based algorithm is more advanced than traditional thresholding and can produce optimized, automated results. By incorporating associated pre-processing steps in the GUI, the algorithm is also easily adjusted, rendering it user-friendly.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
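The report describes a GUI implementation; as a generic command-line analogue (a scikit-image sketch of the same standard steps, not the authors' code), the core of such a pipeline looks like:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_cells(image):
        """Threshold, distance transform, seed detection, watershed."""
        mask = image > threshold_otsu(image)
        distance = ndi.distance_transform_edt(mask)
        coords = peak_local_max(distance, min_distance=10, labels=mask)
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        return watershed(-distance, markers, mask=mask)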
On the restricted Chebyshev-Boubaker polynomials
Using the language of Riordan arrays, we study a one-parameter family of orthogonal polynomials that we call the restricted Chebyshev-Boubaker polynomials. We characterize these polynomials in terms of the three term recurrences that they satisfy, and we study certain central sequences defined by their coefficient arrays. We give an integral representation for their moments, and we show that the Hankel transforms of these moments have a simple form. We show that the (sequence) Hankel transform of the row sums of the corresponding moment matrix is defined by a family of polynomials closely related to the Chebyshev polynomials of the second kind, and that these row sums are in fact the moments of another family of orthogonal polynomials.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Band filling control of the Dzyaloshinskii-Moriya interaction in weakly ferromagnetic insulators
We observe and explain theoretically a dramatic evolution of the Dzyaloshinskii-Moriya interaction (DMI) in the series of isostructural weak ferromagnets MnCO$_3$, FeBO$_3$, CoCO$_3$ and NiCO$_3$. The sign of the interaction is encoded in the phase of the X-ray magnetic diffraction amplitude, observed through interference with resonant quadrupole scattering. We find very good quantitative agreement with first-principles electronic structure calculations, reproducing both the sign and the magnitude through the series, and propose a simplified 'toy model' to explain the change in sign with $3d$-shell filling. The model offers a qualitative understanding of the evolution of the DMI in Mott and charge-transfer insulators.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Reducing Estimation Risk in Mean-Variance Portfolios with Machine Learning
In portfolio analysis, the traditional approach of replacing population moments with sample counterparts may lead to suboptimal portfolio choices. I show that optimal portfolio weights can be estimated using a machine learning (ML) framework, where the outcome to be predicted is a constant and the vector of explanatory variables is the asset returns. It follows that ML specifically targets estimation risk when estimating portfolio weights, and that "off-the-shelf" ML algorithms can be used to estimate the optimal portfolio in the presence of parameter uncertainty. The framework nests the traditional approach and recently proposed shrinkage approaches as special cases. By relying on results from the ML literature, I derive new insights for existing approaches and propose new estimation methods. Based on simulation studies and several datasets, I find that ML significantly reduces estimation risk compared to both the traditional approach and the equal weight strategy.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
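The setup stated in the abstract above (predicting a constant outcome from the asset returns) has a compact unregularized baseline, which coincides with the classical regression identity for mean-variance weights (cf. Britten-Jones, 1999); the paper's regularized ML estimators are not reproduced in this sketch:

    import numpy as np

    def regression_portfolio_weights(R):
        """R: (T, N) matrix of asset returns. Regress y = 1 on R with
        no intercept; the coefficients, rescaled to sum to one, are the
        estimated mean-variance portfolio weights."""
        y = np.ones(R.shape[0])
        b, *_ = np.linalg.lstsq(R, y, rcond=None)
        return b / b.sum()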
CutFEM topology optimization of 3D laminar incompressible flow problems
This paper studies the characteristics and applicability of the CutFEM approach as the core of a robust topology optimization framework for 3D laminar incompressible flow and species transport problems at low Reynolds number (Re < 200). CutFEM is a methodology for discretizing partial differential equations on complex geometries by immersed boundary techniques. In this study, the geometry of the fluid domain is described by an explicit level set method, where the parameters of a level set function are defined as functions of the optimization variables. The fluid behavior is modeled by the incompressible Navier-Stokes equations. Species transport is modeled by an advection-diffusion equation. The governing equations are discretized in space by a generalized extended finite element method. Face-oriented ghost-penalty terms are added for stability reasons and to improve the conditioning of the system. The boundary conditions are enforced weakly via Nitsche's method. The emergence of isolated volumes of fluid surrounded by solid during the optimization process leads to a singular analysis problem. An auxiliary indicator field is modeled to identify these volumes and to impose a constraint on the average pressure. Numerical results for 3D, steady-state and transient problems demonstrate that the CutFEM analyses are sufficiently accurate, and the optimized designs agree well with results from prior studies solved in 2D or by density approaches.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Network topology of neural systems supporting avalanche dynamics predicts stimulus propagation and recovery
Many neural systems display avalanche behavior characterized by uninterrupted sequences of neuronal firing whose distributions of size and durations are heavy-tailed. Theoretical models of such systems suggest that these dynamics support optimal information transmission and storage. However, the unknown role of network structure precludes an understanding of how variations in network topology manifest in neural dynamics and either support or impinge upon information processing. Here, using a generalized spiking model, we develop a mechanistic understanding of how network topology supports information processing through network dynamics. First, we show how network topology determines network dynamics by analytically and numerically demonstrating that network topology can be designed to propagate stimulus patterns for long durations. We then identify strongly connected cycles as empirically observable network motifs that are prevalent in such networks. Next, we show that within a network, mathematical intuitions from network control theory are tightly linked with dynamics initiated by node-specific stimulation and can identify stimuli that promote long-lasting cascades. Finally, we use these network-based metrics and control-based stimuli to demonstrate that long-lasting cascade dynamics facilitate delayed recovery of stimulus patterns from network activity, as measured by mutual information. Collectively, our results provide evidence that cortical networks are structured with architectural motifs that support long-lasting propagation and recovery of a few crucial patterns of stimulation, especially those consisting of activity in highly controllable neurons. Broadly, our results imply that avalanching neural networks could contribute to cognitive faculties that require persistent activation of neuronal patterns, such as working memory or attention.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Comparison of ontology alignment systems across a single matching task via McNemar's test
Ontology alignment is widely used to find the correspondences between different ontologies in diverse fields. After discovering the alignments, several performance scores are available to evaluate them. The scores typically require the identified alignment and a reference containing the underlying actual correspondences of the given ontologies. The current trend in alignment evaluation is to put forward a new score (e.g., precision, weighted precision, etc.) and to compare various alignments by juxtaposing the obtained scores. However, it is difficult to justify selecting one measure over the others for comparison. On top of that, the claim that one system performs better than another cannot be substantiated solely by comparing two scalars. In this paper, we propose statistical procedures which enable us to theoretically favor one system over another. McNemar's test is the statistical means by which two ontology alignment systems are compared over one matching task. The test applies to a 2x2 contingency table, which can be constructed from the alignments in two different ways, each of which has its own merits and pitfalls. The ways of constructing the contingency table and various apposite statistics from McNemar's test are elaborated in detail. In the case of having more than two alignment systems for comparison, family-wise errors are expected to occur, so ways of preventing such errors are also discussed. A directed graph visualizes the outcome of McNemar's test in the presence of multiple alignment systems; from this graph, it is readily understood whether one system is better than another or whether their differences are imperceptible. The proposed statistical methodologies are applied to the systems that participated in the OAEI 2016 anatomy track, and several well-known similarity metrics are also compared for the same matching problem.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
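A sketch of the core computation with statsmodels (the counts below are hypothetical; the paper's contribution includes the two ways of building this table from a pair of alignments and a reference):

    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Hypothetical 2x2 table over the reference correspondences:
    # rows = system A (found / missed), columns = system B (found / missed).
    table = np.array([[412,  35],
                      [ 17, 108]])

    result = mcnemar(table, exact=False, correction=True)  # chi-squared form
    print(result.statistic, result.pvalue)

Only the off-diagonal (disagreement) cells drive the statistic, which is what makes the test suitable for paired comparisons of two systems on one task.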
Polarizability Extraction for Waveguide-Fed Metasurfaces
We consider the design and modeling of metasurfaces that couple energy from guided waves to propagating wavefronts. This is a first step towards a comprehensive, multiscale modeling platform for metasurface antennas (large arrays of metamaterial elements embedded in a waveguide structure that radiates into free space), in which the detailed electromagnetic responses of metamaterial elements are replaced by polarizable dipoles. We present two methods to extract the effective polarizability of a metamaterial element embedded in a one- or two-dimensional waveguide. The first method invokes surface equivalence principles, averaging over the effective surface currents and charges within an element to obtain the effective dipole moments; the second method is based on computing the coefficients of the scattered waves within the waveguide, from which the effective polarizability can be inferred. We demonstrate these methods on several variants of waveguide-fed metasurface elements, finding excellent agreement between the two, as well as with analytical expressions derived for irises with simpler geometries. Extending the polarizability extraction technique to higher order multipoles, we confirm the validity of the dipole approximation for common metamaterial elements. With the effective polarizabilities of the metamaterial elements accurately determined, the radiated fields generated by a metasurface antenna (inside and outside the antenna) can be found self-consistently by including the interactions between polarizable dipoles. The dipole description provides an alternative language and computational framework for engineering metasurface antennas, holograms, lenses, beam-forming arrays, and other electrically large, waveguide-fed metasurface structures.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Connection Scan Algorithm
We introduce the Connection Scan Algorithm (CSA) to efficiently answer queries to timetable information systems. The input consists, in the simplest setting, of a source position and a desired target position. The output is a sequence of vehicles, such as trains or buses, that a traveler should take to get from the source to the target. We study several problem variations, such as the earliest arrival and profile problems. We present algorithm variants that only optimize the arrival time or that additionally optimize the number of transfers in the Pareto sense. An advantage of CSA is that it can easily adjust to changes in the timetable, allowing the easy incorporation of known vehicle delays. We additionally introduce the Minimum Expected Arrival Time (MEAT) problem to handle possible, uncertain, future vehicle delays. We present a solution to the MEAT problem that is based upon CSA. Finally, we extend CSA using the multilevel overlay paradigm to answer complex queries on nation-wide integrated timetables with trains and buses.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
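A minimal sketch of the earliest-arrival variant (ours; it omits footpaths, transfers, and the profile and MEAT extensions discussed in the abstract):

    from math import inf

    def csa_earliest_arrival(connections, source, target, dep_time):
        """connections: iterable of (dep_stop, arr_stop, dep, arr) tuples,
        sorted by departure time dep. Scan once, relaxing every
        connection that is reachable when it departs."""
        earliest = {source: dep_time}
        for dep_stop, arr_stop, dep, arr in connections:
            if earliest.get(dep_stop, inf) <= dep and arr < earliest.get(arr_stop, inf):
                earliest[arr_stop] = arr
        return earliest.get(target, inf)

The single sorted scan is also what makes the algorithm easy to update when vehicle delays change the timetable.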
A Proof of Orthogonal Double Machine Learning with $Z$-Estimators
We consider two stage estimation with a non-parametric first stage and a generalized method of moments second stage, in a simpler setting than (Chernozhukov et al. 2016). We give an alternative proof of the theorem given in (Chernozhukov et al. 2016) that orthogonal second stage moments, sample splitting and $n^{1/4}$-consistency of the first stage, imply $\sqrt{n}$-consistency and asymptotic normality of second stage estimates. Our proof is for a variant of their estimator, which is based on the empirical version of the moment condition (Z-estimator), rather than a minimization of a norm of the empirical vector of moments (M-estimator). This note is meant primarily for expository purposes, rather than as a new technical contribution.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Quantile Treatment Effects in Difference in Differences Models under Dependence Restrictions and with only Two Time Periods
This paper shows that the Conditional Quantile Treatment Effect on the Treated can be identified using a combination of (i) a conditional Distributional Difference in Differences assumption and (ii) an assumption on the conditional dependence between the change in untreated potential outcomes and the initial level of untreated potential outcomes for the treated group. The second assumption recovers the unknown dependence from the observed dependence for the untreated group. We also consider estimation and inference in the case where all of the covariates are discrete. We propose a uniform inference procedure based on the exchangeable bootstrap and show its validity. We conclude the paper by estimating the effect of state-level changes in the minimum wage on the distribution of earnings for subgroups defined by race, gender, and education.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Coppersmith's lattices and "focus groups": an attack on small-exponent RSA
We present a principled technique for reducing the matrix size in some applications of Coppersmith's lattice method for finding roots of modular polynomial equations. It relies on an analysis of the actual performance of Coppersmith's attack for smaller parameter sizes, which can be thought of as "focus group" testing. When applied to the small-exponent RSA problem, it reduces lattice dimensions and consequently running times (sometimes by factors of two or more). We also argue that existing metrics (such as enabling condition bounds) are not as important as often thought for measuring the true performance of attacks based on Coppersmith's method. Finally, experiments are given to indicate that certain lattice reduction algorithms (such as Nguyen-Stehlé's L2) may be particularly well-suited for Coppersmith's method.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Gas vs. solid phase deuterated chemistry: HDCO and D$_2$CO in massive star-forming regions
The formation of deuterated molecules is favoured at low temperatures and high densities. Therefore, the deuteration fraction $D_{\rm frac}$ is expected to be enhanced in cold, dense prestellar cores and to decrease after protostellar birth. Previous studies have shown that the deuterated forms of species such as N2H+ (formed in the gas phase) and CH3OH (formed on grain surfaces) can be used as evolutionary indicators and to constrain their dominant formation processes and time-scales. Formaldehyde (H2CO) and its deuterated forms can be produced both in the gas phase and on grain surfaces. However, the relative importance of these two chemical pathways is unclear. Comparison of the deuteration fraction of H2CO with respect to those of N2H+, NH3 and CH3OH can help us to understand its formation processes and time-scales. With the new SEPIA Band 5 receiver on APEX, we have observed the J=3-2 rotational lines of HDCO and D2CO at 193 GHz and 175 GHz toward three massive star-forming regions hosting objects at different evolutionary stages: two high-mass starless cores (HMSCs), two high-mass protostellar objects (HMPOs), and one ultracompact HII region (UCHII). Using previously obtained H2CO J=3-2 data, the deuteration fractions HDCO/H2CO and D2CO/HDCO are estimated. Our observations show that singly-deuterated H2CO is detected toward all sources and that the deuteration fraction of H2CO increases from the HMSC to the HMPO phase and then sharply decreases in the latest evolutionary stage (UCHII). The doubly-deuterated form of H2CO is detected only in the earlier evolutionary stages, with D2CO/H2CO showing a pattern that is qualitatively consistent with that of HDCO/H2CO, within current uncertainties. Our initial results show that H2CO may display a similar $D_{\rm frac}$ pattern to that of CH3OH in massive young stellar objects. This finding suggests that solid-state reactions dominate its formation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Improved stability of optimal traffic paths
Models involving branched structures are employed to describe several supply-demand systems such as the structure of the nerves of a leaf, the system of roots of a tree and the nervous or cardiovascular systems. Given a flow (traffic path) that transports a given measure $\mu^-$ onto a target measure $\mu^+$, along a 1-dimensional network, the transportation cost per unit length is supposed in these models to be proportional to a concave power $\alpha \in (0,1)$ of the intensity of the flow. In this paper we address an open problem in the book "Optimal transportation networks" by Bernot, Caselles and Morel and we improve the stability for optimal traffic paths in the Euclidean space $\mathbb{R}^d$, with respect to variations of the given measures $(\mu^-,\mu^+)$, which was known up to now only for $\alpha>1-\frac1d$. We prove it for exponents $\alpha>1-\frac1{d-1}$ (in particular, for every $\alpha \in (0,1)$ when $d=2$), for a fairly large class of measures $\mu^+$ and $\mu^-$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Neural Code Comprehension: A Learnable Representation of Code Semantics
With the recent success of embeddings in natural language processing, research has been conducted into applying similar methods to code analysis. Most works attempt to process the code directly or use a syntactic tree representation, treating it like sentences written in a natural language. However, none of the existing methods are sufficient to comprehend program semantics robustly, due to structural features such as function calls, branching, and interchangeable order of statements. In this paper, we propose a novel processing technique to learn code semantics, and apply it to a variety of program analysis tasks. In particular, we stipulate that a robust distributional hypothesis of code applies to both human- and machine-generated programs. Following this hypothesis, we define an embedding space, inst2vec, based on an Intermediate Representation (IR) of the code that is independent of the source programming language. We provide a novel definition of contextual flow for this IR, leveraging both the underlying data- and control-flow of the program. We then analyze the embeddings qualitatively using analogies and clustering, and evaluate the learned representation on three different high-level tasks. We show that even without fine-tuning, a single RNN architecture and fixed inst2vec embeddings outperform specialized approaches for performance prediction (compute device mapping, optimal thread coarsening); and algorithm classification from raw code (104 classes), where we set a new state-of-the-art.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Electron conduction in solid state via time varying wavevectors
In this paper, we study electron wavepacket dynamics in electric and magnetic fields. We rigorously derive the semiclassical equations of electron dynamics in electric and magnetic fields, both for a free electron and for an electron in a periodic potential. We do this by introducing time-varying wavevectors $k(t)$. In the presence of a magnetic field, our wavepacket reproduces the classical cyclotron orbits once the origin of the Schrödinger equation is correctly chosen to be the center of the cyclotron orbit. In the presence of both electric and magnetic fields, our equations for the wavepacket dynamics differ from the classical Lorentz force equations. We show that in a periodic potential, on application of an electric field, the electron wave function adiabatically follows the wavefunction of a time-varying Bloch wavevector $k(t)$, with its energies suitably shifted with time. We derive the effective mass equation and discuss conduction in conductors and insulators.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
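For reference, the textbook semiclassical equations that the abstract above revisits read (for a band energy $\varepsilon(k)$, with $-e$ the electron charge):

$$\hbar\,\dot{\mathbf{k}} = -e\,(\mathbf{E} + \mathbf{v}\times\mathbf{B}), \qquad \mathbf{v} = \frac{1}{\hbar}\,\nabla_{\mathbf{k}}\,\varepsilon(\mathbf{k}).$$

The paper's point is that the wavepacket equations it derives for combined electric and magnetic fields differ from this Lorentz-force form.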
On blowup of co-rotational wave maps in odd space dimensions
We consider co-rotational wave maps from the $(1+d)$-dimensional Minkowski space into the $d$-sphere for $d\geq 3$ odd. This is an energy-supercritical model which is known to exhibit finite-time blowup via self-similar solutions. Based on a method developed by the second author and Schörkhuber, we prove the asymptotic nonlinear stability of the "ground-state" self-similar solution.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Young Substellar Companion ROXs 12 B: Near-Infrared Spectrum, System Architecture, and Spin-Orbit Misalignment
ROXs 12 (2MASS J16262803-2526477) is a young star hosting a directly imaged companion near the deuterium-burning limit. We present a suite of spectroscopic, imaging, and time-series observations to characterize the physical and environmental properties of this system. Moderate-resolution near-infrared spectroscopy of ROXs 12 B from Gemini-North/NIFS and Keck/OSIRIS reveals signatures of low surface gravity including weak alkali absorption lines and a triangular $H$-band pseudo-continuum shape. No signs of Pa$\beta$ emission are evident. As a population, however, we find that about half (46 $\pm$ 14\%) of young ($\lesssim$15 Myr) companions with masses $\lesssim$20 $M_\mathrm{Jup}$ possess actively accreting subdisks detected via Pa$\beta$ line emission, which represents a lower limit on the prevalence of circumplanetary disks in general as some are expected to be in a quiescent phase of accretion. The bolometric luminosity of the companion and age of the host star (6$^{+4}_{-2}$ Myr) imply a mass of 17.5 $\pm$ 1.5 $M_\mathrm{Jup}$ for ROXs 12 B based on hot-start evolutionary models. We identify a wide (5100 AU) tertiary companion to this system, 2MASS J16262774-2527247, which is heavily accreting and exhibits stochastic variability in its $K2$ light curve. By combining $v$sin$i_*$ measurements with rotation periods from $K2$, we constrain the line-of-sight inclinations of ROXs 12 A and 2MASS J16262774-2527247 and find that they are misaligned by 60$^{+7}_{-11}$$^{\circ}$. In addition, the orbital axis of ROXs 12 B is likely misaligned from the spin axis of its host star ROXs 12 A, suggesting that ROXs 12 B formed akin to fragmenting binary stars or in an equatorial disk that was torqued by the wide stellar tertiary.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Connecting dissipation and phase slips in a Josephson junction between fermionic superfluids
We study the emergence of dissipation in an atomic Josephson junction between weakly-coupled superfluid Fermi gases. We find that vortex-induced phase slippage is the dominant microscopic source of dissipation across the BEC-BCS crossover. We explore different dynamical regimes by tuning the bias chemical potential between the two superfluid reservoirs. For small excitations, we observe dissipation and phase coherence to coexist, with a resistive current followed by well-defined Josephson oscillations. We link the junction transport properties to the phase-slippage mechanism, finding that vortex nucleation is primarily responsible for the observed trends of conductance and critical current. For large excitations, we observe the irreversible loss of coherence between the two superfluids, and transport cannot be described only within an uncorrelated phase-slip picture. Our findings open new directions for investigating the interplay between dissipative and superfluid transport in strongly correlated Fermi systems, and general concepts in out-of-equilibrium quantum systems.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Satellite conjunction analysis and the false confidence theorem
Satellite conjunction analysis is the assessment of collision risk during a close encounter between a satellite and another object in orbit. A counterintuitive phenomenon has emerged in the conjunction analysis literature: probability dilution, in which lower quality data paradoxically appear to reduce the risk of collision. We show that probability dilution is a symptom of a fundamental deficiency in epistemic probability distributions. In probabilistic representations of statistical inference, there are always false propositions that have a high probability of being assigned a high degree of belief. We call this deficiency false confidence. In satellite conjunction analysis, it results in a severe and persistent underestimation of collision risk exposure. We introduce the Martin--Liu validity criterion as a benchmark by which to identify statistical methods that are free from false confidence. If expressed using belief functions, such inferences will necessarily be non-additive. In satellite conjunction analysis, we show that $K \sigma$ uncertainty ellipsoids satisfy the validity criterion. Performing collision avoidance maneuvers based on ellipsoid overlap will ensure that collision risk is capped at the user-specified level. Further, this investigation into satellite conjunction analysis provides a template for recognizing and resolving false confidence issues as they occur in other problems of statistical inference.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
A formula goes to court: Partisan gerrymandering and the efficiency gap
Recently, a proposal has been advanced to detect unconstitutional partisan gerrymandering with a simple formula called the efficiency gap (EG). The efficiency gap is now working its way towards a possible landmark case in the Supreme Court. This note explores some of its mathematical properties in light of the fact that it reduces to a straight proportional comparison of votes to seats. Though we offer several critiques, we assess that EG can still be a useful component of a courtroom analysis. But a famous formula can take on a life of its own, and this one will need to be watched closely.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
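For concreteness, the efficiency gap computed from district-level two-party vote counts (the standard formulation; under equal district turnout it reduces to the seats-votes comparison $(S - 1/2) - 2(V - 1/2)$ for seat share $S$ and vote share $V$, which is the proportionality the note analyzes):

    def efficiency_gap(votes_a, votes_b):
        """District-level two-party vote counts; ties ignored for brevity.
        Wasted votes: all losing-side votes, plus winning-side votes
        beyond the 50% threshold."""
        wasted_a = wasted_b = total = 0.0
        for a, b in zip(votes_a, votes_b):
            district_total = a + b
            threshold = district_total / 2.0
            if a > b:
                wasted_a += a - threshold
                wasted_b += b
            else:
                wasted_a += a
                wasted_b += b - threshold
            total += district_total
        return (wasted_a - wasted_b) / total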
Quantitative aspects of linear and affine closed lambda terms
Affine $\lambda$-terms are $\lambda$-terms in which each bound variable occurs at most once, and linear $\lambda$-terms are $\lambda$-terms in which each bound variable occurs once and only once. In this paper we count the number of closed affine $\lambda$-terms of size $n$, closed linear $\lambda$-terms of size $n$, affine $\beta$-normal forms of size $n$ and linear $\beta$-normal forms of size $n$, for different ways of measuring the size of $\lambda$-terms. From these formulas, we show how to derive programs for generating all the terms of size $n$ for each class. For this we use a specific data structure, namely contexts that keep track of all the holes at different levels of abstraction.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Drug response prediction by ensemble learning and drug-induced gene expression signatures
The chemotherapeutic response of cancer cells to a given compound is among the most fundamental pieces of information required to design anti-cancer drugs. Recent advances in producing large drug screens against cancer cell lines have provided an opportunity to apply machine learning methods for this purpose. In addition to cytotoxicity databases, a considerable amount of drug-induced gene expression data has also become publicly available. Following this, several methods that exploit omics data were proposed to predict drug activity on cancer cells. However, due to the complexity of cancer drug mechanisms, none of the existing methods is perfect. One possible direction, therefore, is to combine the strengths of both the methods and the databases for improved performance. We demonstrate that integrating a large number of predictions by the proposed method improves the performance for this task. The predictors in the ensemble differ in several aspects, such as the method itself, the number of tasks the method considers (multi-task vs. single-task), and the subset of data considered (sub-sampling). We show that all these different aspects contribute to the success of the final ensemble. In addition, we use the drug screen data together with two novel signatures produced from the drug-induced gene expression profiles of cancer cell lines. Finally, we evaluate the method's predictions by in vitro experiments, in addition to tests on the data sets. The predictions of the methods, the signatures and the software are available from \url{this http URL}.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
Eigenvalue approximation of sums of Hermitian matrices from eigenvector localization/delocalization
We propose a technique for calculating and understanding the eigenvalue distribution of sums of random matrices from the known distributions of the summands. The exact problem is formidably hard. One extreme approximation to the true density amounts to classical probability, in which the matrices are assumed to commute; the other extreme is related to free probability, in which the eigenvectors are assumed to be in generic position and the matrices sufficiently large. In practice, free probability theory can give a good approximation of the density. We develop a technique based on eigenvector localization/delocalization that works very well for important problems of interest where free probability is not sufficient, but certain uniformity properties apply. The localization/delocalization property appears in a convex combination parameter that, notably, is independent of any eigenvalue properties and yields accurate eigenvalue density approximations. We demonstrate this technique on a number of examples and also discuss a more general technique for when the uniformity properties fail to apply.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
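The two extremes described in the abstract above can be demonstrated numerically (a self-contained sketch; the paper's localization/delocalization interpolation between them is not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Two fixed +/-1 spectra in diagonal form.
    a = rng.choice([-1.0, 1.0], size=n)
    b = rng.choice([-1.0, 1.0], size=n)
    A, B = np.diag(a), np.diag(b)

    # Classical extreme: commuting summands, eigenvalues just add pairwise.
    classical = np.sort(a + b)

    # Free extreme: eigenvectors in generic position, modeled by a
    # Haar-random orthogonal rotation of B.
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))  # sign fix for exact Haar measure
    free = np.linalg.eigvalsh(A + Q @ B @ Q.T)

    # "classical" concentrates on {-2, 0, 2}; "free" spreads out into the
    # arcsine-type law predicted by free additive convolution.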
Some Ultraspheroidal Monogenic Clifford Gegenbauer Jacobi Polynomials and Associated Wavelets
In the present paper, new classes of wavelet functions are presented in the framework of Clifford analysis. Firstly, some classes of orthogonal polynomials are provided based on two-parameter weight functions. Such classes encompass the well-known Jacobi and Gegenbauer polynomials when one of the parameters is relaxed. The discovered polynomial sets are then applied to introduce new wavelet functions. A reconstruction formula as well as Fourier-Plancherel rules are proved.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Gait learning for soft microrobots controlled by light fields
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing conditions. However, because of the lack of accurate locomotion models and the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semi-synthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot's locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
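A minimal sketch of such a BO/GP loop with scikit-learn and an expected-improvement criterion (run_gait_trial and the candidate grid are hypothetical stand-ins for a microrobot experiment and the gait-parameter space; the paper's prior selection is the part not shown):

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def expected_improvement(mu, sigma, best):
        sigma = np.maximum(sigma, 1e-9)
        z = (mu - best) / sigma
        return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    def optimize_gait(run_gait_trial, candidates, budget=20, seed_trials=3):
        """candidates: (M, d) array of gait parameter vectors."""
        X, y = [], []
        rng = np.random.default_rng(0)
        for t in range(budget):
            if t < seed_trials:  # a few random trials to seed the GP
                x = candidates[rng.integers(len(candidates))]
            else:
                gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                              normalize_y=True)
                gp.fit(np.array(X), np.array(y))
                mu, sigma = gp.predict(candidates, return_std=True)
                x = candidates[np.argmax(expected_improvement(mu, sigma, max(y)))]
            X.append(x)
            y.append(run_gait_trial(x))  # one (noisy) locomotion measurement
        return X[int(np.argmax(y))]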
Evolution of Nagaoka phase with kinetic energy frustrating hoppings
We investigate, using the density matrix renormalization group, the evolution of the Nagaoka state with $t'$ hoppings that frustrate the hole kinetic energy in the $U=\infty$ Hubbard model on the anisotropic triangular lattice and the square lattice with second-nearest neighbor hoppings. We find that the Nagaoka ferromagnet survives up to a rather small $t'_c/t \sim 0.2.$ At this critical value, there is a transition to an antiferromagnetic phase, that depends on the lattice: a ${\bf Q}=(Q,0)$ spiral order, that continuously evolves with $t'$, for the triangular lattice, and the usual ${\bf Q}=(\pi,\pi)$ Néel order for the square lattice. Remarkably, the local magnetization takes its classical value for all considered $t'$ ($t'/t \le 1$). Our results show that the recently found classical kinetic antiferromagnetism, a perfect counterpart of Nagaoka ferromagnetism, is a generic phenomenon in these kinetically frustrated electronic systems.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Blocking Transferability of Adversarial Examples in Black-Box Learning Systems
Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars. To further broaden the use of ML models, cloud-based services offered by Microsoft, Amazon, Google, and others have developed ML-as-a-service tools as black-box systems. However, ML classifiers are vulnerable to adversarial examples: inputs that are maliciously modified can cause the classifier to provide adversary-desired outputs. Moreover, it is known that adversarial examples generated on one classifier are likely to cause another classifier to make the same mistake, even if the classifiers have different architectures or are trained on disjoint datasets. This property, known as transferability, opens up the possibility of attacking black-box systems by generating adversarial examples on a substitute classifier and transferring the examples to the target classifier. Therefore, the key to protecting black-box learning systems against adversarial examples is to block their transferability. To this end, we propose a training method in which, as the input is increasingly perturbed, the classifier smoothly outputs lower confidence in the original label and instead predicts that the input is "invalid". In essence, we augment the output class set with a NULL label and train the classifier to reject adversarial examples by classifying them as NULL. In experiments, we apply a wide range of attacks based on adversarial examples to the black-box systems. We show that a classifier trained with the proposed method effectively resists adversarial examples, while maintaining its accuracy on clean data.
1
0
0
0
0
0
Analytic continuation of Wolynes theory into the Marcus inverted regime
The Wolynes theory of electronically nonadiabatic reaction rates [P. G. Wolynes, J. Chem. Phys. 87, 6559 (1987)] is based on a saddle point approximation to the time integral of a reactive flux autocorrelation function in the nonadiabatic (golden rule) limit. The dominant saddle point is on the imaginary time axis at $t_{\rm sp}=i\lambda_{\rm sp}\hbar$, and provided $\lambda_{\rm sp}$ lies in the range $-\beta/2\le\lambda_{\rm sp}\le\beta/2$, it is straightforward to evaluate the rate constant using information obtained from an imaginary time path integral calculation. However, if $\lambda_{\rm sp}$ lies outside this range, as it does in the Marcus inverted regime, the path integral diverges. This has led to claims in the literature that Wolynes theory cannot describe the correct behaviour in the inverted regime. Here we show how the imaginary time correlation function obtained from a path integral calculation can be analytically continued to $\lambda_{\rm sp}<-\beta/2$, and the continuation used to evaluate the rate in the inverted regime. Comparisons with exact golden rule results for a spin-boson model and a more demanding (asymmetric and anharmonic) model of electronic predissociation show that the theory is just as accurate in the inverted regime as it is in the normal regime.
0
1
0
0
0
0
JDFTx: software for joint density-functional theory
Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs. This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.
0
1
0
0
0
0
Experimental realization of purely excitonic lasing in ZnO microcrystals at room temperature: transition from exciton-exciton to exciton-electron scattering
Since the seminal observation of room-temperature laser emission from ZnO thin films and nanowires, numerous attempts have been carried out for detailed understanding of the lasing mechanism in ZnO. In spite of the extensive efforts performed over the last decades, the origin of optical gain at room temperature is still a matter of considerable discussion. We show that ZnO microcrystals with a size of a few micrometers exhibit purely excitonic lasing at room temperature without showing any symptoms of electron-hole plasma emission. We then present distinct experimental evidence that the room-temperature excitonic lasing is achieved not by exciton-exciton scattering, as has been generally believed, but by exciton-electron scattering. As the temperature is lowered below ~150 K, the lasing mechanism shifts from exciton-electron scattering to exciton-exciton scattering. We also argue that the ease of carrier diffusion plays a significant role in enabling room-temperature excitonic lasing.
0
1
0
0
0
0
Selective probing of hidden spin-polarized states in inversion-symmetric bulk MoS2
Spin- and angle-resolved photoemission spectroscopy is used to reveal that a large spin polarization is observable in the bulk centrosymmetric transition metal dichalcogenide MoS2. It is found that the measured spin polarization can be reversed by changing the handedness of incident circularly-polarized light. Calculations based on a three-step model of photoemission show that the valley and layer-locked spin-polarized electronic states can be selectively addressed by circularly-polarized light, therefore providing a novel route to probe these hidden spin-polarized states in inversion-symmetric systems as predicted by Zhang et al. [Nature Physics 10, 387 (2014)].
0
1
0
0
0
0
Classification without labels: Learning from mixed samples in high energy physics
Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
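The key property claimed above (mixture labels suffice) can be illustrated on synthetic data: two mixed samples with different, unknown signal fractions are labeled only by which mixture they came from, yet a standard classifier trained on those labels learns the signal-versus-background direction. This sketch is illustrative only; the 1-D Gaussian features and the fractions are invented, not the paper's quark/gluon benchmark.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def draw(n, f_sig):
    # Mixture of 1-D "signal" (mean +1) and "background" (mean -1)
    is_sig = rng.random(n) < f_sig
    x = np.where(is_sig, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))
    return x.reshape(-1, 1), is_sig

x1, truth1 = draw(5000, 0.8)   # e.g. a quark-enriched selection
x2, truth2 = draw(5000, 0.3)   # e.g. a gluon-enriched selection

X = np.vstack([x1, x2])
mix_label = np.r_[np.ones(5000), np.zeros(5000)]   # which mixture only
clf = LogisticRegression().fit(X, mix_label)

# Evaluate against the true per-event labels, never used in training
truth = np.r_[truth1, truth2]
print("AUC vs truth:", roc_auc_score(truth, clf.predict_proba(X)[:, 1]))
```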
0
0
0
1
0
0
Semantic Annotation for Microblog Topics Using Wikipedia Temporal Information
Trending topics in microblogs such as Twitter are valuable resources to understand social aspects of real-world events. To enable deep analyses of such trends, semantic annotation is an effective approach; yet the problem of annotating microblog trending topics is largely unexplored by the research community. In this work, we tackle the problem of mapping trending Twitter topics to entities from Wikipedia. We propose a novel model that complements traditional text-based approaches by rewarding entities that exhibit a high temporal correlation with topics during their burst time period. By exploiting temporal information from the Wikipedia edit history and page view logs, we have improved the annotation performance by 17-28\%, as compared to the competitive baselines.
1
0
0
0
0
0
Twin-beam real-time position estimation of micro-objects in 3D
Various optical methods for measuring positions of micro-objects in 3D have been reported in the literature. Nevertheless, the majority of them are not suitable for real-time operation, which is needed, for example, for feedback position control. In this paper, we present a method for real-time estimation of the position of micro-objects in 3D; the method is based on twin-beam illumination and it requires only a very simple hardware setup whose essential part is a standard image sensor without any lens. Performance of the proposed method is tested during a micro-manipulation task in which the estimated position served as feedback for the controller. The experiments show that the estimate is accurate to within ~3 µm in the lateral position and ~7 µm in the axial distance at a refresh rate of 10 Hz. Although the experiments are done using spherical objects, the presented method could be modified to handle non-spherical objects as well.
1
1
0
0
0
0
Asymmetric metallicity patterns in the stellar velocity space with RAVE
We explore the correlations between velocity and metallicity and the possible distinct chemical signatures of the velocity over-densities of the local Galactic neighbourhood. We use the large spectroscopic survey RAVE and the Geneva Copenhagen Survey. We compare the metallicity distribution of regions in the velocity plane ($v_R,v_\phi$) with that of their symmetric counterparts ($-v_R,v_\phi$). We expect similar metallicity distributions if there are no tracers of a sub-population (e.g., a dispersed cluster, accreted stars), if the disk of the Galaxy is axisymmetric, and if the orbital effects of the spiral arms and the bar are weak. We find that the metallicity-velocity space of the solar neighbourhood is highly patterned. A large fraction of the velocity plane shows differences in the metallicity distribution when comparing symmetric $v_R$ regions. The typical differences in the median metallicity are of $0.05$ dex with a statistical significance of at least $95\%$, and with values up to $0.6$ dex. For low azimuthal velocity $v_\phi$, stars moving outwards in the Galaxy have on average higher metallicity than those moving inwards. These include stars in the Hercules and Hyades moving groups and other velocity branch-like structures. For higher $v_\phi$, the stars moving inwards have higher metallicity than those moving outwards. The most likely interpretation of the metallicity asymmetry is that it is due to the orbital effects of the bar and the radial metallicity gradient of the disk. We present a simulation that supports this idea. We have also discovered a positive gradient in $v_\phi$ with respect to metallicity at high metallicities, apart from the two known positive and negative gradients for the thick and thin disks, respectively.
0
1
0
0
0
0
End-to-End Attention based Text-Dependent Speaker Verification
A new type of End-to-End system for text-dependent speaker verification is presented in this paper. Previously, using phonetically discriminative/speaker discriminative DNNs as feature extractors for speaker verification has shown promising results. The extracted frame-level (DNN bottleneck, posterior or d-vector) features are equally weighted and aggregated to compute an utterance-level speaker representation (d-vector or i-vector). In this work we use speaker discriminative CNNs to extract the noise-robust frame-level features. These features are smartly combined to form an utterance-level speaker vector through an attention mechanism. The proposed attention model takes the speaker discriminative information and the phonetic information to learn the weights. The whole system, including the CNN and attention model, is jointly optimized using an end-to-end criterion. The training algorithm imitates exactly the evaluation process --- directly mapping a test utterance and a few target speaker utterances into a single verification score. The algorithm can automatically select the most similar impostor for each target speaker to train the network. We demonstrate the effectiveness of the proposed end-to-end system on the Windows $10$ "Hey Cortana" speaker verification task.
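The attention-based aggregation step, replacing the equal weighting of frame-level features with learned weights, reduces to a softmax-weighted average; a minimal numpy sketch is below, with a single linear scoring function standing in for the paper's attention model fed by speaker and phonetic information.

```python
import numpy as np

def attentive_pooling(frames, w, b=0.0):
    """frames: (T, d) frame-level features; w: (d,) scoring weights.
    Returns a (d,) utterance-level speaker vector."""
    scores = frames @ w + b                    # one scalar score per frame
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the T frames
    return weights @ frames                    # weighted, not equal, average

T, d = 120, 64
frames = np.random.randn(T, d)                 # e.g. CNN bottleneck features
utt_vec = attentive_pooling(frames, np.random.randn(d))
print(utt_vec.shape)                           # (64,)
```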
1
0
0
1
0
0
Automated Discovery of Process Models from Event Logs: Review and Benchmark
Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering twelve publicly-available real-life event logs, twelve proprietary real-life event logs, and nine quality metrics. The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.
1
0
0
0
0
0
Criteria for strict monotonicity of the mixed volume of convex polytopes
Let $P_1,\dots, P_n$ and $Q_1,\dots, Q_n$ be convex polytopes in $\mathbb{R}^n$ such that $P_i\subset Q_i$. It is well-known that the mixed volume has the monotonicity property: $V(P_1,\dots,P_n)\leq V(Q_1,\dots,Q_n)$. We give two criteria for when this inequality is strict in terms of essential collections of faces as well as mixed polyhedral subdivisions. This geometric result allows us to characterize sparse polynomial systems with Newton polytopes $P_1,\dots,P_n$ whose number of isolated solutions equals the normalized volume of the convex hull of $P_1\cup\dots\cup P_n$. In addition, we obtain an analog of Cramer's rule for sparse polynomial systems.
0
0
1
0
0
0
Colorings with Fractional Defect
Consider a coloring of a graph such that each vertex is assigned a fraction of each color, with the total amount of colors at each vertex summing to $1$. We define the fractional defect of a vertex $v$ to be the sum of the overlaps with each neighbor of $v$, and the fractional defect of the graph to be the maximum of the defects over all vertices. Note that this coincides with the usual definition of defect if every vertex is monochromatic. We provide results on the minimum fractional defect of $2$-colorings of some graphs.
0
0
1
0
0
0
The new concepts of measurement error's regularities and effect characteristics
In several earlier publications, the authors proposed a new system of measurement theory based on an error non-classification philosophy, which completely overturns the existing measurement concept system of precision, trueness and accuracy. In this paper, focusing on the issues of error regularities and effect characteristics, the authors give a thematic interpretation and argue that error regularities actually arise from different cognitive perspectives and therefore cannot be used to classify errors, and that error effect characteristics actually depend on the artificial condition rules of repeated measurement and likewise cannot be used to classify errors. Thus, from the perspectives of error regularities and effect characteristics, the existing error classification philosophy remains incorrect, and an uncertainty concept system, which must be interpreted through the error non-classification philosophy, naturally becomes the only way forward for measurement theory.
0
0
1
1
0
0
The cavity approach for Steiner trees packing problems
The Belief Propagation approximation, or cavity method, has been recently applied to several combinatorial optimization problems in its zero-temperature implementation, the max-sum algorithm. In particular, recent developments to solve the edge-disjoint paths problem and the prize-collecting Steiner tree problem on graphs have shown remarkable results for several classes of graphs and for benchmark instances. Here we propose a generalization of these techniques for two variants of the Steiner trees packing problem where multiple "interacting" trees have to be sought within a given graph. Depending on the interaction among trees we distinguish the vertex-disjoint Steiner trees problem, where trees cannot share nodes, from the edge-disjoint Steiner trees problem, where edges cannot be shared by trees but nodes can be members of multiple trees. Several practical problems of huge interest in network design can be mapped into these two variants, for instance, the physical design of Very Large Scale Integration (VLSI) chips. The formalism described here relies on two-component edge variables that allow us to formulate a message-passing algorithm for the V-DStP and two algorithms for the E-DStP differing in the scaling of the computational time with respect to some relevant parameters. We will show that one of the two formalisms used for the edge-disjoint variant allows us to map the max-sum update equations into a weighted maximum matching problem over proper bipartite graphs. We developed a heuristic procedure based on the max-sum equations that shows excellent performance on synthetic networks (in particular, outperforming standard multi-step greedy procedures by large margins) and on large benchmark instances of VLSI for which the optimal solution is known; the algorithm found the optimum in two cases and the gap to optimality was never larger than 4%.
1
0
0
0
0
0
Observational Learning by Reinforcement Learning
Observational learning is a type of learning that occurs as a function of observing, retaining and possibly replicating or imitating the behaviour of another agent. It is a core mechanism appearing in various instances of social learning and has been found to be employed in several intelligent species, including humans. In this paper, we investigate to what extent the explicit modelling of other agents is necessary to achieve observational learning through machine learning. Especially, we argue that observational learning can emerge from pure Reinforcement Learning (RL), potentially coupled with memory. Through simple scenarios, we demonstrate that an RL agent can leverage the information provided by the observations of another agent performing a task in a shared environment. The other agent is only observed through the effect of its actions on the environment and never explicitly modeled. Two key aspects are borrowed from observational learning: i) the observer behaviour needs to change as a result of viewing a 'teacher' (another agent) and ii) the observer needs to be motivated somehow to engage in making use of the other agent's behaviour. The latter is naturally modeled by RL, by correlating the learning agent's reward with the teacher agent's behaviour.
1
0
0
1
0
0
P4-compatible High-level Synthesis of Low Latency 100 Gb/s Streaming Packet Parsers in FPGAs
Packet parsing is a key step in SDN-aware devices. Packet parsers in SDN networks need to be both reconfigurable and fast, to support the evolving network protocols and the increasing multi-gigabit data rates. The combination of packet processing languages with FPGAs seems to be the perfect match for these requirements. In this work, we develop an open-source FPGA-based configurable architecture for arbitrary packet parsing to be used in SDN networks. We generate low latency and high-speed streaming packet parsers directly from a packet processing program. Our architecture is pipelined and entirely modeled using templated C++ classes. The pipeline layout is derived from a parser graph that corresponds to a P4 program after a series of graph transformation rounds. The RTL code is generated from the C++ description using Xilinx Vivado HLS and synthesized with Xilinx Vivado. Our architecture achieves 100 Gb/s data rate in a Xilinx Virtex-7 FPGA while reducing the latency by 45% and the LUT usage by 40% compared to the state-of-the-art.
1
0
0
0
0
0
Uniruledness of Strata of Holomorphic Differentials in Small Genus
We address questions concerning the birational geometry of the strata of holomorphic and quadratic differentials. We show strata of holomorphic and quadratic differentials to be uniruled in small genus by constructing rational curves via pencils on K3 and del Pezzo surfaces respectively. Restricting to genus $3\leq g\leq6$, we construct projective bundles over rational varieties that dominate the holomorphic strata with length at most $g-1$, hence showing in addition that these strata are unirational.
0
0
1
0
0
0
Conservation laws, vertex corrections, and screening in Raman spectroscopy
We present a microscopic theory for the Raman response of a clean multiband superconductor accounting for the effects of vertex corrections and long-range Coulomb interaction. The measured Raman intensity, $R(\Omega)$, is proportional to the imaginary part of the fully renormalized particle-hole correlator with Raman form-factors $\gamma(\vec k)$. In a BCS superconductor, a bare Raman bubble is non-zero for any $\gamma(\vec k)$ and diverges at $\Omega = 2\Delta +0$, where $\Delta$ is the largest gap along the Fermi surface. However, for $\gamma(\vec k) =$ const, the full $R(\Omega)$ is expected to vanish due to particle number conservation. It was long thought that this vanishing is due to the singular screening by long-range Coulomb interaction. We argue that this vanishing actually holds due to vertex corrections from the same short-range interaction that gives rise to superconductivity. We further argue that long-range Coulomb interaction does not affect the Raman signal for $any$ $\gamma(\vec k)$. We argue that vertex corrections eliminate the divergence at $2\Delta$ and replace it with a maximum at a somewhat larger frequency. We also argue that vertex corrections give rise to sharp peaks in $R(\Omega)$ at $\Omega < 2\Delta$, when $\Omega$ coincides with the frequency of one of the collective modes in a superconductor, e.g., the Leggett mode, the Bardasis-Schrieffer mode, or an excitonic mode.
0
1
0
0
0
0
The distribution of symmetry of a naturally reductive nilpotent Lie group
We show that the distribution of symmetry of a naturally reductive nilpotent Lie group coincides with the invariant distribution induced by the set of fixed vectors of the isotropy. This extends a known result on compact naturally reductive spaces. We also address the study of the quotient by the foliation of symmetry.
0
0
1
0
0
0
Odd-integer quantum Hall states and giant spin susceptibility in p-type few-layer WSe2
We fabricate high-mobility p-type few-layer WSe2 field-effect transistors and surprisingly observe a series of quantum Hall (QH) states following an unconventional sequence predominated by odd-integer states under a moderate strength magnetic field. By tilting the magnetic field, we discover Landau level (LL) crossing effects at ultra-low coincident angles, revealing that the Zeeman energy is about three times as large as the cyclotron energy near the valence band top at {\Gamma} valley. This result implies the significant roles played by the exchange interactions in p-type few-layer WSe2, in which itinerant or QH ferromagnetism likely occurs. Evidently, the {\Gamma} valley of few-layer WSe2 offers a unique platform with unusually heavy hole-carriers and a substantially enhanced g-factor for exploring strongly correlated phenomena.
0
1
0
0
0
0
Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning
In the field of reinforcement learning there has been recent progress towards safety and high-confidence bounds on policy performance. However, to our knowledge, no practical methods exist for determining high-confidence policy performance bounds in the inverse reinforcement learning setting---where the true reward function is unknown and only samples of expert behavior are given. We propose a sampling method based on Bayesian inverse reinforcement learning that uses demonstrations to determine practical high-confidence upper bounds on the $\alpha$-worst-case difference in expected return between any evaluation policy and the optimal policy under the expert's unknown reward function. We evaluate our proposed bound on both a standard grid navigation task and a simulated driving task and achieve tighter and more accurate bounds than a feature count-based baseline. We also give examples of how our proposed bound can be utilized to perform risk-aware policy selection and risk-aware policy improvement. Because our proposed bound requires several orders of magnitude fewer demonstrations than existing high-confidence bounds, it is the first practical method that allows agents that learn from demonstration to express confidence in the quality of their learned policy.
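The bound itself is simple once Bayesian IRL posterior samples are available: for each sampled reward, compute the performance gap between the (per-sample) optimal policy and the evaluation policy, then take the $\alpha$-quantile of those gaps. A sketch with invented stand-in values for the per-sample returns (in practice each pair comes from planning under the sampled reward):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                              # posterior samples from Bayesian IRL

# Stand-ins: expected return of the optimal and the evaluation policy
# under each sampled reward function R_i.
v_opt = rng.normal(10.0, 1.0, n)
v_eval = v_opt - np.abs(rng.normal(0.5, 0.3, n))

gaps = v_opt - v_eval                 # performance loss under each R_i
alpha = 0.95
bound = np.quantile(gaps, alpha)      # alpha-worst-case upper bound
print(f"with posterior prob ~{alpha}, the loss is at most {bound:.2f}")
```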
1
0
0
1
0
0
Face Identification and Clustering
In this thesis, we study two problems based on clustering algorithms. In the first problem, we study the role of visual attributes, using an agglomerative clustering algorithm to whittle down the search space when the number of classes is high and thereby improve clustering performance. We observe that as we add more attributes, the clustering performance increases overall. In the second problem, we study the role of clustering in aggregating templates in a 1:N open set protocol using multi-shot video as a probe. We observe that by increasing the number of clusters, the performance increases with respect to the baseline and reaches a peak, after which increasing the number of clusters causes the performance to degrade. Experiments are conducted using the recently introduced unconstrained IARPA Janus IJB-A, CS2, and CS3 face recognition datasets.
1
0
0
0
0
0
Properties of Hydrogen Bonds in the Protic Ionic Liquid Ethylammonium Nitrate. DFT versus DFTB Molecular Dynamics
Comparative molecular dynamics simulations of a hexamer cluster of the protic ionic liquid ethylammonium nitrate are performed using density functional theory (DFT) and density functional-based tight binding (DFTB) methods. The focus is on assessing the performance of the DFTB approach to describe the dynamics and infrared spectroscopic signatures of hydrogen bonding between the ions. Average geometries and geometric correlations are found to be rather similar. The same holds true for the far-infrared spectral region. Differences are more pronounced for the NH- and CH-stretching band, where DFTB predicts a broader intensity distribution. DFTB completely fails to describe the fingerprint range shaped by nitrate anion vibrations. Finally, charge fluctuations within the H-bonds are characterized, yielding moderate dependencies on geometry. On the basis of these results, DFTB is recommended for the simulation of H-bond properties of this type of ionic liquid.
0
1
0
0
0
0
On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems
We study the non-stationary stochastic multiarmed bandit (MAB) problem and propose two generic algorithms, namely, the limited memory deterministic sequencing of exploration and exploitation (LM-DSEE) and the Sliding-Window Upper Confidence Bound# (SW-UCB#). We rigorously analyze these algorithms in abruptly-changing and slowly-varying environments and characterize their performance. We show that the expected cumulative regret for these algorithms under either of the environments is upper bounded by sublinear functions of time, i.e., the time average of the regret asymptotically converges to zero. We complement our analytic results with numerical illustrations.
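As a rough illustration of the sliding-window idea behind SW-UCB#, the arm-selection step can be sketched as follows: only the last tau rounds enter the empirical means and counts, so stale observations from an earlier environment are forgotten. The window length and exploration constant here are illustrative, not the paper's tuned schedules.

```python
import numpy as np

def sw_ucb_select(arms, rewards, n_arms, t, tau=200, c=2.0):
    """arms, rewards: full history lists; only the last tau rounds are
    used. Returns the index of the arm to pull at round t."""
    a = np.asarray(arms[-tau:])
    r = np.asarray(rewards[-tau:])
    ucb = np.full(n_arms, np.inf)       # arms unseen in the window go first
    for k in range(n_arms):
        n_k = int(np.sum(a == k))
        if n_k > 0:
            ucb[k] = r[a == k].mean() + np.sqrt(c * np.log(min(t, tau)) / n_k)
    return int(np.argmax(ucb))
```

In use, the function is called once per round with the running history, after which the chosen arm's observed reward is appended to the lists.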
0
0
0
1
0
0
Validation of small Kepler transiting planet candidates in or near the habitable zone
A main goal of NASA's Kepler Mission is to establish the frequency of potentially habitable Earth-size planets (eta Earth). Relatively few such candidates identified by the mission can be confirmed to be rocky via dynamical measurement of their mass. Here we report an effort to validate 18 of them statistically using the BLENDER technique, by showing that the likelihood they are true planets is far greater than that of a false positive. Our analysis incorporates follow-up observations including high-resolution optical and near-infrared spectroscopy, high-resolution imaging, and information from the analysis of the flux centroids of the Kepler observations themselves. While many of these candidates have been previously validated by others, the confidence levels reported typically ignore the possibility that the planet may transit a different star than the target along the same line of sight. If that were the case, a planet that appears small enough to be rocky may actually be considerably larger and therefore less interesting from the point of view of habitability. We take this into consideration here, and are able to validate 15 of our candidates at a 99.73% (3 sigma) significance level or higher, and the other three at slightly lower confidence. We characterize the GKM host stars using available ground-based observations and provide updated parameters for the planets, with sizes between 0.8 and 2.9 Earth radii. Seven of them (KOI-0438.02, 0463.01, 2418.01, 2626.01, 3282.01, 4036.01, and 5856.01) have a better than 50% chance of being smaller than 2 Earth radii and being in the habitable zone of their host stars.
0
1
0
0
0
0
CP-decomposition with Tensor Power Method for Convolutional Neural Networks Compression
Convolutional Neural Networks (CNNs) have shown great success in many areas, including complex image classification tasks. However, they require substantial memory and computation, which hinders them from running on relatively low-end smart devices such as smartphones. We propose a CNN compression method based on CP-decomposition and the Tensor Power Method. We also propose an iterative fine tuning, with which we fine-tune the whole network after decomposing each layer, but before decomposing the next layer. Significant reduction in memory and computation cost is achieved compared to state-of-the-art previous work with no additional accuracy loss.
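The core primitive combined with CP-decomposition above is the tensor power method, which finds the best rank-1 term by alternating power iterations; a hedged 3-way numpy sketch follows (the paper applies this to 4-way convolution kernels, and greedy deflation of rank-1 terms is only an approximation of a full CP fit).

```python
import numpy as np

def rank1_power(T, iters=100):
    # Rank-1 approximation lam * a (x) b (x) c of a 3-way tensor T.
    I, J, K = T.shape
    a, b, c = (np.random.randn(n) for n in (I, J, K))
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c

T = np.random.randn(8, 8, 3)            # e.g. a small kernel tensor
lam, a, b, c = rank1_power(T)
print("energy captured by rank-1 term:", lam**2 / np.sum(T**2))
```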
1
0
0
0
0
0
Stoic Ethics for Artificial Agents
We present a position paper advocating the notion that Stoic philosophy and ethics can inform the development of ethical A.I. systems. This is in sharp contrast to most work on building ethical A.I., which has focused on Utilitarian or Deontological ethical theories. We relate ethical A.I. to several core Stoic notions, including the dichotomy of control, the four cardinal virtues, the ideal Sage, Stoic practices, and Stoic perspectives on emotion or affect. More generally, we put forward an ethical view of A.I. that focuses more on internal states of the artificial agent rather than on external actions of the agent. We provide examples relating to near-term A.I. systems as well as hypothetical superintelligent agents.
1
0
0
0
0
0
From quarks to nucleons in dark matter direct detection
We provide expressions for the nonperturbative matching of the effective field theory describing dark matter interactions with quarks and gluons to the effective theory of nonrelativistic dark matter interacting with nonrelativistic nucleons. We give the leading and subleading order expressions in chiral counting. In general, a single partonic operator already matches onto several nonrelativistic operators at leading order in chiral counting. Thus, keeping only one operator at a time in the nonrelativistic effective theory does not properly describe the scattering in direct detection. Moreover, the matching of the axial--axial partonic level operator, as well as the matching of the operators coupling DM to the QCD anomaly term, naively includes momentum-suppressed terms. However, these are still of leading chiral order due to pion poles and can be numerically important. We illustrate the impact of these effects with several examples.
0
1
0
0
0
0
Sharpened Strichartz estimates and bilinear restriction for the mass-critical quantum harmonic oscillator
We develop refined Strichartz estimates at $L^2$ regularity for a class of time-dependent Schrödinger operators. Such refinements begin to characterize the near-optimizers of the Strichartz estimate, and play a pivotal part in the global theory of mass-critical NLS. On one hand, the harmonic analysis is quite subtle in the $L^2$-critical setting due to an enormous group of symmetries, while on the other hand, the spacetime Fourier analysis employed by the existing approaches to the constant-coefficient equation are not adapted to nontranslation-invariant situations, especially with potentials as large as those considered in this article. Using phase space techniques, we reduce to proving certain analogues of (adjoint) bilinear Fourier restriction estimates. Then we extend Tao's bilinear restriction estimate for paraboloids to more general Schrödinger operators. As a particular application, the resulting inverse Strichartz theorem and profile decompositions constitute a key harmonic analysis input for studying large data solutions to the $L^2$-critical NLS with a harmonic oscillator potential in dimensions $\ge 2$. This article builds on recent work of Killip, Visan, and the author in one space dimension.
0
0
1
0
0
0
A fast and stable test to check if a weakly diagonally dominant matrix is a nonsingular M-matrix
We present a test for determining if a substochastic matrix is convergent. By establishing a duality between weakly chained diagonally dominant (w.c.d.d.) L-matrices and convergent substochastic matrices, we show that this test can be trivially extended to determine whether a weakly diagonally dominant (w.d.d.) matrix is a nonsingular M-matrix. The test's runtime is linear in the order of the input matrix if it is sparse and quadratic if it is dense. This is a partial strengthening of the cubic test in [J. M. Peña., A stable test to check if a matrix is a nonsingular M-matrix, Math. Comp., 247, 1385-1392, 2004]. As a by-product of our analysis, we prove that a nonsingular w.d.d. M-matrix is a w.c.d.d. L-matrix, a fact whose converse has been known since at least 1964. We point out that this strengthens some recent results on M-matrices in the literature.
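A sketch of a check in this spirit for a sparse matrix: mark the strictly diagonally dominant rows, then walk the directed graph of nonzero off-diagonal entries backwards to see whether every row is chained to one (the w.c.d.d. condition). This is our own simplified rendering of the idea, not the paper's exact algorithm, and it assumes the input is already a w.d.d. L-matrix given as a SciPy CSR matrix.

```python
import numpy as np
from collections import deque
from scipy.sparse import csr_matrix

def is_wcdd(A):
    """True iff every row of the w.d.d. matrix A is chained, through
    nonzero off-diagonal entries, to a strictly dominant row."""
    n = A.shape[0]
    d = np.abs(A.diagonal())
    off = abs(A).sum(axis=1).A1 - d          # row sums of |off-diagonals|
    sdd = d > off                            # strictly dominant rows
    AT = csr_matrix(A).T.tocsr()             # reversed edges i -> j
    seen = sdd.copy()
    q = deque(np.flatnonzero(sdd))
    while q:                                 # BFS from the SDD rows
        j = q.popleft()
        for i in AT.indices[AT.indptr[j]:AT.indptr[j + 1]]:
            if i != j and not seen[i]:
                seen[i] = True
                q.append(i)
    return bool(seen.all())
```

The BFS touches each nonzero entry at most once, which is where the linear runtime for sparse inputs comes from.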
1
0
1
0
0
0
Task-Oriented Query Reformulation with Reinforcement Learning
Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upper-bound performance of a model in a particular environment and verify that there is still large room for improvements.
1
0
0
0
0
0
Inverse scattering transform for the nonlocal reverse space-time Sine-Gordon, Sinh-Gordon and nonlinear Schrödinger equations with nonzero boundary conditions
The reverse space-time (RST) Sine-Gordon, Sinh-Gordon and nonlinear Schrödinger equations were recently introduced and shown to be integrable infinite-dimensional dynamical systems. The inverse scattering transform (IST) for rapidly decaying data was also constructed. In this paper, IST for these equations with nonzero boundary conditions (NZBCs) at infinity is presented. The NZBC problem is more complicated due to the associated branching structure of the associated linear eigenfunctions. With constant amplitude at infinity, four cases are analyzed; they correspond to two different signs of nonlinearity and two different values of the phase at infinity. Special soliton solutions are discussed and explicit 1-soliton and 2-soliton solutions are found. In terms of IST, the difference between the RST Sine-Gordon/Sinh-Gordon equations and the RST NLS equation is the time dependence of the scattering data. Spatially dependent boundary conditions are also briefly considered.
0
1
0
0
0
0
Relieving the frustration through Mn$^{3+}$ substitution in Holmium Gallium Garnet
We present a study on the impact of Mn$^{3+}$ substitution in the geometrically frustrated Ising garnet Ho$_3$Ga$_5$O$_{12}$ using bulk magnetic measurements and low temperature powder neutron diffraction. We find that the transition temperature, $T_N$ = 5.8 K, for Ho$_3$MnGa$_4$O$_{12}$ is raised by almost a factor of 20 when compared to Ho$_3$Ga$_5$O$_{12}$. Powder neutron diffraction on Ho$_3$Mn$_x$Ga$_{5-x}$O$_{12}$ ($x$ = 0.5, 1) below $T_N$ shows the formation of a long-range ordered state with $\mathbf{k}$ = (0,0,0). Ho$^{3+}$ spins are aligned antiferromagnetically along the six crystallographic axes with no resultant moment while the Mn$^{3+}$ spins are oriented along the body diagonals, such that there is a net moment along [111]. The magnetic structure can be visualised as ten-membered rings of corner-sharing triangles of Ho$^{3+}$ spins with the Mn$^{3+}$ spins ferromagnetically coupled to each individual Ho$^{3+}$ spin in the triangle. Substitution of Mn$^{3+}$ completely relieves the magnetic frustration with $f = \theta_{CW}/T_N \approx 1.1$ for Ho$_3$MnGa$_4$O$_{12}$.
0
1
0
0
0
0
Inference on Auctions with Weak Assumptions on Information
Given a sample of bids from independent auctions, this paper examines the question of inference on auction fundamentals (e.g. valuation distributions, welfare measures) under weak assumptions on information structure. The question is important as it allows us to learn about the valuation distribution in a robust way, i.e., without assuming that a particular information structure holds across observations. We leverage the recent contributions of Bergemann and Morris (2013) in the robust mechanism design literature that exploit the link between Bayesian Correlated Equilibria and Bayesian Nash Equilibria in incomplete information games to construct an econometrics framework for learning about auction fundamentals using observed data on bids. We showcase our construction of identified sets in private value and common value auctions. Our approach for constructing these sets inherits the computational simplicity of solving for correlated equilibria: checking whether a particular valuation distribution belongs to the identified set is as simple as determining whether a {\it linear} program is feasible. A similar linear program can be used to construct the identified set on various welfare measures and counterfactual objects. For inference and to summarize statistical uncertainty, we propose novel finite sample methods using tail inequalities that are used to construct confidence regions on sets. We also highlight methods based on the Bayesian bootstrap and subsampling. A set of Monte Carlo experiments shows adequate finite sample properties of our inference procedures. We illustrate our methods using data from OCS auctions.
1
0
1
0
0
0
Introduction to OXPath
Contemporary web pages with increasingly sophisticated interfaces rival traditional desktop applications for interface complexity and are often called web applications or RIAs (Rich Internet Applications). They often require the execution of JavaScript in a web browser and can issue AJAX requests to dynamically generate content in reaction to user interaction. From the automatic data acquisition point of view, it is thus essential to be able to correctly render web pages and mimic user actions to obtain relevant data from the web page content. Briefly, to obtain data through existing Web interfaces and transform it into structured form, contemporary wrappers should be able to: 1) interact with sophisticated interfaces of web applications; 2) precisely acquire relevant data; 3) scale with the number of crawled web pages or states of the web application; 4) have an embeddable programming API for integration with existing web technologies. OXPath is a state-of-the-art technology which is compliant with these requirements and has demonstrated its efficiency in comprehensive experiments. OXPath integrates Firefox for correct rendering of web pages and extends XPath 1.0 for DOM node selection, interaction, and extraction. It provides means for converting extracted data into different formats, such as XML, JSON, CSV, and for saving data into relational databases. This tutorial explains the main features of the OXPath language and the setup of a suitable working environment. The guidelines for using OXPath are provided in the form of prototypical examples.
1
0
0
0
0
0
Image Registration for the Alignment of Digitized Historical Documents
In this work, we conducted a survey on different registration algorithms and investigated their suitability for hyperspectral historical image registration applications. After the evaluation of different algorithms, we choose an intensity based registration algorithm with a curved transformation model. For the transformation model, we select cubic B-splines since they should be capable to cope with all non-rigid deformations in our hyperspectral images. From a number of similarity measures, we found that residual complexity and localized mutual information are well suited for the task at hand. In our evaluation, both measures show an acceptable performance in handling all difficulties, e.g., capture range, non-stationary and spatially varying intensity distortions or multi-modality that occur in our application.
1
0
0
0
0
0
Jointly Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification
Person Re-Identification (person re-id) is a crucial task given its applications in visual surveillance and human-computer interaction. In this work, we present a novel joint Spatial and Temporal Attention Pooling Network (ASTPN) for video-based person re-identification, which enables the feature extractor to be aware of the current input video sequences, in a way that interdependency from the matching items can directly influence the computation of each other's representation. Specifically, the spatial pooling layer is able to select regions from each frame, while the attention temporal pooling can select informative frames over the sequence, with both pooling operations guided by the information from distance matching. Experiments are conducted on the iLIDS-VID, PRID-2011 and MARS datasets and the results demonstrate that this approach outperforms existing state-of-the-art methods. We also analyze how the joint pooling in both dimensions can boost the person re-id performance more effectively than using either of them separately.
1
0
0
1
0
0
Inertia, positive definiteness and $\ell_p$ norm of GCD and LCM matrices and their unitary analogs
Let $S=\{x_1,x_2,\dots,x_n\}$ be a set of distinct positive integers, and let $f$ be an arithmetical function. The GCD matrix $(S)_f$ on $S$ associated with $f$ is defined as the $n\times n$ matrix having $f$ evaluated at the greatest common divisor of $x_i$ and $x_j$ as its $ij$ entry. The LCM matrix $[S]_f$ is defined similarly. We consider inertia, positive definiteness and $\ell_p$ norm of GCD and LCM matrices and their unitary analogs. Proofs are based on matrix factorizations and convolutions of arithmetical functions.
0
0
1
0
0
0
Low-Latency Millimeter-Wave Communications: Traffic Dispersion or Network Densification?
This paper investigates two strategies to reduce the communication delay in future wireless networks: traffic dispersion and network densification. A hybrid scheme that combines these two strategies is also considered. The probabilistic delay and effective capacity are used to evaluate performance. For probabilistic delay, the violation probability of delay, i.e., the probability that the delay exceeds a given tolerance level, is characterized in terms of upper bounds, which are derived by applying stochastic network calculus theory. In addition, to characterize the maximum affordable arrival traffic for mmWave systems, the effective capacity, i.e., the service capability with a given quality-of-service (QoS) requirement, is studied. The derived bounds on the probabilistic delay and effective capacity are validated through simulations. These numerical results show that, for a given average system gain, traffic dispersion, network densification, and the hybrid scheme exhibit different potentials to reduce the end-to-end communication delay. For instance, traffic dispersion outperforms network densification given high average system gain and arrival rate, while it could be the worst option otherwise. Furthermore, it is revealed that increasing the number of independent paths and/or relay density is always beneficial, while the performance gain depends jointly on the arrival rate and average system gain. Therefore, a proper transmission scheme should be selected to optimize the delay performance, according to the given conditions on arrival traffic and system service capability.
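For reference, the effective capacity invoked above is commonly defined in this literature as the maximum constant arrival rate a service process can support under a QoS exponent $\theta$; a standard form (not necessarily the exact variant used in the paper) is

$$ EC(\theta) \;=\; -\lim_{t\to\infty} \frac{1}{\theta\, t}\, \log \mathbb{E}\!\left[ e^{-\theta\, S(t)} \right], $$

where $S(t)$ is the cumulative service offered up to time $t$; a larger $\theta$ encodes a stricter delay-violation requirement, and the ergodic capacity is recovered as $\theta \to 0$.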
1
0
0
0
0
0
The structure of multiplicative tilings of the real line
Suppose $\Omega, A \subseteq \mathbb{R}\setminus\{0\}$ are two sets, both of mixed sign, such that $\Omega$ is Lebesgue measurable and $A$ is a discrete set. We study the problem of when $A \cdot \Omega$ is a (multiplicative) tiling of the real line, that is, when almost every real number can be uniquely written as a product $a\cdot \omega$, with $a \in A$, $\omega \in \Omega$. We study both the structure of the set of multiples $A$ and the structure of the tile $\Omega$. We prove strong results in both cases. These results are somewhat analogous to the known results about the structure of translational tilings of the real line. There is, however, an extra layer of complexity due to the presence of sign in the sets $A$ and $\Omega$, which makes multiplicative tiling roughly equivalent to translational tiling on the larger group $\mathbb{Z}_2 \times \mathbb{R}$.
0
0
1
0
0
0
Supporting Crowd-Powered Science in Economics: FRACTI, a Conceptual Framework for Large-Scale Collaboration and Transparent Investigation in Financial Markets
Modern investigation in economics and in other sciences requires the ability to store, share, and replicate results and methods of experiments that are often multidisciplinary and yield a massive amount of data. Given the increasing complexity and growing interaction across diverse bodies of knowledge, it is becoming imperative to define a platform to properly support collaborative research and track origin, accuracy and use of data. This paper starts by defining a set of methods leveraging scientific principles and advocating the importance of those methods in multidisciplinary, computer intensive fields like computational finance. The next part of this paper defines a class of systems called scientific support systems, vis-à-vis usage in other research fields such as bioinformatics, physics and engineering. We outline a basic set of fundamental concepts, and list our goals and motivation for leveraging such systems to enable large-scale investigation, "crowd powered science", in economics. The core of this paper provides an outline of FRACTI in five steps. First we present definitions related to scientific support systems intrinsic to finance and describe common characteristics of financial use cases. The second step concentrates on what can be exchanged through the definition of shareable entities called contributions. The third step is the description of a classification system for building blocks of the conceptual framework, called facets. The fourth step introduces the meta-model that will enable provenance tracking and representation of data fragments and simulation. Finally we describe intended use cases to highlight the main strengths of FRACTI: application of the scientific method for investigation in computational finance, large-scale collaboration and simulation.
0
0
0
0
0
1
The effect of temperature on generic stable periodic structures in the parameter space of dissipative relativistic standard map
In this work, we have characterized changes in the dynamics of a two-dimensional relativistic standard map in the presence of dissipation and especially when it is submitted to thermal effects modeled by a Gaussian noise reservoir. By the addition of thermal noise in the dissipative relativistic standard map (DRSM) it is possible to suppress typical stable periodic structures (SPSs) embedded in the chaotic domains of parameter space for large enough temperature strengths. Smaller SPSs are affected first by thermal effects, starting from their borders, as a function of temperature. To estimate the temperature strength capable of destroying those SPSs we use the largest Lyapunov exponent to obtain the critical temperature ($T_C$) diagrams. For critical temperatures the chaotic behavior takes place with the suppression of periodic motion, although the temperature strengths considered in this work are not so large as to convert the deterministic features of the underlying system into stochastic ones.
0
1
0
0
0
0
The $u^n$-invariant and the Symbol Length of $H_2^n(F)$
Given a field $F$ of $\operatorname{char}(F)=2$, we define $u^n(F)$ to be the maximal dimension of an anisotropic form in $I_q^n F$. For $n=1$ it recaptures the definition of $u(F)$. We study the relations between this value and the symbol length of $H_2^n(F)$, denoted by $sl_2^n(F)$. We show for any $n \geq 2$ that if $2^n \leq u^n(F) \leq u^2(F) < \infty$ then $sl_2^n(F) \leq \prod_{i=2}^n (\frac{u^i(F)}{2}+1-2^{i-1})$. As a result, if $u(F)$ is finite then $sl_2^n(F)$ is finite for any $n$, a fact which was previously proven when $\operatorname{char}(F) \neq 2$ by Saltman and Krashen. We also show that if $sl_2^n(F)=1$ then $u^n(F)$ is either $2^n$ or $2^{n+1}$.
0
0
1
0
0
0
An Affective Robot Companion for Assisting the Elderly in a Cognitive Game Scenario
Being able to recognize emotions in human users is considered a highly desirable trait in Human-Robot Interaction (HRI) scenarios. However, most contemporary approaches rarely attempt to apply recognized emotional features in an active manner to modulate robot decision-making and dialogue for the benefit of the user. In this position paper, we propose a method of incorporating recognized emotions into a Reinforcement Learning (RL) based dialogue management module that adapts its dialogue responses in order to attempt to make cognitive training tasks, like the 2048 Puzzle Game, more enjoyable for the users.
1
0
0
0
0
0
Rao-Blackwellization to give Improved Estimates in Multi-List Studies
Sufficient statistics are derived for the population size and parameters of commonly used closed population mark-recapture models. Rao-Blackwellization details for improving estimators that are not functions of the statistics are presented. As Rao-Blackwellization entails enumerating all sample reorderings consistent with the sufficient statistic, Markov chain Monte Carlo resampling procedures are provided to approximate the computationally intensive estimators. Simulation studies demonstrate that significant improvements can be made with the strategy. Supplementary materials for this article are available online.
0
0
0
1
0
0
Bernstein Polynomial Model for Nonparametric Multivariate Density
In this paper, we study the Bernstein polynomial model for estimating multivariate distribution functions and densities with bounded support. As a mixture model of multivariate beta distributions, the maximum (approximate) likelihood estimate can be obtained using the EM algorithm. A change-point method for choosing the optimal degrees of the proposed Bernstein polynomial model is presented. Under some conditions the optimal rate of convergence in the mean $\chi^2$-divergence of the new density estimator is shown to be nearly parametric. The method is illustrated by an application to a real data set. Finite sample performance of the proposed method is also investigated by simulation study and is shown to be much better than the kernel density estimate but close to the parametric ones.
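A univariate sketch of the estimator's structure: with degree m, the Bernstein density on [0,1] is a mixture of Beta(j+1, m-j) components, and only the mixture weights are estimated from data. For brevity the weights below come from empirical CDF increments rather than the EM-based (approximate) maximum likelihood of the paper, and the degree is fixed instead of chosen by the change-point method.

```python
import numpy as np
from scipy.stats import beta

def bernstein_density(data, m):
    # Weights from empirical CDF increments on a grid of m cells.
    data = np.sort(data)
    ecdf = lambda t: np.searchsorted(data, t, side='right') / len(data)
    w = np.array([ecdf((j + 1) / m) - ecdf(j / m) for j in range(m)])
    def f_hat(x):
        return sum(w[j] * beta.pdf(x, j + 1, m - j) for j in range(m))
    return f_hat

x = np.random.beta(2, 5, size=500)     # synthetic sample on [0, 1]
f = bernstein_density(x, m=20)
print(f(np.linspace(0.05, 0.95, 5)))   # density estimate on a small grid
```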
0
0
0
1
0
0
Small-space encoding LCE data structure with constant-time queries
The \emph{longest common extension} (\emph{LCE}) problem is to preprocess a given string $w$ of length $n$ so that the length of the longest common prefix between suffixes of $w$ that start at any two given positions is answered quickly. In this paper, we present a data structure of $O(z \tau^2 + \frac{n}{\tau})$ words of space which answers LCE queries in $O(1)$ time and can be built in $O(n \log \sigma)$ time, where $1 \leq \tau \leq \sqrt{n}$ is a parameter, $z$ is the size of the Lempel-Ziv 77 factorization of $w$ and $\sigma$ is the alphabet size. This is an \emph{encoding} data structure, i.e., it does not access the input string $w$ when answering queries and thus $w$ can be deleted after preprocessing. On top of this main result, we obtain further results using (variants of) our LCE data structure, which include the following: - For highly repetitive strings where the $z\tau^2$ term is dominated by $\frac{n}{\tau}$, we obtain a \emph{constant-time and sub-linear space} LCE query data structure. - Even when the input string is not well compressible via Lempel-Ziv 77 factorization, we still can obtain a \emph{constant-time and sub-linear space} LCE data structure for suitable $\tau$ and for $\sigma \leq 2^{o(\log n)}$. - The time-space trade-off lower bounds for the LCE problem by Bille et al. [J. Discrete Algorithms, 25:42-50, 2014] and by Kosolobov [CoRR, abs/1611.02891, 2016] can be "surpassed" in some cases with our LCE data structure.
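For orientation, the query being supported is tiny to state; a naive O(n)-time reference implementation is below. The paper's point is answering exactly this in O(1) time from $O(z \tau^2 + \frac{n}{\tau})$ words, without keeping $w$ around.

```python
def lce_naive(w: str, i: int, j: int) -> int:
    # Longest common prefix length of the suffixes w[i:] and w[j:].
    n, k = len(w), 0
    while i + k < n and j + k < n and w[i + k] == w[j + k]:
        k += 1
    return k

w = "abracadabra"
print(lce_naive(w, 0, 7))   # suffixes "abracadabra" and "abra" -> 4
```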
1
0
0
0
0
0
Machine learning application in the life time of materials
Materials design and development typically takes several decades from initial discovery to commercialization with the traditional trial-and-error development approach. With the accumulation of data from both experimental and computational results, data-based machine learning has become an emerging field in materials discovery, design and property prediction. This manuscript reviews the history of materials science as a discipline, the most common machine learning methods used in materials science, and specifically how they are used in materials discovery, design, synthesis and even failure detection and analysis after materials are deployed in real applications. Finally, the limitations of machine learning for application in materials science and the challenges in this emerging field are discussed.
1
1
0
0
0
0
A Unified Framework for Stochastic Matrix Factorization via Variance Reduction
We propose a unified framework to speed up the existing stochastic matrix factorization (SMF) algorithms via variance reduction. Our framework is general and it subsumes several well-known SMF formulations in the literature. We perform a non-asymptotic convergence analysis of our framework and derive computational and sample complexities for our algorithm to converge to an $\epsilon$-stationary point in expectation. In addition, extensive experiments for a wide class of SMF formulations demonstrate that our framework consistently yields faster convergence and a more accurate output dictionary vis-à-vis state-of-the-art frameworks.
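To illustrate the variance-reduction mechanism in an SMF setting, here is an SVRG-style sketch for the least-squares factorization objective over rows, updating one factor with the other held fixed; this is a generic rendering of the idea, not the paper's unified algorithm, and all sizes and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 50, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank-r data
U = rng.standard_normal((n, r))       # dictionary held fixed in this sketch
V = rng.standard_normal((m, r))

def grad_i(V, i):
    # Gradient w.r.t. V of the i-th sample loss ||M[i] - U[i] V^T||^2.
    resid = U[i] @ V.T - M[i]             # shape (m,)
    return 2 * np.outer(resid, U[i])      # shape (m, r)

lr = 0.01
for epoch in range(30):
    V0 = V.copy()
    snap = np.stack([grad_i(V0, i) for i in range(n)])
    mu = snap.mean(axis=0)                # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        g = grad_i(V, i) - snap[i] + mu   # variance-reduced gradient
        V -= lr * g
print("relative error:", np.linalg.norm(M - U @ V.T) / np.linalg.norm(M))
```

The correction term keeps each step unbiased while its variance shrinks as the iterate approaches the snapshot, which is what buys the faster convergence claimed above.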
1
0
1
1
0
0
Short Term Power Demand Prediction Using Stochastic Gradient Boosting
Power demand prediction is vital in the power system and delivery engineering fields. By efficiently predicting the power demand, we can forecast the total energy to be consumed in a certain city or district. Thus, the exact resources required to produce the demanded power can be allocated. In this paper, a Stochastic Gradient Boosting (aka Treeboost) model is used to predict the short term power demand for the Emirate of Sharjah in the United Arab Emirates (UAE). Results show that the proposed model gives promising results in comparison to the model used by Sharjah Electricity and Water Authority (SEWA).
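A hedged sketch of this kind of short-term demand model using scikit-learn's gradient boosting with subsampling (the stochastic element); the hourly features and the synthetic demand curve below are invented for illustration and are not SEWA's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical hourly records: [hour, weekday, month, temperature]
X = np.column_stack([rng.integers(0, 24, n), rng.integers(0, 7, n),
                     rng.integers(1, 13, n), rng.normal(35.0, 5.0, n)])
# Synthetic demand: daily cycle plus a cooling load above 30 degrees C
y = (800 + 120 * np.sin(2 * np.pi * X[:, 0] / 24)
     + 25 * np.clip(X[:, 3] - 30.0, 0, None) + rng.normal(0, 20, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  subsample=0.8)  # subsample < 1: stochastic
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```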
0
0
0
1
0
0
Learning Qualitatively Diverse and Interpretable Rules for Classification
There has been growing interest in developing accurate models that can also be explained to humans. Unfortunately, if there exist multiple distinct but accurate models for some dataset, current machine learning methods are unlikely to find them: standard techniques will likely recover a complex model that combines them. In this work, we introduce a way to identify a maximal set of distinct but accurate models for a dataset. We demonstrate empirically that, in situations where the data supports multiple accurate classifiers, we tend to recover simpler, more interpretable classifiers rather than more complex ones.
0
0
0
1
0
0