Dataset schema (column, dtype, value range):

  title                  string   length 7 to 239
  abstract               string   length 7 to 2.76k
  cs                     int64    0 or 1
  phy                    int64    0 or 1
  math                   int64    0 or 1
  stat                   int64    0 or 1
  quantitative biology   int64    0 or 1
  quantitative finance   int64    0 or 1
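A minimal sketch of how records with this schema might be represented and inspected in code. The record layout below is an assumption for illustration (the dataset itself only specifies the columns above); the two sample rows are taken from the records that follow.

```python
# Each record mirrors the schema: a title, an abstract (omitted here for
# brevity), and six binary topic indicators. The dataset is multi-label:
# a paper may carry several 1s at once.
LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

records = [
    {"title": "Stable Self-Assembled Atomic-Switch Networks for Neuromorphic Applications",
     "labels": [1, 1, 0, 0, 0, 0]},
    {"title": "On the arithmetic of simple singularities of type E",
     "labels": [0, 0, 1, 0, 0, 0]},
]

def topics(record):
    """Return the names of the topics flagged 1 for a record."""
    return [name for name, flag in zip(LABELS, record["labels"]) if flag]

for rec in records:
    print(rec["title"], "->", topics(rec))
```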
Stable Self-Assembled Atomic-Switch Networks for Neuromorphic Applications
Nature-inspired neuromorphic architectures are being explored as an alternative to the imminent limitations of conventional complementary metal-oxide semiconductor (CMOS) architectures. Utilization of such architectures for practical applications like advanced pattern recognition tasks will require synaptic connections that are both reconfigurable and stable. Here, we report the realization of stable atomic-switch networks (ASN), with inherent complex connectivity, self-assembled from percolating metal nanoparticles (NPs). The device conductance reflects the configuration of synapses, which can be modulated via voltage stimulus. By controlling relative humidity (RH) and oxygen partial-pressure during NP deposition we obtain stochastic conductance switching that is stable over several months. Detailed characterization reveals signatures of electric-field induced atomic-wire formation within the tunnel-gaps of the oxidized percolating network. Finally, we show that the synaptic structure can be reconfigured by stimulating at different repetition rates, which can be utilized as short-term to long-term memory conversion. This demonstration of stable stochastic switching in ASNs provides a promising route to hardware implementation of biological neuronal models and, as an example, we highlight possible applications in Reservoir Computing (RC).
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the arithmetic of simple singularities of type E
An ADE Dynkin diagram gives rise to a family of algebraic curves. In this paper, we use arithmetic invariant theory to study the integral points of the curves associated to the exceptional diagrams $E_6, E_7$, $E_8$. These curves are non-hyperelliptic of genus 3 or 4. We prove that a positive proportion of each family consists of curves with integral points everywhere locally but no integral points globally.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Centralized Network Utility Maximization over Aggregate Flows
We study a network utility maximization (NUM) decomposition in which the set of flow rates is grouped by source-destination pairs. We develop theorems for both single-path and multipath cases, which relate an arbitrary NUM problem involving all flow rates to a simpler problem involving only the aggregate rates for each source-destination pair. The optimal aggregate flows are then apportioned among the constituent flows of each pair. This apportionment is simple for the case of $\alpha$-fair utility functions. We also show how the decomposition can be implemented with the alternating direction method of multipliers (ADMM) algorithm.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Bottom-up Object Detection by Grouping Extreme and Center Points
With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2% on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9%, much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6% Mask AP.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
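A toy sketch of the geometric grouping described in the extreme-point detection abstract above, under assumed image coordinates (y grows downward) and an assumed alignment tolerance; this is illustrative, not the paper's implementation: four extreme points span a box, and the grouping accepts it only if a detected center point lies near the box center.

```python
def box_from_extremes(top, left, bottom, right):
    """Bounding box spanned by four extreme points, each given as (x, y)."""
    return left[0], top[1], right[0], bottom[1]  # x1, y1, x2, y2

def center_aligned(box, center, tol=4.0):
    """Accept the grouping only if a detected center point lies close
    to the geometric center of the candidate box (tol is assumed)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return abs(cx - center[0]) <= tol and abs(cy - center[1]) <= tol

box = box_from_extremes(top=(50, 10), left=(20, 40),
                        bottom=(55, 90), right=(80, 45))
print(box)                            # (20, 10, 80, 90)
print(center_aligned(box, (50, 50)))  # aligned center point
print(center_aligned(box, (10, 10)))  # misaligned point is rejected
```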
The Different Shapes of the LIS Energy Spectra of Cosmic Ray He and C Nuclei Below ~1 GeV/nuc and The Cosmic Ray He/C Nuclei Ratio vs. Energy -V1 Measurements and LBM Propagation Predictions
This paper examines the cosmic ray He and C nuclei spectra below ~1 GeV/nuc, as well as the very rapid increase in the He/C ratio below ~100 MeV/nuc, measured by Voyager 1 beyond the heliopause. Using a simple Leaky Box Model (LBM) for galactic propagation we have not been able to simultaneously reproduce the individual He and C nuclei spectra and the large increase in He/C ratio that is observed at low energies. However, using a truncated LBM with different truncation parameters for each nucleus that are related to their rate of energy loss by ionization, which is ~Z^2/A, these different features can be matched. This suggests that we are observing the effects of the source distribution of cosmic rays in the galaxy on the low energy spectra of cosmic ray nuclei and that there may be a paucity of nearby sources. In this propagation model we start with very specific source spectra for He and C which are dj/dP ~ P^-2.24, the same for each nucleus and for all rigidities. These source spectra become spectra with spectral indices ~-2.69 at high rigidities for both charges as a result of a rigidity dependence of the diffusion coefficient governing the propagation, which is taken to be ~P^-0.45. This exponent is determined directly from the B/C ratio measured by AMS-2. These propagated P^-2.69 spectra, when extended to high energies, predict He and C intensities and a He/C ratio that are within +3-5% of the intensities and ratio recently measured by AMS-2 in the energy range from 10 to 1000 GeV/nuc.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization: Convergence Analysis and Optimality
Symmetric nonnegative matrix factorization (SymNMF) has important applications in data analytics problems such as document clustering, community detection and image segmentation. In this paper, we propose a novel nonconvex variable splitting method for solving SymNMF. The proposed algorithm is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points of the nonconvex SymNMF problem. Furthermore, it achieves a global sublinear convergence rate. We also show that the algorithm can be efficiently implemented in parallel. Further, sufficient conditions are provided which guarantee the global and local optimality of the obtained solutions. Extensive numerical results performed on both synthetic and real data sets suggest that the proposed algorithm converges quickly to a local minimum solution.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Galactic Dark Matter Halos and Globular Cluster Populations. III: Extension to Extreme Environments
The total mass M_GCS in the globular cluster (GC) system of a galaxy is empirically a near-constant fraction of the total mass M_h = M_bary + M_dark of the galaxy, across a range of 10^5 in galaxy mass. This trend is radically unlike the strongly nonlinear behavior of total stellar mass M_star versus M_h. We discuss extensions of this trend to two more extreme situations: (a) entire clusters of galaxies, and (b) the Ultra-Diffuse Galaxies (UDGs) recently discovered in Coma and elsewhere. Our calibration of the ratio \eta_M = M_GCS / M_h from normal galaxies, accounting for new revisions in the adopted mass-to-light ratio for GCs, now gives \eta_M = 2.9 \times 10^{-5} as the mean absolute mass fraction. We find that the same ratio appears valid for galaxy clusters and UDGs. Estimates of \eta_M in the four clusters we examine tend to be slightly higher than for individual galaxies, but more data and better constraints on the mean GC mass in such systems are needed to determine if this difference is significant. We use the constancy of \eta_M to estimate total masses for several individual cases; for example, the total mass of the Milky Way is calculated to be M_h = 1.1 \times 10^{12} M_sun. Physical explanations for the uniformity of \eta_M are still descriptive, but point to a picture in which massive, dense star clusters in their formation stages were relatively immune to the feedback that more strongly influenced lower-density regions where most stars form.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Database of Parliamentary Speeches in Ireland, 1919-2013
We present a database of parliamentary debates that contains the complete record of parliamentary speeches from Dáil Éireann, the lower house and principal chamber of the Irish parliament, from 1919 to 2013. In addition, the database contains background information on all TDs (Teachta Dála, members of parliament), such as their party affiliations, constituencies and office positions. The current version of the database includes close to 4.5 million speeches from 1,178 TDs. The speeches were downloaded from the official parliament website and further processed and parsed with a Python script. Background information on TDs was collected from the member database of the parliament website. Data on cabinet positions (ministers and junior ministers) was collected from the official website of the government. A record linkage algorithm and human coders were used to match TDs and ministers.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Reduced Modeling of Unknown Trajectories
This paper deals with model order reduction of parametrical dynamical systems. We consider the specific setup where the distribution of the system's trajectories is unknown but the following two sources of information are available: \textit{(i)} some "rough" prior knowledge on the system's realisations; \textit{(ii)} a set of "incomplete" observations of the system's trajectories. We propose a Bayesian methodological framework to build reduced-order models (ROMs) by exploiting these two sources of information. We emphasise that complementing the prior knowledge with the collected data provably enhances the knowledge of the distribution of the system's trajectories. We then propose an implementation of the proposed methodology based on Monte-Carlo methods. In this context, we show that standard ROM learning techniques, such as, e.g., Proper Orthogonal Decomposition or Dynamic Mode Decomposition, can be revisited and recast within the probabilistic framework considered in this paper. We illustrate the performance of the proposed approach by numerical results obtained for a standard geophysical model.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Pump-Enhanced Continuous-Wave Magnetometry using Nitrogen-Vacancy Ensembles
Ensembles of nitrogen-vacancy centers in diamond are a highly promising platform for high-sensitivity magnetometry, whose efficacy is often based on efficiently generating and monitoring magnetic-field dependent infrared fluorescence. Here we report on an increased sensing efficiency with the use of a 532-nm resonant confocal cavity and a microwave resonator antenna for measuring the local magnetic noise density using the intrinsic nitrogen-vacancy concentration of a chemical-vapor deposited single-crystal diamond. We measure a near-shot-noise-limited magnetic noise floor of 200 pT/$\sqrt{\text{Hz}}$ spanning a bandwidth up to 159 Hz, and an extracted sensitivity of approximately 3 nT/$\sqrt{\text{Hz}}$, with further enhancement limited by the noise floor of the lock-in amplifier and the laser damage threshold of the optical components. Exploration of the microwave and optical pump-rate parameter space demonstrates a linewidth-narrowing regime reached by virtue of using the optical cavity, allowing an enhanced sensitivity to be achieved, despite an unoptimized collection efficiency of <2 %, and a low nitrogen-vacancy concentration of about 0.2 ppb.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Topological Spin Liquid with Symmetry-Protected Edge States
Topological spin liquids are robust quantum states of matter with long-range entanglement and possess many exotic properties such as the fractional statistics of the elementary excitations. Yet these states, which like all topological states lack local order parameters, are elusive for conventional experimental probes. In this work, we combine theoretical analysis and quantum Monte Carlo numerics on a frustrated spin model which hosts a $\mathbb Z_2$ topological spin liquid ground state, and demonstrate that the presence of symmetry-protected gapless edge modes is a characteristic feature of the state, originating from the nontrivial symmetry fractionalization of the elementary excitations. Experimental observation of these modes on the edge would directly indicate the existence of the topological spin liquid in the bulk, analogous to the fact that the observation of Dirac edge states confirmed the existence of topological insulators.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
SMT Queries Decomposition and Caching in Semi-Symbolic Model Checking
In semi-symbolic (control-explicit data-symbolic) model checking the state-space explosion problem is fought by representing sets of states by first-order formulas over the bit-vector theory. In this model checking approach, most of the verification time is spent in an SMT solver on deciding satisfiability of quantified queries, which represent equality of symbolic states. In this paper, we introduce a new scheme for decomposition of symbolic states, which can be used to significantly improve the performance of any semi-symbolic model checker. Using the decomposition, a model checker can issue much simpler and smaller queries to the solver when compared to the original case. Some SMT calls may be even avoided completely, as the satisfaction of some of the simplified formulas can be decided syntactically. Moreover, the decomposition allows for an efficient caching scheme for quantified formulas. To support our theoretical contribution, we show the performance gain of our model checker SymDIVINE on a set of examples from the Software Verification Competition.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Homotopy dimer algebras and cyclic contractions
Dimer algebras arise from a particular type of quiver gauge theory. However, part of the input to such a theory is the gauge group, and this choice may impose additional constraints on the algebra. If the gauge group of a dimer theory is abelian, then the algebra that arises is not actually the dimer algebra itself, but a particular quotient we introduce called the 'homotopy algebra'. We show that a homotopy algebra $\Lambda$ on a torus is a dimer algebra if and only if it is noetherian, and otherwise $\Lambda$ is the quotient of a dimer algebra by homotopy relations. Stated in physics terms, a dimer theory is superconformal if and only if the corresponding dimer and homotopy algebras coincide. We also give an explicit description of the center of a homotopy algebra in terms of a special subset of its perfect matchings. In our proofs we introduce formalized notions of Higgsing and the mesonic chiral ring from quiver gauge theory.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the trace problem for Triebel--Lizorkin spaces with mixed norms
The subject is traces of Sobolev spaces with mixed Lebesgue norms on Euclidean space. Specifically, restrictions to the hyperplanes given by the first and last coordinates are applied to functions belonging to quasi-homogeneous, mixed-norm Lizorkin--Triebel spaces; Sobolev spaces are obtained from these as special cases. Spaces admitting traces in the distribution sense are characterised except for the borderline cases; these are also covered in case of the first variable. With respect to the first variable the trace spaces are proved to be mixed-norm Lizorkin--Triebel spaces with a specific sum exponent. For the last variable they are similarly defined Besov spaces. The treatment includes continuous right-inverses and higher order traces. The results rely on a sequence version of Nikolskij's inequality, Marschall's inequality for pseudo-differential operators (and Fourier multiplier assertions), as well as dyadic ball criteria.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Breaking the curse of dimensionality in regression
Models with many signals, high-dimensional models, often impose structures on the signal strengths. The common assumption is that only a few signals are strong and most of the signals are zero or close (collectively) to zero. However, such a requirement might not be valid in many real-life applications. In this article, we are interested in conducting large-scale inference in models that might have signals of mixed strengths. The key challenge is that the signals that are not under testing might be collectively non-negligible (although individually small) and cannot be accurately learned. This article develops a new class of tests that arise from a moment-matching formulation. A virtue of these moment-matching statistics is their ability to borrow strength across features, adapt to the sparsity size, and adjust for testing a growing number of hypotheses. The GRoup-level Inference of Parameters (GRIP) test harvests effective sparsity structures with hypothesis formulation for an efficient multiple testing procedure. Simulated data showcase that GRIP's error control is far better than that of the alternative methods. We develop a minimax theory, demonstrating the optimality of GRIP for a broad range of models, including those where the model is a mixture of sparse and high-dimensional dense signals.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
DVAE++: Discrete Variational Autoencoders with Overlapping Transformations
Training of discrete latent variable models remains challenging because passing gradient information through discrete units is difficult. We propose a new class of smoothing transformations based on a mixture of two overlapping distributions, and show that the proposed transformation can be used for training binary latent models with either directed or undirected priors. We derive a new variational bound to efficiently train with Boltzmann machine priors. Using this bound, we develop DVAE++, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables. Experiments on several benchmarks show that overlapping transformations outperform other recent continuous relaxations of discrete latent variables including Gumbel-Softmax (Maddison et al., 2016; Jang et al., 2016), and discrete variational autoencoders (Rolfe 2016).
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Bet-hedging against demographic fluctuations
Biological organisms have to cope with stochastic variations in both the external environment and the internal population dynamics. Theoretical studies and laboratory experiments suggest that population diversification could be an effective bet-hedging strategy for adaptation to varying environments. Here we show that bet-hedging can also be effective against demographic fluctuations that pose a trade-off between growth and survival for populations even in a constant environment. A species can maximize its overall abundance in the long term by diversifying into coexisting subpopulations of both "fast-growing" and "better-surviving" individuals. Our model generalizes statistical physics models of birth-death processes to incorporate dispersal, during which new populations are founded, and can further incorporate variations of local environments. In this way we unify different bet-hedging strategies against demographic and environmental variations as a general means of adaptation to both types of uncertainties in population growth.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU
To improve the performance of Intensive Care Units (ICUs), the field of bio-statistics has developed scores which try to predict the likelihood of negative outcomes. These help evaluate the effectiveness of treatments and clinical practice, and also help to identify patients with unexpected outcomes. However, they have been shown by several studies to offer sub-optimal performance. Alternatively, Deep Learning offers state-of-the-art capabilities in certain prediction tasks, and research suggests deep neural networks are able to outperform traditional techniques. Nevertheless, a main impediment to the adoption of Deep Learning in healthcare is its reduced interpretability, since in this field it is crucial to gain insight into the why of predictions, to ensure that models are actually learning relevant features instead of spurious correlations. To address this, we propose a deep multi-scale convolutional architecture trained on the Medical Information Mart for Intensive Care III (MIMIC-III) for mortality prediction, and the use of concepts from coalitional game theory to construct visual explanations aimed at showing how important these inputs are deemed by the network. Our results show our model attains state-of-the-art performance while remaining interpretable. Supporting code can be found at this https URL.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Low-complexity Approaches for MIMO Capacity with Per-antenna Power Constraint
This paper proposes two low-complexity iterative algorithms to compute the capacity of a single-user multiple-input multiple-output channel with per-antenna power constraint. The first method results from manipulating the optimality conditions of the considered problem and applying fixed-point iteration. In the second approach, we transform the considered problem into a minimax optimization program using the well-known MAC-BC duality, and then solve it by a novel alternating optimization method. In both proposed iterative methods, each iteration involves an optimization problem which can be efficiently solved by the water-filling algorithm. The proposed iterative methods are provably convergent. Complexity analysis and extensive numerical experiments are carried out to demonstrate the superior performance of the proposed algorithms over an existing approach known as the mode-dropping algorithm.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
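Each iteration of the algorithms in the MIMO capacity abstract above involves a water-filling subproblem. A generic bisection-based sketch of textbook water-filling follows; the noise values and power budget are illustrative assumptions, and this is not the paper's implementation.

```python
def water_filling(noise, total_power, iters=100):
    """Classic water-filling: allocate p_i = max(0, mu - n_i), with the
    water level mu found by bisection so the powers sum to the budget."""
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:
            hi = mu  # water level too high: lower it
        else:
            lo = mu  # budget not exhausted: raise it
    return [max(0.0, mu - n) for n in noise]

# Lower-noise (stronger) channels receive more power; the weakest
# channel may be switched off entirely.
alloc = water_filling([0.1, 0.5, 1.0], total_power=1.0)
print(alloc)
```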
The VISTA ZYJHKs Photometric System: Calibration from 2MASS
In this paper we describe the routine photometric calibration of data taken with the VIRCAM instrument on the ESO VISTA telescope. The broadband ZYJHKs data are directly calibrated from 2MASS point sources visible in every VISTA image. We present the empirical transformations between the 2MASS and VISTA, and WFCAM and VISTA, photometric systems for regions of low reddening. We investigate the long-term performance of VISTA+VIRCAM. An investigation of the dependence of the photometric calibration on interstellar reddening leads to these conclusions: (1) For all broadband filters, a linear colour-dependent correction compensates the gross effects of reddening where $E(B-V)<5.0$. (2) For $Z$ and $Y$, there is a significantly larger scatter above E(B-V)=5.0, and insufficient measurements to adequately constrain the relation beyond this value. (3) The $JHK\!s$ filters can be corrected to a few percent up to E(B-V)=10.0. We analyse spatial systematics over month-long timescales, both inter- and intra-detector, and show that these are present only at very low levels in VISTA. We monitor and remove residual detector-to-detector offsets. We compare the calibration of the main pipeline products: pawprints and tiles. We show how variable seeing and transparency affect the final calibration accuracy of VISTA tiles, and discuss a technique, {\it grouting}, for mitigating these effects. Comparison between repeated reference fields is used to demonstrate that the VISTA photometry is precise to better than $\simeq2\%$ for the $Y$$J$$H$$Ks$ bands and $3\%$ for the $Z$ band. Finally we present empirically determined offsets to transform VISTA magnitudes into a true Vega system.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spin-filtering in superconducting junction with the manganite interlayer
We report on the electronic transport and the impact of spin-filtering in mesa-structures made of epitaxial thin films of the cuprate superconductor YBa2Cu3Ox (YBCO) and a manganite LaMnO3 (LMO) interlayer with an Au/Nb counterelectrode. Ferromagnetic resonance measurements of the Au/LMO/YBCO heterostructure show a ferromagnetic state at temperatures below 150 K, as in the case of a reference LMO film grown on a neodymium gallate substrate. The height of the tunneling barrier, evaluated from the resistive characteristics of mesa-structures with different interlayer thicknesses, decreases exponentially from 30 mV down to 5 mV with increasing manganite interlayer thickness. The temperature dependence of the conductivity of the mesa-structures can be described by taking into account the d-wave superconductivity in YBCO and spin filtering of the electron transport. Spin filtering is also supported by measurements of magneto-resistance and the high sensitivity of the mesa-structure conductivity to weak magnetic fields.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Intersection of conjugate solvable subgroups in symmetric groups
It is shown that for a solvable subgroup $G$ of an almost simple group $S$ whose socle is isomorphic to $A_n$ $(n\ge5)$ there are $x,y,z,t \in S$ such that $G \cap G^x \cap G^y \cap G^z \cap G^t = 1.$
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Improving inference of the dynamic biological network underlying aging via network propagation
Gene expression (GE) data capture valuable condition-specific information ("condition" can mean a biological process, disease stage, age, patient, etc.). However, GE analyses ignore physical interactions between gene products, i.e., proteins. Since proteins function by interacting with each other, and since biological networks (BNs) capture these interactions, BN analyses are promising. However, current BN data fail to capture condition-specific information. Recently, GE and BN data have been integrated using network propagation (NP) to infer condition-specific BNs. However, existing NP-based studies result in a static condition-specific network, even though cellular processes are dynamic. A dynamic process of our interest is aging. We use prominent existing NP methods in a new task of inferring a dynamic rather than static condition-specific (aging-related) network. Then, we study the evolution of network structure with age - we identify proteins whose network positions significantly change with age and predict them as new aging-related candidates. We validate the predictions via, e.g., functional enrichment analyses and literature search. Dynamic network inference via NP yields higher prediction quality than the only existing method for inferring a dynamic aging-related BN, which does not use NP.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
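Network propagation as referenced in the abstract above is commonly implemented as random-walk-with-restart smoothing of node scores over a network. The following is a generic sketch under that assumption; the restart parameter and the toy graph are illustrative, not the paper's specific NP method.

```python
def propagate(adj, scores, restart=0.5, iters=200):
    """Random-walk-with-restart network propagation:
    f <- (1 - restart) * W_norm @ f + restart * f0,
    where W_norm is the column-normalized adjacency matrix."""
    n = len(scores)
    # Column-normalize so each node spreads its score to its neighbours.
    col_sums = [sum(adj[i][j] for i in range(n)) or 1.0 for j in range(n)]
    f = list(scores)
    for _ in range(iters):
        f = [(1 - restart) * sum(adj[i][j] * f[j] / col_sums[j]
                                 for j in range(n)) + restart * scores[i]
             for i in range(n)]
    return f

# Toy 3-node path graph with a single seed node: its score leaks to
# its neighbour and, more weakly, to the node two hops away.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(propagate(adj, [1.0, 0.0, 0.0]))
```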
Optimal Weighting for Exam Composition
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically these exams are prepared several days in advance, and generic question scores are used based on rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were 30 multiple choice questions worth 3 points, 15 true/false with explanation questions worth 4 points, and 5 analytical exercises worth 10 points. We describe a novel framework where algorithms from machine learning are used to modify the exam question weights in order to optimize the exam scores, using the overall class grade as a proxy for a student's true ability. We show that significant error reduction can be obtained by our approach over standard weighting schemes, and we make several new observations regarding the properties of the "good" and "bad" exam questions that can have impact on the design of improved future evaluation methods.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Generalized orderless pooling performs implicit salient matching
Most recent CNN architectures use average pooling as a final feature encoding step. In the field of fine-grained recognition, however, recent global representations like bilinear pooling offer improved performance. In this paper, we generalize average and bilinear pooling to "alpha-pooling", allowing for learning the pooling strategy during training. In addition, we present a novel way to visualize decisions made by these approaches. We identify parts of training images having the highest influence on the prediction of a given test image. It allows for justifying decisions to users and also for analyzing the influence of semantic parts. For example, we can show that the higher capacity VGG16 model focuses much more on the bird's head than, e.g., the lower-capacity VGG-M model when recognizing fine-grained bird categories. Both contributions allow us to analyze the difference when moving between average and bilinear pooling. In addition, experiments show that our generalized approach can outperform both across a variety of standard datasets.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Gaia-ESO Survey: Exploring the complex nature and origins of the Galactic bulge populations
Abridged: We used the fourth internal data release of the Gaia-ESO survey to characterize the bulge chemistry, spatial distribution, kinematics, and to compare it chemically with the thin and thick disks. The sample consists of ~2500 red clump stars in 11 bulge fields ($-10^\circ\leq l\leq+8^\circ$ and $-10^\circ\leq b\leq-4^\circ$), and a set of ~6300 disk stars selected for comparison. The bulge MDF is confirmed to be bimodal across the whole sampled area, with metal-poor stars dominating at high latitudes. The metal-rich stars exhibit bar-like kinematics and display a bimodality in their magnitude distribution, a feature which is tightly associated with the X-shape bulge. They overlap with the metal-rich end of the thin disk sequence in the [Mg/Fe] vs. [Fe/H] plane. Metal-poor bulge stars have a more isotropic hot kinematics and do not participate in the X-shape bulge. With similar Mg-enhancement levels, the position of the metal-poor bulge sequence "knee" is observed at [Fe/H]$_{knee}=-0.37\pm0.09$, being 0.06 dex higher than that of the thick disk. It suggests a higher SFR for the bulge than for the thick disk. Finally, we present a chemical evolution model that suitably fits the whole bulge sequence by assuming a fast ($<1$ Gyr) intense burst of stellar formation at early epochs. We associate metal-rich stars with the B/P bulge formed from the secular evolution of the early thin disk. On the other hand, the metal-poor subpopulation might be the product of an early prompt dissipative collapse dominated by massive stars. Nevertheless, our results do not allow us to firmly rule out the possibility that these stars come from the secular evolution of the early thick disk. This is the first time that an analysis of the bulge MDF and $\alpha$-abundances has been performed in a large area on the basis of a homogeneous, fully spectroscopic analysis of high-resolution, high S/N data.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the Complexity of Robust Stable Marriage
Robust Stable Marriage (RSM) is a variant of the classical Stable Marriage problem, where the robustness of a given stable matching is measured by the number of modifications required for repairing it in case an unforeseen event occurs. We focus on the complexity of finding an (a,b)-supermatch. An (a,b)-supermatch is defined as a stable matching in which, if any 'a' (non-fixed) men/women break up, it is possible to find another stable matching by changing the partners of those 'a' men/women and also the partners of at most 'b' other couples. In order to show that deciding whether there exists an (a,b)-supermatch is NP-complete, we first introduce a SAT formulation that is NP-complete by Schaefer's Dichotomy Theorem. Then, we show the equivalence between the SAT formulation and finding a (1,1)-supermatch on a specific family of instances.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Vacuum Friction
We know that in empty space there is no preferred state of rest. This is true both in special relativity and in Newtonian mechanics with its associated Galilean relativity. It comes as something of a surprise, therefore, to discover the existence of a friction force associated with spontaneous emission. The resolution of this paradox relies on a central idea from special relativity, even though our derivation of it is non-relativistic. We examine the possibility that the physics underlying this effect might be explored in an ion trap, via the observation of a superposition of different mass states.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Intelligent Personal Assistant with Knowledge Navigation
An Intelligent Personal Agent (IPA) is an agent that has the purpose of helping the user to gain information through reliable resources with the help of knowledge navigation techniques, saving the time needed to search for the best content. The agent is also responsible for responding to chat-based queries with the help of a Conversation Corpus. We will be testing different methods for optimal query generation. To facilitate ease of use, the agent will be able to accept input through Text (Keyboard), Voice (Speech Recognition) and Server (Facebook) and output responses using the same methods. Existing chat bots reply by making changes in the input, but we will give responses based on multiple SRT files. The model will learn using the human dialogs dataset and will be able to respond in a human-like manner. Responses to queries about famous things (places, people, and words) can be provided using web scraping, which will enable the bot to have knowledge navigation features. The agent will even learn from its past experiences, supporting semi-supervised learning.
1
0
0
0
0
0
Private Data System Enabling Self-Sovereign Storage Managed by Executable Choreographies
With the increased use of the Internet, governments and large companies store and share massive amounts of personal data in such a way that leaves no space for transparency. When a user needs to achieve a simple task like applying for college or a driving license, he needs to visit a lot of institutions and organizations, thus leaving a lot of private data in many places. The same happens when using the Internet. These privacy issues raised by centralized architectures, along with the recent developments in the area of serverless applications, demand a decentralized private data layer under user control. We introduce the Private Data System (PDS), a distributed approach which enables self-sovereign storage and sharing of private data. The system is composed of nodes spread across the entire Internet managing local key-value databases. The communication between nodes is achieved through executable choreographies, which are capable of preventing information leakage when executing across different organizations with different regulations in place. The user has full control over his private data and is able to share and revoke access to organizations at any time. Moreover, the updates are propagated instantly to all the parties which have access to the data, thanks to the system design. Specifically, the processing organizations may retrieve and process the shared information, but are not allowed under any circumstances to store it in the long term. PDS offers an alternative to systems that aim to ensure self-sovereignty of specific types of data through blockchain-inspired techniques but face various problems, such as low performance. Both approaches propose a distributed database, but with different characteristics. While the blockchain-based systems are built to solve consensus problems, PDS's purpose is to solve the self-sovereignty aspects raised by privacy laws, rules and principles.
1
0
0
0
0
0
A Curious Family of Binomial Determinants That Count Rhombus Tilings of a Holey Hexagon
We evaluate a curious determinant, first mentioned by George Andrews in 1980 in the context of descending plane partitions. Our strategy is to combine the famous Desnanot-Jacobi-Dodgson identity with automated proof techniques. More precisely, we follow the holonomic ansatz that was proposed by Doron Zeilberger in 2007. We derive a compact and nice formula for Andrews's determinant, and use it to solve a challenge problem that we posed in a previous paper. By noting that Andrews's determinant is a special case of a two-parameter family of determinants, we find closed forms for several one-parameter subfamilies. The interest in these determinants arises because they count cyclically symmetric rhombus tilings of a hexagon with several triangular holes inside.
1
0
0
0
0
0
Community Structure Characterization
This entry discusses the problem of describing communities identified in a complex network of interest, in a way that allows them to be interpreted. We suppose the community structure has already been detected through one of the many methods proposed in the literature. The question is then how to extract valuable information from this first result, in order to allow human interpretation. This requires subsequent processing, which we describe in the rest of this entry.
1
1
0
0
0
0
Design of Deep Neural Networks as Add-on Blocks for Improving Impromptu Trajectory Tracking
This paper introduces deep neural networks (DNNs) as add-on blocks to baseline feedback control systems to enhance the tracking performance for arbitrary desired trajectories. The DNNs are trained to adapt the reference signals to the feedback control loop. The goal is to achieve a unity map between the desired and the actual outputs. In previous work, the efficacy of this approach was demonstrated on quadrotors; on 30 unseen test trajectories, the proposed DNN approach achieved an average impromptu tracking error reduction of 43% as compared to the baseline feedback controller. Motivated by these results, this work aims to provide platform-independent design guidelines for the proposed DNN-enhanced control architecture. In particular, we provide specific guidelines for the DNN feature selection, derive conditions for when the proposed approach is effective, and show in which cases the training efficiency can be further increased.
1
0
0
0
0
0
Application of data science techniques to disentangle X-ray spectral variation of super-massive black holes
We apply three data science techniques, Nonnegative Matrix Factorization (NMF), Principal Component Analysis (PCA) and Independent Component Analysis (ICA), to simulated X-ray energy spectra of a particular class of super-massive black holes. Two competing physical models, one whose variable components are additive and the other whose variable components are multiplicative, are known to successfully describe the X-ray spectral variation of these super-massive black holes, within the accuracy of contemporary observations. We hope to utilize these techniques to compare the viability of the models by probing the mathematical structure of the observed spectra, while comparing the advantages and disadvantages of each technique. We find that PCA is best to determine the dimensionality of a dataset, while NMF is better suited for interpreting spectral components and comparing them in terms of the physical models in question. ICA is able to reconstruct the parameters responsible for spectral variation. In addition, we find that the results of these techniques are sufficiently different that applying them to observed data may be a useful test in comparing the accuracy of the two spectral models.
0
1
0
0
0
0
Strong convergence rates of modified truncated EM method for stochastic differential equations
Motivated by the truncated EM method introduced by Mao (2015), a new explicit numerical method named the modified truncated Euler-Maruyama method is developed in this paper. Strong convergence rates of the proposed numerical scheme to the exact solutions of stochastic differential equations are investigated under given conditions. Compared with the truncated EM method, the proposed numerical scheme strongly converges to the exact solution at fixed time $T$ and over a time interval $[0,T]$ under weaker sufficient conditions. Meanwhile, the convergence rates are also obtained for both cases. Two examples are provided to support our conclusions.
0
0
1
0
0
0
Some aspects of holomorphic mappings: a survey
This expository paper is concerned with the properties of proper holomorphic mappings between domains in complex affine spaces. We discuss some of the main geometric methods of this theory, such as the Reflection Principle, the scaling method, and the Kobayashi-Royden metric. We sketch the proofs of certain principal results and discuss some recent achievements. Several open problems are also stated.
0
0
1
0
0
0
Local Linear Constraint based Optimization Model for Dual Spectral CT
Dual spectral computed tomography (DSCT) can achieve energy- and material-selective images, and has a superior ability to distinguish some materials compared with conventional single spectral computed tomography (SSCT). However, the decomposition process is ill-posed and sensitive to noise, so the quality of the decomposed images is usually degraded; in particular, the signal-to-noise ratio (SNR) is much lower than that of images reconstructed directly from a single spectrum. In this work, we first establish a local linear relationship between the decomposed results based on dual spectra and the images reconstructed directly from a single spectrum. Then, based on this constraint, we propose an optimization model for DSCT and develop a guided image filtering based iterative solution method. Both simulated and real experiments are provided to validate the effectiveness of the proposed approach.
0
0
1
0
0
0
Efficient Online Timed Pattern Matching by Automata-Based Skipping
The timed pattern matching problem is an actively studied topic because of its relevance in monitoring of real-time systems. There one is given a log $w$ and a specification $\mathcal{A}$ (given by a timed word and a timed automaton in this paper), and one wishes to return the set of intervals for which the log $w$, when restricted to the interval, satisfies the specification $\mathcal{A}$. In our previous work we presented an efficient timed pattern matching algorithm: it adopts a skipping mechanism inspired by the classic Boyer--Moore (BM) string matching algorithm. In this work we tackle the problem of online timed pattern matching, towards embedded applications where it is vital to process a vast amount of incoming data in a timely manner. Specifically, we start with the Franek-Jennings-Smyth (FJS) string matching algorithm---a recent variant of the BM algorithm---and extend it to timed pattern matching. Our experiments indicate the efficiency of our FJS-type algorithm in online and offline timed pattern matching.
1
0
0
0
0
0
APO Time Resolved Color Photometry of Highly-Elongated Interstellar Object 1I/'Oumuamua
We report on $g$, $r$ and $i$ band observations of the Interstellar Object 'Oumuamua (1I) taken on 2017 October 29 from 04:28 to 08:40 UTC by the Apache Point Observatory (APO) 3.5m telescope's ARCTIC camera. We find that 1I's colors are $g-r=0.41\pm0.24$ and $r-i=0.23\pm0.25$, consistent with the visible spectra of Masiero (2017), Ye et al. (2017) and Fitzsimmons et al. (2017), and most comparable to the population of Solar System C/D asteroids, Trojans, or comets. We find no evidence of any cometary activity at a heliocentric distance of 1.46 au, approximately 1.5 months after 1I's closest approach distance to the Sun. Significant brightness variability was seen in the $r$ observations, with the object becoming notably brighter towards the end of the run. By combining our APO photometric time series data with the Discovery Channel Telescope (DCT) data of Knight et al. (2017), taken 20 h later on 2017 October 30, we construct an almost complete light curve with a most probable light curve period of $P \simeq 4~{\rm h}$. Our results imply a double peaked rotation period of 8.1 $\pm$ 0.02 h, with a peak-to-peak amplitude of 1.5 - 2.1 mags. Assuming that 1I's shape can be approximated by an ellipsoid, the amplitude constraint implies that 1I has an axial ratio of 3.5 to 10.3, which is strikingly elongated. Assuming that 1I is rotating above its critical break up limit, our results are compatible with 1I having modest cohesive strength and it may have obtained its elongated shape during a tidal disruption event before being ejected from its home system. Astrometry useful for constraining 1I's orbit was also obtained and published in Weaver et al. (2017).
0
1
0
0
0
0
Variable selection in discriminant analysis for mixed variables and several groups
We propose a method for variable selection in discriminant analysis with mixed categorical and continuous variables. This method is based on a criterion that permits reducing the variable selection problem to a problem of estimating a suitable permutation and dimensionality. Then, estimators for these parameters are proposed, and the resulting method for selecting variables is shown to be consistent. A simulation study that examines several properties of the proposed approach and compares it with an existing method is given.
0
0
1
1
0
0
AdaGAN: Boosting Generative Models
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.
1
0
0
1
0
0
A Classification-Based Study of Covariate Shift in GAN Distributions
A basic, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether they are truly able to capture all the fundamental characteristics of the distributions they are trained on. In particular, evaluating the diversity of GAN distributions is challenging and existing methods provide only a partial understanding of this issue. In this paper, we develop quantitative and scalable tools for assessing the diversity of GAN distributions. Specifically, we take a classification-based perspective and view loss of diversity as a form of covariate shift introduced by GANs. We examine two specific forms of such shift: mode collapse and boundary distortion. In contrast to prior work, our methods need only minimal human supervision and can be readily applied to state-of-the-art GANs on large, canonical datasets. Examining popular GANs using our tools indicates that these GANs have significant problems in reproducing the more distributional properties of their training dataset.
1
0
0
1
0
0
Sequence-to-Sequence Models Can Directly Translate Foreign Speech
We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.
1
0
0
1
0
0
Ensemble Methods for Personalized E-Commerce Search Challenge at CIKM Cup 2016
Personalized search has been a hot research topic for many years and has been widely used in e-commerce. This paper describes our solution to tackle the challenge of personalized e-commerce search at CIKM Cup 2016. The goal of this competition is to predict search relevance and re-rank the result items in the SERP according to the personalized search, browsing and purchasing preferences. Based on a detailed analysis of the provided data, we extract three different types of features, i.e., statistic features, query-item features and session features. Different models are used on these features, including logistic regression, gradient boosted decision trees, rank SVM and a novel deep match model. With the blending of multiple models, a stacking ensemble model is built to integrate the output of individual models and produce a more accurate prediction result. Based on these efforts, our solution won the champion of the competition on all the evaluation metrics.
1
0
0
0
0
0
Comparison of Sobol' sequences in financial applications
Sobol' sequences are widely used for quasi-Monte Carlo methods that arise in financial applications. Sobol' sequences have parameter values called direction numbers, which are freely chosen by the user, so there are several implementations of Sobol' sequence generators. The aim of this paper is to provide a comparative study of (non-commercial) high-dimensional Sobol' sequences by calculating financial models. Additionally, we implement the Niederreiter sequence (in base 2) with a slight modification, that is, we reorder the rows of the generating matrices, and analyze and compare it with the Sobol' sequences.
1
0
0
1
0
0
A wavelet integral collocation method for nonlinear boundary value problems in Physics
A high order wavelet integral collocation method (WICM) is developed for general nonlinear boundary value problems in physics. This method is established based on Coiflet approximation of multiple integrals of interval bounded functions combined with an accurate and adjustable boundary extension technique. The convergence order of this approximation has been proven to be N as long as a Coiflet with N-1 vanishing moments is adopted, where N can be any positive even integer. Before the conventional collocation method is applied to the general problems, the original differential equation is changed into its equivalent form by denoting derivatives of the unknown function as new functions and constructing relations between the low and high order derivatives. For the linear cases, error analysis has proven that the proposed WICM is of order N, and condition numbers of relevant matrices are almost independent of the number of collocation points. Numerical examples of a wide range of nonlinear differential equations in physics demonstrate that the accuracy of the proposed WICM is even greater than N, and most interestingly, such accuracy is independent of the order of the differential equation to be solved. Comparison to existing numerical methods further justifies the accuracy and efficiency of the proposed method.
0
0
1
0
0
0
Recommendations of the LHC Dark Matter Working Group: Comparing LHC searches for heavy mediators of dark matter production in visible and invisible decay channels
Weakly-coupled TeV-scale particles may mediate the interactions between normal matter and dark matter. If so, the LHC would produce dark matter through these mediators, leading to the familiar "mono-X" search signatures, but the mediators would also produce signals without missing momentum via the same vertices involved in their production. This document from the LHC Dark Matter Working Group suggests how to compare searches for these two types of signals in case of vector and axial-vector mediators, based on a workshop that took place on September 19/20, 2016 and subsequent discussions. These suggestions include how to extend the spin-1 mediated simplified models already in widespread use to include lepton couplings. This document also provides analytic calculations of the relic density in the simplified models and reports an issue that arose when ATLAS and CMS first began to use preliminary numerical calculations of the dark matter relic density in these models.
0
1
0
0
0
0
Generalization of the concepts of seniority number and ionicity
We present generalized versions of the concepts of seniority number and ionicity. These generalized numbers count respectively the partially occupied and fully occupied shells for any partition of the orbital space into shells. The Hermitian operators whose eigenspaces correspond to wave functions of definite generalized seniority or ionicity values are introduced. The generalized seniority numbers (GSNs) afford to establish refined hierarchies of configuration interaction (CI) spaces within those of fixed ordinary seniority. Such a hierarchy is illustrated on the buckminsterfullerene molecule.
0
1
0
0
0
0
Rise of the HaCRS: Augmenting Autonomous Cyber Reasoning Systems with Human Assistance
As the size and complexity of software systems increase, the number and sophistication of software security flaws increase as well. The analysis of these flaws began as a manual approach, but it soon became apparent that tools were necessary to assist human experts in this task, resulting in a number of techniques and approaches that automated aspects of the vulnerability analysis process. Recently, DARPA carried out the Cyber Grand Challenge, a competition among autonomous vulnerability analysis systems designed to push the tool-assisted human-centered paradigm into the territory of complete automation. However, when the autonomous systems were pitted against human experts it became clear that certain tasks, albeit simple, could not be carried out by an autonomous system, as they require an understanding of the logic of the application under analysis. Based on this observation, we propose a shift in the vulnerability analysis paradigm, from tool-assisted human-centered to human-assisted tool-centered. In this paradigm, the automated system orchestrates the vulnerability analysis process, and leverages humans (with different levels of expertise) to perform well-defined sub-tasks, whose results are integrated in the analysis. As a result, it is possible to scale the analysis to a larger number of programs, and, at the same time, optimize the use of expensive human resources. In this paper, we detail our design for a human-assisted automated vulnerability analysis system, describe its implementation atop an open-sourced autonomous vulnerability analysis system that participated in the Cyber Grand Challenge, and evaluate and discuss the significant improvements that non-expert human assistance can offer to automated analysis approaches.
1
0
0
0
0
0
Block-Diagonal and LT Codes for Distributed Computing With Straggling Servers
We propose two coded schemes for the distributed computing problem of multiplying a matrix by a set of vectors. The first scheme is based on partitioning the matrix into submatrices and applying maximum distance separable (MDS) codes to each submatrix. For this scheme, we prove that up to a given number of partitions the communication load and the computational delay (not including the encoding and decoding delay) are identical to those of the scheme recently proposed by Li et al., based on a single, long MDS code. However, due to the use of shorter MDS codes, our scheme yields a significantly lower overall computational delay when the delay incurred by encoding and decoding is also considered. We further propose a second coded scheme based on Luby Transform (LT) codes under inactivation decoding. Interestingly, LT codes may reduce the delay over the partitioned scheme at the expense of an increased communication load. We also consider distributed computing under a deadline and show numerically that the proposed schemes outperform other schemes in the literature, with the LT code-based scheme yielding the best performance for the scenarios considered.
1
0
0
0
0
0
Gevrey estimates for one dimensional parabolic invariant manifolds of non-hyperbolic fixed points
We study the Gevrey character of a natural parameterization of one dimensional invariant manifolds associated to a parabolic direction of fixed points of analytic maps, that is, a direction associated with an eigenvalue equal to $1$. We show that, under general hypotheses, these invariant manifolds are Gevrey with type related to some explicit constants. We provide examples of the optimality of our results as well as some applications to celestial mechanics, namely, the Sitnikov problem and the restricted planar three body problem.
0
0
1
0
0
0
Improving Factor-Based Quantitative Investing by Forecasting Company Fundamentals
On a periodic basis, publicly traded companies are required to report fundamentals: financial data such as revenue, operating income, debt, among others. These data points provide some insight into the financial health of a company. Academic research has identified some factors, i.e. computed features of the reported data, that are known through retrospective analysis to outperform the market average. Two popular factors are the book value normalized by market capitalization (book-to-market) and the operating income normalized by the enterprise value (EBIT/EV). In this paper: we first show through simulation that if we could (clairvoyantly) select stocks using factors calculated on future fundamentals (via oracle), then our portfolios would far outperform a standard factor approach. Motivated by this analysis, we train deep neural networks to forecast future fundamentals based on a trailing 5-years window. Quantitative analysis demonstrates a significant improvement in MSE over a naive strategy. Moreover, in retrospective analysis using an industry-grade stock portfolio simulator (backtester), we show an improvement in compounded annual return to 17.1% (MLP) vs 14.4% for a standard factor model.
1
0
0
1
0
0
Magnetic Flux Tailoring through Lenz Lenses in Toroidal Diamond Indenter Cells: A New Pathway to High Pressure Nuclear Magnetic Resonance
A new pathway to nuclear magnetic resonance spectroscopy in high pressure diamond anvil cells is introduced, using inductively coupled broadband passive electro-magnetic lenses to locally amplify the magnetic flux at the isolated sample, leading to an increase in sensitivity. The lenses are adapted to the geometrical restrictions imposed by a toroidal diamond indenter cell, and yield high signal-to-noise ratios at pressures as high as 72 GPa, at initial sample volumes of only 230 pl. The corresponding levels of detection, LODt, are found to be up to four orders of magnitude lower compared to formerly used solenoidal micro-coils in diamond anvil cells, as shown by proton-NMR measurements on paraffin oil. This approach opens up the field of ultra-high pressure sciences for one of the most versatile spectroscopic methods available, in a pressure range unprecedented up to now.
0
1
0
0
0
0
Spatial risk measures induced by powers of max-stable random fields
A meticulous assessment of the risk of extreme environmental events is of great necessity for populations, civil authorities as well as the insurance/reinsurance industry. Koch (2017, 2018) introduced a concept of spatial risk measure and a related set of axioms which are well-suited to analyse and quantify the risk due to events having a spatial extent, precisely such as natural disasters. In this paper, we first carry out a detailed study of the correlation (and covariance) structure of powers of the Smith and Brown-Resnick max-stable random fields. Then, using the latter results, we thoroughly investigate spatial risk measures associated with variance and induced by powers of max-stable random fields. In addition, we show that spatial risk measures associated with several classical risk measures and induced by such cost fields satisfy (at least) part of the previously mentioned axioms under appropriate conditions on the max-stable fields. Considering such cost fields is particularly relevant when studying the impact of extreme wind speeds on buildings and infrastructure.
0
0
0
0
0
1
Fluctuations in 1D stochastic homogenization of pseudo-elliptic equations with long-range dependent potentials
This paper deals with the homogenization problem of one-dimensional pseudo-elliptic equations with a rapidly varying random potential. The main purpose is to characterize the homogenization error (random fluctuations), i.e., the difference between the random solution and the homogenized solution, which strongly depends on the autocovariance property of the underlying random potential. It is well known that when the random potential has short-range dependence, the rescaled homogenization error converges in distribution to a stochastic integral with respect to standard Brownian motion. Here, we are interested in potentials with long-range dependence and we prove convergence to stochastic integrals with respect to a Hermite process.
0
0
1
0
0
0
Runge-Kutta-Gegenbauer methods for advection-diffusion problems
In this paper, Runge-Kutta-Gegenbauer (RKG) stability polynomials of arbitrarily high order of accuracy are introduced in closed form. The stability domain of RKG polynomials extends in the real direction with the square of the polynomial degree, and in the imaginary direction as an increasing function of the Gegenbauer parameter. Consequently, the polynomials are naturally suited to the construction of high order stabilized Runge-Kutta (SRK) methods for systems of PDEs of mixed hyperbolic-parabolic type. We present SRK methods composed of $L$ ordered forward Euler stages, with complex-valued stepsizes derived from the roots of RKG stability polynomials of degree $L$. Internal stability is maintained at large stage number through an ordering algorithm which limits internal amplification factors to $10 L^2$. Test results for mildly stiff nonlinear advection-diffusion-reaction problems with moderate ($\lesssim 1$) mesh Péclet numbers are provided at second, fourth, and sixth orders, with nonlinear reaction terms treated by complex splitting techniques above second order.
0
1
0
0
0
0
Tensor network method for reversible classical computation
We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017)]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs/outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
1
1
0
0
0
0
Model Predictions for Time-Resolved Transport Measurements Made near the Superfluid Critical Points of Cold Atoms and $K_3C_{60}$ Films
Recent advances in ultrafast measurement in cold atoms, as well as pump-probe spectroscopy of $K_3 C_{60}$ films, have opened the possibility of rapidly quenching systems of interacting fermions to, and across, a finite temperature superfluid transition. However, determining that a transient state has approached a second-order critical point is difficult, as standard equilibrium techniques are inapplicable. We show that the approach to the superfluid critical point in a transient state may be detected via time-resolved transport measurements, such as the optical conductivity. We leverage the fact that quenching to the vicinity of the critical point produces a highly time dependent density of superfluid fluctuations, which affect the conductivity in two ways. First, by inelastic scattering between the fermions and the fluctuations, and second by direct conduction through the fluctuations, with the latter providing a lower resistance current carrying channel. The competition between these two effects leads to nonmonotonic behavior in the time-resolved optical conductivity, providing a signature of the critical transient state.
0
1
0
0
0
0
A Multimodal Corpus of Expert Gaze and Behavior during Phonetic Segmentation Tasks
Phonetic segmentation is the process of splitting speech into distinct phonetic units. Human experts routinely perform this task manually by analyzing auditory and visual cues using analysis software, which is an extremely time-consuming process. Methods exist for automatic segmentation, but these are not always accurate enough. In order to improve automatic segmentation, we need to model it as close to the manual segmentation as possible. This corpus is an effort to capture the human segmentation behavior by recording experts performing a segmentation task. We believe that this data will enable us to highlight the important aspects of manual segmentation, which can be used in automatic segmentation to improve its accuracy.
1
0
0
0
0
0
Gradient Estimators for Implicit Models
Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions. The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models. This paper alleviates the need for such approximations by proposing the Stein gradient estimator, which directly estimates the score function of the implicitly defined distribution. The efficacy of the proposed estimator is empirically demonstrated by examples that include meta-learning for approximate inference, and entropy regularised GANs that provide improved sample diversity.
1
0
0
1
0
0
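The kernelized Stein gradient estimator described in the abstract above can be sketched in a few lines of NumPy. This is a hypothetical minimal illustration, not the paper's exact implementation: it uses an RBF kernel with an assumed bandwidth `sigma` and ridge term `eta`, and the form $\hat{G} = -(K + \eta I)^{-1}\langle\nabla, K\rangle$ for the score $\nabla_x \log q(x)$ at the samples.

```python
import numpy as np

def stein_gradient_estimator(X, sigma=1.0, eta=0.01):
    """Sketch of a kernelized Stein gradient estimator: estimates the
    score grad_x log q(x) at each sample of an implicit distribution q
    from samples alone, using an RBF kernel and a small ridge eta."""
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]                 # diff[i, j] = x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))  # RBF kernel matrix
    # <grad, K>_i = sum_j grad_{x_j} k(x_j, x_i) = sum_j (x_i - x_j)/sigma^2 * k
    grad_K = (K[:, :, None] * diff).sum(axis=1) / sigma ** 2
    return -np.linalg.solve(K + eta * np.eye(n), grad_K)
```

As a sanity check, for samples from a standard normal the true score is $-x$, so the estimate should correlate strongly with $-x$.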
Token Economics in Energy Systems: Concept, Functionality and Applications
Traditional centralized energy systems have the disadvantages of difficult management and insufficient incentives. Blockchain is an emerging technology that can be utilized in energy systems to enhance their management and control. Integrating the token economy with blockchain technology, token economic systems in energy possess the characteristics of strong incentives and low cost, facilitate the integration of renewable energy and demand-side management, and provide guarantees for improving energy efficiency and reducing emissions. This article describes the concept and functionality of token economics, then analyzes the feasibility of applying token economics to energy systems, and finally discusses the applications of token economics with an example in integrated energy systems.
0
0
0
0
0
1
ACtuAL: Actor-Critic Under Adversarial Learning
Generative Adversarial Networks (GANs) are a powerful framework for deep generative modeling. Posed as a two-player minimax problem, GANs are typically trained end-to-end on real-valued data and can be used to train a generator of high-dimensional and realistic images. However, a major limitation of GANs is that training relies on passing gradients from the discriminator through the generator via back-propagation. This makes it fundamentally difficult to train GANs with discrete data, as generation in this case typically involves a non-differentiable function. These difficulties extend to the reinforcement learning setting when the action space is composed of discrete decisions. We address these issues by reframing the GAN framework so that the generator is no longer trained using gradients through the discriminator, but is instead trained using a learned critic in the actor-critic framework with a Temporal Difference (TD) objective. This is a natural fit for sequence modeling and we use it to achieve improvements on language modeling tasks over the standard Teacher-Forcing methods.
1
0
0
1
0
0
Influence of dissociated gamification mechanisms on digitally supported reading instruction in primary school classes
The introduction of serious games as pedagogical supports in the field of education is a process gaining in popularity amongst the teaching community. This article creates a link between the integration of new pedagogical solutions in a first-year primary class and fundamental research on the motivation of players/learners, detailing an experiment based on a purpose-built game named QCM. QCM adapts the learning worksheets issued from the Freinet pedagogy using various gameplay mechanisms. The main contribution of QCM relative to more traditional games is the dissociation of immersion mechanisms, in order to improve the understanding of the user experience. The game also contains a system of gameplay metrics, the analysis of which shows a relative increase in the motivation of students using QCM instead of paper worksheets, while revealing large differences in students' behavior in conjunction with the gamification mechanisms employed. Keywords: serious games, learning analytics, gamification, flow.
1
0
0
0
0
0
On the Fine-grained Complexity of One-Dimensional Dynamic Programming
In this paper, we investigate the complexity of one-dimensional dynamic programming, or more specifically, of the Least-Weight Subsequence (LWS) problem: Given a sequence of $n$ data items together with weights for every pair of the items, the task is to determine a subsequence $S$ minimizing the total weight of the pairs adjacent in $S$. A large number of natural problems can be formulated as LWS problems, yielding obvious $O(n^2)$-time solutions. In many interesting instances, the $O(n^2)$-many weights can be succinctly represented. Yet except for near-linear time algorithms for some specific special cases, little is known about when an LWS instantiation admits a subquadratic-time algorithm and when it does not. In particular, no lower bounds for LWS instantiations were previously known. In an attempt to remedy this situation, we provide a general approach to study the fine-grained complexity of succinct instantiations of the LWS problem. In particular, given an LWS instantiation, we identify a highly parallel core problem that is subquadratically equivalent to it. This provides either an explanation for the apparent hardness of the problem or an avenue to find improved algorithms, as the case may be. More specifically, we prove subquadratic equivalences between the following pairs (an LWS instantiation and the corresponding core problem) of problems: a low-rank version of LWS and minimum inner product, finding the longest chain of nested boxes and vector domination, and a coin change problem which is closely related to the knapsack problem and (min,+)-convolution. Using these equivalences and known SETH-hardness results for some of the core problems, we deduce tight conditional lower bounds for the corresponding LWS instantiations. We also establish the (min,+)-convolution-hardness of the knapsack problem.
1
0
0
0
0
0
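The LWS recurrence in the abstract above has the obvious quadratic-time dynamic program $f[0]=0$, $f[j]=\min_{0\le i<j} f[i]+w(i,j)$. The sketch below is a generic illustration of that baseline; the weight function in the usage example is made up for demonstration and is not one of the paper's instantiations.

```python
def lws(n, w):
    """Naive O(n^2) Least-Weight Subsequence DP over items 0..n:
    f[0] = 0,  f[j] = min over i < j of f[i] + w(i, j)."""
    INF = float('inf')
    f = [0.0] + [INF] * n
    for j in range(1, n + 1):
        for i in range(j):
            cand = f[i] + w(i, j)
            if cand < f[j]:
                f[j] = cand
    return f[n]
```

For example, with the convex weight $w(i,j)=(j-i)^2$ the optimum takes unit steps, so `lws(5, lambda i, j: (j - i) ** 2)` returns 5.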
Topological quantum paramagnet in a quantum spin ladder
It has recently been found that bosonic excitations of ordered media, such as phonons or spinons, can exhibit topologically nontrivial band structures. Of particular interest are magnon and triplon excitations in quantum magnets, as they can easily be manipulated by an applied field. Here we study triplon excitations in an S=1/2 quantum spin ladder and show that they exhibit nontrivial topology, even in the quantum-disordered paramagnetic phase. Our analysis reveals that the paramagnetic phase actually consists of two separate regions with topologically distinct triplon excitations. We demonstrate that the topological transition between these two regions can be tuned by an external magnetic field. The winding number that characterizes the topology of the triplons is derived and evaluated. By the bulk-boundary correspondence, we find that the non-zero winding number implies the presence of localized triplon end states. Experimental signatures and possible physical realizations of the topological paramagnetic phase are discussed.
0
1
0
0
0
0
Magic wavelengths of Ca$^{+}$ ion for linearly and circularly polarized light
The dynamic dipole polarizabilities of the low-lying states of Ca$^{+}$ for linearly and circularly polarized light are calculated using the relativistic configuration interaction plus core polarization (RCICP) approach. The magic wavelengths, at which the two levels of a transition have the same ac Stark shifts, are determined for the $4s$-$4p_{j,m}$ and $4s$-$3d_{j,m}$ magnetic-sublevel transitions. The present magic wavelengths for linearly polarized light agree excellently with the available results. The polarizability for circularly polarized light has scalar, vector and tensor components, and the dynamic polarizability differs for each magnetic sublevel of the atomic state. Additional magic wavelengths are found for circularly polarized light. We suggest that a measurement of the magic wavelength near 850 nm for $4s-4p_{\frac32,m=\pm\frac32,\pm\frac12}$ could determine the ratio of the oscillator strengths $f_{4p_{\frac32} \to 3d_{\frac32}}$ and $f_{4p_{\frac32} \to 3d_{\frac52}}$.
0
1
0
0
0
0
Network Slicing for 5G with SDN/NFV: Concepts, Architectures and Challenges
The fifth generation of mobile communications is anticipated to open up innovation opportunities for new industries such as vertical markets. However, these verticals originate myriad use cases with diverging requirements that future 5G networks have to efficiently support. Network slicing may be a natural solution to simultaneously accommodate over a common network infrastructure the wide range of services that vertical-specific use cases will demand. In this article, we present the network slicing concept, with a particular focus on its application to 5G systems. We start by summarizing the key aspects that enable the realization of so-called network slices. Then, we give a brief overview on the SDN architecture proposed by the ONF and show that it provides tools to support slicing. We argue that although such architecture paves the way for network slicing implementation, it lacks some essential capabilities that can be supplied by NFV. Hence, we analyze a proposal from the ETSI to incorporate the capabilities of SDN into the NFV architecture. Additionally, we present an example scenario that combines SDN and NFV technologies to address the realization of network slices. Finally, we summarize the open research issues with the purpose of motivating new advances in this field.
1
0
0
0
0
0
Design of $n$- and $p$-type oxide thermoelectrics in LaNiO$_3$/SrTiO$_3(001)$ superlattices exploiting interface polarity
We investigate the structural, electronic, transport, and thermoelectric properties of LaNiO$_3$/SrTiO$_3(001)$ superlattices containing either exclusively $n$- or $p$-type interfaces or coupled interfaces of opposite polarity by using density functional theory calculations with an on-site Coulomb repulsion term. The results show that significant octahedral tilts are induced in the SrTiO$_3$ part of the superlattice. Moreover, the La-Sr distances and Ni-O out-of-plane bond lengths at the interfaces exhibit a distinct variation by about $7\,\%$ with the sign of the electrostatic doping. In contrast to the much studied LaAlO$_3$/SrTiO$_3$ system, the charge mismatch at the interfaces is exclusively accommodated within the LaNiO$_3$ layers, whereas the interface polarity leads to a band offset and to the formation of an electric field within the coupled superlattice. Features of the electronic structure indicate an orbital-selective quantization of quantum well states. The potential- and confinement-induced multiband splitting results in complex cylindrical Fermi surfaces with a tendency towards nesting that depends on the interface polarity. The analysis of the thermoelectric response reveals a particularly large positive Seebeck coefficient ($135~\mu$V/K) and a high figure of merit ($0.35$) for room-temperature cross-plane transport in the $p$-type superlattice that is attributed to the participation of the SrTiO$_3$ valence band. Superlattices with either $n$- or $p$-type interfaces show cross-plane Seebeck coefficients of opposite sign and thus emerge as a platform to construct an oxide-based thermoelectric generator with structurally and electronically compatible $n$- and $p$-type oxide thermoelectrics.
0
1
0
0
0
0
Localized heat perturbation in harmonic 1D crystals. Solutions for an equation of anomalous heat conduction
In this work, exact solutions are obtained for the equation that describes anomalous heat propagation in 1D harmonic lattices. Rectangular, triangular, and sawtooth initial perturbations of the temperature field are considered. The solution for an initially rectangular temperature profile is investigated in detail. It is shown that the decay of the solution near the wavefront is proportional to $1/\sqrt{t}$, while in the center of the perturbation zone the decay is proportional to $1/t$. Thus the solution decays more slowly near the wavefront, leaving clearly visible peaks that can be detected experimentally.
0
1
0
0
0
0
Modeling Semantic Expectation: Using Script Knowledge for Referent Prediction
Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content. Prediction also affects perception and might be a key to robustness in human language processing. In this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. linguistic knowledge jointly with common-sense knowledge in the form of scripts. We find that script knowledge significantly improves model estimates of human predictions. In a second study, we test the highly controversial hypothesis that predictability influences referring expression type but do not find evidence for such an effect.
1
0
0
1
0
0
Constraints on kinematic parameters at $z\ne0$
The standard cosmographic approach consists in performing a series expansion of a cosmological observable around $z=0$ and then using the data to constrain the cosmographic (or kinematic) parameters at the present time. Such a procedure works well if applied to redshift ranges inside the $z$-series convergence radius ($z<1$), but can be problematic if we want to cover redshift intervals that fall outside the $z$-series convergence radius. This problem can be circumvented if we work with the $y$-redshift, $y=z/(1+z)$, or the scale factor, $a=1/(1+z)=1-y$, for example. In this paper, we use the scale factor $a$ as the expansion variable. We expand the luminosity distance and the Hubble parameter around an arbitrary $\tilde{a}$ and use the Supernovae Ia (SNe Ia) and Hubble parameter data to estimate $H$, $q$, $j$ and $s$ at $z\ne0$ ($\tilde{a}\neq1$). We show that the last relevant term for both expansions is the third. Since the third order expansion of $d_L(z)$ has one parameter less than the third order expansion of $H(z)$, we also consider, for completeness, a fourth order expansion of $d_L(z)$. For the third order expansions, the results obtained from both the SNe Ia and $H(z)$ data are incompatible with the $\Lambda$CDM model at the $2\sigma$ confidence level, but also incompatible with each other. When the fourth order expansion of $d_L(z)$ is taken into account, the results obtained from the SNe Ia data are compatible with the $\Lambda$CDM model at the $2\sigma$ confidence level, but still remain incompatible with the results obtained from the $H(z)$ data. These conflicting results may indicate a tension between the current SNe Ia and $H(z)$ data sets.
0
1
0
0
0
0
Joint Syntacto-Discourse Parsing and the Syntacto-Discourse Treebank
Discourse parsing has long been treated as a stand-alone problem independent from constituency or dependency parsing. Most attempts at this problem are pipelined rather than end-to-end, sophisticated, and not self-contained: they assume gold-standard text segmentations (Elementary Discourse Units) and use external parsers for syntactic features. In this paper we propose the first end-to-end discourse parser that jointly parses at both the syntax and discourse levels, as well as the first syntacto-discourse treebank, built by integrating the Penn Treebank with the RST Treebank. Built upon our recent span-based constituency parser, this joint syntacto-discourse parser requires no preprocessing whatsoever (such as segmentation or feature extraction) and achieves state-of-the-art end-to-end discourse parsing accuracy.
1
0
0
0
0
0
Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms: A Case with Bounded Regret
In this paper, we study the combinatorial multi-armed bandit problem (CMAB) with probabilistically triggered arms (PTAs). Under the assumption that the arm triggering probabilities (ATPs) are positive for all arms, we prove that a class of upper confidence bound (UCB) policies, named Combinatorial UCB with exploration rate $\kappa$ (CUCB-$\kappa$), and Combinatorial Thompson Sampling (CTS), which estimates the expected states of the arms via Thompson sampling, achieve bounded regret. In addition, we prove that CUCB-$0$ and CTS incur $O(\sqrt{T})$ gap-independent regret. These results improve the results in previous works, which show $O(\log T)$ gap-dependent and $O(\sqrt{T\log T})$ gap-independent regrets, respectively, under no assumptions on the ATPs. Then, we numerically evaluate the performance of CUCB-$\kappa$ and CTS in a real-world movie recommendation problem, where the actions correspond to recommending a set of movies, the arms correspond to the edges between the movies and the users, and the goal is to maximize the total number of users that are attracted by at least one movie. Our numerical results complement our theoretical findings on bounded regret. Apart from this problem, our results also directly apply to the online influence maximization (OIM) problem studied in numerous prior works.
1
0
0
1
0
0
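For intuition, the CUCB-$\kappa$ index from the abstract above (empirical mean plus a $\kappa$-scaled confidence bonus) can be sketched in the degenerate case where each action plays a single arm; the combinatorial and probabilistic-triggering machinery is omitted. The bonus form $\kappa\sqrt{3\ln t/(2n_i)}$, the horizon, and the Bernoulli arm means below are illustrative assumptions.

```python
import numpy as np

def cucb_kappa(means, horizon=2000, kappa=1.0, seed=0):
    """Sketch of a UCB policy with exploration rate kappa on Bernoulli
    arms; kappa = 0 reduces to the greedy policy. Returns pull counts."""
    rng = np.random.default_rng(seed)
    k = len(means)
    n = np.zeros(k)                        # pulls per arm
    mu = np.zeros(k)                       # empirical means
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1                      # initialise each arm once
        else:
            bonus = kappa * np.sqrt(3 * np.log(t) / (2 * n))
            a = int(np.argmax(mu + bonus))
        r = float(rng.random() < means[a])  # Bernoulli reward
        n[a] += 1
        mu[a] += (r - mu[a]) / n[a]         # running-mean update
    return n
```

With well-separated arms, the best arm rapidly dominates the pull counts.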
Generative Adversarial Residual Pairwise Networks for One Shot Learning
Deep neural networks achieve unprecedented performance levels over many tasks and scale well with large quantities of data, but performance in the low-data regime and on tasks like one shot learning still lags behind. While recent work suggests many hypotheses, from better optimization to more complicated network structures, in this work we hypothesize that having a learnable and more expressive similarity objective is an essential missing component. Towards overcoming this, we propose a network design inspired by deep residual networks that allows the efficient computation of this more expressive pairwise similarity objective. Further, we argue that regularization is key in learning with small amounts of data, and propose an additional generator network based on Generative Adversarial Networks, where the discriminator is our residual pairwise network. This provides a strong regularizer by leveraging the generated data samples. The proposed model can generate plausible variations of exemplars over unseen classes and outperforms strong discriminative baselines on few shot classification tasks. Notably, our residual pairwise network design outperforms the previous state-of-the-art on the challenging mini-Imagenet dataset for one shot learning, achieving over 55% accuracy on the 5-way classification task over unseen classes.
1
0
0
0
0
0
Evaporation of dilute droplets in a turbulent jet: clustering and entrainment effects
Droplet evaporation in turbulent sprays involves unsteady, multiscale and multiphase processes which make both its comprehension and our modelling capabilities still limited. The present work aims to investigate droplet vaporization dynamics within a spatially developing turbulent jet in dilute, non-reacting conditions. We address the problem using a Direct Numerical Simulation of a jet laden with acetone droplets, based on a hybrid Eulerian/Lagrangian approach and the point-droplet approximation. A detailed statistical analysis of both phases is presented. In particular, we show how crucial the preferential sampling of the vapour phase induced by the inhomogeneous localization of the droplets through the flow is. The preferential segregation of droplets develops suddenly downstream of the inlet, both within the turbulent core and in the mixing layer. Two distinct mechanisms drive these phenomena: inertial small-scale clustering in the jet core, and the intermittent dynamics of droplets across the turbulent/non-turbulent interface in the mixing layer, where dry air entrainment occurs. These phenomena strongly affect the overall vaporization process and lead to a spectacular widening of the droplet size and vaporization rate distributions in the downstream evolution of the turbulent spray.
0
1
0
0
0
0
A priori Hölder and Lipschitz regularity for generalized $p$-harmonious functions in metric measure spaces
Let $(\mathbb{X} , d, \mu )$ be a proper metric measure space and let $\Omega \subset \mathbb{X}$ be a bounded domain. For each $x\in \Omega$, we choose a radius $0< \varrho (x) \leq \mathrm{dist}(x, \partial \Omega ) $ and let $B_x$ be the closed ball centered at $x$ with radius $\varrho (x)$. If $\alpha \in \mathbb{R}$, consider the following operator in $C( \overline{\Omega} )$, $$ \mathcal{T}_{\alpha}u(x)=\frac{\alpha}{2}\left(\sup_{B_x } u+\inf_{B_x } u\right)+(1-\alpha)\,\frac{1}{\mu(B_x)}\int_{B_x}\hspace{-0.1cm} u\ d\mu. $$ Under appropriate assumptions on $\alpha$, $\mathbb{X}$, $\mu$ and the radius function $\varrho$ we show that solutions $u\in C( \overline{\Omega} )$ of the functional equation $\mathcal{T}_{\alpha}u = u$ satisfy a local Hölder or Lipschitz condition in $\Omega$. The motivation comes from the so called $p$-harmonious functions in euclidean domains.
0
0
1
0
0
0
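The operator $\mathcal{T}_{\alpha}$ in the abstract above can be iterated numerically to a fixed point. The sketch below is a hypothetical 1D discretization on $\Omega=(0,1)$ with $\varrho(x)=\mathrm{dist}(x,\partial\Omega)$ and Dirichlet data $u(0)=0$, $u(1)=1$; the grid size, $\alpha$, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def p_harmonious_1d(n=40, alpha=0.5, iters=1500):
    """Fixed-point iteration for T_alpha u = u on a 1D grid:
    T_alpha u(x) = (alpha/2)(sup_B u + inf_B u) + (1-alpha) mean_B u,
    over closed balls B = [x - rho(x), x + rho(x)]."""
    x = np.linspace(0.0, 1.0, n + 1)
    rho = np.minimum(x, 1.0 - x)           # distance to the boundary
    u = x.copy()                           # monotone initial guess
    for _ in range(iters):
        new = u.copy()
        for i in range(1, n):              # boundary values stay fixed
            ball = u[np.abs(x - x[i]) <= rho[i] + 1e-12]
            new[i] = (alpha / 2) * (ball.max() + ball.min()) \
                     + (1 - alpha) * ball.mean()
        u = new
    return x, u
```

Since for $0\le\alpha\le1$ the operator is a convex combination of ball statistics, iterates stay within the boundary values and, for this geometry, remain monotone.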
Simultaneous tracking of spin angle and amplitude beyond classical limits
We show how simultaneous, back-action evading tracking of non-commuting observables can be achieved in a widely-used sensing technology, atomic interferometry. Using high-dynamic-range dynamically-decoupled quantum non-demolition (QND) measurements on a precessing atomic spin ensemble, we track the collective spin angle and amplitude with negligible effects from back action, giving steady-state tracking sensitivity 2.9 dB beyond the standard quantum limit and 7.0 dB beyond Poisson statistics.
0
1
0
0
0
0
Charge exchange in galaxy clusters
Though theoretically expected, charge exchange emission from galaxy clusters has not yet been confidently detected. Accumulating hints were reported recently, including a rather marginal detection with the Hitomi data of the Perseus cluster. As suggested in Gu et al. (2015), a detection of charge exchange line emission from galaxy clusters would not only impact the interpretation of the newly-discovered 3.5 keV line, but also open up a new research topic on the interaction between hot and cold matter in clusters. We aim to perform the most systematic search to date for the O VIII charge exchange line in cluster spectra using the RGS on board XMM. We introduce a sample of 21 clusters observed with the RGS. The dominating thermal plasma emission is modeled and subtracted with a two-temperature CIE component, and the residuals are stacked for the line search. The systematic uncertainties in the fits are quantified by refitting the spectra with a varying continuum and line broadening. In the residual stacking, we do find a hint of a line-like feature at 14.82 Å, the characteristic wavelength expected for oxygen charge exchange. This feature has a marginal significance of 2.8 sigma, and an average equivalent width of 2.5E-4 keV. We further demonstrate that the putative feature can hardly be affected by the systematic errors from continuum modelling and instrumental effects, or by the atomic uncertainties of the neighbouring thermal lines. Assuming a realistic temperature and abundance pattern, the physical model implied by the possible oxygen line agrees well with the theoretical model proposed previously to explain the reported 3.5 keV line. If the charge exchange source indeed exists, we would expect that the oxygen abundance is potentially overestimated by 8-22% in previous X-ray measurements which assumed pure thermal lines.
0
1
0
0
0
0
A Grazing Gaussian Beam
We consider Friedlander's wave equation in two space dimensions in the half-space x > 0 with the boundary condition u(x,y,t)=0 when x=0. For a Gaussian beam w(x,y,t;k) concentrated on a ray path that is tangent to x=0 at (x,y,t)=(0,0,0) we calculate the "reflected" wave z(x,y,t;k) in t > 0 such that w(x,y,t;k)+z(x,y,t;k) satisfies Friedlander's wave equation and vanishes on x=0. These computations are done to leading order in k on the ray path. The interaction of beams with boundaries has been studied for non-tangential beams and for beams gliding along the boundary. We find that the amplitude of the solution on the central ray for large k after leaving the boundary is very nearly one half of that of the incoming beam.
0
0
1
0
0
0
Reconfigurable Manipulator Simulation for Robotics and Multimodal Machine Learning Application: Aaria
This paper presents a systematic way to generate Aaria, a simulated model of serial manipulators for the purpose of kinematic or dynamic analysis, supporting a vast variety of structures, based on Simulink SimMechanics. The proposed model can receive configuration parameters, for instance in accordance with the modified Denavit-Hartenberg convention, or trajectories for its base or joints, for structures with 1 to 6 degrees of freedom (DOF). The manipulator is equipped with artificial joint sensors as well as simulated Inertial Measurement Units (IMUs) on each link. The simulation output can be positions, velocities and torques in the joint space, or IMU outputs: angular velocity, linear acceleration, and tool coordinates with respect to the inertial frame. This simulation model is a source of virtual multimodal sensory datasets for the automation of robot modeling and control, designed for machine learning and deep learning approaches based on big data.
1
0
0
0
0
0
Group actions and a multi-parameter Falconer distance problem
In this paper we study the following multi-parameter variant of the celebrated Falconer distance problem. Given ${\textbf{d}}=(d_1,d_2, \dots, d_{\ell})\in \mathbb{N}^{\ell}$ with $d_1+d_2+\dots+d_{\ell}=d$ and $E \subseteq \mathbb{R}^d$, we define $$ \Delta_{\textbf{d}}(E) = \left\{ \left(|x^{(1)}-y^{(1)}|,\ldots,|x^{(\ell)}-y^{(\ell)}|\right) : x,y \in E \right\} \subseteq \mathbb{R}^{\ell}, $$ where for $x\in \mathbb{R}^d$ we write $x=\left( x^{(1)},\dots, x^{(\ell)} \right)$ with $x^{(i)} \in \mathbb{R}^{d_i}$. We ask how large does the Hausdorff dimension of $E$ need to be to ensure that the $\ell$-dimensional Lebesgue measure of $\Delta_{\textbf{d}}(E)$ is positive? We prove that if $2 \leq d_i$ for $1 \leq i \leq \ell$, then the conclusion holds provided $$ \dim(E)>d-\frac{\min d_i}{2}+\frac{1}{3}.$$ We also note that, by previous constructions, the conclusion does not in general hold if $$\dim(E)<d-\frac{\min d_i}{2}.$$ A group action derivation of a suitable Mattila integral plays an important role in the argument.
0
0
1
0
0
0
Interacting fermions on the half-line: boundary counterterms and boundary corrections
Recent years witnessed an extensive development of the theory of the critical point in two-dimensional statistical systems, which made it possible to prove {\it existence} and {\it conformal invariance} of the {\it scaling limit} for the two-dimensional Ising model and for dimers on planar graphs. Unfortunately, we are still far from a full understanding of the subject: so far, exact solutions at the lattice level, in particular determinant structure and exact discrete holomorphicity, play a crucial role in the rigorous control of the scaling limit. The few results about non-integrable (interacting) systems at criticality are still unable to deal with {\it finite domains} and {\it boundary corrections}, which are of course crucial for obtaining information about conformal covariance. In this thesis, we address the question of adapting constructive Renormalization Group methods to non-integrable critical systems in $d=1+1$ dimensions. We study a system of interacting spinless fermions on a one-dimensional semi-infinite lattice, which can be considered a prototype of the Luttinger universality class with Dirichlet boundary conditions. We develop a convergent renormalized expansion for the thermodynamic observables in the presence of a quadratic {\it boundary defect} counterterm, polynomially localized at the boundary. In particular, we get explicit bounds on the boundary corrections to the specific ground state energy.
0
1
1
0
0
0
Online and Distributed Robust Regressions under Adversarial Data Corruption
In today's era of big data, robust least-squares regression becomes a more challenging problem when considering adversarial corruption along with the explosive growth of datasets. Traditional robust methods can handle the noise but suffer from several challenges when applied to huge datasets, including 1) the computational infeasibility of handling an entire dataset at once, 2) the existence of heterogeneously distributed corruption, and 3) the difficulty of corruption estimation when the data cannot be entirely loaded. This paper proposes online and distributed robust regression approaches, both of which can concurrently address all the above challenges. Specifically, the distributed algorithm optimizes the regression coefficients of each data block via heuristic hard thresholding and combines all the estimates in a distributed robust consolidation. Furthermore, an online version of the distributed algorithm is proposed to incrementally update the existing estimates with new incoming data. We also prove that our algorithms benefit from strong robustness guarantees in terms of regression coefficient recovery, with a constant upper bound on the error of state-of-the-art batch methods. Extensive experiments on synthetic and real datasets demonstrate that our approaches are superior to existing methods in effectiveness, with competitive efficiency.
1
0
0
1
0
0
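The heuristic hard-thresholding idea at the core of the abstract above can be illustrated on a single batch: alternately fit least squares on a presumed-clean subset and re-select the points of smallest residual. The distributed consolidation and online updates are omitted; the data, corruption budget, and iteration count below are made-up illustrations, not the paper's algorithm in full.

```python
import numpy as np

def robust_regression_ht(X, y, n_corrupt, iters=20):
    """Sketch of robust least squares via iterative hard thresholding:
    repeatedly fit OLS on the points with the smallest residuals,
    discarding a known budget of n_corrupt suspected outliers."""
    n = len(y)
    keep = np.arange(n)                        # start from all points
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        r = np.abs(y - X @ w)                  # residuals on all points
        keep = np.argsort(r)[: n - n_corrupt]  # keep smallest residuals
    return w
```

On synthetic data with a small fraction of grossly corrupted responses, a couple of iterations typically suffice to isolate the clean set and recover the coefficients.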
Global well-posedness for the Schrödinger map problem with small Besov norm
In this paper we prove a global result for the Schrödinger map problem with initial data with small Besov norm at critical regularity.
0
0
1
0
0
0
Stable explicit schemes for simulation of nonlinear moisture transfer in porous materials
Implicit schemes have been extensively used in building physics to compute the solution of moisture diffusion problems in porous materials, thanks to their improved stability conditions. Nevertheless, these schemes require costly sub-iterations when treating non-linear problems. To overcome this disadvantage, this paper explores the use of improved explicit schemes, such as the Dufort-Frankel, Crank-Nicolson and hyperbolisation approaches. A first case study is considered under the hypothesis of linear transfer. The Dufort-Frankel, Crank-Nicolson and hyperbolisation schemes are compared to the classical Euler explicit scheme and to a reference solution. Results show that the hyperbolisation scheme has a stability condition higher than the standard Courant-Friedrichs-Lewy (CFL) condition. The error of this scheme depends on the parameter \tau representing the magnitude of the hyperbolicity added to the equation. The Dufort-Frankel scheme has the advantage of being unconditionally stable and is preferable for non-linear transfer, which is the second case study; there, results show that the error is proportional to O(\Delta t). A modified Crank-Nicolson scheme is also proposed in order to avoid sub-iterations when treating the non-linearities at each time step. The main advantages of the Dufort-Frankel scheme are that it (i) is twice as fast as the Crank-Nicolson approach; (ii) computes the solution explicitly at each time step; (iii) is unconditionally stable; and (iv) is easier to parallelise on high-performance computer systems. Although the approach is unconditionally stable, the choice of the time discretisation $\Delta t$ remains an important issue for accurately representing the physical phenomena.
1
1
0
0
0
0
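As a minimal illustration of the Dufort-Frankel scheme discussed above, the sketch below applies it to the linear 1D diffusion equation $u_t = D\,u_{xx}$ with homogeneous Dirichlet boundaries, rather than the paper's non-linear moisture-transfer model. The three-level update is $(1+2\lambda)u_i^{n+1} = (1-2\lambda)u_i^{n-1} + 2\lambda(u_{i+1}^n + u_{i-1}^n)$ with $\lambda = D\,\Delta t/\Delta x^2$; running at $\lambda = 1$, above the forward-Euler CFL limit of $1/2$, demonstrates the unconditional stability. The grid and time step are illustrative choices.

```python
import numpy as np

def dufort_frankel(u0, D, dx, dt, steps):
    """Dufort-Frankel explicit three-level scheme for u_t = D u_xx,
    with fixed (Dirichlet) boundary values taken from u0."""
    lam = D * dt / dx ** 2
    u_prev = u0.copy()
    u = u0.copy()
    # bootstrap the second time level with one forward-Euler step
    u[1:-1] = u0[1:-1] + lam * (u0[2:] - 2 * u0[1:-1] + u0[:-2])
    for _ in range(steps - 1):
        u_next = u.copy()
        u_next[1:-1] = ((1 - 2 * lam) * u_prev[1:-1]
                        + 2 * lam * (u[2:] + u[:-2])) / (1 + 2 * lam)
        u_prev, u = u, u_next
    return u
```

For the initial profile $\sin(\pi x)$ on $[0,1]$ the exact solution decays as $e^{-D\pi^2 t}\sin(\pi x)$, which the scheme reproduces closely even at $\lambda = 1$.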
Congruence lattices of finite diagram monoids
We give a complete description of the congruence lattices of the following finite diagram monoids: the partition monoid, the planar partition monoid, the Brauer monoid, the Jones monoid (also known as the Temperley-Lieb monoid), the Motzkin monoid, and the partial Brauer monoid. All the congruences under discussion arise as special instances of a new construction, involving an ideal I, a retraction I->M onto the minimal ideal, a congruence on M, and a normal subgroup of a maximal subgroup outside I.
0
0
1
0
0
0
Weighted blowup correspondence of orbifold Gromov--Witten invariants and applications
Let $\sf X$ be a symplectic orbifold groupoid with $\sf S$ being a symplectic sub-orbifold groupoid, and $\sf X_{\mathfrak a}$ be the weight-$\mathfrak a$ blowup of $\sf X$ along $\sf S$ with $\sf Z$ being the corresponding exceptional divisor. We show that there is a weighted blowup correspondence between certain absolute orbifold Gromov--Witten invariants of $\sf X$ relative to $\sf S$ and certain relative orbifold Gromov--Witten invariants of the pair $(\sf X_{\mathfrak a}|\sf Z)$. As an application, we prove that the symplectic uniruledness of symplectic orbifold groupoids is a weighted blowup invariant.
0
0
1
0
0
0
Constraints on Quenching of $z\lesssim2$ Massive Galaxies from the Evolution of the average Sizes of Star-Forming and Quenched Populations in COSMOS
We use $>$9400 $\log(m/M_{\odot})>10$ quiescent and star-forming galaxies at $z\lesssim2$ in COSMOS/UltraVISTA to study the average size evolution of these systems, with focus on the rare, ultra-massive population at $\log(m/M_{\odot})>11.4$. The large 2-square degree survey area delivers a sample of $\sim400$ such ultra-massive systems. Accurate sizes are derived using a calibration based on high-resolution images from the Hubble Space Telescope. We find that, at these very high masses, the size evolution of star-forming and quiescent galaxies is almost indistinguishable in terms of normalization and power-law slope. We use this result to investigate possible pathways of quenching massive $m>M^*$ galaxies at $z<2$. We consistently model the size evolution of quiescent galaxies from the star-forming population by assuming different simple models for the suppression of star-formation. These models include instantaneous and delayed quenching without altering the structure of galaxies, and a central starburst followed by compaction. We find that instantaneous quenching reproduces well the observed mass-size relation of massive galaxies at $z>1$. Our starburst$+$compaction model followed by individual growth of the galaxies by minor mergers is preferred over other models without structural change for $\log(m/M_{\odot})>11.0$ galaxies at $z>0.5$. None of our models is able to match the observations at $m>M^*$ and $z<1$ without a significant contribution of post-quenching growth of individual galaxies via mergers. We conclude that quenching is a fast process in galaxies with $ m \ge 10^{11} M_\odot$, and that major mergers likely play a major role in the final steps of their evolution.
0
1
0
0
0
0
Improving LBP and its variants using anisotropic diffusion
The main purpose of this paper is to propose a new preprocessing step in order to improve local feature descriptors and texture classification. Preprocessing is implemented by using transformations which help highlight salient features that play a significant role in texture recognition. We evaluate and compare four different competing methods: three different anisotropic diffusion methods, including the classical anisotropic Perona-Malik diffusion and two subsequent regularizations of it, and the application of a Gaussian kernel, which is the classical multiscale approach in texture analysis. The combination of the transformed images and the original ones is analyzed. The results show that the use of the preprocessing step does lead to improved texture recognition.
1
0
0
0
0
0
Dimension-free PAC-Bayesian bounds for matrices, vectors, and linear least squares regression
This paper is focused on dimension-free PAC-Bayesian bounds, under weak polynomial moment assumptions, allowing for heavy tailed sample distributions. It covers the estimation of the mean of a vector or a matrix, with applications to least squares linear regression. Special efforts are devoted to the estimation of Gram matrices, due to their prominent role in high-dimension data analysis.
0
0
1
1
0
0
Configuration Spaces and Robot Motion Planning Algorithms
The paper surveys topological problems relevant to the motion planning problem of robotics and includes some new results and constructions. First we analyse the notion of topological complexity of configuration spaces, which is responsible for discontinuities in algorithms for robot navigation. Then we present explicit motion planning algorithms for coordinated collision-free control of many particles moving in Euclidean spaces or on graphs. These algorithms are optimal in the sense that they have the minimal number of regions of continuity. Moreover, we describe in full detail the topology of configuration spaces of two particles on a tree and use it to construct some top-dimensional cohomology classes in configuration spaces of n particles on a tree.
0
0
1
0
0
0
Effect Summaries for Thread-Modular Analysis
We propose a novel guess-and-check principle to increase the efficiency of thread-modular verification of lock-free data structures. We build on a heuristic that guesses candidates for stateless effect summaries of programs by searching the code for instances of a copy-and-check programming idiom common in lock-free data structures. These candidate summaries are used to compute the interference among threads in linear time. Since a candidate summary need not be a sound effect summary, we show how to fully automatically check whether the precision of candidate summaries is sufficient. We can thus perform sound verification despite relying on an unsound heuristic. We have implemented our approach and found it up to two orders of magnitude faster than existing ones.
1
0
0
0
0
0
Theory of Correlated Pairs of Electrons Oscillating in Resonant Quantum States to Reach the Critical Temperature in a Metal
The formation of Correlated Electron Pairs Oscillating around the Fermi level in Resonant Quantum States (CEPO-RQS), when a metal is cooled to its critical temperature T=Tc, is studied. The necessary conditions for the existence of CEPO-RQS are analyzed. The participation of electron-electron interaction screened by an electron dielectric constant of the form proposed by Thomas-Fermi is considered, and a physical meaning for the electron-phonon-electron interaction in the formation of the CEPO-RQS is given. The internal state of the CEPO-RQS is characterized by taking into account the internal Hamiltonian, obtaining a general equation that represents its binding energy and depends only on temperature and critical temperature. A parameter is also defined that contains the properties qualitatively characterizing the tendency of a material to form the CEPO-RQS.
0
1
0
0
0
0
Feldman-Katok pseudometric and the GIKN construction of nonhyperbolic ergodic measures
The GIKN construction was introduced by Gorodetski, Ilyashenko, Kleptsyn, and Nalsky in [Functional Analysis and its Applications, 39 (2005), 21--30]. It gives a nonhyperbolic ergodic measure which is a weak$^*$ limit of a special sequence of measures supported on periodic orbits. This method was later adapted by numerous authors and provided examples of nonhyperbolic invariant measures in various settings. We prove that the result of the GIKN construction is always a loosely Kronecker measure in the sense of Ornstein, Rudolph, and Weiss (equivalently, a standard measure in the sense of Katok; another name is a loosely Bernoulli measure with zero entropy). For the proof we introduce and study the Feldman-Katok pseudometric $\bar{F_{K}}$. The pseudodistance $\bar{F_{K}}$ is a topological counterpart of the $\bar f$ metric for finite-state stationary stochastic processes introduced by Feldman and, independently, by Katok, and later developed by Ornstein, Rudolph, and Weiss. We show that every measure given by the GIKN construction is the $\bar{F_{K}}$-limit of a sequence of periodic measures. On the other hand, we prove that a measure which is the $\bar{F_{K}}$-limit of a sequence of ergodic measures is ergodic and its entropy is less than or equal to the lower limit of the entropies of the measures in the sequence. Furthermore, we demonstrate that a $\bar{F_{K}}$-Cauchy sequence of periodic measures tends in the weak$^*$ topology either to a periodic measure or to a loosely Kronecker measure.
0
0
1
0
0
0
Hamiltonian Monte-Carlo for Orthogonal Matrices
We consider the problem of sampling from posterior distributions for Bayesian models where some parameters are restricted to be orthogonal matrices. Such matrices are sometimes used in neural network models for reasons of regularization and stabilization of training procedures, and can also parameterize matrices of bounded rank, positive-definite matrices and others. In \citet{byrne2013geodesic} the authors have already considered sampling from distributions over manifolds using exact geodesic flows in a scheme similar to Hamiltonian Monte Carlo (HMC). We propose a new sampling scheme for a set of orthogonal matrices that is based on the same approach, uses ideas of Riemannian optimization and does not require exact computation of geodesic flows. The method is theoretically justified by a proof of symplecticity for the proposed iteration. In experiments we show that the new scheme is comparable to or faster in time per iteration, and more sample-efficient, compared to conventional HMC with explicit orthogonal parameterization and Geodesic Monte-Carlo. We also provide promising results of Bayesian ensembling for orthogonal neural networks and low-rank matrix factorization.
1
0
0
1
0
0
Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models
The Gibbs sampler is a particularly popular Markov chain used for learning and inference problems in Graphical Models (GMs). These tasks are computationally intractable in general, and the Gibbs sampler often suffers from slow mixing. In this paper, we study the Swendsen-Wang dynamics which is a more sophisticated Markov chain designed to overcome bottlenecks that impede the Gibbs sampler. We prove O(\log n) mixing time for attractive binary pairwise GMs (i.e., ferromagnetic Ising models) on stochastic partitioned graphs having n vertices, under some mild conditions, including low temperature regions where the Gibbs sampler provably mixes exponentially slow. Our experiments also confirm that the Swendsen-Wang sampler significantly outperforms the Gibbs sampler when they are used for learning parameters of attractive GMs.
1
0
0
1
0
0
Towards effective research recommender systems for repositories
In this paper, we argue why and how the integration of recommender systems for research can enhance the functionality and user experience in repositories. We present the latest technical innovations in the CORE Recommender, which provides research article recommendations across the global network of repositories and journals. The CORE Recommender has been recently redeveloped and released into production in the CORE system and has also been deployed in several third-party repositories. We explain the design choices of this unique system and the evaluation processes we have in place to continue raising the quality of the provided recommendations. By drawing on our experience, we discuss the main challenges in offering a state-of-the-art recommender solution for repositories. We highlight two of the key limitations of the current repository infrastructure with respect to developing research recommender systems: 1) the lack of a standardised protocol and capabilities for exposing anonymised user-interaction logs, which represent critically important input data for recommender systems based on collaborative filtering and 2) the lack of a voluntary global sign-on capability in repositories, which would enable the creation of personalised recommendation and notification solutions based on past user interactions.
1
0
0
0
0
0
Existence and a priori estimates of solutions for quasilinear singular elliptic systems with variable exponents
This article sets forth results on the existence, a priori estimates and boundedness of positive solutions of singular quasilinear systems of elliptic equations involving variable exponents. The approach is based on Schauder's fixed point theorem. A Moser iteration procedure is also developed for singular cooperative systems involving variable exponents, establishing a priori estimates and boundedness of solutions.
0
0
1
0
0
0
On the sectional curvature along central configurations
In this paper we characterize planar central configurations in terms of a sectional curvature value of the Jacobi-Maupertuis metric. This characterization works for the $N$-body problem with general masses and any $1/r^{\alpha}$ potential with $\alpha> 0$. We also observe dynamical consequences of these curvature values for relative equilibrium solutions. These curvature methods work well for strong forces ($\alpha \ge 2$).
0
0
1
0
0
0
Interferometric confirmation of "water fountain" candidates
Water fountain stars (WFs) are evolved objects with water masers tracing high-velocity jets (up to several hundreds of km s$^{-1}$). They could represent one of the first manifestations of collimated mass-loss in evolved objects and thus be a key to understanding the shaping mechanisms of planetary nebulae. Only 13 objects had previously been confirmed as WFs with interferometric observations. We present new observations with the Australia Telescope Compact Array and archival observations with the Very Large Array of four objects that are considered to be WF candidates, mainly based on single-dish observations. We confirm IRAS 17291-2147 and IRAS 18596+0315 (OH 37.1-0.8) as bona fide members of the WF class, with high-velocity water maser emission consistent with tracing bipolar jets. We argue that IRAS 15544-5332 has been wrongly considered as a WF in previous works, since we see no evidence in our data nor in the literature that this object harbours high-velocity water maser emission. In the case of IRAS 19067+0811, we did not detect any water maser emission, so its confirmation as a WF is still pending. With the results of this work, there are 15 objects that can be considered confirmed WFs. We speculate that there is no significant physical difference between WFs and obscured post-AGB stars in general. The absence of high-velocity water maser emission in some obscured post-AGB stars could be attributed to a variability or orientation effect.
0
1
0
0
0
0