text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, length 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---|
null | {
"abstract": " We consider spatially extended systems of interacting nonlinear Hawkes\nprocesses modeling large systems of neurons placed in R^d and study the\nassociated mean field limits. As the total number of neurons tends to infinity,\nwe prove that the evolution of a typical neuron, attached to a given spatial\nposition, can be described by a nonlinear limit differential equation driven by\na Poisson random measure. The limit process is described by a neural field\nequation. As a consequence, we provide a rigorous derivation of the neural\nfield equation based on a thorough mean field analysis.\n",
"title": "Mean field limits for nonlinear spatially extended Hawkes processes with exponential memory kernels"
} | null | null | null | null | true | null | 3001 | null | Default | null | null |
null | {
"abstract": " The Global Historical Climatology Network-Daily database contains, among\nother variables, daily maximum and minimum temperatures from weather stations\naround the globe. It is long known that climatological summary statistics based\non daily temperature minima and maxima will not be accurate, if the bias due to\nthe time at which the observations were collected is not accounted for. Despite\nsome previous work, to our knowledge, there does not exist a satisfactory\nsolution to this important problem. In this paper, we carefully detail the\nproblem and develop a novel approach to address it. Our idea is to impute the\nhourly temperatures at the location of the measurements by borrowing\ninformation from the nearby stations that record hourly temperatures, which\nthen can be used to create accurate summaries of temperature extremes. The key\ndifficulty is that these imputations of the temperature curves must satisfy the\nconstraint of falling between the observed daily minima and maxima, and\nattaining those values at least once in a twenty-four hour period. We develop a\nspatiotemporal Gaussian process model for imputing the hourly measurements from\nthe nearby stations, and then develop a novel and easy to implement Markov\nChain Monte Carlo technique to sample from the posterior distribution\nsatisfying the above constraints. We validate our imputation model using hourly\ntemperature data from four meteorological stations in Iowa, of which one is\nhidden and the data replaced with daily minima and maxima, and show that the\nimputed temperatures recover the hidden temperatures well. We also demonstrate\nthat our model can exploit information contained in the data to infer the time\nof daily measurements.\n",
"title": "Bias correction in daily maximum and minimum temperature measurements through Gaussian process modeling"
} | null | null | [
"Statistics"
]
| null | true | null | 3002 | null | Validated | null | null |
null | {
"abstract": " Understanding the mechanism of the heterojunction is an important step\ntowards controllable and tunable interfaces for photocatalytic and photovoltaic\nbased devices. To this aim, we propose a thorough study of a double\nheterostructure system consisting of two semiconductors with large band gaps,\nnamely, wurtzite ZnO and anatase TiO2. We demonstrate via first-principles\ncalculations two stable configurations of ZnO/TiO2 interfaces. Our structural\nanalysis provides key information on the nature of the complex interface and\nlattice distortions occurring when combining these materials. The study of the\nelectronic properties of the sandwich nanostructure TiO2/ZnO/TiO2 reveals that\nthe conduction band arises mainly from Ti 3d orbitals, while the valence band is\nmaintained by the O 2p states of ZnO, and that the trapped states within the gap\nregion, frequent in the single heterostructure, are substantially reduced in the\ndouble interface system. Moreover, our work explains the origin of certain\noptical transitions observed in experimental studies. Unexpectedly, as a\nconsequence of different bond distortions, the results on the band alignments\nshow electron accumulation in the left shell of TiO2 rather than the right one.\nSuch behavior provides more choice for the sensitization and functionalization\nof TiO2 surfaces.\n",
"title": "Mechanism of the double heterostructure TiO2/ZnO/TiO2 for photocatalytic and photovoltaic applications: A theoretical study"
} | null | null | null | null | true | null | 3003 | null | Default | null | null |
null | {
"abstract": " About two decades ago, Tsfasman and Boguslavsky conjectured a formula for the\nmaximum number of common zeros that $r$ linearly independent homogeneous\npolynomials of degree $d$ in $m+1$ variables with coefficients in a finite\nfield with $q$ elements can have in the corresponding $m$-dimensional\nprojective space. Recently, it has been shown by Datta and Ghorpade that this\nconjecture is valid if $r$ is at most $m+1$ and can be invalid otherwise.\nMoreover a new conjecture was proposed for many values of $r$ beyond $m+1$. In\nthis paper, we prove that this new conjecture holds true for several values of\n$r$. In particular, this settles the new conjecture completely when $d=3$. Our\nresult also includes the positive result of Datta and Ghorpade as a special\ncase. Further, we determine the maximum number of zeros in certain cases not\ncovered by the earlier conjectures and results, namely, the case of $d=q-1$ and\nof $d=q$. All these results are directly applicable to the determination of the\nmaximum number of points on sections of Veronese varieties by linear\nsubvarieties of a fixed dimension, and also the determination of generalized\nHamming weights of projective Reed-Muller codes.\n",
"title": "Maximum Number of Common Zeros of Homogeneous Polynomials over Finite Fields"
} | null | null | null | null | true | null | 3004 | null | Default | null | null |
null | {
"abstract": " When a vortex refracts surface waves, the momentum flux carried by the waves\nchanges direction and the waves induce a reaction force on the vortex. We study\nexperimentally the resulting vortex distortion. Incoming surface gravity waves\nimpinge on a steady vortex of velocity $U_0$ driven magneto-hydrodynamically at\nthe bottom of a fluid layer. The waves induce a shift of the vortex center in\nthe direction transverse to wave propagation, together with a decrease in\nsurface vorticity. We interpret these two phenomena in the framework introduced\nby Craik and Leibovich (1976): we identify the dimensionless Stokes drift\n$S=U_s/U_0$ as the relevant control parameter, $U_s$ being the Stokes drift\nvelocity of the waves. We propose a simple vortex line model which indicates\nthat the shift of the vortex center originates from a balance between vorticity\nadvection by the Stokes drift and self-advection of the vortex. The decrease in\nsurface vorticity is interpreted as a consequence of vorticity expulsion by the\nfast Stokes drift, which confines it at depth. This purely hydrodynamic process\nis analogous to the magnetohydrodynamic expulsion of magnetic field by a\nrapidly moving conductor through the electromagnetic skin effect. We study\nvorticity expulsion in the limit of fast Stokes drift and deduce that the\nsurface vorticity decreases as $1/S$, a prediction which is compatible with the\nexperimental data. Such wave-induced vortex distortions have important\nconsequences for the nonlinear regime of wave refraction: the refraction angle\nrapidly decreases with wave intensity.\n",
"title": "Wave-induced vortex recoil and nonlinear refraction"
} | null | null | null | null | true | null | 3005 | null | Default | null | null |
null | {
"abstract": " Microtubules (MTs) are filamentous protein polymers roughly 25 nm in\ndiameter. Ubiquitous in eukaryotes, MTs are well known for their structural\nrole but also act as actuators, sensors, and, in association with other\nproteins, checkpoint regulators. The thin diameter and transparency of\nmicrotubules classifies them as sub-resolution phase objects, with concomitant\nimaging challenges. Label-free methods for imaging microtubules are preferred\nwhen long exposure times would lead to phototoxicity in fluorescence, or for\nretaining more native structure and activity.\nThis method approaches quantitative phase imaging of MTs as an inverse\nproblem based on the Transport of Intensity Equation. In a co-registered\ncomparison of MT signal-to-background-noise ratio, TIE Microscopy of MTs shows\nan improvement of more than three times that of video-enhanced bright field\nimaging.\nThis method avoids the anisotropy caused by prisms used in differential\ninterference contrast and takes only two defocused images as input. Unlike\nother label-free techniques for imaging microtubules, in TIE microscopy\nbackground removal is a natural consequence of taking the difference of two\ndefocused images, so the need to frequently update a background image is\neliminated.\n",
"title": "Transport of Intensity Equation Microscopy for Dynamic Microtubules"
} | null | null | null | null | true | null | 3006 | null | Default | null | null |
null | {
"abstract": " The intrinsic stacking fault energy (ISFE) $\\gamma$ is a material parameter\nfundamental to the discussion of plastic deformation mechanisms in metals.\nHere, we scrutinize the temperature dependence of the ISFE of Au through\naccurate first-principles derived Helmholtz free energies employing both the\nsuper cell approach and the axial Ising model (AIM). A significant decrease of\nthe ISFE with temperature, $-(36$-$39)$\\,\\% from 0 to 890\\,K depending on the\ntreatment of thermal expansion, is revealed, which matches the estimate based\non the experimental temperature coefficient $d \\gamma / d T $ closely. We make\nevident that this decrease predominantly originates from the excess vibrational\nentropy at the stacking fault layer, although the contribution arising from the\nstatic lattice expansion compensates it by approximately 60\\,\\%. Electronic\nexcitations are found to be of minor importance for the ISFE change with\ntemperature. We show that the Debye model in combination with the AIM captures\nthe correct sign but significantly underestimates the magnitude of the\nvibrational contribution to $\\gamma(T)$. The hexagonal close-packed (hcp) and\ndouble hcp structures are established as metastable phases of Au. Our results\ndemonstrate that quantitative agreement with experiments can be obtained if all\nrelevant temperature-induced excitations are considered in first-principles\nmodeling and that the temperature dependence of the ISFE is substantial enough\nto be taken into account in crystal plasticity modeling.\n",
"title": "First-principles prediction of the stacking fault energy of gold at finite temperature"
} | null | null | null | null | true | null | 3007 | null | Default | null | null |
null | {
"abstract": " In this paper we address the problem of electing a committee among a set of\n$m$ candidates and on the basis of the preferences of a set of $n$ voters. We\nconsider the approval voting method in which each voter can approve as many\ncandidates as she/he likes by expressing a preference profile (boolean\n$m$-vector). In order to elect a committee, a voting rule must be established\nto `transform' the $n$ voters' profiles into a winning committee. The problem\nis widely studied in voting theory; for a variety of voting rules the problem\nwas shown to be computationally difficult and approximation algorithms and\nheuristic techniques were proposed in the literature. In this paper we follow\nan Ordered Weighted Averaging approach and study the $k$-sum approval voting\n(optimization) problem in the general case $1 \\leq k <n$. For this problem we\nprovide different mathematical programming formulations that allow us to solve\nit in an exact solution framework. We provide computational results showing\nthat our approach is efficient for medium-size test problems ($n$ up to 200,\n$m$ up to 60) since in all tested cases it was able to find the exact optimal\nsolution in very short computational times.\n",
"title": "Mathematical Programming formulations for the efficient solution of the $k$-sum approval voting problem"
} | null | null | null | null | true | null | 3008 | null | Default | null | null |
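The k-sum objective in the abstract above (id 3008) can be illustrated with a brute-force sketch. The reading of "k-sum" as the sum of the k largest voter dissatisfactions, and the encoding of profiles as approval sets with Hamming-distance dissatisfaction, are my assumptions for illustration, not details taken from the paper (which solves the problem with exact mathematical programming formulations, not enumeration):

```python
from itertools import combinations

def ksum_committee(profiles, m, size, k):
    """Brute-force k-sum approval voting (exponential in m; illustration only):
    choose the committee of the given size minimizing the sum of the k largest
    voter dissatisfactions, where a voter's dissatisfaction is the Hamming
    distance between her approval set and the committee."""
    best, best_cost = None, float("inf")
    for committee in combinations(range(m), size):
        c = set(committee)
        # Dissatisfaction of each voter = size of the symmetric difference.
        diss = sorted((len(c ^ v) for v in profiles), reverse=True)
        cost = sum(diss[:k])
        if cost < best_cost:
            best, best_cost = c, cost
    return best, best_cost

# Three voters over m = 4 candidates, committee of size 2.
voters = [{0, 1}, {0, 1}, {2, 3}]
print(ksum_committee(voters, 4, 2, k=3))  # k = n: utilitarian sum
print(ksum_committee(voters, 4, 2, k=1))  # k = 1: minimax (worst-off voter)
```

Varying k between 1 and n interpolates between the minimax and utilitarian rules, which is the Ordered Weighted Averaging viewpoint the abstract mentions.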
null | {
"abstract": " Estimating the structure of directed acyclic graphs (DAGs, also known as\nBayesian networks) is a challenging problem since the search space of DAGs is\ncombinatorial and scales superexponentially with the number of nodes. Existing\napproaches rely on various local heuristics for enforcing the acyclicity\nconstraint. In this paper, we introduce a fundamentally different strategy: We\nformulate the structure learning problem as a purely \\emph{continuous}\noptimization problem over real matrices that avoids this combinatorial\nconstraint entirely. This is achieved by a novel characterization of acyclicity\nthat is not only smooth but also exact. The resulting problem can be\nefficiently solved by standard numerical algorithms, which also makes\nimplementation effortless. The proposed method outperforms existing ones,\nwithout imposing any structural assumptions on the graph such as bounded\ntreewidth or in-degree. Code implementing the proposed algorithm is open-source\nand publicly available at this https URL.\n",
"title": "DAGs with NO TEARS: Continuous Optimization for Structure Learning"
} | null | null | null | null | true | null | 3009 | null | Default | null | null |
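The "smooth and exact characterization of acyclicity" referenced in the abstract above (id 3009) is, in the published NOTEARS paper, h(W) = tr(e^{W∘W}) − d, which vanishes exactly when the weighted adjacency matrix W is acyclic. A minimal sketch:

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    """NOTEARS acyclicity score: h(W) = tr(exp(W * W)) - d.
    Zero iff W is the adjacency matrix of a DAG; smooth in W."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d  # * is the elementwise (Hadamard) product

dag = np.array([[0.0, 1.0], [0.0, 0.0]])  # edge 0 -> 1, acyclic
cyc = np.array([[0.0, 1.0], [1.0, 0.0]])  # 0 <-> 1, a 2-cycle
print(notears_h(dag))  # ~0.0
print(notears_h(cyc))  # > 0
```

Because h is differentiable, the acyclicity constraint can be handed to a standard smooth solver, which is what makes the continuous formulation work.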
null | {
"abstract": " We consider the problem of efficient packet dissemination in wireless\nnetworks with point-to-multi-point wireless broadcast channels. We propose a\ndynamic policy, which achieves the broadcast capacity of the network. This\npolicy is obtained by first transforming the original multi-hop network into a\nprecedence-relaxed virtual single-hop network and then finding an optimal\nbroadcast policy for the relaxed network. The resulting policy is shown to be\nthroughput-optimal for the original wireless network using a sample-path\nargument. We also prove the NP-completeness of the finite-horizon broadcast\nproblem, which is in contrast with the polynomial time solvability of the\nproblem with point-to-point channels. Illustrative simulation results\ndemonstrate the efficacy of the proposed broadcast policy in achieving the full\nbroadcast capacity with low delay.\n",
"title": "Throughput-Optimal Broadcast in Wireless Networks with Point-to-Multipoint Transmissions"
} | null | null | null | null | true | null | 3010 | null | Default | null | null |
null | {
"abstract": " Much attention has been given in the literature to the effects of\nastrophysical events on human and land-based life. However, little has been\ndiscussed regarding the resilience of life itself. Here we instead explore the\nstatistics of events that completely sterilise an Earth-like planet with planet\nradii in the range $0.5-1.5 R_{Earth}$ and temperatures of $\\sim 300 \\;\n\\text{K}$, eradicating all forms of life. We consider the relative likelihood\nof complete global sterilisation events from four astrophysical sources --\nsupernovae, gamma-ray bursts, large asteroid impacts, and passing-by stars. To\nassess such probabilities we consider what cataclysmic event could lead to the\nannihilation of not just human life, but also extremophiles, through the\nboiling of all water in Earth's oceans. Surprisingly we find that although\nhuman life is somewhat fragile to nearby events, the resilience of Ecdysozoa\nsuch as \\emph{Milnesium tardigradum} renders global sterilisation an unlikely\nevent.\n",
"title": "The Resilience of Life to Astrophysical Events"
} | null | null | [
"Physics"
]
| null | true | null | 3011 | null | Validated | null | null |
null | {
"abstract": " DNA is a flexible molecule, but the degree of its flexibility is subject to\ndebate. The commonly-accepted persistence length of $l_p \\approx 500\\,$\\AA\\ is\ninconsistent with recent studies on short-chain DNA that show much greater\nflexibility but do not probe its origin. We have performed X-ray and neutron\nsmall-angle scattering on a short DNA sequence containing a strong nucleosome\npositioning element, and analyzed the results using a modified Kratky-Porod\nmodel to determine possible conformations. Our results support a hypothesis\nfrom Crick and Klug in 1975 that some DNA sequences in solution can have sharp\nkinks, potentially resolving the discrepancy. Our conclusions are supported by\nmeasurements on a radiation-damaged sample, where single-strand breaks lead to\nincreased flexibility and by an analysis of data from another sequence, which\ndoes not have kinks, but where our method can detect a locally enhanced\nflexibility due to an $AT$-domain.\n",
"title": "Kinky DNA in solution: Small angle scattering study of a nucleosome positioning sequence"
} | null | null | null | null | true | null | 3012 | null | Default | null | null |
null | {
"abstract": " Although aviation accidents are rare, safety incidents occur more frequently\nand require a careful analysis to detect and mitigate risks in a timely manner.\nAnalyzing safety incidents using operational data and producing event-based\nexplanations is invaluable to airline companies as well as to governing\norganizations such as the Federal Aviation Administration (FAA) in the United\nStates. However, this task is challenging because of the complexity involved in\nmining multi-dimensional heterogeneous time series data, the lack of\ntime-step-wise annotation of events in a flight, and the lack of scalable tools\nto perform analysis over a large number of events. In this work, we propose a\nprecursor mining algorithm that identifies events in the multidimensional time\nseries that are correlated with the safety incident. Precursors are valuable to\nsystems health and safety monitoring and in explaining and forecasting safety\nincidents. Current methods suffer from poor scalability to high dimensional\ntime series data and are inefficient in capturing temporal behavior. We propose\nan approach by combining multiple-instance learning (MIL) and deep recurrent\nneural networks (DRNN) to take advantage of MIL's ability to learn using weakly\nsupervised data and DRNN's ability to model temporal behavior. We describe the\nalgorithm, the data, the intuition behind taking a MIL approach, and a\ncomparative analysis of the proposed algorithm with baseline models. We also\ndiscuss the application to a real-world aviation safety problem using data from\na commercial airline company and discuss the model's abilities and\nshortcomings, with some final remarks about possible deployment directions.\n",
"title": "Explaining Aviation Safety Incidents Using Deep Temporal Multiple Instance Learning"
} | null | null | null | null | true | null | 3013 | null | Default | null | null |
null | {
"abstract": " Adaptive optimization algorithms, such as Adam and RMSprop, have shown better\noptimization performance than stochastic gradient descent (SGD) in some\nscenarios. However, recent studies show that they often lead to worse\ngeneralization performance than SGD, especially for training deep neural\nnetworks (DNNs). In this work, we identify the reasons that Adam generalizes\nworse than SGD, and develop a variant of Adam to eliminate the generalization\ngap. The proposed method, normalized direction-preserving Adam (ND-Adam),\nenables more precise control of the direction and step size for updating weight\nvectors, leading to significantly improved generalization performance.\nFollowing a similar rationale, we further improve the generalization\nperformance in classification tasks by regularizing the softmax logits. By\nbridging the gap between SGD and Adam, we also hope to shed light on why\ncertain optimization algorithms generalize better than others.\n",
"title": "Normalized Direction-preserving Adam"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 3014 | null | Validated | null | null |
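A rough sketch of the ND-Adam update described in the abstract above (id 3014), based on my reading of the paper: remove the radial component of the gradient, keep one scalar second moment per weight vector, and renormalize the vector after each step. The hyperparameters and the toy objective below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def nd_adam_step(w, g, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One ND-Adam-style step for a single weight vector w (a sketch, not a
    reference implementation)."""
    g = g - np.dot(g, w) * w              # keep only the tangential component
    m = b1 * m + (1 - b1) * g             # first moment (vector)
    v = b2 * v + (1 - b2) * np.dot(g, g)  # second moment (one scalar per vector)
    m_hat = m / (1 - b1 ** t)             # bias-corrected estimates
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w / np.linalg.norm(w), m, v    # stay on the unit sphere

# Toy use: rotate w toward a target direction by minimizing -w . target,
# whose gradient with respect to w is -target.
target = np.array([0.0, 1.0])
w, m, v = np.array([1.0, 0.0]), np.zeros(2), 0.0
for t in range(1, 501):
    w, m, v = nd_adam_step(w, -target, m, v, t)
```

The point of the per-vector scalar second moment is that the effective step size then controls the *direction* change of w rather than per-coordinate magnitudes, which is the "precise control of the direction and step size" the abstract refers to.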
null | {
"abstract": " Model-based optimization methods and discriminative learning methods have\nbeen the two dominant strategies for solving various inverse problems in\nlow-level vision. Typically, those two kinds of methods have their respective\nmerits and drawbacks, e.g., model-based optimization methods are flexible for\nhandling different inverse problems but are usually time-consuming with\nsophisticated priors for the purpose of good performance; in the meanwhile,\ndiscriminative learning methods have fast testing speed but their application\nrange is greatly restricted by the specialized task. Recent works have revealed\nthat, with the aid of variable splitting techniques, denoiser prior can be\nplugged in as a modular part of model-based optimization methods to solve other\ninverse problems (e.g., deblurring). Such an integration induces considerable\nadvantage when the denoiser is obtained via discriminative learning. However,\nthe study of integration with fast discriminative denoiser prior is still\nlacking. To this end, this paper aims to train a set of fast and effective CNN\n(convolutional neural network) denoisers and integrate them into model-based\noptimization method to solve other inverse problems. Experimental results\ndemonstrate that the learned set of denoisers not only achieve promising\nGaussian denoising results but also can be used as prior to deliver good\nperformance for various low-level vision applications.\n",
"title": "Learning Deep CNN Denoiser Prior for Image Restoration"
} | null | null | [
"Computer Science"
]
| null | true | null | 3015 | null | Validated | null | null |
null | {
"abstract": " Effects of spin-orbit interactions in condensed matter are an important and\nrapidly evolving topic. Strong competition between spin-orbit, on-site Coulomb\nand crystalline electric field interactions in iridates drives exotic quantum\nstates that are unique to this group of materials. In particular, the J_eff =\n1/2 Mott state served as an early signal that the combined effect of strong\nspin-orbit and Coulomb interactions in iridates has unique, intriguing\nconsequences. In this Key Issues Review, we survey some current experimental\nstudies of iridates. In essence, these materials tend to defy conventional\nwisdom: absence of conventional correlations between magnetic and insulating\nstates, avoidance of metallization at high pressures, S-shaped I-V\ncharacteristic, emergence of an odd-parity hidden order, etc. It is\nparticularly intriguing that there exist conspicuous discrepancies between\ncurrent experimental results and theoretical proposals that address\nsuperconducting, topological and quantum spin liquid phases. This class of\nmaterials, in which the lattice degrees of freedom play a critical role seldom\nseen in other materials, evidently presents some profound intellectual\nchallenges that call for more investigations both experimentally and\ntheoretically. Physical properties unique to these materials may help unlock a\nworld of possibilities for functional materials and devices. We emphasize that,\ngiven the rapidly developing nature of this field, this Key Issues Review is by\nno means an exhaustive report of the current state of experimental studies of\niridates.\n",
"title": "The Challenge of Spin-Orbit-Tuned Ground States in Iridates"
} | null | null | null | null | true | null | 3016 | null | Default | null | null |
null | {
"abstract": " In this paper, we study two-sided tilting complexes of preprojective algebras\nof Dynkin type. We construct the most fundamental class of two-sided tilting\ncomplexes, which has a group structure by derived tensor products and induces a\ngroup of auto-equivalences of the derived category. We show that the group\nstructure of the two-sided tilting complexes is isomorphic to the braid group\nof the corresponding folded graph. Moreover we show that these two-sided\ntilting complexes induce tilting mutation and any tilting complex is given as\nthe derived tensor products of them. Using these results, we determine the\nderived Picard group of preprojective algebras for type $A$ and $D$.\n",
"title": "Derived Picard groups of preprojective algebras of Dynkin type"
} | null | null | null | null | true | null | 3017 | null | Default | null | null |
null | {
"abstract": " We propose and analyze a method for semi-supervised learning from\npartially-labeled network-structured data. Our approach is based on a graph\nsignal recovery interpretation under a clustering hypothesis that labels of\ndata points belonging to the same well-connected subset (cluster) are similar\nvalued. This lends itself naturally to learning the labels by total variation (TV)\nminimization, which we solve by applying a recently proposed primal-dual method\nfor non-smooth convex optimization. The resulting algorithm allows for a highly\nscalable implementation using message passing over the underlying empirical\ngraph, which renders the algorithm suitable for big data applications. By\napplying tools of compressed sensing, we derive a sufficient condition on the\nunderlying network structure such that TV minimization recovers clusters in the\nempirical graph of the data. In particular, we show that the proposed\nprimal-dual method amounts to maximizing network flows over the empirical graph\nof the dataset. Moreover, the learning accuracy of the proposed algorithm is\nlinked to the set of network flows between data points having known labels. The\neffectiveness and scalability of our approach are verified by numerical\nexperiments.\n",
"title": "Semi-supervised Learning in Network-Structured Data via Total Variation Minimization"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 3018 | null | Validated | null | null |
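As a toy illustration of the TV-minimization recovery described in the abstract above (id 3018), here is a Chambolle-Pock-style primal-dual solver on a two-cluster graph. The graph, step sizes, and iteration count are my choices for the sketch, not taken from the paper:

```python
import numpy as np

# Toy graph: two triangles joined by a single "weak" edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6

# Signed incidence matrix D: (D x)_e = x_i - x_j for edge e = (i, j).
D = np.zeros((len(edges), n))
for e, (i, j) in enumerate(edges):
    D[e, i], D[e, j] = 1.0, -1.0

labeled = {0: 1.0, 5: -1.0}  # one known label per cluster

# Primal-dual iteration for min_x ||D x||_1 subject to the known labels.
tau = sigma = 0.3            # tau * sigma * ||D||^2 <= 0.54 < 1
x = np.zeros(n)
y = np.zeros(len(edges))
x_bar = x.copy()
for _ in range(5000):
    y = np.clip(y + sigma * D @ x_bar, -1.0, 1.0)  # dual: project onto l_inf ball
    x_new = x - tau * D.T @ y                      # primal gradient step
    for i, v in labeled.items():                   # prox of the label constraint
        x_new[i] = v
    x_bar = 2 * x_new - x                          # over-relaxation
    x = x_new

print(np.round(x, 2))  # nodes 0-2 near +1, nodes 3-5 near -1
```

The dual variable y lives on the edges, which is why the abstract can interpret the same iteration as a network-flow computation over the empirical graph.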
null | {
"abstract": " Staphylococcus aureus, responsible for nosocomial infections, is a significant\nthreat to public health. The increasing resistance of S.aureus to various\nantibiotics has made it a prime focus for research on designing an\nappropriate drug delivery system. The emergence of Methicillin Resistant\nStaphylococcus aureus (MRSA) in 1961 necessitated the use of vancomycin, \"the\ndrug of last resort\", to treat these infections. Unfortunately, S.aureus has\nalready started gaining resistance to vancomycin. Liposome encapsulation of\ndrugs has previously been shown to provide an efficient method of microbial\ninhibition in many cases. We have studied the effect of liposome encapsulated\nvancomycin on MRSA and evaluated the antibacterial activity of the\nliposome-entrapped drug in comparison to that of the free drug based on the\nminimum inhibitory concentration (MIC) of the drug. The MIC for liposomal\nvancomycin was found to be about half of that of free vancomycin. The growth\nresponse of MRSA showed that the liposomal vancomycin induced the culture to go\ninto a bacteriostatic state and enhanced phagocytic killing. Administering the\nantibiotic encapsulated in liposomes was thus shown to greatly improve drug\ndelivery and to overcome the drug resistance of MRSA.\n",
"title": "Susceptibility of Methicillin Resistant Staphylococcus aureus to Vancomycin using Liposomal Drug Delivery System"
} | null | null | null | null | true | null | 3019 | null | Default | null | null |
null | {
"abstract": " Understanding how ideas relate to each other is a fundamental question in\nmany domains, ranging from intellectual history to public communication.\nBecause ideas are naturally embedded in texts, we propose the first framework\nto systematically characterize the relations between ideas based on their\noccurrence in a corpus of documents, independent of how these ideas are\nrepresented. Combining two statistics --- cooccurrence within documents and\nprevalence correlation over time --- our approach reveals a number of different\nways in which ideas can cooperate and compete. For instance, two ideas can\nclosely track each other's prevalence over time, and yet rarely cooccur, almost\nlike a \"cold war\" scenario. We observe that pairwise cooccurrence and\nprevalence correlation exhibit different distributions. We further demonstrate\nthat our approach is able to uncover intriguing relations between ideas through\nin-depth case studies on news articles and research papers.\n",
"title": "Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts"
} | null | null | null | null | true | null | 3020 | null | Default | null | null |
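The two statistics named in the abstract above (id 3020), cooccurrence within documents and prevalence correlation over time, are easy to sketch on a toy occurrence matrix. The PMI normalization and the one-document-per-time-step reading are my illustrative choices:

```python
import numpy as np

# Toy corpus: rows = documents (in time order), columns = ideas,
# entry = 1 if the idea occurs in the document.
occ = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 1],
], dtype=float)

# Cooccurrence: how often two ideas appear in the same document, normalized
# as pointwise mutual information (positive = more often than chance).
p = occ.mean(axis=0)                    # marginal occurrence probabilities
p_joint = (occ.T @ occ) / len(occ)      # pairwise joint probabilities
pmi = np.log(p_joint / np.outer(p, p))

# Prevalence correlation: correlate the ideas' occurrence time series.
corr = np.corrcoef(occ.T)

print(np.round(pmi, 2))
print(np.round(corr, 2))
```

Crossing the signs of the two statistics gives the paper's taxonomy of relations; the "cold war" case is high prevalence correlation with low (negative) cooccurrence.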
null | {
"abstract": " This paper presents a general graph representation learning framework called\nDeepGL for learning deep node and edge representations from large (attributed)\ngraphs. In particular, DeepGL begins by deriving a set of base features (e.g.,\ngraphlet features) and automatically learns a multi-layered hierarchical graph\nrepresentation where each successive layer leverages the output from the\nprevious layer to learn higher-order features. Contrary to previous work,\nDeepGL learns relational functions (each representing a feature) that\ngeneralize across networks and are therefore useful for graph-based transfer\nlearning tasks. Moreover, DeepGL naturally supports attributed graphs, learns\ninterpretable features, and is space-efficient (by learning sparse feature\nvectors). In addition, DeepGL is expressive, flexible with many interchangeable\ncomponents, efficient with a time complexity of $\\mathcal{O}(|E|)$, and\nscalable for large networks via an efficient parallel implementation. Compared\nwith the state-of-the-art method, DeepGL is (1) effective for across-network\ntransfer learning tasks and attributed graph representation learning, (2)\nspace-efficient requiring up to 6x less memory, (3) fast with up to 182x\nspeedup in runtime performance, and (4) accurate with an average improvement of\n20% or more on many learning tasks.\n",
"title": "Deep Feature Learning for Graphs"
} | null | null | null | null | true | null | 3021 | null | Default | null | null |
null | {
"abstract": " Trajectory optimization of a controlled dynamical system is an essential part\nof autonomy; however, many trajectory optimization techniques are limited by the\nfidelity of the underlying parametric model. In the field of robotics, a lack\nof model knowledge can be overcome with machine learning techniques, utilizing\nmeasurements to build a dynamical model from the data. This paper aims to take\nthe middle ground between these two approaches by introducing a semi-parametric\nrepresentation of the underlying system dynamics. Our goal is to leverage the\nconsiderable information contained in a traditional physics based model and\ncombine it with a data-driven, non-parametric regression technique known as a\nGaussian Process. Integrating this semi-parametric model with model predictive\npseudospectral control, we demonstrate this technique on both a cart pole and\nquadrotor simulation with unmodeled damping and parametric error. In order to\nmanage parametric uncertainty, we introduce an algorithm that utilizes Sparse\nSpectrum Gaussian Processes (SSGP) for online learning after each rollout. We\nimplement this online learning technique on a cart pole and quadrotor, then\ndemonstrate the use of online learning and obstacle avoidance for the Dubins\nvehicle dynamics.\n",
"title": "Pseudospectral Model Predictive Control under Partially Learned Dynamics"
} | null | null | null | null | true | null | 3022 | null | Default | null | null |
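The semi-parametric idea in the abstract above (id 3022), a physics prior plus a nonparametric model of the residual, can be sketched in a few lines. Here plain kernel ridge regression stands in for the Gaussian Process posterior mean (the two share the same formula); the toy dynamics and hyperparameters are my assumptions:

```python
import numpy as np

def physics_model(x):
    return -0.5 * x                      # crude parametric model of dx/dt

def true_dynamics(x):
    return -0.5 * x - 0.3 * np.sin(x)    # reality has unmodeled damping

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 40)
y = true_dynamics(X) + 0.01 * rng.normal(size=40)

# Fit the residual r(x) = y - physics_model(x) with an RBF kernel.
def k(a, b, ell=0.7):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

alpha = np.linalg.solve(k(X, X) + 1e-4 * np.eye(len(X)),
                        y - physics_model(X))

def semi_parametric(x):
    """Physics prior plus learned residual correction."""
    return physics_model(x) + k(x, X) @ alpha

xs = np.linspace(-2, 2, 9)
err_prior = np.abs(physics_model(xs) - true_dynamics(xs)).max()
err_semi = np.abs(semi_parametric(xs) - true_dynamics(xs)).max()
print(err_prior, err_semi)  # the combined model should be far more accurate
```

The physics term supplies correct global structure from little data, while the nonparametric term soaks up the unmodeled effects, which is the "middle ground" the abstract describes.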
null | {
"abstract": " In monolayer semiconductor transition metal dichalcogenides, the\nexciton-phonon interaction is expected to strongly affect the photocarrier\ndynamics. Here, we report on an unusual oscillatory enhancement of the neutral\nexciton photoluminescence with the excitation laser frequency in monolayer\nMoSe2. The frequency of oscillation matches that of the M-point longitudinal\nacoustic phonon, LA(M). Oscillatory behavior is also observed in the\nsteady-state emission linewidth and in time-resolved photoluminescence\nexcitation data, which reveals variation with excitation energy in the exciton\nlifetime. These results clearly expose the key role played by phonons in the\nexciton formation and relaxation dynamics of two-dimensional van der Waals\nsemiconductors.\n",
"title": "Phonon-assisted oscillatory exciton dynamics in monolayer MoSe2"
} | null | null | [
"Physics"
]
| null | true | null | 3023 | null | Validated | null | null |
null | {
"abstract": " Policy evaluation or value function or Q-function approximation is a key\nprocedure in reinforcement learning (RL). It is a necessary component of policy\niteration and can be used for variance reduction in policy gradient methods.\nTherefore its quality has a significant impact on most RL algorithms. Motivated\nby manifold regularized learning, we propose a novel kernelized policy\nevaluation method that takes advantage of the intrinsic geometry of the state\nspace learned from data, in order to achieve better sample efficiency and\nhigher accuracy in Q-function approximation. Applying the proposed method in\nthe Least-Squares Policy Iteration (LSPI) framework, we observe superior\nperformance compared to widely used parametric basis functions on two standard\nbenchmarks in terms of policy quality.\n",
"title": "Manifold Regularization for Kernelized LSTD"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 3024 | null | Validated | null | null |
null | {
"abstract": " We prove that for $1<c<4/3$ the subsequence of the Thue--Morse sequence\n$\\mathbf t$ indexed by $\\lfloor n^c\\rfloor$ defines a normal sequence, that is,\neach finite sequence $(\\varepsilon_0,\\ldots,\\varepsilon_{T-1})\\in \\{0,1\\}^T$\noccurs as a contiguous subsequence of the sequence $n\\mapsto \\mathbf\nt\\left(\\lfloor n^c\\rfloor\\right)$ with asymptotic frequency $2^{-T}$.\n",
"title": "Normality of the Thue--Morse sequence along Piatetski-Shapiro sequences"
} | null | null | [
"Mathematics"
]
| null | true | null | 3025 | null | Validated | null | null |
null | {
"abstract": " This paper extends the method introduced in Rivi et al. (2016b) to measure\ngalaxy ellipticities in the visibility domain for radio weak lensing surveys.\nIn that paper we focused on the development and testing of the method for the\nsimple case of individual galaxies located at the phase centre, and proposed to\nextend it to the realistic case of many sources in the field of view by\nisolating visibilities of each source with a faceting technique. In this second\npaper we present a detailed algorithm for source extraction in the visibility\ndomain and show its effectiveness as a function of the source number density by\nrunning simulations of SKA1-MID observations in the band 950-1150 MHz and\ncomparing original and measured values of galaxies' ellipticities. Shear\nmeasurements from a realistic population of 10^4 galaxies randomly located in a\nfield of view of 1 deg^2 (i.e. the source density expected for the current\nradio weak lensing survey proposal with SKA1) are also performed. At SNR >= 10,\nthe multiplicative bias is only a factor 1.5 worse than what found when\nanalysing individual sources, and is still comparable to the bias values\nreported for similar measurement methods at optical wavelengths. The additive\nbias is unchanged from the case of individual sources, but is significantly\nlarger than typically found in optical surveys. This bias depends on the shape\nof the uv coverage and we suggest that a uv-plane weighting scheme to produce a\nmore isotropic shape could reduce and control additive bias.\n",
"title": "Radio Weak Lensing Shear Measurement in the Visibility Domain - II. Source Extraction"
} | null | null | null | null | true | null | 3026 | null | Default | null | null |
null | {
"abstract": " Wigner's little groups are the subgroups of the Lorentz group whose\ntransformations leave the momentum of a given particle invariant. They thus\ndefine the internal space-time symmetries of relativistic particles. These\nsymmetries take different mathematical forms for massive and for massless\nparticles. However, it is shown possible to construct one unified\nrepresentation using a graphical description. This graphical approach allows us\nto describe vividly parity, time reversal, and charge conjugation of the\ninternal symmetry groups. As for the language of group theory, the two-by-two\nrepresentation is used throughout the paper. While this two-by-two\nrepresentation is for spin-1/2 particles, it is shown possible to construct the\nrepresentations for spin-0 particles, spin-1 particles, as well as for\nhigher-spin particles, for both massive and massless cases. It is shown also\nthat the four-by-four Dirac matrices constitute a two-by-two representation of\nWigner's little group.\n",
"title": "Loop Representation of Wigner's Little Groups"
} | null | null | null | null | true | null | 3027 | null | Default | null | null |
null | {
"abstract": " We present the MOA Collaboration light curve data for planetary microlensing\nevent OGLE-2015-BLG-0954, which was previously announced in a paper by the\nKMTNet and OGLE Collaborations. The MOA data cover the caustic exit, which was\nnot covered by the KMTNet or OGLE data, and they provide a more reliable\nmeasurement of the finite source effect. The MOA data also provide a new source\ncolor measurement that reveals a lens-source relative proper motion of\n$\\mu_{\\rm rel} = 11.8\\pm 0.8\\,$mas/yr, which compares to the value of $\\mu_{\\rm\nrel} = 18.4\\pm 1.7\\,$mas/yr reported in the KMTNet-OGLE paper. This new MOA\nvalue for $\\mu_{\\rm rel}$ has an a priori probability that is a factor of $\\sim\n100$ times larger than the previous value, and it does not require a lens\nsystem distance of $D_L < 1\\,$kpc. Based on the corrected source color, we find\nthat the lens system consists of a planet of mass $3.4^{+3.7}_{-1.6} M_{\\rm\nJup}$ orbiting a $0.30^{+0.34}_{-0.14}M_\\odot$ star at an orbital separation of\n$2.1^{+2.2}_{-1.0}\\,$AU and a distance of $1.2^{+1.1}_{-0.5}\\,$kpc.\n",
"title": "MOA Data Reveal a New Mass, Distance, and Relative Proper Motion for Planetary System OGLE-2015-BLG-0954L"
} | null | null | null | null | true | null | 3028 | null | Default | null | null |
null | {
"abstract": " We develop a no-go theorem for two-dimensional bosonic systems with crystal\nsymmetries: if there is a half-integer spin at a rotation center, where the\npoint-group symmetry is $\mathbb D_{2,4,6}$, such a system must have a\nground-state degeneracy protected by the crystal symmetry. Such a degeneracy\nindicates either a broken-symmetry state or an unconventional state of matter.\nCompared to the Lieb-Schultz-Mattis Theorem, our result counts the spin at\neach rotation center, instead of the total spin per unit cell, and therefore\nalso applies to certain systems with an even number of half-integer spins per\nunit cell.\n",
"title": "Ground state degeneracy in quantum spin systems protected by crystal symmetries"
} | null | null | null | null | true | null | 3029 | null | Default | null | null |
null | {
"abstract": " Neural Networks are function approximators that have achieved\nstate-of-the-art accuracy in numerous machine learning tasks. In spite of their\ngreat success in terms of accuracy, their large training time makes it\ndifficult to use them for various tasks. In this paper, we explore the idea of\nlearning weight evolution pattern from a simple network for accelerating\ntraining of novel neural networks. We use a neural network to learn the\ntraining pattern from MNIST classification and utilize it to accelerate\ntraining of neural networks used for CIFAR-10 and ImageNet classification. Our\nmethod has a low memory footprint and is computationally efficient. This method\ncan also be used with other optimizers to give faster convergence. The results\nindicate a general trend in the weight evolution during training of neural\nnetworks.\n",
"title": "Introspection: Accelerating Neural Network Training By Learning Weight Evolution"
} | null | null | null | null | true | null | 3030 | null | Default | null | null |
null | {
"abstract": " Polarised neutron diffraction measurements have been made on HoFeO$_3$ single\ncrystals magnetised in both the [001] and [100] directions ($Pbnm$ setting).\nThe polarisation dependencies of Bragg reflection intensities were measured\nboth with a high field of H = 9 T parallel to [001] at T = 70 K and with the\nlower field H = 0.5 T parallel to [100] at T = 5, 15, 25~K. A Fourier\nprojection of magnetization induced parallel to [001], made using the $hk0$\nreflections measured in 9~T, indicates that almost all of it is due to\nalignment of Ho moments. Further analysis of the asymmetries of general\nreflections in these data showed that although, at 70~K, 9~T applied parallel\nto [001] hardly perturbs the antiferromagnetic order of the Fe sublattices, it\ninduces significant antiferromagnetic order of the Ho sublattices in the\n$x$-$y$ plane, with the antiferromagnetic components of moment having the\nsame order of magnitude as the induced ferromagnetic ones. Strong intensity\nasymmetries measured in the low temperature $\Gamma_2$ structure with a lower\nfield, 0.5 T $\parallel$ [100], allowed the variation of the ordered components\nof the Ho and Fe moments to be followed. Their absolute orientations, in the\n180° domain stabilised by the field, were determined relative to the\ndistorted perovskite structure. This relationship fixes the sign of the\nDzyaloshinskii-Moriya (D-M) interaction which leads to the weak ferromagnetism.\nOur results indicate that the combination of strong y-axis anisotropy of the Ho\nmoments and Ho-Fe exchange interactions breaks the centrosymmetry of the\nstructure and could lead to ferroelectric polarization.\n",
"title": "Single crystal polarized neutron diffraction study of the magnetic structure of HoFeO$_3$"
} | null | null | null | null | true | null | 3031 | null | Default | null | null |
null | {
"abstract": " In this paper we generalize the main result of [4] for manifolds that are not\nnecessarily Einstein. In fact, we obtain an upper bound for the volume of a\nlocally volume-minimizing closed hypersurface $\\Sigma$ of a Riemannian\n5-manifold $M$ with scalar curvature bounded from below by a positive constant\nin terms of the total traceless Ricci curvature of $\\Sigma$. Furthermore, if\n$\\Sigma$ saturates the respective upper bound and $M$ has nonnegative Ricci\ncurvature, then $\\Sigma$ is isometric to $\\mathbb{S}^4$ up to scaling and $M$\nsplits in a neighborhood of $\\Sigma$. Also, we obtain a rigidity result for the\nRiemannian cover of $M$ when $\\Sigma$ minimizes the volume in its homotopy\nclass and saturates the upper bound.\n",
"title": "Rigidity of volume-minimizing hypersurfaces in Riemannian 5-manifolds"
} | null | null | null | null | true | null | 3032 | null | Default | null | null |
null | {
"abstract": " A streaming graph is a graph formed by a sequence of incoming edges with time\nstamps. Unlike static graphs, the streaming graph is highly dynamic and time\nrelated. In the real world, the high volume and velocity streaming graphs such\nas internet traffic data, social network communication data and financial\ntransfer data are bringing challenges to the classic graph data structures. We\npresent a new data structure: double orthogonal list in hash table (Dolha)\nwhich is a high speed and high memory efficiency graph structure applicable to\nstreaming graph. Dolha has constant time cost for single edge and near linear\nspace cost that we can contain billions of edges information in memory size and\nprocess an incoming edge in nanoseconds. Dolha also has linear time cost for\nneighborhood queries, which allow it to support most algorithms in graphs\nwithout extra cost. We also present a persistent structure based on Dolha that\nhas the ability to handle the sliding window update and time related queries.\n",
"title": "Dolha - an Efficient and Exact Data Structure for Streaming Graphs"
} | null | null | null | null | true | null | 3033 | null | Default | null | null |
null | {
"abstract": " We study existence and multiplicity of semi-classical states for the\nnonlinear Choquard equation:\n$$ -\\varepsilon^2\\Delta v+V(x)v =\n\\frac{1}{\\varepsilon^\\alpha}(I_\\alpha*F(v))f(v) \\quad \\hbox{in}\\ \\mathbb{R}^N,\n$$ where $N\\geq 3$, $\\alpha\\in (0,N)$, $I_\\alpha(x)={A_\\alpha\\over\n|x|^{N-\\alpha}}$ is the Riesz potential, $F\\in C^1(\\mathbb{R},\\mathbb{R})$,\n$F'(s) = f(s)$ and $\\varepsilon>0$ is a small parameter.\nWe develop a new variational approach and we show the existence of a family\nof solutions concentrating, as $\\varepsilon\\to 0$, to a local minima of $V(x)$\nunder general conditions on $F(s)$. Our result is new also for\n$f(s)=|s|^{p-2}s$ and applicable for $p\\in (\\frac{N+\\alpha}{N},\n\\frac{N+\\alpha}{N-2})$. Especially, we can give the existence result for\nlocally sublinear case $p\\in (\\frac{N+\\alpha}{N}, 2)$, which gives a positive\nanswer to an open problem arisen in recent works of Moroz and Van Schaftingen.\nWe also study the multiplicity of positive single-peak solutions and we show\nthe existence of at least $\\hbox{cupl}(K)+1$ solutions concentrating around $K$\nas $\\varepsilon\\to 0$, where $K\\subset \\Omega$ is the set of minima of $V(x)$\nin a bounded potential well $\\Omega$, that is, $m_0 \\equiv \\inf_{x\\in \\Omega}\nV(x) < \\inf_{x\\in \\partial\\Omega}V(x)$ and $K=\\{x\\in\\Omega;\\, V(x)=m_0\\}$.\n",
"title": "Semi-classical states for the nonlinear Choquard equations: existence, multiplicity and concentration at a potential well"
} | null | null | null | null | true | null | 3034 | null | Default | null | null |
null | {
"abstract": " Recently there is a flourishing and notable interest in the crystalline\nscintillator material sodium iodide (NaI) as target for direct dark matter\nsearches. This is mainly driven by the long-reigning contradicting situation in\nthe dark matter sector: the positive evidence for the detection of a dark\nmatter modulation signal claimed by the DAMA/LIBRA collaboration is (under\nso-called standard assumptions) inconsistent with the null-results reported by\nmost of the other direct dark matter experiments. We present the results of a\nfirst prototype detector using a new experimental approach in comparison to\n\\textit{conventional} single-channel NaI scintillation light detectors: a NaI\ncrystal operated as a scintillating calorimeter at milli-Kelvin temperatures\nsimultaneously providing a phonon (heat) plus scintillation light signal and\nparticle discrimination on an event-by-event basis. We evaluate energy\nresolution, energy threshold and further performance parameters of this\nprototype detector developed within the COSINUS R&D project.\n",
"title": "Results from the first cryogenic NaI detector for the COSINUS project"
} | null | null | [
"Physics"
]
| null | true | null | 3035 | null | Validated | null | null |
null | {
"abstract": " We obtain minimal dimension matrix representations for each of the Lie\nalgebras of dimension five, six, seven, and eight obtained by Turkowski that\nhave a non-trivial Levi decomposition. The key technique involves using a\nsubspace associated to a particular representation of a semi-simple Lie algebra\nto help in the construction of the radical in the putative Levi decomposition.\n",
"title": "Minimal Representations of Lie Algebras With Non-Trivial Levi Decomposition"
} | null | null | null | null | true | null | 3036 | null | Default | null | null |
null | {
"abstract": " Motivated by the increasing integration among electricity markets, in this\npaper we propose two different methods to incorporate market integration in\nelectricity price forecasting and to improve the predictive performance. First,\nwe propose a deep neural network that considers features from connected markets\nto improve the predictive accuracy in a local market. To measure the importance\nof these features, we propose a novel feature selection algorithm that, by\nusing Bayesian optimization and functional analysis of variance, evaluates the\neffect of the features on the algorithm performance. In addition, using market\nintegration, we propose a second model that, by simultaneously predicting\nprices from two markets, improves the forecasting accuracy even further. As a\ncase study, we consider the electricity market in Belgium and the improvements\nin forecasting accuracy when using various French electricity features. We show\nthat the two proposed models lead to improvements that are statistically\nsignificant. Particularly, due to market integration, the predictive accuracy\nis improved from 15.7% to 12.5% sMAPE (symmetric mean absolute percentage\nerror). In addition, we show that the proposed feature selection algorithm is\nable to perform a correct assessment, i.e. to discard the irrelevant features.\n",
"title": "Forecasting day-ahead electricity prices in Europe: the importance of considering market integration"
} | null | null | null | null | true | null | 3037 | null | Default | null | null |
null | {
"abstract": " Providing diagnostic feedback about growth is crucial to formative decisions\nsuch as targeted remedial instructions or interventions. This paper proposed a\nlongitudinal higher-order diagnostic classification modeling approach for\nmeasuring growth. The new modeling approach is able to provide quantitative\nvalues of overall and individual growth by constructing a multidimensional\nhigher-order latent structure to take into account the correlations among\nmultiple latent attributes that are examined across different occasions. In\naddition, potential local item dependence among anchor (or repeated) items can\nalso be taken into account. Model parameter estimation is explored in a\nsimulation study. An empirical example is analyzed to illustrate the\napplications and advantages of the proposed modeling approach.\n",
"title": "A Longitudinal Higher-Order Diagnostic Classification Model"
} | null | null | null | null | true | null | 3038 | null | Default | null | null |
null | {
"abstract": " The machine learning community has become increasingly concerned with the\npotential for bias and discrimination in predictive models. This has motivated\na growing line of work on what it means for a classification procedure to be\n\"fair.\" In this paper, we investigate the tension between minimizing error\ndisparity across different population groups while maintaining calibrated\nprobability estimates. We show that calibration is compatible only with a\nsingle error constraint (i.e. equal false-negative rates across groups), and\nshow that any algorithm that satisfies this relaxation is no better than\nrandomizing a percentage of predictions for an existing classifier. These\nunsettling findings, which extend and generalize existing results, are\nempirically confirmed on several datasets.\n",
"title": "On Fairness and Calibration"
} | null | null | null | null | true | null | 3039 | null | Default | null | null |
null | {
"abstract": " Simulation systems have become an essential component in the development and\nvalidation of autonomous driving technologies. The prevailing state-of-the-art\napproach for simulation is to use game engines or high-fidelity computer\ngraphics (CG) models to create driving scenarios. However, creating CG models\nand vehicle movements (e.g., the assets for simulation) remains a manual task\nthat can be costly and time-consuming. In addition, the fidelity of CG images\nstill lacks the richness and authenticity of real-world images and using these\nimages for training leads to degraded performance.\nIn this paper we present a novel approach to address these issues: Augmented\nAutonomous Driving Simulation (AADS). Our formulation augments real-world\npictures with a simulated traffic flow to create photo-realistic simulation\nimages and renderings. More specifically, we use LiDAR and cameras to scan\nstreet scenes. From the acquired trajectory data, we generate highly plausible\ntraffic flows for cars and pedestrians and compose them into the background.\nThe composite images can be re-synthesized with different viewpoints and sensor\nmodels. The resulting images are photo-realistic, fully annotated, and ready\nfor end-to-end training and testing of autonomous driving systems from\nperception to planning. We explain our system design and validate our\nalgorithms with a number of autonomous driving tasks from detection to\nsegmentation and predictions.\nCompared to traditional approaches, our method offers unmatched scalability\nand realism. Scalability is particularly important for AD simulation and we\nbelieve the complexity and diversity of the real world cannot be realistically\ncaptured in a virtual environment. Our augmented approach combines the\nflexibility in a virtual environment (e.g., vehicle movements) with the\nrichness of the real world to allow effective simulation of anywhere on earth.\n",
"title": "AADS: Augmented Autonomous Driving Simulation using Data-driven Algorithms"
} | null | null | null | null | true | null | 3040 | null | Default | null | null |
null | {
"abstract": " We investigate a scheme-theoretic variant of Whitney condition a. If X is a\nprojective variety over the field of complex numbers and Y $\subset$ X a\nsubvariety, then X satisfies generically the scheme-theoretic Whitney condition\na along Y provided that the projective dual of X is smooth. We give\napplications to tangency of projective varieties over C and to convex real\nalgebraic geometry. In particular, we prove a Bertini-type theorem for\nosculating planes of smooth complex space curves and a generalization of a\ntheorem of Ranestad and Sturmfels describing the algebraic boundary of an\naffine compact real variety.\n",
"title": "Scheme-theoretic Whitney conditions and applications to tangency of projective varieties"
} | null | null | null | null | true | null | 3041 | null | Default | null | null |
null | {
"abstract": " VO2 samples are grown with different oxygen concentrations leading to\ndifferent monoclinic, M1 and triclinic, T insulating phases which undergo a\nfirst order metal to insulator transition (MIT) followed by a structural phase\ntransition (SPT) to rutile tetragonal phase. The metal insulator transition\ntemperature (Tc) was found to be increased with increasing native defects.\nVanadium vacancy (VV) is envisaged to create local strains in the lattice which\nprevents twisting of the V-V dimers promoting metastable monoclinic, M2 and T\nphases at intermediate temperatures. It is argued that MIT is driven by strong\nelectronic correlation. The low temperature insulating phase can be considered\nas a collection of one-dimensional (1-D) half-filled band, which undergoes Mott\ntransition to 1-D infinitely long Heisenberg spin 1/2 chains leading to\nstructural distortion due to spin-phonon coupling. Presence of VV creates\nlocalized holes (d0) in the nearest neighbor, thereby fragmenting the spin 1/2\nchains at nanoscale, which in turn increase the Tc value more than that of an\ninfinitely long one. The Tc value scales inversely with the average size of\nfragmented Heisenberg spin 1/2 chains following a critical exponent of 2/3,\nwhich is exactly the same predicted theoretically for Heisenberg spin 1/2 chain\nat nanoscale undergoing SPT (spin-Peierls transition). Thus, the observation of\nMIT and SPT at the same time in VO2 can be explained from our phenomenological\nmodel of reduced 1-D Heisenberg spin 1/2 chains. The reported increase\n(decrease) in Tc value of VO2 by doping with metal having valency less (more)\nthan four, can also be understood easily with our unified model, for the first\ntime, considering finite size scaling of Heisenberg chains.\n",
"title": "Role of 1-D finite size Heisenberg chain in increasing metal to insulator transition temperature in hole rich VO2"
} | null | null | null | null | true | null | 3042 | null | Default | null | null |
null | {
"abstract": " We present a state interaction spin-orbit coupling method to calculate\nelectron paramagnetic resonance (EPR) $g$-tensors from density matrix\nrenormalization group wavefunctions. We apply the technique to compute\n$g$-tensors for the \\ce{TiF3} and \\ce{CuCl4^2-} complexes, a [2Fe-2S] model of\nthe active center of ferredoxins, and a \\ce{Mn4CaO5} model of the S2 state of\nthe oxygen evolving complex. These calculations raise the prospects of\ndetermining $g$-tensors in multireference calculations with a large number of\nopen shells.\n",
"title": "Electron paramagnetic resonance g-tensors from state interaction spin-orbit coupling density matrix renormalization group"
} | null | null | [
"Physics"
]
| null | true | null | 3043 | null | Validated | null | null |
null | {
"abstract": " Quantum transport is studied for the nonequilibrium Anderson impurity model\nat zero temperature employing the multilayer multiconfiguration time-dependent\nHartree theory within the second quantization representation (ML-MCTDH-SQR) of\nFock space. To address both linear and nonlinear conductance in the Kondo\nregime, two new techniques of the ML-MCTDH-SQR simulation methodology are\nintroduced: (i) the use of correlated initial states, which is achieved by\nimaginary time propagation of the overall Hamiltonian at zero voltage and (ii)\nthe adoption of the logarithmic discretization of the electronic continuum.\nEmploying the improved methodology, the signature of the Kondo effect is\nanalyzed.\n",
"title": "A multilayer multiconfiguration time-dependent Hartree study of the nonequilibrium Anderson impurity model at zero temperature"
} | null | null | null | null | true | null | 3044 | null | Default | null | null |
null | {
"abstract": " Correct classification of breast cancer sub-types is of high importance as it\ndirectly affects the therapeutic options. We focus on triple-negative breast\ncancer (TNBC) which has the worst prognosis among breast cancer types. Using\ncutting edge methods from the field of robust statistics, we analyze Breast\nInvasive Carcinoma (BRCA) transcriptomic data publicly available from The\nCancer Genome Atlas (TCGA) data portal. Our analysis identifies statistical\noutliers that may correspond to misdiagnosed patients. Furthermore, it is\nillustrated that classical statistical methods may fail in the presence of\nthese outliers, prompting the need for robust statistics. Using robust sparse\nlogistic regression we obtain 36 relevant genes, of which ca. 60\\% have been\npreviously reported as biologically relevant to TNBC, reinforcing the validity\nof the method. The remaining 14 genes identified are new potential biomarkers\nfor TNBC. Out of these, JAM3, SFT2D2 and PAPSS1 were previously associated to\nbreast tumors or other types of cancer. The relevance of these genes is\nconfirmed by the new DetectDeviatingCells (DDC) outlier detection technique. A\ncomparison of gene networks on the selected genes showed significant\ndifferences between TNBC and non-TNBC data. The individual role of FOXA1 in\nTNBC and non-TNBC, and the strong FOXA1-AGR2 connection in TNBC stand out. Not\nonly will our results contribute to the breast cancer/TNBC understanding and\nultimately its management, they also show that robust regression and outlier\ndetection constitute key strategies to cope with high-dimensional clinical data\nsuch as omics data.\n",
"title": "Robust Identification of Target Genes and Outliers in Triple-negative Breast Cancer Data"
} | null | null | [
"Statistics"
]
| null | true | null | 3045 | null | Validated | null | null |
null | {
"abstract": " Semi-Lagrangian methods are numerical methods designed to find approximate\nsolutions to particular time-dependent partial differential equations (PDEs)\nthat describe the advection process. We propose semi-Lagrangian one-step\nmethods for numerically solving initial value problems for two general systems\nof partial differential equations. Along the characteristic lines of the PDEs,\nwe use ordinary differential equation (ODE) numerical methods to solve the\nPDEs. The main benefit of our methods is the efficient achievement of high\norder local truncation error through the use of Runge-Kutta methods along the\ncharacteristics. In addition, we investigate the numerical analysis of\nsemi-Lagrangian methods applied to systems of PDEs: stability, convergence, and\nmaximum error bounds.\n",
"title": "Semi-Lagrangian one-step methods for two classes of time-dependent partial differential systems"
} | null | null | null | null | true | null | 3046 | null | Default | null | null |
null | {
"abstract": " Many studies have been undertaken by using machine learning techniques,\nincluding neural networks, to predict stock returns. Recently, a method known\nas deep learning, which achieves high performance mainly in image recognition\nand speech recognition, has attracted attention in the machine learning field.\nThis paper implements deep learning to predict one-month-ahead stock returns in\nthe cross-section in the Japanese stock market and investigates the performance\nof the method. Our results show that deep neural networks generally outperform\nshallow neural networks, and the best networks also outperform representative\nmachine learning models. These results indicate that deep learning shows\npromise as a skillful machine learning method to predict stock returns in the\ncross-section.\n",
"title": "Deep Learning for Forecasting Stock Returns in the Cross-Section"
} | null | null | [
"Quantitative Finance"
]
| null | true | null | 3047 | null | Validated | null | null |
null | {
"abstract": " Nearest-neighbor search dominates the asymptotic complexity of sampling-based\nmotion planning algorithms and is often addressed with k-d tree data\nstructures. While it is generally believed that the expected complexity of\nnearest-neighbor queries is $O(\log(N))$ in the size of the tree, this paper\nreveals that when a classic k-d tree approach is used with sub-Riemannian\nmetrics, the expected query complexity is in fact $\Theta(N^p \log(N))$ for a\nnumber $p \in [0, 1)$ determined by the degree of nonholonomy of the system.\nThese metrics arise naturally in nonholonomic mechanical systems, including\nclassic wheeled robot models. To address this negative result, we propose novel\nk-d tree build and query strategies tailored to sub-Riemannian metrics and\ndemonstrate significant improvements in the running time of nearest-neighbor\nsearch queries.\n",
"title": "Efficient Nearest-Neighbor Search for Dynamical Systems with Nonholonomic Constraints"
} | null | null | [
"Computer Science"
]
| null | true | null | 3048 | null | Validated | null | null |
null | {
"abstract": " A minimal deterministic finite automaton (DFA) is uniformly minimal if it\nalways remains minimal when the final state set is replaced by a non-empty\nproper subset of the state set. We prove that a permutation DFA is uniformly\nminimal if and only if its transition monoid is a primitive group. We use this\nto study boolean operations on group languages, which are recognized by direct\nproducts of permutation DFAs. A direct product cannot be uniformly minimal,\nexcept in the trivial case where one of the DFAs in the product is a one-state\nDFA. However, non-trivial direct products can satisfy a weaker condition we\ncall uniform boolean minimality, where only final state sets used to recognize\nboolean operations are considered. We give sufficient conditions for a direct\nproduct of two DFAs to be uniformly boolean minimal, which in turn gives\nsufficient conditions for pairs of group languages to have maximal state\ncomplexity under all binary boolean operations (\"maximal boolean complexity\").\nIn the case of permutation DFAs with one final state, we give necessary and\nsufficient conditions for pairs of group languages to have maximal boolean\ncomplexity. Our results demonstrate a connection between primitive groups and\nautomata with strong minimality properties.\n",
"title": "Primitivity, Uniform Minimality and State Complexity of Boolean Operations"
} | null | null | null | null | true | null | 3049 | null | Default | null | null |
null | {
"abstract": " Applications for deep learning and big data analytics have compute and memory\nrequirements that exceed the limits of a single GPU. However, effectively\nscaling out an application to multiple GPUs is challenging due to the\ncomplexities of communication between the GPUs, particularly for collective\ncommunication with irregular message sizes. In this work, we provide a\nperformance evaluation of the Allgatherv routine on multi-GPU systems, focusing\non GPU network topology and the communication library used. We present results\nfrom the OSU-micro benchmark as well as conduct a case study for sparse tensor\nfactorization, one application that uses Allgatherv with highly irregular\nmessage sizes. We extend our existing tensor factorization tool to run on\nsystems with different node counts and varying number of GPUs per node. We then\nevaluate the communication performance of our tool when using traditional MPI,\nCUDA-aware MVAPICH and NCCL across a suite of real-world data sets on three\ndifferent systems: a 16-node cluster with one GPU per node, NVIDIA's DGX-1 with\n8 GPUs and Cray's CS-Storm with 16 GPUs. Our results show that irregularity in\nthe tensor data sets produce trends that contradict those in the OSU\nmicro-benchmark, as well as trends that are absent from the benchmark.\n",
"title": "An Empirical Evaluation of Allgatherv on Multi-GPU Systems"
} | null | null | [
"Computer Science"
]
| null | true | null | 3050 | null | Validated | null | null |
null | {
"abstract": " In cryptography, block ciphers are the most fundamental elements in many\nsymmetric-key encryption systems. The Cipher Block Chaining, denoted CBC,\npresents one of the most famous mode of operation that uses a block cipher to\nprovide confidentiality or authenticity. In this research work, we intend to\nsummarize our results that have been detailed in our previous series of\narticles. The goal of this series has been to obtain a complete topological\nstudy of the CBC block cipher mode of operation after proving his chaotic\nbehavior according to the reputed definition of Devaney.\n",
"title": "Summary of Topological Study of Chaotic CBC Mode of Operation"
} | null | null | null | null | true | null | 3051 | null | Default | null | null |
null | {
"abstract": " We present numerical studies of two photonic crystal membrane microcavities,\na short line-defect cavity with relatively low quality ($Q$) factor and a\nlonger cavity with high $Q$. We use five state-of-the-art numerical simulation\ntechniques to compute the cavity $Q$ factor and the resonance wavelength\n$\\lambda$ for the fundamental cavity mode in both structures. For each method,\nthe relevant computational parameters are systematically varied to estimate the\ncomputational uncertainty. We show that some methods are more suitable than\nothers for treating these challenging geometries.\n",
"title": "Benchmarking five numerical simulation techniques for computing resonance wavelengths and quality factors in photonic crystal membrane line defect cavities"
} | null | null | null | null | true | null | 3052 | null | Default | null | null |
null | {
"abstract": " This paper is about an extension of monadic second-order logic over infinite\ntrees, which adds a quantifier that says \"the set of branches \\pi which satisfy\na formula \\phi(\\pi) has probability one\". This logic was introduced by\nMichalewski and Mio; we call it MSO+nabla following Shelah and Lehmann. The\nlogic MSO+nabla subsumes many qualitative probabilistic formalisms, including\nqualitative probabilistic CTL, probabilistic LTL, or parity tree automata with\nprobabilistic acceptance conditions. We consider the decision problem: decide\nif a sentence of MSO+nabla is true in the infinite binary tree? For sentences\nfrom the weak variant of this logic (set quantifiers range only over finite\nsets) the problem was known to be decidable, but the question for the full\nlogic remained open. In this paper we show that the problem for the full logic\nMSO+nabla is undecidable.\n",
"title": "MSO+nabla is undecidable"
} | null | null | null | null | true | null | 3053 | null | Default | null | null |
null | {
"abstract": " Calcium imaging data promises to transform the field of neuroscience by\nmaking it possible to record from large populations of neurons simultaneously.\nHowever, determining the exact moment in time at which a neuron spikes, from a\ncalcium imaging data set, amounts to a non-trivial deconvolution problem which\nis of critical importance for downstream analyses. While a number of\nformulations have been proposed for this task in the recent literature, in this\npaper we focus on a formulation recently proposed in Jewell and Witten (2017)\nwhich has shown initial promising results. However, this proposal is slow to\nrun on fluorescence traces of hundreds of thousands of timesteps.\nHere we develop a much faster online algorithm for solving the optimization\nproblem of Jewell and Witten (2017) that can be used to deconvolve a\nfluorescence trace of 100,000 timesteps in less than a second. Furthermore,\nthis algorithm overcomes a technical challenge of Jewell and Witten (2017) by\navoiding the occurrence of so-called \"negative\" spikes. We demonstrate that\nthis algorithm has superior performance relative to existing methods for spike\ndeconvolution on calcium imaging datasets that were recently released as part\nof the spikefinder challenge (this http URL).\nOur C++ implementation, along with R and python wrappers, is publicly\navailable on Github at this https URL.\n",
"title": "Fast Nonconvex Deconvolution of Calcium Imaging Data"
} | null | null | null | null | true | null | 3054 | null | Default | null | null |
null | {
"abstract": " Sequential decision making problems, such as structured prediction, robotic\ncontrol, and game playing, require a combination of planning policies and\ngeneralisation of those plans. In this paper, we present Expert Iteration\n(ExIt), a novel reinforcement learning algorithm which decomposes the problem\ninto separate planning and generalisation tasks. Planning new policies is\nperformed by tree search, while a deep neural network generalises those plans.\nSubsequently, tree search is improved by using the neural network policy to\nguide search, increasing the strength of new plans. In contrast, standard deep\nReinforcement Learning algorithms rely on a neural network not only to\ngeneralise plans, but to discover them too. We show that ExIt outperforms\nREINFORCE for training a neural network to play the board game Hex, and our\nfinal tree search agent, trained tabula rasa, defeats MoHex 1.0, the most\nrecent Olympiad Champion player to be publicly released.\n",
"title": "Thinking Fast and Slow with Deep Learning and Tree Search"
} | null | null | null | null | true | null | 3055 | null | Default | null | null |
null | {
"abstract": " Patient-specific cranial implants are important and necessary in the surgery\nof cranial defect restoration. However, traditional methods of manual design of\ncranial implants are complicated and time-consuming. Our purpose is to develop\na novel software named EasyCrania to design the cranial implants conveniently\nand efficiently. The process can be divided into five steps, which are\nmirroring model, clipping surface, surface fitting, the generation of the\ninitial implant and the generation of the final implant. The main concept of\nour method is to use the geometry information of the mirrored model as the base\nto generate the final implant. The comparative studies demonstrated that the\nEasyCrania can improve the efficiency of cranial implant design significantly.\nAnd, the intra- and inter-rater reliability of the software were stable, which\nwere 87.07+/-1.6% and 87.73+/-1.4% respectively.\n",
"title": "Computer-aided implant design for the restoration of cranial defects"
} | null | null | null | null | true | null | 3056 | null | Default | null | null |
null | {
"abstract": " Information concentration of probability measures have important implications\nin learning theory. Recently, it is discovered that the information content of\na log-concave distribution concentrates around their differential entropy,\nalbeit with an unpleasant dependence on the ambient dimension. In this work, we\nprove that if the potentials of the log-concave distribution are exp-concave,\nwhich is a central notion for fast rates in online and statistical learning,\nthen the concentration of information can be further improved to depend only on\nthe exp-concavity parameter, and hence, it can be dimension independent.\nCentral to our proof is a novel yet simple application of the variance\nBrascamp-Lieb inequality. In the context of learning theory, our\nconcentration-of-information result immediately implies high-probability\nresults to many of the previous bounds that only hold in expectation.\n",
"title": "Dimension-free Information Concentration via Exp-Concavity"
} | null | null | null | null | true | null | 3057 | null | Default | null | null |
null | {
"abstract": " In this paper we will give an account of Dan's reduction method for reducing\nthe weight $ n $ multiple logarithm $ I_{1,1,\\ldots,1}(x_1, x_2, \\ldots, x_n) $\nto an explicit sum of lower depth multiple polylogarithms in $ \\leq n - 2 $\nvariables.\nWe provide a detailed explanation of the method Dan outlines, and we fill in\nthe missing proofs for Dan's claims. This establishes the validity of the\nmethod itself, and allows us to produce a corrected version of Dan's reduction\nof $ I_{1,1,1,1} $ to $ I_{3,1} $'s and $ I_4 $'s. We then use the symbol of\nmultiple polylogarithms to answer Dan's question about how this reduction\ncompares with his earlier reduction of $ I_{1,1,1,1} $, and his question about\nthe nature of the resulting functional equation of $ I_{3,1} $.\nFinally, we apply the method to $ I_{1,1,1,1,1} $ at weight 5 to first\nproduce a reduction to depth $ \\leq 3 $ integrals. Using some functional\nequations from our thesis, we further reduce this to $ I_{3,1,1} $, $ I_{3,2} $\nand $ I_5 $, modulo products. We also see how to reduce $ I_{3,1,1} $ to $\nI_{3,2} $, modulo $ \\delta $ (modulo products and depth 1 terms), and indicate\nhow this allows us to reduce $ I_{1,1,1,1,1} $ to $ I_{3,2} $'s only, modulo $\n\\delta $.\n",
"title": "A review of Dan's reduction method for multiple polylogarithms"
} | null | null | null | null | true | null | 3058 | null | Default | null | null |
null | {
"abstract": " Compression of Neural Networks (NN) has become a highly studied topic in\nrecent years. The main reason for this is the demand for industrial scale usage\nof NNs such as deploying them on mobile devices, storing them efficiently,\ntransmitting them via band-limited channels and most importantly doing\ninference at scale. In this work, we propose to join the Soft-Weight Sharing\nand Variational Dropout approaches that show strong results to define a new\nstate-of-the-art in terms of model compression.\n",
"title": "Improved Bayesian Compression"
} | null | null | [
"Statistics"
]
| null | true | null | 3059 | null | Validated | null | null |
null | {
"abstract": " In this paper, we propose a method of designing low-dimensional retrofit\ncontrollers for interconnected linear systems. In the proposed method, by\nretrofitting an additional low-dimensional controller to a preexisting control\nsystem, we aim at improving transient responses caused by spatially local state\ndeflections, which can be regarded as a local fault occurring at a specific\nsubsystem. It is found that a type of state-space expansion, called\nhierarchical state-space expansion, is the key to systematically designing a\nlow-dimensional retrofit controller, whose action is specialized to controlling\nthe corresponding subsystem. Furthermore, the state-space expansion enables\ntheoretical clarification of the fact that the performance index of the\ntransient response control is improved by appropriately tuning the retrofit\ncontroller. The efficiency of the proposed method is shown through a motivating\nexample of power system control where we clarify the trade-off relation between\nthe dimension of a retrofit controller and its control performance.\n",
"title": "Transient Response Improvement for Interconnected Linear Systems: Low-Dimensional Controller Retrofit Approach"
} | null | null | null | null | true | null | 3060 | null | Default | null | null |
null | {
"abstract": " Modern large scale machine learning applications require stochastic\noptimization algorithms to be implemented on distributed computational\narchitectures. A key bottleneck is the communication overhead for exchanging\ninformation such as stochastic gradients among different workers. In this\npaper, to reduce the communication cost we propose a convex optimization\nformulation to minimize the coding length of stochastic gradients. To solve the\noptimal sparsification efficiently, several simple and fast algorithms are\nproposed for approximate solution, with theoretical guaranteed for sparseness.\nExperiments on $\\ell_2$ regularized logistic regression, support vector\nmachines, and convolutional neural networks validate our sparsification\napproaches.\n",
"title": "Gradient Sparsification for Communication-Efficient Distributed Optimization"
} | null | null | null | null | true | null | 3061 | null | Default | null | null |
null | {
"abstract": " Bacterial communities have rich social lives. A well-established interaction\ninvolves the exchange of a public good in Pseudomonas populations, where the\niron-scavenging compound pyoverdine, synthesized by some cells, is shared with\nthe rest. Pyoverdine thus mediates interactions between producers and\nnon-producers and can constitute a public good. This interaction is often used\nto test game theoretical predictions on the \"social dilemma\" of producers. Such\nan approach, however, underestimates the impact of specific properties of the\npublic good, for example consequences of its accumulation in the environment.\nHere, we experimentally quantify costs and benefits of pyoverdine production in\na specific environment, and build a model of population dynamics that\nexplicitly accounts for the changing significance of accumulating pyoverdine as\nchemical mediator of social interactions. The model predicts that, in an\nensemble of growing populations (metapopulation) with different initial\nproducer fractions (and consequently pyoverdine contents), the global producer\nfraction initially increases. Because the benefit of pyoverdine declines at\nsaturating concentrations, the increase need only be transient. Confirmed by\nexperiments on metapopulations, our results show how a changing benefit of a\npublic good can shape social interactions in a bacterial population.\n",
"title": "Interactions mediated by a public good transiently increase cooperativity in growing Pseudomonas putida metapopulations"
} | null | null | [
"Quantitative Biology"
]
| null | true | null | 3062 | null | Validated | null | null |
null | {
"abstract": " We study Frobenius extensions which are free-filtered by a totally ordered,\nfinitely generated abelian group, and their free-graded counterparts. First we\nshow that the Frobenius property passes up from a free-graded extension to a\nfree-filtered extension, then also from a free-filtered extension to the\nextension of their Rees algebras. Our main theorem states that, under some\nnatural hypotheses, a free-filtered extension of algebras is Frobenius if and\nonly if the associated graded extension is Frobenius. In the final section we\napply this theorem to provide new examples and non-examples of Frobenius\nextensions.\n",
"title": "Transfer results for Frobenius extensions"
} | null | null | null | null | true | null | 3063 | null | Default | null | null |
null | {
"abstract": " We consider the Burgers equation posed on the outer communication region of a\nSchwarzschild black hole spacetime. Assuming spherical symmetry for the fluid\nflow under consideration, we study the propagation and interaction of shock\nwaves under the effect of random forcing. First of all, considering the initial\nand boundary value problem with boundary data prescribed in the vicinity of the\nhorizon, we establish a generalization of the Hopf--Lax--Oleinik formula, which\ntakes the curved geometry into account and allows us to establish the existence\nof bounded variation solutions. To this end, we analyze the global behavior of\nthe characteristic curves in the Schwarzschild geometry, including their\nbehavior near the black hole horizon. In a second part, we investigate the\nlong-term statistical properties of solutions when a random forcing is imposed\nnear the black hole horizon and study the ergodicity of the fluid flow under\nconsideration. We prove the existence of a random global attractor and, for the\nBurgers equation outside of a Schwarzschild black hole, we are able to validate\nthe so-called `one-force-one-solution' principle. Furthermore, all of our\nresults are also established for a pressureless Euler model which consists of\ntwo balance laws and includes a transport equation satisfied by the integrated\nfluid density.\n",
"title": "Ergodicity of spherically symmetric fluid flows outside of a Schwarzschild black hole with random boundary forcing"
} | null | null | null | null | true | null | 3064 | null | Default | null | null |
null | {
"abstract": " We describe a Markov latent state space (MLSS) model, where the latent state\ndistribution is a decaying mixture over multiple past states. We present a\nsimple sampling algorithm that allows to approximate such high-order MLSS with\nfixed time and memory costs.\n",
"title": "Recency-weighted Markovian inference"
} | null | null | null | null | true | null | 3065 | null | Default | null | null |
null | {
"abstract": " A new characterization of CMO(R^n) is established by the local mean\noscillation. Some characterizations of iterated compact commutators on weighted\nLebesgue spaces are given, which are new even in the unweighted setting for the\nfirst order commutators.\n",
"title": "A revisit on the compactness of commutators"
} | null | null | null | null | true | null | 3066 | null | Default | null | null |
null | {
"abstract": " We show that the task of question answering (QA) can significantly benefit\nfrom the transfer learning of models trained on a different large, fine-grained\nQA dataset. We achieve the state of the art in two well-studied QA datasets,\nWikiQA and SemEval-2016 (Task 3A), through a basic transfer learning technique\nfrom SQuAD. For WikiQA, our model outperforms the previous best model by more\nthan 8%. We demonstrate that finer supervision provides better guidance for\nlearning lexical and syntactic information than coarser supervision, through\nquantitative results and visual analysis. We also show that a similar transfer\nlearning procedure achieves the state of the art on an entailment task.\n",
"title": "Question Answering through Transfer Learning from Large Fine-grained Supervision Data"
} | null | null | null | null | true | null | 3067 | null | Default | null | null |
null | {
"abstract": " We define a map $f\\colon X\\to Y$ to be a phantom map relative to a map\n$\\varphi\\colon B\\to Y$ if the restriction of $f$ to any finite dimensional\nskeleton of $X$ lifts to $B$ through $\\varphi$, up to homotopy. There are two\nkinds of maps which are obviously relative phantom maps: (1) the composite of a\nmap $X\\to B$ with $\\varphi$; (2) a usual phantom map $X\\to Y$. A relative\nphantom map of type (1) is called trivial, and a relative phantom map out of a\nsuspension which is a sum of (1) and (2) is called relatively trivial. We study\nthe (relative) triviality of relative phantom maps from a suspension, and in\nparticular, we give rational homotopy conditions for the (relative) triviality.\nWe also give a rational homotopy condition for the triviality of relative\nphantom maps from a non-suspension to a finite Postnikov section.\n",
"title": "Relative phantom maps"
} | null | null | [
"Mathematics"
]
| null | true | null | 3068 | null | Validated | null | null |
null | {
"abstract": " The current ISO standards pertaining to the Concepts of System and\nArchitecture express succinct definitions of these two key terms that lend\nthemselves to practical application and can be understood through elementary\nmathematical foundations. The current work of the ISO/IEC Working Group 42 is\nseeking to refine and elaborate the existing standards. This position paper\nrevisits the fundamental concepts underlying both of these key terms and offers\nan approach to: (i) refine and exemplify the term 'fundamental concepts' in the\ncurrent ISO definition of Architecture, (ii) exploit existing standards for the\nterm 'concept', and (iii) introduce a new concept, Architectural Structure,\nthat can serve to unify the current terminology at a fundamental level. Precise\nelementary examples are used in to conceptualise the approach offered.\n",
"title": "Concepts of Architecture, Structure and System"
} | null | null | null | null | true | null | 3069 | null | Default | null | null |
null | {
"abstract": " We prove two main results concerning mesoprimary decomposition of monoid\ncongruences, as introduced by Kahle and Miller. First, we identify which\nassociated prime congruences appear in every mesoprimary decomposition, thereby\ncompleting the theory of mesoprimary decomposition of monoid congruences as a\nmore faithful analog of primary decomposition. Second, we answer a question\nposed by Kahle and Miller by characterizing which finite posets arise as the\nset of associated prime congruences of monoid congruences.\n",
"title": "On mesoprimary decomposition of monoid congruences"
} | null | null | [
"Mathematics"
]
| null | true | null | 3070 | null | Validated | null | null |
null | {
"abstract": " Emerging economies frequently show a large component of their Gross Domestic\nProduct to be dependant on the economic activity of small and medium\nenterprises. Nevertheless, e-business solutions are more likely designed for\nlarge companies. SMEs seem to follow a classical family-based management, used\nto traditional activities, rather than seeking new ways of adding value to\ntheir business strategy. Thus, a large portion of a nations economy may be at\ndisadvantage for competition. This paper aims at assessing the state of\ne-business readiness of Mexican SMEs based on already published e-business\nevolution models and by means of a survey research design. Data is being\ncollected in three cities with differing sizes and infrastructure conditions.\nStatistical results are expected to be presented. A second part of this\nresearch aims at applying classical adoption models to suggest potential causal\nrelationships, as well as more suitable recommendations for development.\n",
"title": "Assessing the state of e-Readiness for Small and Medium Companies in Mexico: a Proposed Taxonomy and Adoption Model"
} | null | null | null | null | true | null | 3071 | null | Default | null | null |
null | {
"abstract": " We find that cusp densities of hyperbolic knots in the 3-sphere are dense in\n[0,0.6826...] and those of links are dense in [0,0.853...]. We define a new\ninvariant associated with cusp volume, the cusp crossing density, as the ratio\nbetween the cusp volume and the crossing number of a link, and show that cusp\ncrossing density for links is bounded above by 3.1263.... Moreover, there is a\nsequence of links with cusp crossing density approaching 3. The least upper\nbound for cusp crossing density remains an open question. For two-component\nhyperbolic links, cusp crossing density is shown to be dense in the interval\n[0,1.6923...] and for all hyperbolic links, cusp crossing density is shown to\nbe dense in [0, 2.120...].\n",
"title": "Densities of Hyperbolic Cusp Invariants"
} | null | null | null | null | true | null | 3072 | null | Default | null | null |
null | {
"abstract": " Accurate on-device keyword spotting (KWS) with low false accept and false\nreject rate is crucial to customer experience for far-field voice control of\nconversational agents. It is particularly challenging to maintain low false\nreject rate in real world conditions where there is (a) ambient noise from\nexternal sources such as TV, household appliances, or other speech that is not\ndirected at the device (b) imperfect cancellation of the audio playback from\nthe device, resulting in residual echo, after being processed by the Acoustic\nEcho Cancellation (AEC) system. In this paper, we propose a data augmentation\nstrategy to improve keyword spotting performance under these challenging\nconditions. The training set audio is artificially corrupted by mixing in music\nand TV/movie audio, at different signal to interference ratios. Our results\nshow that we get around 30-45% relative reduction in false reject rates, at a\nrange of false alarm rates, under audio playback from such devices.\n",
"title": "Data Augmentation for Robust Keyword Spotting under Playback Interference"
} | null | null | [
"Statistics"
]
| null | true | null | 3073 | null | Validated | null | null |
null | {
"abstract": " This paper presents a new compact canonical-based algorithm to solve the\nproblem of single-output completely specified NPN Boolean matching. We propose\na new signature vector Boolean difference and cofactor (DC) signature vector.\nOur algorithm utilizes the Boolean difference, cofactor signature and symmetry\nproperties to search for canonical transformations. The use of symmetry and\nBoolean difference notably reduces the search space and speeds up the Boolean\nmatching process compared to the algorithm proposed in [1]. We tested our\nalgorithm on a large number of circuits. The experimental results showed that\nthe average runtime of our algorithm 37% higher and its average search space\n67% smaller compared to [1] when tested on general circuits.\n",
"title": "A Canonical-based NPN Boolean Matching Algorithm Utilizing Boolean Difference and Cofactor Signature"
} | null | null | null | null | true | null | 3074 | null | Default | null | null |
null | {
"abstract": " We generalize the natural cross ratio on the ideal boundary of a rank one\nsymmetric spaces, or even $\\mathrm{CAT}(-1)$ space, to higher rank symmetric\nspaces and (non-locally compact) Euclidean buildings - we obtain vector valued\ncross ratios defined on simplices of the building at infinity. We show several\nproperties of those cross ratios; for example that (under some restrictions)\nperiods of hyperbolic isometries give back the translation vector. In addition,\nwe show that cross ratio preserving maps on the chamber set are induced by\nisometries and vice versa - motivating that the cross ratios bring the geometry\nof the symmetric space/Euclidean building to the boundary.\n",
"title": "Cross ratios on boundaries of symmetric spaces and Euclidean buildings"
} | null | null | null | null | true | null | 3075 | null | Default | null | null |
null | {
"abstract": " In our previous work, we studied an interconnected bursting neuron model for\ninsect locomotion, and its corresponding phase oscillator model, which at high\nspeed can generate stable tripod gaits with three legs off the ground\nsimultaneously in swing, and at low speed can generate stable tetrapod gaits\nwith two legs off the ground simultaneously in swing. However, at low speed\nseveral other stable locomotion patterns, that are not typically observed as\ninsect gaits, may coexist. In the present paper, by adding heterogeneous\nexternal input to each oscillator, we modify the bursting neuron model so that\nits corresponding phase oscillator model produces only one stable gait at each\nspeed, specifically: a unique stable tetrapod gait at low speed, a unique\nstable tripod gait at high speed, and a unique branch of stable transition\ngaits connecting them. This suggests that control signals originating in the\nbrain and central nervous system can modify gait patterns.\n",
"title": "Heterogeneous inputs to central pattern generators can shape insect gaits"
} | null | null | null | null | true | null | 3076 | null | Default | null | null |
null | {
"abstract": " We introduce the multiplexing of a crossing, replacing a classical crossing\nof a virtual link diagram with multiple crossings which is a mixture of\nclassical and virtual. For integers $m_{i}$ $(i=1,\\ldots,n)$ and an ordered\n$n$-component virtual link diagram $D$, a new virtual link diagram\n$D(m_{1},\\ldots,m_{n})$ is obtained from $D$ by the multiplexing of all\ncrossings. For welded isotopic virtual link diagrams $D$ and $D'$,\n$D(m_{1},\\ldots,m_{n})$ and $D'(m_{1},\\ldots,m_{n})$ are welded isotopic. From\nthe point of view of classical link theory, it seems very interesting that\n$D(m_{1},\\ldots,m_{n})$ could not be welded isotopic to a classical link\ndiagram even if $D$ is a classical one, and new classical link invariants are\nexpected from known welded link invariants via the multiplexing of crossings.\n",
"title": "Link invariants derived from multiplexing of crossings"
} | null | null | [
"Mathematics"
]
| null | true | null | 3077 | null | Validated | null | null |
null | {
"abstract": " This paper presents the development of an Adaptive Algebraic Multiscale\nSolver for Compressible flow (C-AMS) in heterogeneous porous media. Similar to\nthe recently developed AMS for incompressible (linear) flows [Wang et al., JCP,\n2014], C-AMS operates by defining primal and dual-coarse blocks on top of the\nfine-scale grid. These coarse grids facilitate the construction of a\nconservative (finite volume) coarse-scale system and the computation of local\nbasis functions, respectively. However, unlike the incompressible (elliptic)\ncase, the choice of equations to solve for basis functions in compressible\nproblems is not trivial. Therefore, several basis function formulations\n(incompressible and compressible, with and without accumulation) are considered\nin order to construct an efficient multiscale prolongation operator. As for the\nrestriction operator, C-AMS allows for both multiscale finite volume (MSFV) and\nfinite element (MSFE) methods. Finally, in order to resolve high-frequency\nerrors, fine-scale (pre- and post-) smoother stages are employed. In order to\nreduce computational expense, the C-AMS operators (prolongation, restriction,\nand smoothers) are updated adaptively. In addition to this, the linear system\nin the Newton-Raphson loop is infrequently updated. Systematic numerical\nexperiments are performed to determine the effect of the various options,\noutlined above, on the C-AMS convergence behaviour. An efficient C-AMS strategy\nfor heterogeneous 3D compressible problems is developed based on overall CPU\ntimes. Finally, C-AMS is compared against an industrial-grade Algebraic\nMultiGrid (AMG) solver. Results of this comparison illustrate that the C-AMS is\nquite efficient as a nonlinear solver, even when iterated to machine accuracy.\n",
"title": "Adaptive Algebraic Multiscale Solver for Compressible Flow in Heterogeneous Porous Media"
} | null | null | null | null | true | null | 3078 | null | Default | null | null |
null | {
"abstract": " We investigate the automatic differentiation of hybrid models, viz. models\nthat may contain delays, logical tests and discontinuities or loops. We\nconsider differentiation with respect to parameters, initial conditions or the\ntime. We emphasize the case of a small number of derivations and iterated\ndifferentiations are mostly treated with a foccus on high order iterations of\nthe same derivation. The models we consider may involve arithmetic operations,\nelementary functions, logical tests but also more elaborate components such as\ndelays, integrators, equations and differential equations solvers. This survey\nhas no pretention to exhaustivity but tries to fil a gap in the litterature\nwhere each kind of of component may be documented, but seldom their common use.\nThe general approach is illustrated by computer algebra experiments,\nstressing the interest of performing differentiation, whenever possible, on\nhigh level objects, before any translation in Fortran or C code. We include\nordinary differential systems with discontinuity, with a special interest for\nthose comming from discontinuous Lagrangians.\nWe conclude with an overview of the graphic methodology developped in the\nDiffedge software for Simulink hybrid models. Not all possibilities are\ncovered, but the methodology can be adapted. The result of automatic\ndifferentiation is a new block diagram and so it can be easily translated to\nproduce real time embedded programs.\nWe welcome any comments or suggestions of references that we may have missed.\n",
"title": "Automatic Differentiation of Hybrid Models Illustrated by the Diffedge Graphic Methodology (Survey)"
} | null | null | null | null | true | null | 3079 | null | Default | null | null |
null | {
"abstract": " Initial RV characterisation of the enigmatic planet Kepler-10c suggested a\nmass of $\\sim17$ M$_\\oplus$, which was remarkably high for a planet with radius\n$2.32$ R$_\\oplus$; further observations and subsequent analysis hinted at a\n(possibly much) lower mass, but masses derived using RVs from two different\nspectrographs (HARPS-N and HIRES) were incompatible at a $3\\sigma$-level. We\ndemonstrate here how such mass discrepancies may readily arise from sub-optimal\nsampling and/or neglecting to model even a single coherent signal (stellar,\nplanetary, or otherwise) that may be present in RVs. We then present a\nplausible resolution of the mass discrepancy, and ultimately characterise\nKepler-10c as having mass $7.37_{-1.19}^{+1.32}$ M$_\\oplus$, and mean density\n$3.14^{+0.63}_{-0.55}$ g cm$^{-3}$.\n",
"title": "Pinning down the mass of Kepler-10c: the importance of sampling and model comparison"
} | null | null | null | null | true | null | 3080 | null | Default | null | null |
null | {
"abstract": " Building effective, enjoyable, and safe autonomous vehicles is a lot harder\nthan has historically been considered. The reason is that, simply put, an\nautonomous vehicle must interact with human beings. This interaction is not a\nrobotics problem nor a machine learning problem nor a psychology problem nor an\neconomics problem nor a policy problem. It is all of these problems put into\none. It challenges our assumptions about the limitations of human beings at\ntheir worst and the capabilities of artificial intelligence systems at their\nbest. This work proposes a set of principles for designing and building\nautonomous vehicles in a human-centered way that does not run away from the\ncomplexity of human nature but instead embraces it. We describe our development\nof the Human-Centered Autonomous Vehicle (HCAV) as an illustrative case study\nof implementing these principles in practice.\n",
"title": "Human-Centered Autonomous Vehicle Systems: Principles of Effective Shared Autonomy"
} | null | null | null | null | true | null | 3081 | null | Default | null | null |
null | {
"abstract": " A periodic array of atomic sites, described within a tight binding formalism,\nis shown to be capable of trapping electronic states as it grows in size and\ngets stubbed by an atom or an atomic cluster from a side in a deterministic\nway. We prescribe a method, based on a real space renormalization group\nscheme, that unravels a subtle correlation between the positions of the\nside-coupled atoms and the energy eigenvalues for which the incoming particle\nfinally gets trapped. We discuss how, in such conditions, the periodic backbone\ngets transformed into an array of infinite quantum wells in the thermodynamic\nlimit. We present a case here where the wells have a hierarchical distribution\nof widths, housing standing wave solutions in the thermodynamic limit.\n",
"title": "Controlled trapping of single particle states on a periodic substrate by deterministic stubbing"
} | null | null | null | null | true | null | 3082 | null | Default | null | null |
null | {
"abstract": " This paper proposes a deep cerebellar model articulation controller (DCMAC)\nfor adaptive noise cancellation (ANC). We expand upon the conventional CMAC by\nstacking single-layer CMAC models into multiple layers to form a DCMAC model\nand derive a modified backpropagation training algorithm to learn the DCMAC\nparameters. Compared with the conventional CMAC, the DCMAC can characterize\nnonlinear transformations more effectively because of its deep structure.\nExperimental results confirm that the proposed DCMAC model outperforms the\nCMAC in terms of residual noise in an ANC task, showing that DCMAC provides\nenhanced modeling capability based on channel characteristics.\n",
"title": "Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller"
} | null | null | null | null | true | null | 3083 | null | Default | null | null |
null | {
"abstract": " Transient stability simulation of a large-scale and interconnected electric\npower system involves solving a large set of differential algebraic equations\n(DAEs) at every simulation time-step. With the ever-growing size and complexity\nof power grids, dynamic simulation becomes more time-consuming and\ncomputationally difficult using conventional sequential simulation techniques.\nTo cope with this challenge, this paper aims to develop a fully distributed\napproach intended for implementation on High Performance Computer (HPC)\nclusters. A novel, relaxation-based domain decomposition algorithm known as\nParallel-General-Norton with Multiple-port Equivalent (PGNME) is proposed as\nthe core technique of a two-stage decomposition approach to divide the overall\ndynamic simulation problem into a set of subproblems that can be solved\nconcurrently to exploit parallelism and scalability. While the convergence\nproperty has traditionally been a concern for relaxation-based decomposition,\nan estimation mechanism based on multiple-port network equivalent is adopted as\nthe preconditioner to enhance the convergence of the proposed algorithm. The\nproposed algorithm is illustrated using rigorous mathematics and validated both\nin terms of speed-up and capability. Moreover, a complexity analysis is\nperformed to support the observation that PGNME scales well when the sizes of\nthe subproblems are sufficiently large.\n",
"title": "A Relaxation-based Network Decomposition Algorithm for Parallel Transient Stability Simulation with Improved Convergence"
} | null | null | null | null | true | null | 3084 | null | Default | null | null |
null | {
"abstract": " Web portals have served as an excellent medium to facilitate user centric\nservices for organizations irrespective of the type, size, and domain of\noperation. The objective of these portals has been to deliver a plethora of\nservices such as information dissemination, transactional services, and\ncustomer feedback. Therefore, the design of a web portal is crucial in order\nthat it is accessible to a wide range of users irrespective of age group,\nphysical abilities, and level of literacy. In this paper, we have studied the\ncompliance of WCAG 2.0 by three different categories of Indian web sites which\nare most frequently accessed by a large section of the user community. We have\nprovided a quantitative evaluation of different aspects of accessibility which\nwe believe can pave the way for better design of web sites by taking care of\nthe deficiencies inherent in the web portals.\n",
"title": "A Quantitative Analysis of WCAG 2.0 Compliance For Some Indian Web Portals"
} | null | null | null | null | true | null | 3085 | null | Default | null | null |
null | {
"abstract": " The role of portfolio construction in the implementation of equity market\nneutral factors is often underestimated. Taking the classical momentum strategy\nas an example, we show that one can significantly improve the main strategy's\nfeatures by properly taking care of this key step. More precisely, an optimized\nportfolio construction algorithm allows one to significantly improve the Sharpe\nRatio, reduce sector exposures and volatility fluctuations, and mitigate the\nstrategy's skewness and tail correlation with the market. These results are\nsupported by long-term, world-wide simulations and will be shown to be\nuniversal. Our findings are quite general and hold true for a number of other\n\"equity factors\". Finally, we discuss the details of a more realistic set-up\nwhere we also deal with transaction costs.\n",
"title": "Portfolio Construction Matters"
} | null | null | null | null | true | null | 3086 | null | Default | null | null |
null | {
"abstract": " In this paper we propose a function space approach to Representation Learning\nand the analysis of the representation layers in deep learning architectures.\nWe show how to compute a weak-type Besov smoothness index that quantifies the\ngeometry of the clustering in the feature space. This approach was already\napplied successfully to improve the performance of machine learning algorithms\nsuch as the Random Forest and tree-based Gradient Boosting. Our experiments\ndemonstrate that in well-known and well-performing trained networks, the Besov\nsmoothness of the training set, measured in the corresponding hidden layer\nfeature map representation, increases from layer to layer. We also contribute\nto the understanding of generalization by showing how the Besov smoothness of\nthe representations decreases as we add more mislabeling to the training\ndata. We hope this approach will contribute to the demystification of some\naspects of deep learning.\n",
"title": "Function space analysis of deep learning representation layers"
} | null | null | null | null | true | null | 3087 | null | Default | null | null |
null | {
"abstract": " We construct a model of random groups of rank 7/4, and show that in this\nmodel the random group has the exponential mesoscopic rank property.\n",
"title": "Random group cobordisms of rank 7/4"
} | null | null | null | null | true | null | 3088 | null | Default | null | null |
null | {
"abstract": " We theoretically study transport properties in one-dimensional interacting\nquasiperiodic systems at infinite temperature. We compare and contrast the\ndynamical transport properties across the many-body localization (MBL)\ntransition in quasiperiodic and random models. Using exact diagonalization we\ncompute the optical conductivity $\\sigma(\\omega)$ and the return probability\n$R(\\tau)$ and study their average low-frequency and long-time power-law\nbehavior, respectively. We show that the low-energy transport dynamics is\nmarkedly distinct in both the thermal and MBL phases in quasiperiodic and\nrandom models and find that the diffusive and MBL regimes of the quasiperiodic\nmodel are more robust than those in the random system. Using the distribution\nof the DC conductivity, we quantify the contribution of sample-to-sample and\nstate-to-state fluctuations of $\\sigma(\\omega)$ across the MBL transition. We\nfind that the activated dynamical scaling ansatz works poorly in the\nquasiperiodic model but holds in the random model with an estimated activation\nexponent $\\psi\\approx 0.9$. We argue that near the MBL transition in\nquasiperiodic systems, critical eigenstates give rise to a subdiffusive\ncrossover regime on finite-size systems.\n",
"title": "Transport properties across the many-body localization transition in quasiperiodic and random systems"
} | null | null | null | null | true | null | 3089 | null | Default | null | null |
null | {
"abstract": " The lasso and elastic net linear regression models impose a\ndouble-exponential prior distribution on the model parameters to achieve\nregression shrinkage and variable selection, allowing the inference of robust\nmodels from large data sets. However, there has been limited success in\nderiving estimates for the full posterior distribution of regression\ncoefficients in these models, due to a need to evaluate analytically\nintractable partition function integrals. Here, the Fourier transform is used\nto express these integrals as complex-valued oscillatory integrals over\n\"regression frequencies\". This results in an analytic expansion and stationary\nphase approximation for the partition functions of the Bayesian lasso and\nelastic net, where the non-differentiability of the double-exponential prior\nhas so far eluded such an approach. Use of this approximation leads to highly\naccurate numerical estimates for the expectation values and marginal posterior\ndistributions of the regression coefficients, and allows for Bayesian inference\nof much higher dimensional models than previously possible.\n",
"title": "Analytic solution and stationary phase approximation for the Bayesian lasso and elastic net"
} | null | null | null | null | true | null | 3090 | null | Default | null | null |
null | {
"abstract": " The El Niño-Southern Oscillation (ENSO) is a mode of interannual\nvariability in the coupled equatorial Pacific atmosphere/ocean system.\nEl Niño describes a state in which sea surface temperatures in the eastern\nPacific increase and upwelling of colder, deep waters diminishes. El Niño\nevents typically peak in boreal winter, but their strength varies irregularly\non decadal time scales. There were exceptionally strong El Niño events in\n1982-83, 1997-98 and 2015-16 that affected weather on a global scale. Widely\npublicized forecasts in 2014 predicted that the 2015-16 event would occur a\nyear earlier. Predicting the strength of El Niño is a matter of practical\nconcern due to its effects on hydroclimate and agriculture around the world.\nThis paper discusses the frequency and regularity of strong El Niño events in\nthe context of chaotic dynamical systems. We discover a mechanism that limits\ntheir predictability in a conceptual \"recharge oscillator\" model of ENSO. Weak\nseasonal forcing or noise in this model can induce irregular switching between\nan oscillatory state that has strong El Niño events and a chaotic state that\nlacks strong events. In this regime, the timing of strong El Niño events on\ndecadal time scales is unpredictable.\n",
"title": "(Un)predictability of strong El Niño events"
} | null | null | null | null | true | null | 3091 | null | Default | null | null |
null | {
"abstract": " One of the key challenges for operations researchers solving real-world\nproblems is designing and implementing high-quality heuristics to guide their\nsearch procedures. In the past, machine learning techniques have failed to play\na major role in operations research approaches, especially in terms of guiding\nbranching and pruning decisions. We integrate deep neural networks into a\nheuristic tree search procedure to decide which branch to choose next and to\nestimate a bound for pruning the search tree of an optimization problem. We\ncall our approach Deep Learning assisted heuristic Tree Search (DLTS) and apply\nit to a well-known problem from the container terminals literature, the\ncontainer pre-marshalling problem (CPMP). Our approach is able to learn\nheuristics customized to the CPMP solely through analyzing the solutions to\nCPMP instances, and applies this knowledge within a heuristic tree search to\nproduce the highest quality heuristic solutions to the CPMP to date.\n",
"title": "Deep Learning Assisted Heuristic Tree Search for the Container Pre-marshalling Problem"
} | null | null | null | null | true | null | 3092 | null | Default | null | null |
null | {
"abstract": " The optimization of algorithm (hyper-)parameters is crucial for achieving\npeak performance across a wide range of domains, ranging from deep neural\nnetworks to solvers for hard combinatorial problems. The resulting algorithm\nconfiguration (AC) problem has attracted much attention from the machine\nlearning community. However, the proper evaluation of new AC procedures is\nhindered by two key hurdles. First, AC benchmarks are hard to set up. Second\nand even more significantly, they are computationally expensive: a single run\nof an AC procedure involves many costly runs of the target algorithm whose\nperformance is to be optimized in a given AC benchmark scenario. One common\nworkaround is to optimize cheap-to-evaluate artificial benchmark functions\n(e.g., Branin) instead of actual algorithms; however, these have different\nproperties than realistic AC problems. Here, we propose an alternative\nbenchmarking approach that is similarly cheap to evaluate but much closer to\nthe original AC problem: replacing expensive benchmarks by surrogate benchmarks\nconstructed from AC benchmarks. These surrogate benchmarks approximate the\nresponse surface corresponding to true target algorithm performance using a\nregression model, and the original and surrogate benchmark share the same\n(hyper-)parameter space. In our experiments, we construct and evaluate\nsurrogate benchmarks for hyperparameter optimization as well as for AC problems\nthat involve performance optimization of solvers for hard combinatorial\nproblems, drawing training data from the runs of existing AC procedures. We\nshow that our surrogate benchmarks capture overall important characteristics of\nthe AC scenarios, such as high- and low-performing regions, from which they\nwere derived, while being much easier to use and orders of magnitude cheaper to\nevaluate.\n",
"title": "Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates"
} | null | null | null | null | true | null | 3093 | null | Default | null | null |
null | {
"abstract": " This paper develops detailed mathematical statistical theory of a new class\nof cross-validation techniques of local linear kernel hazards and their\nmultiplicative bias corrections. The new class of cross-validation combines\nprinciples of local information and recent advances in indirect\ncross-validation. A few applications of cross-validating multiplicative kernel\nhazard estimation do exist in the literature. However, detailed mathematical\nstatistical theory and small sample performance are introduced via this paper\nand further upgraded to our new class of best one-sided cross-validation. Best\none-sided cross-validation turns out to have excellent performance in its\npractical illustrations, in its small sample performance and in its\nmathematical statistical theoretical performance.\n",
"title": "Multiplicative local linear hazard estimation and best one-sided cross-validation"
} | null | null | null | null | true | null | 3094 | null | Default | null | null |
null | {
"abstract": " A sum of a large-dimensional random matrix polynomial and a fixed low-rank\nmatrix polynomial is considered. The main assumption is that the resolvent of\nthe random polynomial converges to some deterministic limit. A formula for the\nlimit of the resolvent of the sum is derived and the eigenvalues are localised.\nThree instances are considered: a low-rank matrix perturbed by the Wigner\nmatrix, a product $HX$ of a fixed diagonal matrix $H$ and the Wigner matrix $X$\nand a special matrix polynomial. The results are illustrated with various\nexamples and numerical simulations.\n",
"title": "Random Perturbations of Matrix Polynomials"
} | null | null | null | null | true | null | 3095 | null | Default | null | null |
null | {
"abstract": " The composition of web services is a promising approach enabling flexible and\nloose integration of business applications. Numerous approaches related to web\nservice composition have been developed, usually following three main phases:\nservice discovery is based on the semantic description of advertised services,\ni.e. the functionality of the service; service selection is based on\nnon-functional quality dimensions of the service; and finally service\ncomposition aims to support an underlying process. Most of those approaches\nexplore techniques of static or dynamic design for an optimal service\ncomposition. One important aspect has so far been mostly neglected: the output\nproduced by composite web services. In this paper, in contrast to many\nprominent approaches, we introduce a data quality perspective on web services.\nBased on a data quality management approach, we propose a framework for\nanalyzing data produced by the composite service execution. Utilising process\ninformation together with data in service logs, our approach allows\nidentifying problems in service composition and execution. Analyzing the\nservice execution history, our approach helps to improve common approaches of\nservice selection and composition.\n",
"title": "Monitoring Information Quality within Web Service Composition and Execution"
} | null | null | null | null | true | null | 3096 | null | Default | null | null |
null | {
"abstract": " We present an automatic measurement platform that enables the\ncharacterization of nanodevices by electrical transport and optical\nspectroscopy as a function of uniaxial stress. We provide insights into and\ndetailed descriptions of the mechanical device, the substrate design and\nfabrication, and the instrument control software, which is provided under\nopen-source license. The capability of the platform is demonstrated by\ncharacterizing the piezo-resistance of an InAs nanowire device using a\ncombination of electrical transport and Raman spectroscopy. The advantages of\nthis measurement platform are highlighted by comparison with state-of-the-art\npiezo-resistance measurements in InAs nanowires. We envision that the\nsystematic application of this methodology will provide new insights into the\nphysics of nanoscale devices and novel materials for electronics, and thus\ncontribute to the assessment of the potential of strain as a technology booster\nfor nanoscale electronics.\n",
"title": "An open-source platform to study uniaxial stress effects on nanoscale devices"
} | null | null | null | null | true | null | 3097 | null | Default | null | null |
null | {
"abstract": " In 1991 J.F. Aarnes introduced the concept of quasi-measures in a compact\ntopological space $\\Omega$ and established the connection between quasi-states\non $C (\\Omega)$ and quasi-measures in $\\Omega$. This work solved the linearity\nproblem of quasi-states on $C^*$-algebras formulated by R.V. Kadison in 1965.\nThe answer is that a quasi-state need not be linear, so a quasi-state need not\nbe a state. We introduce nonlinear measures in a space $\\Omega$ which is a\ngeneralization of a measurable space. In this more general setting we are still\nable to define integration and establish a representation theorem for the\ncorresponding functionals. A probabilistic language is chosen since we feel\nthat the subject should be of some interest to probabilists. In particular we\npoint out that the theory allows for incompatible stochastic variables. The\nneed for incompatible variables is well known in quantum mechanics, but the\nneed seems natural also in other contexts, as we try to explain by a questionary\nexample.\nKeywords and phrases: Epistemic probability, Integration with respect to\nmeasures and other set functions, Banach algebras of continuous functions, Set\nfunctions and measures on topological spaces, States, Logical foundations of\nquantum mechanics.\n",
"title": "Nonlinear probability. A theory with incompatible stochastic variables"
} | null | null | null | null | true | null | 3098 | null | Default | null | null |
null | {
"abstract": " We extend the framework for complexity of operators in analysis devised by\nKawamura and Cook (2012) to allow for the treatment of a wider class of\nrepresentations. The main novelty is to endow represented spaces of interest\nwith an additional function on names, called a parameter, which measures the\ncomplexity of a given name. This parameter generalises the size function which\nis usually used in second-order complexity theory and therefore also central to\nthe framework of Kawamura and Cook. The complexity of an algorithm is measured\nin terms of its running time as a second-order function in the parameter, as\nwell as in terms of how much it increases the complexity of a given name, as\nmeasured by the parameters on the input and output side.\nAs an application we develop a rigorous computational complexity theory for\ninterval computation. In the framework of Kawamura and Cook the representation\nof real numbers based on nested interval enclosures does not yield a reasonable\ncomplexity theory. In our new framework this representation is polytime\nequivalent to the usual Cauchy representation based on dyadic rational\napproximation. By contrast, the representation of continuous real functions\nbased on interval enclosures is strictly smaller in the polytime reducibility\nlattice than the usual representation, which encodes a modulus of continuity.\nFurthermore, the function space representation based on interval enclosures is\noptimal in the sense that it contains the minimal amount of information amongst\nthose representations which render evaluation polytime computable.\n",
"title": "Parametrised second-order complexity theory with applications to the study of interval computation"
} | null | null | null | null | true | null | 3099 | null | Default | null | null |
null | {
"abstract": " Deep generative models provide powerful tools for distributions over\ncomplicated manifolds, such as those of natural images. But many of these\nmethods, including generative adversarial networks (GANs), can be difficult to\ntrain, in part because they are prone to mode collapse, which means that they\ncharacterize only a few modes of the true distribution. To address this, we\nintroduce VEEGAN, which features a reconstructor network, reversing the action\nof the generator by mapping from data to noise. Our training objective retains\nthe original asymptotic consistency guarantee of GANs, and can be interpreted\nas a novel autoencoder loss over the noise. In sharp contrast to a traditional\nautoencoder over data points, VEEGAN does not require specifying a loss\nfunction over the data, but rather only over the representations, which are\nstandard normal by assumption. On an extensive set of synthetic and real world\nimage datasets, VEEGAN indeed resists mode collapsing to a far greater extent\nthan other recent GAN variants, and produces more realistic samples.\n",
"title": "VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning"
} | null | null | [
"Statistics"
]
| null | true | null | 3100 | null | Validated | null | null |