Dataset schema (field: type):

text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1 to 5)
metadata: null
status: string (2 values)
event_timestamp: null
metrics: null
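The field list above describes one record per row of the dump below. As a minimal sketch of how such a record might be represented and checked in Python (field names are taken from the schema above; the sample values are illustrative stand-ins in the style of the rows below, not the dataset's actual loading code):

```python
# Sketch of one record following the schema above.
# Field names come from the schema; the example values are hypothetical.

ALLOWED_STATUSES = {"Default", "Validated"}  # the "2 values" seen in the rows below

def validate_record(record):
    """Check a record dict against the schema sketched above."""
    assert isinstance(record["inputs"], dict)            # inputs: dict
    assert "abstract" in record["inputs"]                # rows carry abstract + title
    assert "title" in record["inputs"]
    assert isinstance(record["multi_label"], bool)       # multi_label: bool
    assert record["annotation"] is None or isinstance(record["annotation"], list)
    assert 1 <= len(str(record["id"])) <= 5              # id: string, lengths 1 to 5
    assert record["status"] in ALLOWED_STATUSES          # status: 2 values
    return True

example = {
    "text": None,
    "inputs": {"abstract": "An illustrative abstract.", "title": "An illustrative title"},
    "prediction": None,
    "prediction_agent": None,
    "annotation": ["Physics"],
    "annotation_agent": None,
    "multi_label": True,
    "explanation": None,
    "id": "12401",
    "metadata": None,
    "status": "Validated",
    "event_timestamp": None,
    "metrics": None,
}

print(validate_record(example))  # prints True
```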

text: null
inputs:
{ "abstract": " Homology of braid groups and Artin groups can be related to the study of\nspaces of curves. We completely calculate the integral homology of the family\nof smooth curves of genus $g$ with one boundary component, that are double\ncoverings of the disk ramified over $n = 2g + 1$ points. The main part of such\nhomology is described by the homology of the braid group with coefficients in a\nsymplectic representation, namely the braid group $\\mathrm{Br}_n$ acts on the\nfirst homology group of a genus $g$ surface via Dehn twists. Our computations\nshow that such groups have only $2$-torsion. We also investigate stabilization\nproperties and provide Poincaré series, both for unstable and stable\nhomology.\n", "title": "Homology of the family of hyperelliptic curves" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12401
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This study presents a smoothed particle hydrodynamics (SPH) method with\nPeng-Robinson equation of state for simulating drop vaporization and drop\nimpact on a hot surface. The conservation equations of momentum and energy and\nPeng-Robinson equation of state are applied to describe both the liquid and gas\nphases. The governing equations are solved numerically by the SPH method. The\nphase change between the liquid and gas phases is simulated directly without\nusing any phase change models. The numerical method is validated by comparing\nnumerical results with analytical solutions for the vaporization of n-heptane\ndrops at different temperatures. Using the SPH method, the processes of\nn-heptane drops impacting on a solid wall with different temperatures are\nstudied numerically. The results show that the size of the film formed by drop\nimpact decreases when temperature increases. When the temperature is high\nenough, the drop will rebound.\n", "title": "Simulation of Drop Impact on a Hot Wall using SPH Method with Peng-Robinson Equation of State" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12402
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Ionization by relativistically intense short laser pulses is studied in the\nframework of strong-field quantum electrodynamics. Distinctive patterns are\nfound in the energy probability distributions of photoelectrons. Apart from the\nalready observed patterns, which were studied in Phys. Rev. A {\\bf 94}, 013402\n(2016), we discover an additional interference-free smooth supercontinuum in\nthe high-energy portion of the spectrum, reaching tens of kiloelectronvolts.\nAs we show, the latter is sensitive to the driving field intensity and it can\nbe detected in a narrow polar-angular window. Once these high-energy electrons\nare collected, they can form solitary attosecond pulses. This is particularly\nimportant in light of various applications of attosecond electron beams such as\nin ultrafast electron diffraction and crystallography, or in time-resolved\nelectron microscopy of physical, chemical, and biological processes.\n", "title": "Generation of attosecond electron beams in relativistic ionization by short laser pulses" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12403
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A brane construction of an integrable lattice model is proposed. The model is\ncomposed of Belavin's R-matrix, Felder's dynamical R-matrix, the\nBazhanov-Sergeev-Derkachov-Spiridonov R-operator and some intertwining\noperators. This construction implies that a family of surface defects act on\nsupersymmetric indices of four-dimensional $\\mathcal{N} = 1$ supersymmetric\nfield theories as transfer matrices related to elliptic quantum groups.\n", "title": "Surface defects and elliptic quantum groups" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12404
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Measuring the corporate default risk is broadly important in economics and\nfinance. Quantitative methods have been developed to predictively assess future\ncorporate default probabilities. However, as a more difficult yet crucial\nproblem, evaluating the uncertainties associated with the default predictions\nremains little explored. In this paper, we attempt to fill this blank by\ndeveloping a procedure for quantifying the level of associated uncertainties\nupon carefully disentangling multiple contributing sources. Our framework\neffectively incorporates broad information from historical default data,\ncorporates' financial records, and macroeconomic conditions by a)\ncharacterizing the default mechanism, and b) capturing the future dynamics of\nvarious features contributing to the default mechanism. Our procedure overcomes\nthe major challenges in this large scale statistical inference problem and\nmakes it practically feasible by using parsimonious models, innovative methods,\nand modern computational facilities. By predicting the marketwide total number\nof defaults and assessing the associated uncertainties, our method can also be\napplied for evaluating the aggregated market credit risk level. Upon analyzing\na US market data set, we demonstrate that the level of uncertainties associated\nwith default risk assessments is indeed substantial. More informatively, we\nalso find that the level of uncertainties associated with the default risk\npredictions is correlated with the level of default risks, indicating potential\nfor new scopes in practical applications including improving the accuracy of\ndefault risk assessments.\n", "title": "Disentangling and Assessing Uncertainties in Multiperiod Corporate Default Risk Predictions" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12405
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present results from a 100 ks XMM-Newton observation of galaxy cluster\nXLSSC 122, the first massive cluster discovered through its X-ray emission at\n$z\\approx2$. The data provide the first precise constraints on the bulk\nthermodynamic properties of such a distant cluster, as well as an X-ray\nspectroscopic confirmation of its redshift. We measure an average temperature\nof $kT=5.0\\pm0.7$ keV; a metallicity with respect to solar of\n$Z/Z_{\\odot}=0.33^{+0.19}_{-0.17}$, consistent with lower-redshift clusters;\nand a redshift of $z=1.99^{+0.07}_{-0.06}$, consistent with the earlier photo-z\nestimate. The measured gas density profile leads to a mass estimate at\n$r_{500}$ of $M_{500}=(6.3\\pm1.5)\\times10^{13}M_{\\odot}$. From CARMA 30 GHz\ndata, we measure the spherically integrated Compton parameter within $r_{500}$\nto be $Y_{500}=(3.6\\pm0.4)\\times10^{-12}$. We compare the measured properties\nof XLSSC 122 to lower-redshift cluster samples, and find good agreement when\nassuming the simplest (self-similar) form for the evolution of cluster scaling\nrelations. While a single cluster provides limited information, this result\nsuggests that the evolution of the intracluster medium in the most massive,\nwell developed clusters is remarkably simple, even out to the highest redshifts\nwhere they have been found. At the same time, our data reaffirm the previously\nreported spatial offset between the centers of the X-ray and SZ signals for\nXLSSC 122, suggesting a disturbed configuration. Higher spatial resolution data\ncould thus provide greater insights into the internal dynamics of this system.\n", "title": "The XXL Survey: XVII. X-ray and Sunyaev-Zel'dovich Properties of the Redshift 2.0 Galaxy Cluster XLSSC 122" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 12406
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper, we introduce a rational $\\tau$ invariant for rationally\nnull-homologous knots in contact 3-manifolds with nontrivial\nOzsváth-Szabó contact invariants. Such an invariant is an upper bound\nfor the sum of rational Thurston-Bennequin invariant and the rational rotation\nnumber of the Legendrian representatives of the knot. In the special case of\nFloer simple knots in L-spaces, we can compute the rational $\\tau$ invariants\nby correction terms.\n", "title": "A bound for rational Thurston-Bennequin invariants" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 12407
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We consider the lattice, $\\mathcal{L}$, of all subsets of a multidimensional\ncontingency table and establish the properties of monotonicity and\nsupermodularity for the marginalization function, $n(\\cdot)$, on $\\mathcal{L}$.\nWe derive from the supermodularity of $n(\\cdot)$ some generalized Fréchet\ninequalities complementing and extending inequalities of Dobra and Fienberg.\nFurther, we construct new monotonic and supermodular functions from $n(\\cdot)$,\nand we remark on the connection between supermodularity and some correlation\ninequalities for probability distributions on lattices. We also apply an\ninequality of Ky Fan to derive a new approach to Fréchet inequalities for\nmultidimensional contingency tables.\n", "title": "Generalized Fréchet Bounds for Cell Entries in Multidimensional Contingency Tables" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12408
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper studies different signaling techniques on the continuous spectrum\n(CS) of nonlinear optical fiber defined by nonlinear Fourier transform. Three\ndifferent signaling techniques are proposed and analyzed based on the\nstatistics of the noise added to CS after propagation along the nonlinear\noptical fiber. The proposed methods are compared in terms of error performance,\ndistance reach, and complexity. Furthermore, the effect of chromatic dispersion\non the data rate and noise in nonlinear spectral domain is investigated. It is\ndemonstrated that, for a given sequence of CS symbols, an optimal bandwidth (or\nsymbol rate) can be determined so that the temporal duration of the propagated\nsignal at the end of the fiber is minimized. In effect, the required guard\ninterval between the subsequently transmitted data packets in time is minimized\nand the effective data rate is significantly enhanced. Moreover, by selecting\nthe proper signaling method and design criteria a reach distance of 7100 km is\nreported by only signaling on the CS at a rate of 9.6 Gbps.\n", "title": "Signaling on the Continuous Spectrum of Nonlinear Optical fiber" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12409
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Traveling wave solutions of (2 + 1)-dimensional Zoomeron equation (ZE) are\ndeveloped in terms of exponential functions involving free parameters. It is\nshown that the novel Lie group of transformations method is a competent and\nprominent tool in solving nonlinear partial differential equations (PDEs) in\nmathematical physics. The similarity transformation method (STM) is applied\nfirst on (2 + 1)-dimensional ZE to find the infinitesimal generators.\nDiscussing the different cases on these infinitesimal generators, STM reduces (2\n+ 1)-dimensional ZE into (1 + 1)-dimensional PDEs; later it reduces these PDEs\ninto various ordinary differential equations (ODEs) and helps to find exact\nsolutions of (2 + 1)-dimensional ZE.\n", "title": "Symmetry analysis and soliton solution of (2+1)- dimensional Zoomeron equation" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12410
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper continues the research started in \\cite{LW16}. In the framework of\nthe convolution structure density model on $\\bR^d$, we address the problem of\nadaptive minimax estimation with $\\bL_p$--loss over the scale of anisotropic\nNikol'skii classes. We fully characterize the behavior of the minimax risk for\ndifferent relationships between regularity parameters and norm indexes in the\ndefinitions of the functional class and of the risk. In particular, we show\nthat the boundedness of the function to be estimated leads to an essential\nimprovement of the asymptotic of the minimax risk. We prove that the selection\nrule proposed in Part I leads to the construction of an optimally or nearly\noptimally (up to logarithmic factor) adaptive estimator.\n", "title": "Estimation in the convolution structure density model. Part II: adaptation over the scale of anisotropic classes" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12411
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown\nto deliver insightful explanations in the form of input space relevances for\nunderstanding feed-forward neural network classification decisions. In the\npresent work, we extend the usage of LRP to recurrent neural networks. We\npropose a specific propagation rule applicable to multiplicative connections as\nthey arise in recurrent network architectures such as LSTMs and GRUs. We apply\nour technique to a word-based bi-directional LSTM model on a five-class\nsentiment prediction task, and evaluate the resulting LRP relevances both\nqualitatively and quantitatively, obtaining better results than a\ngradient-based related method which was used in previous work.\n", "title": "Explaining Recurrent Neural Network Predictions in Sentiment Analysis" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12412
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We propose a new family of coherence monotones, named the \\emph{generalized\ncoherence concurrence} (or coherence $k$-concurrence), which is an analogous\nconcept to the generalized entanglement concurrence. The coherence\n$k$-concurrence of a state is nonzero if and only if the coherence number (a\nrecently introduced discrete coherence monotone) of the state is not smaller\nthan $k$, and a state can be converted to a state with nonzero entanglement\n$k$-concurrence via incoherent operations if and only if the state has nonzero\ncoherence $k$-concurrence. We apply the coherence concurrence family to the\nproblem of wave-particle duality in multi-path interference phenomena. We\nobtain a sharper equation for path distinguishability (which witness the\nduality) than the known value and show that the amount of each concurrence for\nthe quanton state determines the number of slits which are identified\nunambiguously.\n", "title": "Generalized Coherence Concurrence and Path distinguishability" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12413
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present a passivity-based Whole-Body Control approach for quadruped robots\nthat achieves dynamic locomotion while compliantly balancing the robot's trunk.\nWe formulate the motion tracking as a Quadratic Program that takes into account\nthe full robot rigid body dynamics, the actuation limit, the joint limits and\nthe contact interaction. We analyze the controller robustness against\ninaccurate friction coefficient estimates and unstable footholds, as well as\nits capability to redistribute the load as a consequence of enforcing actuation\nlimits. Additionally, we present some practical implementation details gained\nfrom the experience with the real platform. Extensive experimental trials on\nthe 90 Kg Hydraulically actuated Quadruped robot validate the capabilities of\nthis controller under various terrain conditions and gaits. The proposed\napproach is expedient for accurate execution of high dynamic motions with\nrespect to the current state of the art.\n", "title": "Passivity Based Whole-body Control for Quadrupedal Locomotion on Challenging Terrain" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 12414
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper, we present two main results. First, by only one conjecture\n(Conjecture 2.9) for recognizing a vertex symmetric graph, which is the hardest\ntask for our problem, we construct an algorithm for finding an isomorphism\nbetween two graphs in polynomial time $ O(n^{3}) $. Second, without that\nconjecture, we prove the algorithm to be of quasi-polynomial time $\nO(n^{1.5\\log n}) $. The conjectures in this paper are correct for all graphs of\nsize no larger than $ 5 $ and all graphs we have encountered. At least the\nconjecture for determining if a graph is vertex symmetric is quite true\nintuitively. We are not able to prove them by hand, so we have planned to find\npossible counterexamples by a computer. We also introduce new concepts like\ncollapse pattern and collapse tomography, which play important roles in our\nalgorithms.\n", "title": "Accelerations for Graph Isomorphism" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12415
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We consider the problem of multi-objective maximization of monotone\nsubmodular functions subject to cardinality constraint, often formulated as\n$\\max_{|A|=k}\\min_{i\\in\\{1,\\dots,m\\}}f_i(A)$. While it is widely known that\ngreedy methods work well for a single objective, the problem becomes much\nharder with multiple objectives. In fact, Krause et al.\\ (2008) showed that\nwhen the number of objectives $m$ grows as the cardinality $k$ i.e.,\n$m=\\Omega(k)$, the problem is inapproximable (unless $P=NP$). On the other\nhand, when $m$ is constant Chekuri et al.\\ (2010) showed a randomized\n$(1-1/e)-\\epsilon$ approximation with runtime (number of queries to function\noracle) $n^{m/\\epsilon^3}$. %In fact, the result of Chekuri et al.\\ (2010) is\nfor the far more general case of matroid constant.\nWe focus on finding a fast and practical algorithm that has (asymptotic)\napproximation guarantees even when $m$ is super constant. We first modify the\nalgorithm of Chekuri et al.\\ (2010) to achieve a $(1-1/e)$ approximation for\n$m=o(\\frac{k}{\\log^3 k})$. This demonstrates a steep transition from constant\nfactor approximability to inapproximability around $m=\\Omega(k)$. Then using\nMultiplicative-Weight-Updates (MWU), we find a much faster\n$\\tilde{O}(n/\\delta^3)$ time asymptotic $(1-1/e)^2-\\delta$ approximation. While\nthe above results are all randomized, we also give a simple deterministic\n$(1-1/e)-\\epsilon$ approximation with runtime $kn^{m/\\epsilon^4}$. Finally, we\nrun synthetic experiments using Kronecker graphs and find that our MWU inspired\nheuristic outperforms existing heuristics.\n", "title": "Multi-Objective Maximization of Monotone Submodular Functions with Cardinality Constraint" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12416
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " It is argued that many of the problems and ambiguities of standard cosmology\nderive from a single one: violation of conservation of energy in the standard\nparadigm. Standard cosmology satisfies conservation of local energy, however\ndisregards the inherent global aspect of energy. We therefore explore\nconservation of the quasi-local Misner-Sharp energy within the causal horizon,\nwhich, as we argue, is necessarily an apparent horizon. Misner-Sharp energy\nassumes the presence of arbitrary mass-energy. Its conservation, however,\nyields \"empty\" de Sitter (open, flat, closed) as single cosmological solution,\nwhere Misner-Sharp total energy acts as cosmological constant and where the\nsource of curvature energy is unidentified. It is argued that de Sitter is only\napparently empty of matter. That is, total matter energy scales as curvature\nenergy in open de Sitter, which causes evolution of the cosmic potential and\ninduces gravitational time dilation. Curvature of time accounts completely for\nthe extrinsic curvature, i.e., renders open de Sitter spatially flat. This\nexplains the well known, surprising, spatial flatness of Misner-Sharp energy,\neven if extrinsic curvature is non-zero. The general relativistic derivation\nfrom Misner-Sharp energy is confirmed by a Machian equation of recessional and\npeculiar energy, which explicitly assumes the presence of matter. This\nrelational model enhances interpretation. Time-dilated open de Sitter is\nspatially flat, dynamically close to $\\Lambda$CDM, and is shown to be without\nthe conceptual problems of concordance cosmology.\n", "title": "Cosmology from conservation of global energy" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12417
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We study the ferromagnetic layer thickness dependence of the\nvoltage-controlled magnetic anisotropy (VCMA) in gated CoFeB/MgO\nheterostructures with heavy metal underlayers. When the effective CoFeB\nthickness is below ~1 nm, the VCMA efficiency of Ta/CoFeB/MgO heterostructures\nconsiderably decreases with decreasing CoFeB thickness. We find that a high\norder phenomenological term used to describe the thickness dependence of the\nareal magnetic anisotropy energy can also account for the change in the areal\nVCMA efficiency. In this structure, the higher order term competes against the\ncommon interfacial VCMA, thereby reducing the efficiency at lower CoFeB\nthickness. The areal VCMA efficiency does not saturate even when the effective\nCoFeB thickness exceeds ~1 nm. We consider the higher order term is related to\nthe strain that develops at the CoFeB/MgO interface: as the average strain of\nthe CoFeB layer changes with its thickness, the electronic structure of the\nCoFeB/MgO interface varies leading to changes in areal magnetic anisotropy\nenergy and VCMA efficiency.\n", "title": "Electric field modulation of the non-linear areal magnetic anisotropy energy" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12418
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A tetragonal photonic crystal composed of high-index pillars can exhibit a\nfrequency-isolated accidental degeneracy at a high-symmetry point in the first\nBrillouin zone. A photonic band gap can be formed there by introducing a\ngeometrical anisotropy in the pillars. In this gap, gapless surface/domain-wall\nstates emerge under a certain condition. We analyze their physical property in\nterms of an effective hamiltonian, and a good agreement between the effective\ntheory and numerical calculation is obtained.\n", "title": "Gapless surface states originated from accidentally degenerate quadratic band touching in a three-dimensional tetragonal photonic crystal" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12419
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The pull-based development process has become prevalent on platforms such as\nGitHub as a form of distributed software development. Potential contributors\ncan create and submit a set of changes to a software project through pull\nrequests. These changes can be accepted, discussed or rejected by the\nmaintainers of the software project, and can influence further contribution\nproposals. As such, it is important to examine the practices that encourage\ncontributors to a project to submit pull requests. Specifically, we consider\nthe impact of prior pull requests on the acceptance or rejection of subsequent\npull requests. We also consider the potential effect of rejecting or ignoring\npull requests on further contributions. In this preliminary research, we study\nthree large projects on \\textsf{GitHub}, using pull request data obtained\nthrough the \\textsf{GitHub} API, and we perform empirical analyses to\ninvestigate the above questions. Our results show that continued contribution\nto a project is correlated with higher pull request acceptance rates and that\npull request rejections lead to fewer future contributions.\n", "title": "On the impact of pull request decisions on future contributions" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12420
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Given the success of the gated recurrent unit, a natural question is whether\nall the gates of the long short-term memory (LSTM) network are necessary.\nPrevious research has shown that the forget gate is one of the most important\ngates in the LSTM. Here we show that a forget-gate-only version of the LSTM\nwith chrono-initialized biases, not only provides computational savings but\noutperforms the standard LSTM on multiple benchmark datasets and competes with\nsome of the best contemporary models. Our proposed network, the JANET, achieves\naccuracies of 99% and 92.5% on the MNIST and pMNIST datasets, outperforming the\nstandard LSTM which yields accuracies of 98.5% and 91%.\n", "title": "The unreasonable effectiveness of the forget gate" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12421
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We propose a new learning to rank algorithm, named Weighted Margin-Rank Batch\nloss (WMRB), to extend the popular Weighted Approximate-Rank Pairwise loss\n(WARP). WMRB uses a new rank estimator and an efficient batch training\nalgorithm. The approach allows more accurate item rank approximation and\nexplicit utilization of parallel computation to accelerate training. In three\nitem recommendation tasks, WMRB consistently outperforms WARP and other\nbaselines. Moreover, WMRB shows clear time efficiency advantages as data scale\nincreases.\n", "title": "WMRB: Learning to Rank in a Scalable Batch Training Approach" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12422
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Representing a word by its co-occurrences with other words in context is an\neffective way to capture the meaning of the word. However, the theory behind it\nremains a challenge. In this work, taking the example of a word classification\ntask, we give a theoretical analysis of the approaches that represent a word X\nby a function f(P(C|X)), where C is a context feature, P(C|X) is the\nconditional probability estimated from a text corpus, and the function f maps\nthe co-occurrence measure to a prediction score. We investigate the impact of\ncontext feature C and the function f. We also explain the reasons why using the\nco-occurrences with multiple context features may be better than just using a\nsingle one. In addition, some of the results shed light on the theory of\nfeature learning and machine learning in general.\n", "title": "Learning Features from Co-occurrences: A Theoretical Analysis" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Mathematics", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 12423
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Let $M$ be a compact 3-manifold and $\\Gamma=\\pi_1(M)$. The work of Thurston\nand Culler--Shalen established the $\\mathrm{SL}_2(\\mathbb{C})$ character\nvariety $X(\\Gamma)$ as fundamental tool in the study of the geometry and\ntopology of $M$. This is particularly so in the case when $M$ is the exterior\nof a hyperbolic knot $K$ in $S^3$. The main goals of this paper are to bring to\nbear tools from algebraic and arithmetic geometry to understand algebraic and\nnumber theoretic properties of the so-called canonical component of $X(\\Gamma)$\nas well as distinguished points on the canonical component when $\\Gamma$ is a\nknot group. In particular, we study how the theory of quaternion Azumaya\nalgebras can be used to obtain algebraic and arithmetic information about Dehn\nsurgeries, and perhaps of most interest, to construct new knot invariants that\nlie in the Brauer groups of curves over number fields.\n", "title": "Azumaya algebras and canonical components" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12424
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We study multi-frequency quasiperiodic Schrödinger operators on\n$\\mathbb{Z} $. We prove that for a large real analytic potential satisfying\ncertain restrictions the spectrum consists of a single interval. The result is\na consequence of a criterion for the spectrum to contain an interval at a given\nlocation that we establish non-perturbatively in the regime of positive\nLyapunov exponent.\n", "title": "On the Spectrum of Multi-Frequency Quasiperiodic Schrödinger Operators with Large Coupling" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 12425
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We study model spaces, in the sense of Hairer, for stochastic partial\ndifferential equations involving the fractional Laplacian. We prove that the\nfractional Laplacian is a singular kernel suitable to apply the theory of\nregularity structures. Our main contribution is to study the dependence of the\nmodel space for a regularity structure on the three-parameter problem involving\nthe spatial dimension, the polynomial order of the nonlinearity, and the\nexponent of the fractional Laplacian. The goal is to investigate the growth of\nthe model space under parameter variation. In particular, we prove several\nresults in the approaching subcriticality limit leading to universal growth\nexponents of the regularity structure. A key role is played by the viewpoint\nthat model spaces can be identified with families of rooted trees. Our proofs\nare based upon a geometrical construction similar to Newton polygons for\nclassical Taylor series and various combinatorial arguments. We also present\nseveral explicit examples listing all elements with negative homogeneity by\nimplementing a new symbolic software package to work with regularity\nstructures. We use this package to illustrate our analytical results and to\nobtain new conjectures regarding coarse-grained network measures for model\nspaces.\n", "title": "Model Spaces of Regularity Structures for Space-Fractional SPDEs" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12426
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " All water-covered rocky planets in the inner habitable zones of solar-type\nstars will inevitably experience a catastrophic runaway climate due to\nincreasing stellar luminosity and limits to outgoing infrared radiation from\nwet greenhouse atmospheres. Reflectors or scatterers placed near Earth's inner\nLagrange point (L1) have been proposed as a \"geo-engineering\" solution to\nanthropogenic climate change and an advanced version of this could modulate\nincident irradiation over many Gyr or \"rescue\" a planet from the interior of\nthe habitable zone. The distance of the starshade from the planet that\nminimizes its mass is 1.6 times the Earth-L1 distance. Such a starshade would\nhave to be similar in size to the planet and the mutual occultations during\nplanetary transits could produce a characteristic maximum at mid-transit in the\nlight-curve. Because of a fortuitous ratio of densities, Earth-size planets\naround G dwarf stars present the best opportunity to detect such an artifact.\nThe signal would be persistent and is potentially detectable by a future space\nphotometry mission to characterize transiting planets. The signal could be\ndistinguished from natural phenomena, i.e. starspots or cometary dust clouds,\nby its shape, persistence, and transmission spectrum.\n", "title": "Transit Detection of a \"Starshade\" at the Inner Lagrange Point of an Exoplanet" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 12427
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In the context of fitness coaching or for rehabilitation purposes, the motor\nactions of a human participant must be observed and analyzed for errors in\norder to provide effective feedback. This task is normally carried out by human\ncoaches, and it needs to be solved automatically in technical applications that\nare to provide automatic coaching (e.g. training environments in VR). However,\nmost coaching systems only provide coarse information on movement quality, such\nas a scalar value per body part that describes the overall deviation from the\ncorrect movement. Further, they are often limited to static body postures or\nrather simple movements of single body parts. While there are many approaches\nto distinguish between different types of movements (e.g., between walking and\njumping), the detection of more subtle errors in a motor performance is less\ninvestigated. We propose a novel approach to classify errors in sports or\nrehabilitation exercises such that feedback can be delivered in a rapid and\ndetailed manner: Homogeneous sub-sequences of exercises are first temporally\naligned via Dynamic Time Warping. Next, we extract a feature vector from the\naligned sequences, which serves as a basis for feature selection using Random\nForests. The selected features are used as input for Support Vector Machines,\nwhich finally classify the movement errors. We compare our algorithm to a well\nestablished state-of-the-art approach in time series classification, 1-Nearest\nNeighbor combined with Dynamic Time Warping, and show our algorithm's\nsuperiority regarding classification quality as well as computational cost.\n", "title": "Automatic Error Analysis of Human Motor Performance for Interactive Coaching in Virtual Reality" }
null
null
null
null
true
null
12428
null
Default
null
null
null
{ "abstract": " Temporal resolution of visual information processing is thought to be an\nimportant factor in predator-prey interactions, shaped in the course of\nevolution by animals' ecology. Here I show that light can be considered to have\na dual role of a source of information, which guides motor actions, and an\nenvironmental feedback for those actions. I consequently show how temporal\nperception might depend on behavioral adaptations realized by the nervous\nsystem. I propose an underlying mechanism of synaptic clock, with every synapse\nhaving its characteristic time unit, determined by the persistence of memory\ntraces of synaptic inputs, which is used by the synapse to tell time. The\npresent theory offers a testable framework, which may account for numerous\nexperimental findings, including the interspecies variation in temporal\nresolution and the properties of subjective time perception, specifically the\nvariable speed of perceived time passage, depending on emotional and\nattentional states or tasks performed.\n", "title": "A mechanism of synaptic clock underlying subjective time perception" }
null
null
[ "Quantitative Biology" ]
null
true
null
12429
null
Validated
null
null
null
{ "abstract": " We find explicit formulas for the radii and locations of the circles in all\nthe optimally dense packings of two, three or four equal circles on any flat\ntorus, defined to be the quotient of the Euclidean plane by the lattice\ngenerated by two independent vectors. We prove the optimality of the\narrangements using techniques from rigidity theory and topological graph\ntheory.\n", "title": "Optimal Packings of Two to Four Equal Circles on Any Flat Torus" }
null
null
[ "Mathematics" ]
null
true
null
12430
null
Validated
null
null
null
{ "abstract": " In the cryptographic currency Bitcoin, all transactions are recorded in the\nblockchain - a public, global, and immutable ledger. Because transactions are\npublic, Bitcoin and its users employ obfuscation to maintain a degree of\nfinancial privacy. Critically, and in contrast to typical uses of obfuscation,\nin Bitcoin obfuscation is not aimed against the system designer but is instead\nenabled by design. We map sixteen proposed privacy-preserving techniques for\nBitcoin on an obfuscation-vs.-cryptography axis, and find that those that are\nused in practice tend toward obfuscation. We argue that this has led to a\nbalance between privacy and regulatory acceptance.\n", "title": "Obfuscation in Bitcoin: Techniques and Politics" }
null
null
null
null
true
null
12431
null
Default
null
null
null
{ "abstract": " This paper presents a framework for controlled emergency landing of a\nquadcopter, experiencing a rotor failure, away from sensitive areas. A complete\nmathematical model capturing the dynamics of the system is presented that takes\nthe asymmetrical aerodynamic load on the propellers into account. An\nequilibrium state of the system is calculated around which a linear\ntime-invariant control strategy is developed to stabilize the system. By\nutilizing the proposed model, a specific configuration for a quadcopter is\nintroduced that leads to the minimum power consumption during a\nyaw-rate-resolved hovering after a rotor failure. Furthermore, given a 3D\nrepresentation of the environment, an optimal flight trajectory towards a safe\ncrash landing spot, while avoiding collision with obstacles, is developed using\nan RRT* approach. The cost function for determining the best landing spot\nconsists of: (i) finding the safest landing spot with the largest clearance\nfrom the obstacles; and (ii) finding the most energy-efficient trajectory\ntowards the landing spot. The performance of the proposed framework is tested\nvia simulations.\n", "title": "Path Planning and Controlled Crash Landing of a Quadcopter in case of a Rotor Failure" }
null
null
null
null
true
null
12432
null
Default
null
null
null
{ "abstract": " The early time regime of the Kardar-Parisi-Zhang (KPZ) equation in $1+1$\ndimension, starting from a Brownian initial condition with a drift $w$, is\nstudied using the exact Fredholm determinant representation. For large drift we\nrecover the exact results for the droplet initial condition, whereas a\nvanishingly small drift describes the stationary KPZ case, recently studied by\nweak noise theory (WNT). We show that for short time $t$, the probability\ndistribution $P(H,t)$ of the height $H$ at a given point takes the large\ndeviation form $P(H,t) \\sim \\exp{\\left(-\\Phi(H)/\\sqrt{t} \\right)}$. We obtain\nthe exact expressions for the rate function $\\Phi(H)$ for $H<H_{c2}$. Our exact\nexpression for $H_{c2}$ numerically coincides with the value at which WNT was\nfound to exhibit a spontaneous reflection symmetry breaking. We propose two\ncontinuations for $H>H_{c2}$, which apparently correspond to the symmetric and\nasymmetric WNT solutions. The rate function $\\Phi(H)$ is Gaussian in the\ncenter, while it has asymmetric tails, $|H|^{5/2}$ on the negative $H$ side and\n$H^{3/2}$ on the positive $H$ side.\n", "title": "Exact short-time height distribution in 1D KPZ equation with Brownian initial condition" }
null
null
null
null
true
null
12433
null
Default
null
null
null
{ "abstract": " Subject of this article is the relationship between modern cosmology and\nfundamental physics, in particular general relativity as a theory of gravity on\none side, together with its unique application in cosmology, and the formation\nof structures and their statistics on the other. It summarises arguments for\nthe formulation for a metric theory of gravity and the uniqueness of the\nconstruction of general relativity. It discusses symmetry arguments in the\nconstruction of Friedmann-Lemaître cosmologies as well as assumptions in\nrelation to the presence of dark matter, when adopting general relativity as\nthe gravitational theory. A large section is dedicated to $\\Lambda$CDM as the\nstandard model for structure formation and the arguments that led to its\nconstruction, and to the of role statistics and to the problem of scientific\ninference in cosmology as an empirical science. The article concludes with an\noutlook on current and future developments in cosmology.\n", "title": "The role of cosmology in modern physics" }
null
null
null
null
true
null
12434
null
Default
null
null
null
{ "abstract": " Temporal Action Proposal (TAP) generation is an important problem, as fast\nand accurate extraction of semantically important (e.g. human actions) segments\nfrom untrimmed videos is an important step for large-scale video analysis. We\npropose a novel Temporal Unit Regression Network (TURN) model. There are two\nsalient aspects of TURN: (1) TURN jointly predicts action proposals and refines\nthe temporal boundaries by temporal coordinate regression; (2) Fast computation\nis enabled by unit feature reuse: a long untrimmed video is decomposed into\nvideo units, which are reused as basic building blocks of temporal proposals.\nTURN outperforms the state-of-the-art methods under average recall (AR) by a\nlarge margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames\nper second (FPS) on a TITAN X GPU. We further apply TURN as a proposal\ngeneration stage for existing temporal action localization pipelines, it\noutperforms state-of-the-art performance on THUMOS-14 and ActivityNet.\n", "title": "TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals" }
null
null
null
null
true
null
12435
null
Default
null
null
null
{ "abstract": " Most real world phenomena such as sunlight distribution under a forest\ncanopy, minerals concentration, stock valuation, exhibit nonstationary dynamics\ni.e. phenomenon variation changes depending on the locality. Nonstationary\ndynamics pose both theoretical and practical challenges to statistical machine\nlearning algorithms that aim to accurately capture the complexities governing\nthe evolution of such processes. Typically the nonstationary dynamics are\nmodeled using nonstationary Gaussian Process models (NGPS) that employ local\nlatent dynamics parameterization to correspondingly model the nonstationary\nreal observable dynamics. Recently, an approach based on most likely induced\nlatent dynamics representation attracted research community's attention for a\nwhile. The approach could not be employed for large scale real world\napplications because learning a most likely latent dynamics representation\ninvolves maximization of marginal likelihood of the observed real dynamics that\nbecomes intractable as the number of induced latent points grows with problem\nsize. We have established a direct relationship between informativeness of the\ninduced latent dynamics and the marginal likelihood of the observed real\ndynamics. This opens up the possibility of maximizing marginal likelihood of\nobserved real dynamics indirectly by near optimally maximizing entropy or\nmutual information gain on the induced latent dynamics using greedy algorithms.\nTherefore, for an efficient yet accurate inference, we propose to build an\ninduced latent dynamics representation using a novel algorithm LISAL that\nadaptively maximizes entropy or mutual information on the induced latent\ndynamics and marginal likelihood of observed real dynamics in an iterative\nmanner. The relevance of LISAL is validated using real world datasets.\n", "title": "Efficiently Learning Nonstationary Gaussian Processes for Real World Impact" }
null
null
null
null
true
null
12436
null
Default
null
null
null
{ "abstract": " For a given smooth compact manifold $M$, we introduce an open class $\\mathcal\nG(M)$ of Riemannian metrics, which we call \\emph{metrics of the gradient type}.\nFor such metrics $g$, the geodesic flow $v^g$ on the spherical tangent bundle\n$SM \\to M$ admits a Lyapunov function (so the $v^g$-flow is traversing). It\nturns out, that metrics of the gradient type are exactly the non-trapping\nmetrics.\nFor every $g \\in \\mathcal G(M)$, the geodesic scattering along the boundary\n$\\partial M$ can be expressed in terms of the \\emph{scattering map} $C_{v^g}:\n\\partial_1^+(SM) \\to \\partial_1^-(SM)$. It acts from a domain\n$\\partial_1^+(SM)$ in the boundary $\\partial(SM)$ to the complementary domain\n$\\partial_1^-(SM)$, both domains being diffeomorphic. We prove that, for a\n\\emph{boundary generic} metric $g \\in \\mathcal G(M)$ the map $C_{v^g}$ allows\nfor a reconstruction of $SM$ and of the geodesic foliation $\\mathcal F(v^g)$ on\nit, up to a homeomorphism (often a diffeomorphism).\nAlso, for such $g$, the knowledge of the scattering map $C_{v^g}$ makes it\npossible to recover the homology of $M$, the Gromov simplicial semi-norm on it,\nand the fundamental group of $M$. Additionally, $C_{v^g}$ allows to reconstruct\nthe naturally stratified topological type of the space of geodesics on $M$.\n", "title": "Causal Holography in Application to the Inverse Scattering Problems" }
null
null
null
null
true
null
12437
null
Default
null
null
null
{ "abstract": " In this paper, we study the possibility of inferring early warning indicators\n(EWIs) for periods of extreme bitcoin price volatility using features obtained\nfrom Bitcoin daily transaction graphs. We infer the low-dimensional\nrepresentations of transaction graphs in the time period from 2012 to 2017\nusing Bitcoin blockchain, and demonstrate how these representations can be used\nto predict extreme price volatility events. Our EWI, which is obtained with a\nnon-negative decomposition, contains more predictive information than those\nobtained with singular value decomposition or scalar value of the total Bitcoin\ntransaction volume.\n", "title": "Inferring short-term volatility indicators from Bitcoin blockchain" }
null
null
[ "Computer Science", "Quantitative Finance" ]
null
true
null
12438
null
Validated
null
null
null
{ "abstract": " Neuronal activity in the brain generates synchronous oscillations of the\nLocal Field Potential (LFP). The traditional analyses of the LFPs are based on\ndecomposing the signal into simpler components, such as sinusoidal harmonics.\nHowever, a common drawback of such methods is that the decomposition primitives\nare usually presumed from the onset, which may bias our understanding of the\nsignal's structure. Here, we introduce an alternative approach that allows an\nimpartial, high resolution, hands-off decomposition of the brain waves into a\nsmall number of discrete, frequency-modulated oscillatory processes, which we\ncall oscillons. In particular, we demonstrate that mouse hippocampal LFP\ncontain a single oscillon that occupies the $\\theta$-frequency band and a\ncouple of $\\gamma$-oscillons that correspond, respectively, to slow and fast\n$\\gamma$-waves. Since the oscillons were identified empirically, they may\nrepresent the actual, physical structure of synchronous oscillations in\nneuronal ensembles, whereas Fourier-defined \"brain waves\" are nothing but\npoorly resolved oscillons.\n", "title": "Discrete structure of the brain rhythms" }
null
null
[ "Quantitative Biology" ]
null
true
null
12439
null
Validated
null
null
null
{ "abstract": " We construct a point transformation between two integrable systems, the\nmulti-component Harry Dym equation and the multi-component extended Harry Dym\nequation, that does not preserve the class of multi-phase solutions. As a\nconsequence we obtain a new type of wave-like solutions, generalising\nthe~multi-phase solutions of the multi-component extended Harry Dym equation.\nOur construction is easily transferable to other integrable systems with\nanalogous properties.\n", "title": "A new class of solutions for the multi-component extended Harry Dym equation" }
null
null
null
null
true
null
12440
null
Default
null
null
null
{ "abstract": " This paper explores improvements in prediction accuracy and inference\ncapability when allowing for potential correlation in team-level random effects\nacross multiple game-level responses from different assumed distributions.\nFirst-order and fully exponential Laplace approximations are used to fit\nnormal-binary and Poisson-binary multivariate generalized linear mixed models\nwith non-nested random effects structures. We have built these models into the\nR package mvglmmRank, which is used to explore several seasons of American\ncollege football and basketball data.\n", "title": "Multivariate Generalized Linear Mixed Models for Joint Estimation of Sporting Outcomes" }
null
null
null
null
true
null
12441
null
Default
null
null
null
{ "abstract": " We propose an optimization approach for determining both hardware and\nsoftware parameters for the efficient implementation of a (family of)\napplications called dense stencil computations on programmable GPGPUs. We first\nintroduce a simple, analytical model for the silicon area usage of accelerator\narchitectures and a workload characterization of stencil computations. We\ncombine this characterization with a parametric execution time model and\nformulate a mathematical optimization problem. That problem seeks to maximize a\ncommon objective function of 'all the hardware and software parameters'. The\nsolution to this problem, therefore \"solves\" the codesign problem:\nsimultaneously choosing software-hardware parameters to optimize total\nperformance.\nWe validate this approach by proposing architectural variants of the NVIDIA\nMaxwell GTX-980 (respectively, Titan X) specifically tuned to a predetermined\nworkload of four common 2D stencils (Heat, Jacobi, Laplacian, and Gradient) and\ntwo 3D ones (Heat and Laplacian). Our model predicts that performance would\npotentially improve by 28% (respectively, 33%) with simple tweaks to the\nhardware parameters such as adapting coarse and fine-grained parallelism by\nchanging the number of streaming multiprocessors and the number of compute\ncores each contains. We propose a set of Pareto-optimal design points to\nexploit the trade-off between performance and silicon area and show that by\nadditionally eliminating GPU caches, we can get a further 2-fold improvement.\n", "title": "Accelerator Codesign as Non-Linear Optimization" }
null
null
null
null
true
null
12442
null
Default
null
null
null
{ "abstract": " The goal of this thesis was to implement a tool that, given a digital audio\ninput, can extract and represent rhythm and musical time. The purpose of the\ntool is to help develop better models of rhythm for real-time computer based\nperformance and composition. This analysis tool, Riddim, uses Independent\nSubspace Analysis (ISA) and a robust onset detection scheme to separate and\ndetect salient rhythmic and timing information from different sonic sources\nwithin the input. This information is then represented in a format that can be\nused by a variety of algorithms that interpret timing information to infer\nrhythmic and musical structure. A secondary objective of this work is a \"proof\nof concept\" as a non-real-time rhythm analysis system based on ISA. This is a\nnecessary step since ultimately it is desirable to incorporate this\nfunctionality in a real-time plug-in for live performance and improvisation.\n", "title": "Riddim: A Rhythm Analysis and Decomposition Tool Based On Independent Subspace Analysis" }
null
null
null
null
true
null
12443
null
Default
null
null
null
{ "abstract": " Translational motion of neurotransmitter receptors is key for determining\nreceptor number at the synapse and hence, synaptic efficacy. We combine\nlive-cell STORM superresolution microscopy of nicotinic acetylcholine receptor\n(nAChR) with single-particle tracking, mean-squared displacement (MSD), turning\nangle, ergodicity, and clustering analyses to characterize the lateral motion\nof individual molecules and their collective behaviour. nAChR diffusion is\nhighly heterogeneous: subdiffusive, Brownian and, less frequently,\nsuperdiffusive. At the single-track level, free walks are transiently\ninterrupted by ms-long confinement sojourns occurring in nanodomains of ~36 nm\nradius. Cholesterol modulates the time and the area spent in confinement.\nTurning angle analysis reveals anticorrelated steps with time-lag dependence,\nin good agreement with the permeable fence model. At the ensemble level,\nnanocluster assembly occurs in second-long bursts separated by periods of\ncluster disassembly. Thus, millisecond-long confinement sojourns and\nsecond-long reversible nanoclustering with similar cholesterol sensitivities\naffect all trajectories; the proportion of the two regimes determines the\nresulting macroscopic motional mode and breadth of heterogeneity in the\nensemble population.\n", "title": "Cholesterol modulates acetylcholine receptor diffusion by tuning confinement sojourns and nanocluster stability" }
null
null
null
null
true
null
12444
null
Default
null
null
null
{ "abstract": " We show that a reduct of the Zariski structure of an algebraic curve which is\nnot locally modular interprets a field, answering a question of Zilber's.\n", "title": "Incidence systems on Cartesian powers of algebraic curves" }
null
null
[ "Mathematics" ]
null
true
null
12445
null
Validated
null
null
null
{ "abstract": " Statisticians have made great progress in creating methods that reduce our\nreliance on parametric assumptions. However this explosion in research has\nresulted in a breadth of inferential strategies that both create opportunities\nfor more reliable inference as well as complicate the choices that an applied\nresearcher has to make and defend. Relatedly, researchers advocating for new\nmethods typically compare their method to at best 2 or 3 other causal inference\nstrategies and test using simulations that may or may not be designed to\nequally tease out flaws in all the competing methods. The causal inference data\nanalysis challenge, \"Is Your SATT Where It's At?\", launched as part of the 2016\nAtlantic Causal Inference Conference, sought to make progress with respect to\nboth of these issues. The researchers creating the data testing grounds were\ndistinct from the researchers submitting methods whose efficacy would be\nevaluated. Results from 30 competitors across the two versions of the\ncompetition (black box algorithms and do-it-yourself analyses) are presented\nalong with post-hoc analyses that reveal information about the characteristics\nof causal inference strategies and settings that affect performance. The most\nconsistent conclusion was that methods that flexibly model the response surface\nperform better overall than methods that fail to do so. Finally new methods are\nproposed that combine features of several of the top-performing submitted\nmethods.\n", "title": "Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition" }
null
null
null
null
true
null
12446
null
Default
null
null
null
{ "abstract": " A wide range of learning tasks require human input in labeling massive data.\nThe collected data though are usually low quality and contain inaccuracies and\nerrors. As a result, modern science and business face the problem of learning\nfrom unreliable data sets.\nIn this work, we provide a generic approach that is based on\n\\textit{verification} of only few records of the data set to guarantee high\nquality learning outcomes for various optimization objectives. Our method,\nidentifies small sets of critical records and verifies their validity. We show\nthat many problems only need $\\text{poly}(1/\\varepsilon)$ verifications, to\nensure that the output of the computation is at most a factor of $(1 \\pm\n\\varepsilon)$ away from the truth. For any given instance, we provide an\n\\textit{instance optimal} solution that verifies the minimum possible number of\nrecords to approximately certify correctness. Then using this instance optimal\nformulation of the problem we prove our main result: \"every function that\nsatisfies some Lipschitz continuity condition can be certified with a small\nnumber of verifications\". We show that the required Lipschitz continuity\ncondition is satisfied even by some NP-complete problems, which illustrates the\ngenerality and importance of this theorem.\nIn case this certification step fails, an invalid record will be identified.\nRemoving these records and repeating until success, guarantees that the result\nwill be accurate and will depend only on the verified records. Surprisingly, as\nwe show, for several computation tasks more efficient methods are possible.\nThese methods always guarantee that the produced result is not affected by the\ninvalid records, since any invalid record that affects the output will be\ndetected and verified.\n", "title": "Certified Computation from Unreliable Datasets" }
null
null
null
null
true
null
12447
null
Default
null
null
null
{ "abstract": " In this work, we present a numerical method based on a sparse grid\napproximation to compute the loss distribution of the balance sheet of a\nfinancial or an insurance company. We first describe, in a stylised way, the\nassets and liabilities dynamics that are used for the numerical estimation of\nthe balance sheet distribution. For the pricing and hedging model, we chose a\nclassical Black & Scholes model with a stochastic interest rate following a\nHull & White model. The risk management model describing the evolution of the\nparameters of the pricing and hedging model is a Gaussian model. The new\nnumerical method is compared with the traditional nested simulation approach.\nWe review the convergence of both methods to estimate the risk indicators under\nconsideration. Finally, we provide numerical results showing that the sparse\ngrid approach is extremely competitive for models with moderate dimension.\n", "title": "A sparse grid approach to balance sheet risk measurement" }
null
null
null
null
true
null
12448
null
Default
null
null
null
{ "abstract": " The problem of $\\textit{visual metamerism}$ is defined as finding a family of\nperceptually indistinguishable, yet physically different images. In this paper,\nwe propose our NeuroFovea metamer model, a foveated generative model that is\nbased on a mixture of peripheral representations and style transfer\nforward-pass algorithms. Our gradient-descent free model is parametrized by a\nfoveated VGG19 encoder-decoder which allows us to encode images in high\ndimensional space and interpolate between the content and texture information\nwith adaptive instance normalization anywhere in the visual field. Our\ncontributions include: 1) A framework for computing metamers that resembles a\nnoisy communication system via a foveated feed-forward encoder-decoder network\n-- We observe that metamerism arises as a byproduct of noisy perturbations that\npartially lie in the perceptual null space; 2) A perceptual optimization scheme\nas a solution to the hyperparametric nature of our metamer model that requires\ntuning of the image-texture tradeoff coefficients everywhere in the visual\nfield which are a consequence of internal noise; 3) An ABX psychophysical\nevaluation of our metamers where we also find that the rate of growth of the\nreceptive fields in our model match V1 for reference metamers and V2 between\nsynthesized samples. Our model also renders metamers at roughly a second,\npresenting a $\\times1000$ speed-up compared to the previous work, which allows\nfor tractable data-driven metamer experiments.\n", "title": "Towards Metamerism via Foveated Style Transfer" }
null
null
null
null
true
null
12449
null
Default
null
null
null
{ "abstract": " Sheep pox is a highly transmissible disease which can cause serious loss of\nlivestock and can therefore have major economic impact. We present data from\nsheep pox epidemics which occurred between 1994 and 1998. The data include\nweekly records of infected farms as well as a number of covariates. We\nimplement Bayesian stochastic regression models which, in addition to various\nexplanatory variables like seasonal and environmental/meteorological factors,\nalso contain serial correlation structure based on variants of the\nOrnstein-Uhlenbeck process. We take a predictive view in model selection by\nutilizing deviance-based measures. The results indicate that seasonality and\nthe number of infected farms are important predictors for sheep pox incidence.\n", "title": "Modeling Sheep pox Disease from the 1994-1998 Epidemic in Evros Prefecture, Greece" }
null
null
null
null
true
null
12450
null
Default
null
null
null
{ "abstract": " We describe a method for generating minimal hard prime surface-link diagrams.\nWe extend the known examples of minimal hard prime classical unknot and unlink\ndiagrams up to three components and generate figures of all minimal hard prime\nsurface-unknot and surface-unlink diagrams with prime base surface components\nup to ten crossings.\n", "title": "Minimal hard surface-unlink and classical unlink diagrams" }
null
null
null
null
true
null
12451
null
Default
null
null
null
{ "abstract": " Language change involves the competition between alternative linguistic forms\n(1). The spontaneous evolution of these forms typically results in monotonic\ngrowths or decays (2, 3) like in winner-take-all attractor behaviors. In the\ncase of the Spanish past subjunctive, the spontaneous evolution of its two\ncompeting forms (ended in -ra and -se) was perturbed by the appearance of the\nRoyal Spanish Academy in 1713, which enforced the spelling of both forms as\nperfectly interchangeable variants (4), at a moment in which the -ra form was\ndominant (5). Time series extracted from a massive corpus of books (6) reveal\nthat this regulation in fact produced a transient renewed interest for the old\nform -se which, once faded, left the -ra again as the dominant form up to the\npresent day. We show that time series are successfully explained by a\ntwo-dimensional linear model that integrates an imitative and a novelty\ncomponent. The model reveals that the temporal scale over which collective\nattention fades is in inverse proportion to the verb frequency. The integration\nof the two basic mechanisms of imitation and attention to novelty allows to\nunderstand diverse competing objects, with lifetimes that range from hours for\nmemes and news (7, 8) to decades for verbs, suggesting the existence of a\ngeneral mechanism underlying cultural evolution.\n", "title": "Fading of collective attention shapes the evolution of linguistic variants" }
null
null
null
null
true
null
12452
null
Default
null
null
null
{ "abstract": " User modeling plays an important role in delivering customized web services\nto the users and improving their engagement. However, most user models in the\nliterature do not explicitly consider the temporal behavior of users. More\nrecently, continuous-time user modeling has gained considerable attention and\nmany user behavior models have been proposed based on temporal point processes.\nHowever, typical point process based models often considered the impact of peer\ninfluence and content on the user participation and neglected other factors.\nGamification elements, are among those factors that are neglected, while they\nhave a strong impact on user participation in online services. In this paper,\nwe propose interdependent multi-dimensional temporal point processes that\ncapture the impact of badges on user participation besides the peer influence\nand content factors. We extend the proposed processes to model user actions\nover the community based question and answering websites, and propose an\ninference algorithm based on Variational-EM that can efficiently learn the\nmodel parameters. Extensive experiments on both synthetic and real data\ngathered from Stack Overflow show that our inference algorithm learns the\nparameters efficiently and the proposed method can better predict the user\nbehavior compared to the alternatives.\n", "title": "Continuous-Time User Modeling in the Presence of Badges: A Probabilistic Approach" }
null
null
null
null
true
null
12453
null
Default
null
null
null
{ "abstract": " Testing for regime switching when the regime switching probabilities are\nspecified either as constants (`mixture models') or are governed by a\nfinite-state Markov chain (`Markov switching models') are long-standing\nproblems that have also attracted recent interest. This paper considers testing\nfor regime switching when the regime switching probabilities are time-varying\nand depend on observed data (`observation-dependent regime switching').\nSpecifically, we consider the likelihood ratio test for observation-dependent\nregime switching in mixture autoregressive models. The testing problem is\nhighly nonstandard, involving unidentified nuisance parameters under the null,\nparameters on the boundary, singular information matrices, and higher-order\napproximations of the log-likelihood. We derive the asymptotic null\ndistribution of the likelihood ratio test statistic in a general mixture\nautoregressive setting using high-level conditions that allow for various forms\nof dependence of the regime switching probabilities on past observations, and\nwe illustrate the theory using two particular mixture autoregressive models.\nThe likelihood ratio test has a nonstandard asymptotic distribution that can\neasily be simulated, and Monte Carlo studies show the test to have satisfactory\nfinite sample size and power properties.\n", "title": "Testing for observation-dependent regime switching in mixture autoregressive models" }
null
null
null
null
true
null
12454
null
Default
null
null
null
{ "abstract": " Let $R$ be a commutative Noetherian ring, $\\mathfrak a$ and $\\mathfrak b$\nideals of $R$. In this paper, we study the finiteness dimension $f_{\\mathfrak\na}(M)$ of $M$ relative to $\\mathfrak a$ and the $\\mathfrak b$-minimum\n$\\mathfrak a$-adjusted depth $\\lambda_{\\mathfrak a}^{\\mathfrak b}(M)$ of $M$,\nwhere the underlying module $M$ is relative Cohen-Macaulay w.r.t $\\mathfrak a$.\nSome applications of such modules are given.\n", "title": "The finiteness dimension of modules and relative Cohen-Macaulayness" }
null
null
null
null
true
null
12455
null
Default
null
null
null
{ "abstract": " In industrial control systems, devices such as Programmable Logic Controllers\n(PLCs) are commonly used to directly interact with sensors and actuators, and\nperform local automatic control. PLCs run software on two different layers: a)\nfirmware (i.e. the OS) and b) control logic (processing sensor readings to\ndetermine control actions). In this work, we discuss ladder logic bombs, i.e.\nmalware written in ladder logic (or one of the other IEC 61131-3-compatible\nlanguages). Such malware would be inserted by an attacker into existing control\nlogic on a PLC, and either persistently change the behaviour, or wait for\nspecific trigger signals to activate malicious behaviour. For example, the LLB\ncould replace legitimate sensor readings with manipulated values. We see the\nconcept of LLBs as a generalization of attacks such as the Stuxnet attack. We\nintroduce LLBs on an abstract level, and then demonstrate several designs based\non real PLC devices in our lab. In particular, we also focus on stealthy LLBs,\ni.e. LLBs that are hard to detect by human operators manually validating the\nprogram running in PLCs. In addition to introducing vulnerabilities on the\nlogic layer, we also discuss countermeasures and we propose two detection\ntechniques.\n", "title": "On Ladder Logic Bombs in Industrial Control Systems" }
null
null
null
null
true
null
12456
null
Default
null
null
null
{ "abstract": " Nonnegative matrix factorization (NMF), a dimensionality reduction and factor\nanalysis method, is a special case in which factor matrices have low-rank\nnonnegative constraints. Considering stochastic learning in NMF, we\nspecifically address the multiplicative update (MU) rule, which is the most\npopular but has a slow convergence property. This paper introduces a\nvariance-reduced stochastic gradient technique into the stochastic MU rule.\nNumerical comparisons suggest that our proposed algorithms robustly outperform\nstate-of-the-art algorithms across different synthetic and real-world datasets.\n", "title": "Stochastic variance reduced multiplicative update for nonnegative matrix factorization" }
null
null
null
null
true
null
12457
null
Default
null
null
null
{ "abstract": " Border crossing delays between New York State and Southern Ontario cause\nproblems such as enormous economic losses and massive environmental pollution.\nIn this area, there are three border-crossing ports: Peace Bridge (PB), Rainbow\nBridge (RB) and Lewiston-Queenston Bridge (LQ) at the Niagara Frontier border.\nThe goals of this paper are to determine whether bi-national wait times for\ncommercial and passenger vehicles are evenly distributed among the three ports\nand to uncover hidden significant factors that may result in insufficient\nutilization. The historical border wait time data from 7:00 to 21:00 between\n08/22/2016 and 06/20/2017 are archived, as well as the corresponding temporal\nand weather data. For each vehicle type in each direction, a Decision Tree is\nbuilt to identify the various border delay patterns over the three bridges. We\nfind that for the passenger vehicles to the USA, the convenient connections\nbetween the Canadian freeways and US I-190 via LQ and PB may make these two\nbridges more congested than RB, especially on Canadian holidays. For passenger\nvehicles in the other direction, RB is much more congested than LQ and PB in\nsome cases; visitors to Niagara Falls on the US side in summer may be a reason.\nFor commercial trucks to the USA, the delay patterns show that PB is always\nmore congested than LQ. Hour interval and weekend are the most significant\nfactors appearing in all four Decision Trees. These Decision Trees can help the\nauthorities to make specific routing suggestions when the corresponding\nconditions are satisfied.\n", "title": "Bi-National Delay Pattern Analysis For Commercial and Passenger Vehicles at Niagara Frontier Border" }
null
null
null
null
true
null
12458
null
Default
null
null
null
{ "abstract": " Scientific knowledge is constantly subject to a variety of changes due to new\ndiscoveries, alternative interpretations, and fresh perspectives. Understanding\nuncertainties associated with various stages of scientific inquiries is an\nintegral part of scientists' domain expertise and it serves as the core of\ntheir meta-knowledge of science. Despite the growing interest in areas such as\ncomputational linguistics, systematically characterizing and tracking the\nepistemic status of scientific claims and their evolution in scientific\ndisciplines remains a challenge. We present a unifying framework for the study\nof uncertainties explicitly and implicitly conveyed in scientific publications.\nThe framework aims to accommodate a wide range of uncertainty types, from\nspeculations to inconsistencies and controversies. We introduce a scalable and\nadaptive method to recognize semantically equivalent cues of uncertainty across\ndifferent fields of research and accommodate individual analysts' unique\nperspectives. We demonstrate how the new method can be used to expand a small\nseed list of uncertainty cue words and how the validity of the expanded\ncandidate cue words is verified. We visualize the mixture of the original and\nexpanded uncertainty cue words to reveal the diversity of expressions of\nuncertainty. These cue words offer a novel resource for the study of\nuncertainty in scientific assertions.\n", "title": "A Scalable and Adaptive Method for Finding Semantically Equivalent Cue Words of Uncertainty" }
null
null
null
null
true
null
12459
null
Default
null
null
null
{ "abstract": " Central pattern generators (CPGs) appear to have evolved multiple times\nthroughout the animal kingdom, indicating that their design imparts a\nsignificant evolutionary advantage. Insight into how this design is achieved is\nhindered by the difficulty inherent in examining relationships among\nelectrophysiological properties of the constituent cells of a CPG and their\nfunctional connectivity. That is: experimentally it is challenging to estimate\nthe values of more than two or three of these properties simultaneously. We\nemploy a method of statistical data assimilation (D.A.) to estimate the\nsynaptic weights, synaptic reversal potentials, and maximum conductances of ion\nchannels of the constituent neurons in a multi-modal network model. We then use\nthese estimates to predict the functional mode of activity that the network is\nexpressing. The measurements used are the membrane voltage time series of all\nneurons in the circuit. We find that these measurements provide sufficient\ninformation to yield accurate predictions of the network's associated\nelectrical activity. This experiment can be applied directly in a real\nlaboratory using intracellular recordings from a biological CPG whose\nstructural mapping is known, and which can be completely isolated from the\nanimal. The simulated results in this paper suggest that D.A. might provide a\ntool for simultaneously estimating tens to hundreds of CPG properties, thereby\noffering the opportunity to seek possible systematic relationships among these\nproperties and the emergent electrical activity.\n", "title": "An optimization method to simultaneously estimate electrophysiology and connectivity in a model central pattern generator" }
null
null
null
null
true
null
12460
null
Default
null
null
null
{ "abstract": " We investigate extremely luminous dusty galaxies in the environments around\nWISE-selected hot dust obscured galaxies (Hot DOGs) and WISE/radio-selected\nactive galactic nuclei (AGNs) at average redshifts of z = 2.7 and z = 1.7,\nrespectively. Previous observations have detected overdensities of companion\nsubmillimetre-selected sources around 10 Hot DOGs and 30 WISE/radio AGNs, with\noverdensities of ~ 2 - 3 and ~ 5 - 6, respectively. We find the space densities\nin both samples to be overdense compared to normal star-forming galaxies and\nsubmillimetre galaxies (SMGs) in the SCUBA-2 Cosmology Legacy Survey (S2CLS).\nBoth samples of companion sources have mid-IR colours and mid-IR to submm\nratios consistent with those of SMGs. The brighter population around\nWISE/radio AGNs could be responsible for the higher overdensity reported. We\nalso find that the star formation rate densities (SFRDs) are higher than in the\nfield, but consistent with clusters of dusty galaxies. WISE-selected AGNs\nappear to be good signposts for protoclusters at high redshift on arcmin\nscales. The results reported here provide an upper limit to the strength of\nangular clustering using the two-point correlation function. Monte Carlo\nsimulations show no angular correlation, which could indicate protoclusters on\nscales larger than the SCUBA-2 1.5 arcmin scale maps.\n", "title": "Overdensities of SMGs around WISE-selected, ultra-luminous, high-redshift AGN" }
null
null
null
null
true
null
12461
null
Default
null
null
null
{ "abstract": " The least squares (LS) estimator and the best linear unbiased estimator\n(BLUE) are two well-studied approaches for the estimation of a deterministic\nbut unknown parameter vector. In many applications it is known that the\nparameter vector fulfills some constraints, e.g., linear constraints. For such\nsituations the constrained LS estimator, which is a simple extension of the LS\nestimator, can be employed. In this paper we derive the constrained version of\nthe BLUE. It will turn out that the incorporation of the linear constraints\ninto the derivation of the BLUE is not as straightforward as for the\nconstrained LS estimator, but the final expression for the constrained BLUE is\nclosely related to that of the constrained LS estimator.\n", "title": "Constrained Best Linear Unbiased Estimation" }
null
null
null
null
true
null
12462
null
Default
null
null
null
{ "abstract": " Performing high level cognitive tasks requires the integration of feature\nmaps with drastically different structure. In Visual Question Answering (VQA)\nimage descriptors have spatial structures, while lexical inputs inherently\nfollow a temporal sequence. The recently proposed Multimodal Compact Bilinear\npooling (MCB) forms the outer products, via count-sketch approximation, of the\nvisual and textual representation at each spatial location. While this\nprocedure preserves spatial information locally, outer-products are taken\nindependently for each fiber of the activation tensor, and therefore do not\ninclude spatial context. In this work, we introduce multi-dimensional sketch\n({MD-sketch}), a novel extension of count-sketch to tensors. Using this new\nformulation, we propose Multimodal Compact Tensor Pooling (MCT) to fully\nexploit the global spatial context during bilinear pooling operations.\nContrary to MCB, our approach preserves spatial context by directly\nconvolving the MD-sketch from the visual tensor features with the text vector\nfeature using higher order FFT. Furthermore, we apply MCT incrementally at each\nstep of the question embedding and accumulate the multi-modal vectors with a\nsecond LSTM layer before the final answer is chosen.\n", "title": "Compact Tensor Pooling for Visual Question Answering" }
null
null
null
null
true
null
12463
null
Default
null
null
null
{ "abstract": " A facility based on a next-generation, high-flux D-D neutron generator has\nbeen commissioned and it is now operational at the University of California,\nBerkeley. The current generator design produces near monoenergetic 2.45 MeV\nneutrons at outputs of 10^8 n/s. Calculations provided show that future\nconditioning at higher currents and voltages will allow for a production rate\nover 10^10 n/s. A significant problem encountered was beam-induced electron\nbackstreaming, which needed to be resolved to achieve meaningful beam currents.\nTwo methods of suppressing secondary electrons resulting from the deuterium\nbeam striking the target were tested: the application of static electric and\nmagnetic fields. Computational simulations of both techniques were done using a\nfinite element analysis in COMSOL Multiphysics. Experimental tests verified\nthese simulation results. The most reliable suppression was achieved via the\nimplementation of an electrostatic shroud with a voltage offset of -800 V\nrelative to the target.\n", "title": "Beam-induced Back-streaming Electron Suppression Analysis for Accelerator Type Neutron Generators" }
null
null
null
null
true
null
12464
null
Default
null
null
null
{ "abstract": " Airborne LiDAR point cloud representing a forest contains 3D data, from which\nvertical stand structure even of understory layers can be derived. This paper\npresents a tree segmentation approach for multi-story stands that stratifies\nthe point cloud to canopy layers and segments individual tree crowns within\neach layer using a digital surface model based tree segmentation method. The\nnovelty of the approach is the stratification procedure that separates the\npoint cloud to an overstory and multiple understory tree canopy layers by\nanalyzing vertical distributions of LiDAR points within overlapping locales.\nThe procedure does not make a priori assumptions about the shape and size of\nthe tree crowns and can, independent of the tree segmentation method, be\nutilized to vertically stratify tree crowns of forest canopies. We applied the\nproposed approach to the University of Kentucky Robinson Forest - a natural\ndeciduous forest with complex and highly variable terrain and vegetation\nstructure. The segmentation results showed that using the stratification\nprocedure strongly improved detecting understory trees (from 46% to 68%) at the\ncost of introducing a fair number of over-segmented understory trees (increased\nfrom 1% to 16%), while barely affecting the overall segmentation quality of\noverstory trees. Results of vertical stratification of the canopy showed that\nthe point density of understory canopy layers were suboptimal for performing a\nreasonable tree segmentation, suggesting that acquiring denser LiDAR point\nclouds would allow more improvements in segmenting understory trees. As shown\nby inspecting correlations of the results with forest structure, the\nsegmentation approach is applicable to a variety of forest types.\n", "title": "Vertical stratification of forest canopy for segmentation of under-story trees within small-footprint airborne LiDAR point clouds" }
null
null
[ "Computer Science" ]
null
true
null
12465
null
Validated
null
null
null
{ "abstract": " The temperature coefficients for all the directions of the Nagoya muon\ntelescope were obtained. The zenith angular dependence of the temperature\ncoefficients was studied.\n", "title": "Temperature effect observed by the Nagoya muon telescope" }
null
null
null
null
true
null
12466
null
Default
null
null
null
{ "abstract": " We present a general-purpose method to train Markov chain Monte Carlo\nkernels, parameterized by deep neural networks, that converge and mix quickly\nto their target distribution. Our method generalizes Hamiltonian Monte Carlo\nand is trained to maximize expected squared jumped distance, a proxy for mixing\nspeed. We demonstrate large empirical gains on a collection of simple but\nchallenging distributions, for instance achieving a 106x improvement in\neffective sample size in one case, and mixing when standard HMC makes no\nmeasurable progress in a second. Finally, we show quantitative and qualitative\ngains on a real-world task: latent-variable generative modeling. We release an\nopen source TensorFlow implementation of the algorithm.\n", "title": "Generalizing Hamiltonian Monte Carlo with Neural Networks" }
null
null
null
null
true
null
12467
null
Default
null
null
null
{ "abstract": " Binary Sidel'nikov-Lempel-Cohn-Eastman sequences (or SLCE sequences) over F 2\nhave even period and almost perfect autocorrelation. However, the evaluation of\nthe linear complexity of these sequences is really difficult. In this paper, we\ncontinue the study of [1]. We first express the multiple roots of character\npolynomials of SLCE sequences in terms of certain kinds of Jacobi sums. Then by\nmaking use of Gauss sums and Jacobi sums in the \"semiprimitive\" case, we derive\nnew divisibility results for SLCE sequences.\n", "title": "Multiplicities of Character Values of Binary Sidel'nikov-Lempel-Cohn-Eastman Sequences" }
null
null
null
null
true
null
12468
null
Default
null
null
null
{ "abstract": " Embeddings of knowledge graphs have received significant attention due to\ntheir excellent performance for tasks like link prediction and entity\nresolution. In this short paper, we are providing a comparison of two\nstate-of-the-art knowledge graph embeddings for which their equivalence has\nrecently been established, i.e., ComplEx and HolE [Nickel, Rosasco, and Poggio,\n2016; Trouillon et al., 2016; Hayashi and Shimbo, 2017]. First, we briefly\nreview both models and discuss how their scoring functions are equivalent. We\nthen analyze the discrepancy of results reported in the original articles, and\nshow experimentally that they are likely due to the use of different loss\nfunctions. In further experiments, we evaluate the ability of both models to\nembed symmetric and antisymmetric patterns. Finally, we discuss advantages and\ndisadvantages of both models and under which conditions one would be preferable\nto the other.\n", "title": "Complex and Holographic Embeddings of Knowledge Graphs: A Comparison" }
null
null
null
null
true
null
12469
null
Default
null
null
null
{ "abstract": " Deep convolutional networks have become a popular tool for image generation\nand restoration. Generally, their excellent performance is imputed to their\nability to learn realistic image priors from a large number of example images.\nIn this paper, we show that, on the contrary, the structure of a generator\nnetwork is sufficient to capture a great deal of low-level image statistics\nprior to any learning. In order to do so, we show that a randomly-initialized\nneural network can be used as a handcrafted prior with excellent results in\nstandard inverse problems such as denoising, super-resolution, and inpainting.\nFurthermore, the same prior can be used to invert deep neural representations\nto diagnose them, and to restore images based on flash-no flash input pairs.\nApart from its diverse applications, our approach highlights the inductive\nbias captured by standard generator network architectures. It also bridges the\ngap between two very popular families of image restoration methods:\nlearning-based methods using deep convolutional networks and learning-free\nmethods based on handcrafted image priors such as self-similarity. Code and\nsupplementary material are available at\nthis https URL .\n", "title": "Deep Image Prior" }
null
null
null
null
true
null
12470
null
Default
null
null
null
{ "abstract": " Fitting stochastic kinetic models represented by Markov jump processes within\nthe Bayesian paradigm is complicated by the intractability of the observed data\nlikelihood. There has therefore been considerable attention given to the design\nof pseudo-marginal Markov chain Monte Carlo algorithms for such models.\nHowever, these methods are typically computationally intensive, often require\ncareful tuning and must be restarted from scratch upon receipt of new\nobservations. Sequential Monte Carlo (SMC) methods on the other hand aim to\nefficiently reuse posterior samples at each time point. Despite their appeal,\napplying SMC schemes in scenarios with both dynamic states and static\nparameters is made difficult by the problem of particle degeneracy. A\nprincipled approach for overcoming this problem is to move each parameter\nparticle through a Metropolis-Hastings kernel that leaves the target invariant.\nThis rejuvenation step is key to a recently proposed SMC$^2$ algorithm, which\ncan be seen as the pseudo-marginal analogue of an idealised scheme known as\niterated batch importance sampling. Computing the parameter weights in SMC$^2$\nrequires running a particle filter over dynamic states to unbiasedly estimate\nthe intractable observed data likelihood contributions at each time point. In\nthis paper, we propose to use an auxiliary particle filter inside the SMC$^2$\nscheme. Our method uses two recently proposed constructs for sampling\nconditioned jump processes and we find that the resulting inference schemes\ntypically require fewer state particles than when using a simple bootstrap\nfilter. Using two applications, we compare the performance of the proposed\napproach with various competing methods, including two global MCMC schemes.\n", "title": "Efficient SMC$^2$ schemes for stochastic kinetic models" }
null
null
null
null
true
null
12471
null
Default
null
null
null
{ "abstract": " Image-to-image translation is a class of vision and graphics problems where\nthe goal is to learn the mapping between an input image and an output image\nusing a training set of aligned image pairs. However, for many tasks, paired\ntraining data will not be available. We present an approach for learning to\ntranslate an image from a source domain $X$ to a target domain $Y$ in the\nabsence of paired examples. Our goal is to learn a mapping $G: X \\rightarrow Y$\nsuch that the distribution of images from $G(X)$ is indistinguishable from the\ndistribution $Y$ using an adversarial loss. Because this mapping is highly\nunder-constrained, we couple it with an inverse mapping $F: Y \\rightarrow X$\nand introduce a cycle consistency loss to push $F(G(X)) \\approx X$ (and vice\nversa). Qualitative results are presented on several tasks where paired\ntraining data does not exist, including collection style transfer, object\ntransfiguration, season transfer, photo enhancement, etc. Quantitative\ncomparisons against several prior methods demonstrate the superiority of our\napproach.\n", "title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks" }
null
null
null
null
true
null
12472
null
Default
null
null
null
{ "abstract": " We consider the challenging problem of statistical inference for\nexponential-family random graph models based on a single observation of a\nrandom graph with complex dependence. To facilitate statistical inference, we\nconsider random graphs with additional structure in the form of block\nstructure. We have shown elsewhere that when the block structure is known, it\nfacilitates consistency results for $M$-estimators of canonical and curved\nexponential-family random graph models with complex dependence, such as\ntransitivity. In practice, the block structure is known in some applications\n(e.g., multilevel networks), but is unknown in others. When the block structure\nis unknown, the first and foremost question is whether it can be recovered with\nhigh probability based on a single observation of a random graph with complex\ndependence. The main consistency results of the paper show that it is possible\nto do so provided the number of blocks grows as fast as in high-dimensional\nstochastic block models. These results confirm that exponential-family random\ngraph models with block structure constitute a promising direction of\nstatistical network analysis.\n", "title": "Consistent structure estimation of exponential-family random graph models with block structure" }
null
null
null
null
true
null
12473
null
Default
null
null
null
{ "abstract": " Do visual tasks have a relationship, or are they unrelated? For instance,\ncould having surface normals simplify estimating the depth of an image?\nIntuition answers these questions positively, implying existence of a structure\namong visual tasks. Knowing this structure has notable values; it is the\nconcept underlying transfer learning and provides a principled way for\nidentifying redundancies across tasks, e.g., to seamlessly reuse supervision\namong related tasks or solve many tasks in one system without piling up the\ncomplexity.\nWe propose a fully computational approach for modeling the structure of the\nspace of visual tasks. This is done via finding (first and higher-order)\ntransfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D,\nand semantic tasks in a latent space. The product is a computational taxonomic\nmap for task transfer learning. We study the consequences of this structure,\ne.g., nontrivial emergent relationships, and exploit them to reduce the demand\nfor labeled data. For example, we show that the total number of labeled\ndatapoints needed for solving a set of 10 tasks can be reduced by roughly 2/3\n(compared to training independently) while keeping the performance nearly the\nsame. We provide a set of tools for computing and probing this taxonomical\nstructure including a solver that users can employ to devise efficient\nsupervision policies for their use cases.\n", "title": "Taskonomy: Disentangling Task Transfer Learning" }
null
null
null
null
true
null
12474
null
Default
null
null
null
{ "abstract": " The potential failure of energy equality for a solution $u$ of the Euler or\nNavier-Stokes equations can be quantified using a so-called `energy measure':\nthe weak-$*$ limit of the measures $|u(t)|^2\\,\\mbox{d}x$ as $t$ approaches the\nfirst possible blowup time. We show that membership of $u$ in certain (weak or\nstrong) $L^q L^p$ classes gives a uniform lower bound on the lower local\ndimension of $\\mathcal{E}$; more precisely, it implies uniform boundedness of a\ncertain upper $s$-density of $\\mathcal{E}$. We also define and give lower\nbounds on the `concentration dimension' associated to $\\mathcal{E}$, which is\nthe Hausdorff dimension of the smallest set on which energy can concentrate.\nBoth the lower local dimension and the concentration dimension of $\\mathcal{E}$\nmeasure the departure from energy equality. As an application of our estimates,\nwe prove that any solution to the $3$-dimensional Navier-Stokes Equations which\nis Type-I in time must satisfy the energy equality at the first blowup time.\n", "title": "The Energy Measure for the Euler and Navier-Stokes Equations" }
null
null
null
null
true
null
12475
null
Default
null
null
null
{ "abstract": " Inference of space-time varying signals on graphs emerges naturally in a\nplethora of network science related applications. A frequently encountered\nchallenge pertains to reconstructing such dynamic processes, given their values\nover a subset of vertices and time instants. The present paper develops a\ngraph-aware kernel-based kriged Kalman filter that accounts for the\nspatio-temporal variations, and offers efficient online reconstruction, even\nfor dynamically evolving network topologies. The kernel-based learning\nframework bypasses the need for statistical information by capitalizing on the\nsmoothness that graph signals exhibit with respect to the underlying graph. To\naddress the challenge of selecting the appropriate kernel, the proposed filter\nis combined with a multi-kernel selection module. Such a data-driven method\nselects a kernel attuned to the signal dynamics on-the-fly within the linear\nspan of a pre-selected dictionary. The novel multi-kernel learning algorithm\nexploits the eigenstructure of Laplacian kernel matrices to reduce\ncomputational complexity. Numerical tests with synthetic and real data\ndemonstrate the superior reconstruction performance of the novel approach\nrelative to state-of-the-art alternatives.\n", "title": "Inference of Spatio-Temporal Functions over Graphs via Multi-Kernel Kriged Kalman Filtering" }
null
null
null
null
true
null
12476
null
Default
null
null
null
{ "abstract": " In this sequel to earlier papers by three of the authors, we obtain a new\nbound on the complexity of a closed 3--manifold, as well as a characterisation\nof manifolds realising our complexity bounds. As an application, we obtain the\nfirst infinite families of minimal triangulations of Seifert fibred spaces\nmodelled on Thurston's geometry $\\widetilde{\\text{SL}_2(\\mathbb{R})}.$\n", "title": "Z2-Thurston Norm and Complexity of 3-Manifolds, II" }
null
null
null
null
true
null
12477
null
Default
null
null
null
{ "abstract": " This paper introduces a probabilistic framework for k-shot image\nclassification. The goal is to generalise from an initial large-scale\nclassification task to a separate task comprising new classes and small numbers\nof examples. The new approach not only leverages the feature-based\nrepresentation learned by a neural network from the initial task\n(representational transfer), but also information about the classes (concept\ntransfer). The concept information is encapsulated in a probabilistic model for\nthe final layer weights of the neural network which acts as a prior for\nprobabilistic k-shot learning. We show that even a simple probabilistic model\nachieves state-of-the-art on a standard k-shot learning dataset by a large\nmargin. Moreover, it is able to accurately model uncertainty, leading to well\ncalibrated classifiers, and is easily extensible and flexible, unlike many\nrecent approaches to k-shot learning.\n", "title": "Discriminative k-shot learning using probabilistic models" }
null
null
null
null
true
null
12478
null
Default
null
null
null
{ "abstract": " This paper describes a method of nonlinear wavelet thresholding of time\nseries. The Ramachandran-Ranganathan runs test is used to assess the quality of\napproximation. To minimize the objective function, it is proposed to use\ngenetic algorithms, one of the stochastic optimization methods. The suggested\nmethod is tested both on model series and on word frequency series using the\nGoogle Books Ngram data. It is shown that the filtering method that uses the\nruns criterion gives significantly better results than standard wavelet\nthresholding. The method can be used when the quality of filtering is of\nprimary importance rather than the speed of calculation.\n", "title": "Comparative analysis of criteria for filtering time series of word usage frequencies" }
null
null
null
null
true
null
12479
null
Default
null
null
null
{ "abstract": " The $\\mathbb{Z}_2$ topological phase in the quantum dimer model on the\nKagomé-lattice is a candidate for the description of the low-energy physics\nof the anti-ferromagnetic Heisenberg model on the same lattice. We study the\nextend of the topological phase by interpolating between the exactly solvable\nparent Hamiltonian of the topological phase and an effective low-energy\ndescription of the Heisenberg model in terms of a quantum-dimer Hamiltonian.\nTherefore, we perform a perturbative treatment of the low-energy excitations in\nthe topological phase including free and interacting quasi-particles. We find a\nphase transition out of the topological phase far from the Heisenberg point.\nThe resulting phase is characterized by a spontaneously broken rotational\nsymmetry and a unit cell involving six sites.\n", "title": "Extend of the $\\mathbb{Z}_2$-spin liquid phase on the Kagomé-lattice" }
null
null
[ "Physics" ]
null
true
null
12480
null
Validated
null
null
null
{ "abstract": " We propose an L-BFGS optimization algorithm on Riemannian manifolds using\nminibatched stochastic variance reduction techniques for fast convergence with\nconstant step sizes, without resorting to linesearch methods designed to\nsatisfy Wolfe conditions. We provide a new convergence proof for strongly\nconvex functions without using curvature conditions on the manifold, as well as\na convergence discussion for nonconvex functions. We discuss a couple of ways\nto obtain the correction pairs used to calculate the product of the gradient\nwith the inverse Hessian, and empirically demonstrate their use in synthetic\nexperiments on computation of Karcher means for symmetric positive definite\nmatrices and leading eigenvalues of large scale data matrices. We compare our\nmethod to VR-PCA for the latter experiment, along with Riemannian SVRG for both\ncases, and show strong convergence results for a range of datasets.\n", "title": "Accelerated Stochastic Quasi-Newton Optimization on Riemann Manifolds" }
null
null
null
null
true
null
12481
null
Default
null
null
null
{ "abstract": " We present a new Markov chain Monte Carlo algorithm, implemented in software\nArbores, for inferring the history of a sample of DNA sequences. Our principal\ninnovation is a bridging procedure, previously applied only for simple\nstochastic processes, in which the local computations within a bridge can\nproceed independently of the rest of the DNA sequence, facilitating large-scale\nparallelisation.\n", "title": "Bridging trees for posterior inference on Ancestral Recombination Graphs" }
null
null
null
null
true
null
12482
null
Default
null
null
null
{ "abstract": " We present the crystal structure and magnetic properties of Y$_{3}$Cu$_{9}$(OH)$_{19}$Cl$_{8}$, a stoichiometric frustrated quantum spin system with slightly distorted kagome layers. Single crystals of Y$_{3}$Cu$_{9}$(OH)$_{19}$Cl$_{8}$ were grown under hydrothermal conditions. The structure was determined from single crystal X-ray diffraction and confirmed by neutron powder diffraction. The observed structure reveals two different Cu-positions leading to a slightly distorted kagome layer, in contrast to the closely related YCu$_{3}$(OH)$_{6}$Cl$_{3}$. Curie-Weiss behavior at high temperatures, with a Weiss temperature $\\theta_{W}$ of the order of $-100$ K, shows a large dominant antiferromagnetic coupling within the kagome planes. Specific-heat and magnetization measurements on single crystals reveal an antiferromagnetic transition at T$_{N}=2.2$ K, indicating a pronounced frustration parameter of $\\theta_{W}/T_{N}\\approx50$. Optical transmission experiments on powder samples and single crystals confirm the structural findings. Specific-heat measurements on YCu$_{3}$(OH)$_{6}$Cl$_{3}$ down to 0.4 K confirm the proposed quantum spin-liquid state of that system. Therefore, the two Y-Cu-OH-Cl compounds present a unique setting to investigate closely related structures with a spin-liquid state and a strongly frustrated AFM ordered state, by slightly releasing the frustration in a kagome lattice.\n", "title": "Strong magnetic frustration in Y$_{3}$Cu$_{9}$(OH)$_{19}$Cl$_{8}$: a distorted kagome antiferromagnet" }
null
null
[ "Physics" ]
null
true
null
12483
null
Validated
null
null
null
{ "abstract": " We construct an extended oriented $(2+\\epsilon)$-dimensional topological field theory, the character field theory $X_G$ attached to an affine algebraic group in characteristic zero, which calculates the homology of character varieties of surfaces. It is a model for a dimensional reduction of Kapustin-Witten theory ($N=4$ $d=4$ super-Yang-Mills in the GL twist), and a universal version of the unipotent character field theory introduced in arXiv:0904.1247. Boundary conditions in $X_G$ are given by quantum Hamiltonian $G$-spaces, as captured by de Rham (or strong) $G$-categories, i.e., module categories for the monoidal dg category $D(G)$ of $D$-modules on $G$. We show that the circle integral $X_G(S^1)$ (the center and trace of $D(G)$) is identified with the category $D(G/G)$ of \"class $D$-modules\", while for an oriented surface $S$ (with arbitrary decorations at punctures) we show that $X_G(S)\\simeq{\\rm H}_*^{BM}(Loc_G(S))$ is the Borel-Moore homology of the corresponding character stack. We also describe the \"Hodge filtration\" on the character theory, a one-parameter degeneration to a TFT whose boundary conditions are given by classical Hamiltonian $G$-spaces, and which encodes a variant of the Hodge filtration on character varieties.\n", "title": "The Character Field Theory and Homology of Character Varieties" }
null
null
[ "Mathematics" ]
null
true
null
12484
null
Validated
null
null
null
{ "abstract": " This paper presents a new generator of chaotic bit sequences with mixed-mode\n(continuous and discrete) inputs. The generator has an improved level of\nchaotic properties in comparison with the existing single source (input)\ndigital chaotic bit generators. The 0-1 test is used to show the improved\nchaotic behavior of our generator having a chaotic continuous input (Chua,\nRössler or Lorenz system) intermingled with a discrete input (logistic,\nTinkerbell or Henon map) with various parameters. The obtained sequences of\nchaotic bits show some features of random processes with increased entropy\nlevels, even in the cases of small numbers of bit representations. The\nproperties of the new generator and its binary sequences compare well with\nthose obtained from a truly random binary reference quantum generator, as\nevidenced by the results of the $ent$ tests.\n", "title": "A new generator of chaotic bit sequences with mixed-mode inputs" }
null
null
null
null
true
null
12485
null
Default
null
null
null
{ "abstract": " The HAWC Gamma Ray observatory consists of 300 water Cherenkov detectors (WCD) instrumented with four photomultiplier tubes (PMTs) per WCD. HAWC is located between two of the highest mountains in Mexico. The high altitude (4100 m asl), the relatively short distance to the Gulf of Mexico (~100 km), the large detecting area (22 000 m$^2$) and its high sensitivity make HAWC a good instrument to explore the acceleration of particles due to the electric fields existing inside storm clouds. In particular, the scaler system of HAWC records the output of each one of the 1200 PMTs as well as the 2, 3, and 4-fold multiplicities (logic AND in a time window of 30 ns) of each WCD with a sampling rate of 40 Hz. Using the scaler data, we have identified 20 enhancements of the observed rate during periods when storm clouds were over HAWC but without cloud-earth discharges. These enhancements can be produced by electrons with energies of tens of MeV, accelerated by the electric fields of tens of kV/m measured at the site during the storm periods. In this work, we present the recorded data, the method of analysis and our preliminary conclusions on the electron acceleration by the electric fields inside the clouds.\n", "title": "HAWC response to atmospheric electricity activity" }
null
null
null
null
true
null
12486
null
Default
null
null
null
{ "abstract": " Optical Music Recognition (OMR) is an important technology within Music Information Retrieval. Deep learning models show promising results on OMR tasks, but symbol-level annotated data sets of sufficient size to train such models are not available and difficult to develop. We present a deep learning architecture called a Convolutional Sequence-to-Sequence model to both move towards an end-to-end trainable OMR pipeline, and apply a learning process that trains on full sentences of sheet music instead of individually labeled symbols. The model is trained and evaluated on a human-generated data set, with various image augmentations based on real-world scenarios. This data set is the first publicly available set in OMR research with sufficient size to train and evaluate deep learning models. With the introduced augmentations, a pitch recognition accuracy of 81% and a duration accuracy of 94% is achieved, resulting in a note-level accuracy of 80%. Finally, the model is compared to commercially available methods, showing a large improvement over these applications.\n", "title": "Optical Music Recognition with Convolutional Sequence-to-Sequence Models" }
null
null
null
null
true
null
12487
null
Default
null
null
null
{ "abstract": " The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing some fractional component into the classical cable model. The Cauchy problem associated to this kind of model has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we propose how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first-order time derivative with a fractional derivative of order $\\alpha\\in(0,1)$ of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable which satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter $\\alpha$, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case $\\alpha=1$.\n", "title": "Fractional Cable Model for Signal Conduction in Spiny Neuronal Dendrites" }
null
null
null
null
true
null
12488
null
Default
null
null
null
{ "abstract": " We consider conditional-mean hedging in a fractional Black-Scholes pricing\nmodel in the presence of proportional transaction costs. We develop an explicit\nformula for the conditional-mean hedging portfolio in terms of the recently\ndiscovered explicit conditional law of the fractional Brownian motion.\n", "title": "Hedging in fractional Black-Scholes model with transaction costs" }
null
null
null
null
true
null
12489
null
Default
null
null
null
{ "abstract": " We say that a finite metric space $X$ can be embedded almost isometrically into a class of metric spaces $C$ if for every $\\epsilon > 0$ there exists an embedding of $X$ into one of the elements of $C$ with bi-Lipschitz distortion less than $1 + \\epsilon$. We show that the almost isometric embeddability conditions are equivalent for the following classes of spaces:\n(a) Quotients of Euclidean spaces by isometric actions of finite groups,\n(b) $L_2$-Wasserstein spaces over Euclidean spaces,\n(c) Compact flat manifolds,\n(d) Compact flat orbifolds,\n(e) Quotients of bi-invariant Lie groups by isometric actions of compact Lie groups. (This one is the most surprising.)\nWe call spaces which satisfy these conditions finite flat spaces. The question of a synthetic definition naturally arises.\nSince Markov type constants depend only on finite subsets, we can conclude that bi-invariant Lie groups and their quotients have Markov type $2$ with constant $1$.\n", "title": "Finite flat spaces" }
null
null
null
null
true
null
12490
null
Default
null
null
null
{ "abstract": " The aim of this work is to establish that two recently published projection\ntheorems, one dealing with a parametric generalization of relative entropy and\nanother dealing with Rényi divergence, are equivalent under a\ncorrespondence on the space of probability measures. Further, we demonstrate\nthat the associated \"Pythagorean\" theorems are equivalent under this\ncorrespondence. Finally, we apply Eguchi's method of obtaining Riemannian\nmetrics from general divergence functions to show that the geometry arising\nfrom the above divergences are equivalent under the aforementioned\ncorrespondence.\n", "title": "On The Equivalence of Projections In Relative $α$-Entropy and Rényi Divergence" }
null
null
[ "Computer Science" ]
null
true
null
12491
null
Validated
null
null
null
{ "abstract": " The popular Adjusted Rand Index (ARI) is extended to the task of simultaneous clustering of the rows and columns of a given matrix. This new index, called the Coclustering Adjusted Rand Index (CARI), remains convenient and competitive with other indices. Indeed, partitions with a high number of clusters can be considered, and it does not require any convention when the numbers of clusters in the partitions are different. Experiments on simulated partitions are presented, and the performance of this index in measuring the agreement between two pairs of partitions is assessed. Comparison with other indices is discussed.\n", "title": "Comparing high dimensional partitions, with the Coclustering Adjusted Rand Index" }
null
null
null
null
true
null
12492
null
Default
null
null
null
{ "abstract": " We study the size and the external path length of random tries and show that\nthey are asymptotically independent in the asymmetric case but strongly\ndependent with small periodic fluctuations in the symmetric case. Such an\nunexpected behavior is in sharp contrast to the previously known results on\nrandom tries that the size is totally positively correlated to the internal\npath length and that both tend to the same normal limit law. These two\ndependence examples provide concrete instances of bivariate normal\ndistributions (as limit laws) whose correlation is $0$, $1$ and periodically\noscillating. Moreover, the same type of behaviors is also clarified for other\nclasses of digital trees such as bucket digital trees and Patricia tries.\n", "title": "Dependence between Path-length and Size in Random Digital Trees" }
null
null
null
null
true
null
12493
null
Default
null
null
null
{ "abstract": " Output from statistical parametric speech synthesis (SPSS) remains noticeably\nworse than natural speech recordings in terms of quality, naturalness, speaker\nsimilarity, and intelligibility in noise. There are many hypotheses regarding\nthe origins of these shortcomings, but these hypotheses are often kept vague\nand presented without empirical evidence that could confirm and quantify how a\nspecific shortcoming contributes to imperfections in the synthesised speech.\nThroughout speech synthesis literature, surprisingly little work is dedicated\ntowards identifying the perceptually most important problems in speech\nsynthesis, even though such knowledge would be of great value for creating\nbetter SPSS systems.\nIn this book chapter, we analyse some of the shortcomings of SPSS. In\nparticular, we discuss issues with vocoding and present a general methodology\nfor quantifying the effect of any of the many assumptions and design choices\nthat hold SPSS back. The methodology is accompanied by an example that\ncarefully measures and compares the severity of perceptual limitations imposed\nby vocoding as well as other factors such as the statistical model and its use.\n", "title": "Analysing Shortcomings of Statistical Parametric Speech Synthesis" }
null
null
null
null
true
null
12494
null
Default
null
null
null
{ "abstract": " Meshfree solution schemes for the incompressible Navier--Stokes equations are\nusually based on algorithms commonly used in finite volume methods, such as\nprojection methods, SIMPLE and PISO algorithms. However, drawbacks of these\nalgorithms that are specific to meshfree methods have often been overlooked. In\nthis paper, we study the drawbacks of conventionally used meshfree Generalized\nFinite Difference Method~(GFDM) schemes for Lagrangian incompressible\nNavier-Stokes equations, both operator splitting schemes and monolithic\nschemes. The major drawback of most of these schemes is inaccurate local\napproximations to the mass conservation condition. Further, we propose a new\nmodification of a commonly used monolithic scheme that overcomes these problems\nand shows a better approximation for the velocity divergence condition. We then\nperform a numerical comparison which shows the new monolithic scheme to be more\naccurate than existing schemes.\n", "title": "On Meshfree GFDM Solvers for the Incompressible Navier-Stokes Equations" }
null
null
null
null
true
null
12495
null
Default
null
null
null
{ "abstract": " We propose a development of the Analytic Hierarchy Process (AHP) that permits the methodology to be used also in cases of decision problems with a very large number of alternatives evaluated with respect to several criteria. While the application of the original AHP method involves many pairwise comparisons between alternatives and criteria, our proposal is composed of four steps: (i) direct evaluation of the alternatives at hand on the considered criteria; (ii) selection of some reference evaluations; (iii) application of the original AHP method to the reference evaluations; (iv) revision of the direct evaluations on the basis of the prioritization supplied by AHP on the reference evaluations. The new proposal has been tested and validated in an experiment conducted on a sample of university students. The new methodology was then applied to a real-world problem involving the evaluation of 21 Social Housing initiatives sited in the Piedmont region (Italy). To take into account interaction between criteria, the Choquet integral preference model has been considered within a Non Additive Robust Ordinal Regression approach.\n", "title": "Using a new parsimonious AHP methodology combined with the Choquet integral: An application for evaluating social housing initiatives" }
null
null
null
null
true
null
12496
null
Default
null
null
null
{ "abstract": " The blooming availability of traces for social, biological, and communication networks opens up unprecedented opportunities for analyzing diffusion processes in networks. However, the sheer sizes of today's networks raise serious challenges in computational efficiency and scalability.\nIn this paper, we propose a new hyper-graph sketching framework for influence dynamics in networks. The core of our sketching framework, called SKIS, is an efficient importance sampling algorithm that returns only non-singular reverse cascades in the network. Compared to previously developed sketches like RIS and SKIM, our sketch significantly enhances estimation quality while substantially reducing processing time and memory footprint. Further, we present general strategies for using SKIS to enhance existing algorithms for influence estimation and influence maximization, which are motivated by practical applications like viral marketing. Using SKIS, we design a high-quality influence oracle for seed sets with average estimation error up to 10 times smaller than those using RIS and 6 times smaller than SKIM. In addition, our influence maximization using SKIS substantially improves the quality of solutions for greedy algorithms. It achieves up to 10x speed-up and 4x memory reduction over the fastest RIS-based DSSA algorithm, while maintaining the same theoretical guarantees.\n", "title": "Importance Sketching of Influence Dynamics in Billion-scale Networks" }
null
null
null
null
true
null
12497
null
Default
null
null
null
{ "abstract": " Advances in artificial intelligence have renewed interest in conversational\nagents. So-called chatbots have reached maturity for industrial applications.\nGerman insurance companies are interested in improving their customer service\nand digitizing their business processes. In this work we investigate the\npotential use of conversational agents in insurance companies by determining\nwhich classes of agents are of interest to insurance companies, finding\nrelevant use cases and requirements, and developing a prototype for an\nexemplary insurance scenario. Based on this approach, we derive key findings\nfor conversational agent implementation in insurance companies.\n", "title": "Motivations, Classification and Model Trial of Conversational Agents for Insurance Companies" }
null
null
null
null
true
null
12498
null
Default
null
null
null
{ "abstract": " The Keplerian distribution of velocities is not observed in the rotation of large-scale structures, such as found in the rotation of spiral galaxies. The deviation from the Keplerian distribution provides compelling evidence of the presence of non-luminous matter, i.e., dark matter. There are several astrophysical motivations for investigating the dark matter in and around the galaxy as a halo. In this work we address various theoretical and experimental indications pointing towards the existence of this unknown form of matter. Amongst its candidate constituents, the neutrino is one of the most promising. We know the neutrinos oscillate and have tiny masses, but there are also signatures of the existence of heavy and light sterile neutrinos and the possibility of their mixing. Altogether, the role of neutrinos is of great interest in cosmology and in understanding dark matter.\n", "title": "Dark Matter and Neutrinos" }
null
null
null
null
true
null
12499
null
Default
null
null
null
{ "abstract": " Recurrent neural networks like long short-term memory (LSTM) are important\narchitectures for sequential prediction tasks. LSTMs (and RNNs in general)\nmodel sequences along the forward time direction. Bidirectional LSTMs\n(Bi-LSTMs) on the other hand model sequences along both forward and backward\ndirections and are generally known to perform better at such tasks because they\ncapture a richer representation of the data. In the training of Bi-LSTMs, the\nforward and backward paths are learned independently. We propose a variant of\nthe Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a\nchannel between the two paths (during training, but which may be omitted during\ninference); thus optimizing the two paths jointly. We arrive at this joint\nobjective for our model by minimizing a variational lower bound of the joint\nlikelihood of the data sequence. Our model acts as a regularizer and encourages\nthe two networks to inform each other in making their respective predictions\nusing distinct information. We perform ablation studies to better understand\nthe different components of our model and evaluate the method on various\nbenchmarks, showing state-of-the-art performance.\n", "title": "Variational Bi-LSTMs" }
null
null
null
null
true
null
12500
null
Default
null
null