text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics |
null | dict | null | null | list | null | bool, 1 class | null | stringlengths 1-5 | null | stringclasses 2 values | null | null |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " Spatio-temporal data and processes are prevalent across a wide variety of\nscientific disciplines. These processes are often characterized by nonlinear\ntime dynamics that include interactions across multiple scales of spatial and\ntemporal variability. The data sets associated with many of these processes are\nincreasing in size due to advances in automated data measurement, management,\nand numerical simulator output. Non- linear spatio-temporal models have only\nrecently seen interest in statistics, but there are many classes of such models\nin the engineering and geophysical sciences. Tradi- tionally, these models are\nmore heuristic than those that have been presented in the statistics\nliterature, but are often intuitive and quite efficient computationally. We\nshow here that with fairly simple, but important, enhancements, the echo state\nnet- work (ESN) machine learning approach can be used to generate long-lead\nforecasts of nonlinear spatio-temporal processes, with reasonable uncertainty\nquantification, and at only a fraction of the computational expense of a\ntraditional parametric nonlinear spatio-temporal models.\n",
"title": "An Ensemble Quadratic Echo State Network for Nonlinear Spatio-Temporal Forecasting"
}
| null | null | null | null | true | null |
16201
| null |
Default
| null | null |
null |
{
"abstract": " We develop an optimization model and corresponding algorithm for the\nmanagement of a demand-side platform (DSP), whereby the DSP aims to maximize\nits own profit while acquiring valuable impressions for its advertiser clients.\nWe formulate the problem of profit maximization for a DSP interacting with ad\nexchanges in a real-time bidding environment in a\ncost-per-click/cost-per-action pricing model. Our proposed formulation leads to\na nonconvex optimization problem due to the joint optimization over both\nimpression allocation and bid price decisions. We use Lagrangian relaxation to\ndevelop a tractable convex dual problem, which, due to the properties of\nsecond-price auctions, may be solved efficiently with subgradient methods. We\npropose a two-phase solution procedure, whereby in the first phase we solve the\nconvex dual problem using a subgradient algorithm, and in the second phase we\nuse the previously computed dual solution to set bid prices and then solve a\nlinear optimization problem to obtain the allocation probability variables. On\nseveral synthetic examples, we demonstrate that our proposed solution approach\nleads to superior performance over a baseline method that is used in practice.\n",
"title": "Profit Maximization for Online Advertising Demand-Side Platforms"
}
| null | null | null | null | true | null |
16202
| null |
Default
| null | null |
null |
{
"abstract": " Superhydrophobic surfaces (SHSs) have the potential to achieve large drag\nreduction for internal and external flow applications. However, experiments\nhave shown inconsistent results, with many studies reporting significantly\nreduced performance. Recently, it has been proposed that surfactants,\nubiquitous in flow applications, could be responsible, by creating adverse\nMarangoni stresses. Yet, testing this hypothesis is challenging. Careful\nexperiments with purified water show large interfacial stresses and,\nparadoxically, adding surfactants yields barely measurable drag increases. This\nsuggests that other physical processes, such as thermal Marangoni stresses or\ninterface deflection, could explain the lower performance. To test the\nsurfactant hypothesis, we perform the first numerical simulations of flows over\na SHS inclusive of surfactant kinetics. These simulations reveal that\nsurfactant-induced stresses are significant at extremely low concentrations,\npotentially yielding a no-slip boundary condition on the air--water interface\n(the \"plastron\") for surfactant amounts below typical environmental values.\nThese stresses decrease as the streamwise distance between plastron stagnation\npoints increases. We perform microchannel experiments with thermally-controlled\nSHSs consisting of streamwise parallel gratings, which confirm this numerical\nprediction. We introduce a new, unsteady test of surfactant effects. When we\nrapidly remove the driving pressure following a loading phase, a backflow\ndevelops at the plastron, which can only be explained by surfactant gradients\nformed in the loading phase. This demonstrates the significance of surfactants\nin deteriorating drag reduction, and thus the importance of including\nsurfactant stresses in SHS models. Our time-dependent protocol can assess the\nimpact of surfactants in SHS testing and guide future mitigating designs.\n",
"title": "Traces of surfactants can severely limit the drag reduction of superhydrophobic surfaces"
}
| null | null |
[
"Physics"
] | null | true | null |
16203
| null |
Validated
| null | null |
null |
{
"abstract": " This paper considers the problem of recovering either a low rank matrix or a\nsparse vector from observations of linear combinations of the vector or matrix\nelements. Recent methods replace the non-convex regularization with $\\ell_1$ or\nnuclear norm relaxations. It is well known that this approach can be guaranteed\nto recover a near optimal solutions if a so called restricted isometry property\n(RIP) holds. On the other hand it is also known to perform soft thresholding\nwhich results in a shrinking bias which can degrade the solution.\nIn this paper we study an alternative non-convex regularization term. This\nformulation does not penalize elements that are larger than a certain threshold\nmaking it much less prone to small solutions. Our main theoretical results show\nthat if a RIP holds then the stationary points are often well separated, in the\nsense that their differences must be of high cardinality/rank. Thus, with a\nsuitable initial solution the approach is unlikely to fall into a bad local\nminima. Our numerical tests show that the approach is likely to converge to a\nbetter solution than standard $\\ell_1$/nuclear-norm relaxation even when\nstarting from trivial initializations. In many cases our results can also be\nused to verify global optimality of our method.\n",
"title": "Non-Convex Rank/Sparsity Regularization and Local Minima"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16204
| null |
Validated
| null | null |
null |
{
"abstract": " Speech separation is the task of separating target speech from background\ninterference. Traditionally, speech separation is studied as a signal\nprocessing problem. A more recent approach formulates speech separation as a\nsupervised learning problem, where the discriminative patterns of speech,\nspeakers, and background noise are learned from training data. Over the past\ndecade, many supervised separation algorithms have been put forward. In\nparticular, the recent introduction of deep learning to supervised speech\nseparation has dramatically accelerated progress and boosted separation\nperformance. This article provides a comprehensive overview of the research on\ndeep learning based supervised speech separation in the last several years. We\nfirst introduce the background of speech separation and the formulation of\nsupervised separation. Then we discuss three main components of supervised\nseparation: learning machines, training targets, and acoustic features. Much of\nthe overview is on separation algorithms where we review monaural methods,\nincluding speech enhancement (speech-nonspeech separation), speaker separation\n(multi-talker separation), and speech dereverberation, as well as\nmulti-microphone techniques. The important issue of generalization, unique to\nsupervised learning, is discussed. This overview provides a historical\nperspective on how advances are made. In addition, we discuss a number of\nconceptual issues, including what constitutes the target source.\n",
"title": "Supervised Speech Separation Based on Deep Learning: An Overview"
}
| null | null | null | null | true | null |
16205
| null |
Default
| null | null |
null |
{
"abstract": " In reinforcement learning, the state of the real world is often represented\nby feature vectors. However, not all of the features may be pertinent for\nsolving the current task. We propose Feature Selection Explore and Exploit\n(FS-EE), an algorithm that automatically selects the necessary features while\nlearning a Factored Markov Decision Process, and prove that under mild\nassumptions, its sample complexity scales with the in-degree of the dynamics of\njust the necessary features, rather than the in-degree of all features. This\ncan result in a much better sample complexity when the in-degree of the\nnecessary features is smaller than the in-degree of all features.\n",
"title": "Sample Efficient Feature Selection for Factored MDPs"
}
| null | null | null | null | true | null |
16206
| null |
Default
| null | null |
null |
{
"abstract": " We assess the range of validity of sgoldstino-less inflation in a scenario of\nlow energy supersymmetry breaking. We first analyze the consistency conditions\nthat an effective theory of the inflaton and goldstino superfields should\nsatisfy in order to be faithfully described by a sgoldstino-less model.\nEnlarging the scope of previous studies, we investigate the case where the\neffective field theory cut-off, and hence also the sgoldstino mass, are\ninflaton-dependent. We then introduce a UV complete model where one can realize\nsuccessfully sgoldstino-less inflation and gauge mediation of supersymmetry\nbreaking, combining the alpha-attractor mechanism and a weakly coupled model of\nspontaneous breaking of supersymmetry. In this class of models we find that,\ngiven current limits on superpartner masses, the gravitino mass has a lower\nbound of the order of the MeV, i.e. we cannot reach very low supersymmetry\nbreaking scales. On the plus side, we recognize that in this framework, one can\nderive the complete superpartner spectrum as well as compute inflation\nobservables, the reheating temperature, and address the gravitino overabundance\nproblem. We then show that further constraints come from collider results and\ninflation observables. Their non trivial interplay seems a staple feature of\nphenomenological studies of supersymmetric inflationary models.\n",
"title": "Sgoldstino-less inflation and low energy SUSY breaking"
}
| null | null | null | null | true | null |
16207
| null |
Default
| null | null |
null |
{
"abstract": " Foveal vision makes up less than 1% of the visual field. The other 99% is\nperipheral vision. Precisely what human beings see in the periphery is both\nobvious and mysterious in that we see it with our own eyes but can't visualize\nwhat we see, except in controlled lab experiments. Degradation of information\nin the periphery is far more complex than what might be mimicked with a radial\nblur. Rather, behaviorally-validated models hypothesize that peripheral vision\nmeasures a large number of local texture statistics in pooling regions that\noverlap and grow with eccentricity. In this work, we develop a new method for\nperipheral vision simulation by training a generative neural network on a\nbehaviorally-validated full-field synthesis model. By achieving a 21,000 fold\nreduction in running time, our approach is the first to combine realism and\nspeed of peripheral vision simulation to a degree that provides a whole new way\nto approach visual design: through peripheral visualization.\n",
"title": "SideEye: A Generative Neural Network Based Simulator of Human Peripheral Vision"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16208
| null |
Validated
| null | null |
null |
{
"abstract": " In a previous paper, we assembled a collection of medium-resolution spectra\nof 35 carbon stars, covering optical and near-infrared wavelengths from 400 to\n2400 nm. The sample includes stars from the Milky Way and the Magellanic\nClouds, with a variety of $(J-K_s)$ colors and pulsation properties. In the\npresent paper, we compare these observations to a new set of high-resolution\nsynthetic spectra, based on hydrostatic model atmospheres. We find that the\nbroad-band colors and the molecular-band strengths measured by\nspectrophotometric indices match those of the models when $(J-K_s)$ is bluer\nthan about 1.6, while the redder stars require either additional reddening or\ndust emission or both. Using a grid of models to fit the full observed spectra,\nwe estimate the most likely atmospheric parameters $T_\\mathrm{eff}$, $\\log(g)$,\n$[\\mathrm{Fe/H}]$ and C/O. These parameters derived independently in the\noptical and near-infrared are generally consistent when $(J-K_s)<1.6$. The\ntemperatures found based on either wavelength range are typically within\n$\\pm$100K of each other, and $\\log(g)$ and $[\\mathrm{Fe/H}]$ are consistent\nwith the values expected for this sample. The reddest stars ($(J-K_s)$ $>$ 1.6)\nare divided into two families, characterized by the presence or absence of an\nabsorption feature at 1.53\\,$\\mu$m, generally associated with HCN and\nC$_2$H$_2$. Stars from the first family begin to be more affected by\ncircumstellar extinction. The parameters found using optical or near-infrared\nwavelengths are still compatible with each other, but the error bars become\nlarger. In stars showing the 1.53\\,$\\mu$m feature, which are all\nlarge-amplitude variables, the effects of pulsation are strong and the spectra\nare poorly matched with hydrostatic models. For these, atmospheric parameters\ncould not be derived reliably, and dynamical models are needed for proper\ninterpretation.\n",
"title": "Carbon stars in the X-Shooter Spectral Library: II. Comparison with models"
}
| null | null | null | null | true | null |
16209
| null |
Default
| null | null |
null |
{
"abstract": " Let $\\mathbb{F}_p$ be a prime field of order $p>2$, and $A$ be a set in\n$\\mathbb{F}_p$ with very small size in terms of $p$. In this note, we show that\nthe number of distinct cubic distances determined by points in $A\\times A$\nsatisfies \\[|(A-A)^3+(A-A)^3|\\gg |A|^{8/7},\\] which improves a result due to\nYazici, Murphy, Rudnev, and Shkredov. In addition, we investigate some new\nfamilies of expanders in four and five variables.\nWe also give an explicit exponent of a problem of Bukh and Tsimerman, namely,\nwe prove that \\[\\max \\left\\lbrace |A+A|, |f(A, A)|\\right\\rbrace\\gg |A|^{6/5},\\]\nwhere $f(x, y)$ is a quadratic polynomial in $\\mathbb{F}_p[x, y]$ that is not\nof the form $g(\\alpha x+\\beta y)$ for some univariate polynomial $g$.\n",
"title": "Four-variable expanders over the prime fields"
}
| null | null | null | null | true | null |
16210
| null |
Default
| null | null |
null |
{
"abstract": " We prove that the family of lattices ${\\rm SL}_2(\\mathcal{O}_F)$, $F$ running\nover number fields with fixed archimedean signature $(r_1, r_2)$, in ${\\rm\nSL}_2(\\mathbb{R}^{r_1}\\oplus\\mathbb{C}^{r_2})$ has the limit multiplicity\nproperty.\n",
"title": "Limit multiplicities for ${\\rm SL}_2(\\mathcal{O}_F)$ in ${\\rm SL}_2(\\mathbb{R}^{r_1}\\oplus\\mathbb{C}^{r_2})$"
}
| null | null | null | null | true | null |
16211
| null |
Default
| null | null |
null |
{
"abstract": " Clustering is one of the most universal approaches for understanding complex\ndata. A pivotal aspect of clustering analysis is quantitatively comparing\nclusterings; clustering comparison is the basis for tasks such as clustering\nevaluation, consensus clustering, and tracking the temporal evolution of\nclusters. For example, the extrinsic evaluation of clustering methods requires\ncomparing the uncovered clusterings to planted clusterings or known metadata.\nYet, as we demonstrate, existing clustering comparison measures have critical\nbiases which un- dermine their usefulness, and no measure accommodates both\noverlapping and hierarchical clusterings. Here we unify the comparison of\ndisjoint, overlapping, and hierarchically struc- tured clusterings by proposing\na new element-centric framework: elements are compared based on the\nrelationships induced by the cluster structure, as opposed to the traditional\ncluster-centric philosophy. We demonstrate that, in contrast to standard\nclustering simi- larity measures, our framework does not suffer from critical\nbiases and naturally provides unique insights into how the clusterings differ.\nWe illustrate the strengths of our framework by revealing new insights into the\norganization of clusters in two applications: the improved classification of\nschizophrenia based on the overlapping and hierarchical community struc- ture\nof fMRI brain networks, and the disentanglement of various social homophily\nfactors in Facebook social networks. The universality of clustering suggests\nfar-reaching impact of our framework throughout all areas of science.\n",
"title": "On comparing clusterings: an element-centric framework unifies overlaps and hierarchy"
}
| null | null | null | null | true | null |
16212
| null |
Default
| null | null |
null |
{
"abstract": " We discuss the parametric oscillatory instability in a Fabry-Perot cavity of\nthe Einstein Telescope. Unstable combinations of elastic and optical modes for\ntwo possible configurations of gravitational wave third-generation detector are\ndeduced. The results are compared with the results for gravita- tional wave\ninterferometers LIGO and LIGO Voyager.\n",
"title": "Parametric Oscillatory Instability in a Fabry-Perot Cavity of the Einstein Telescope with different mirror's materials"
}
| null | null |
[
"Physics"
] | null | true | null |
16213
| null |
Validated
| null | null |
null |
{
"abstract": " Suppose that we have a compact Kähler manifold $X$ with a very ample line\nbundle $\\mathcal{L}$. We prove that any positive definite hermitian form on the\nspace $H^0 (X,\\mathcal{L})$ of holomorphic sections can be written as an\n$L^2$-inner product with respect to an appropriate hermitian metric on\n$\\mathcal{L}$. We apply this result to show that the Fubini--Study map, which\nassociates a hermitian metric on $\\mathcal{L}$ to a hermitian form on $H^0\n(X,\\mathcal{L})$, is injective.\n",
"title": "Mapping properties of the Hilbert and Fubini--Study maps in Kähler geometry"
}
| null | null | null | null | true | null |
16214
| null |
Default
| null | null |
null |
{
"abstract": " In 1835 Lobachevski entertained the possibility of multiple (rival)\ngeometries. This idea has reappeared on occasion (e.g., Poincaré) but\ndidn't become key in space-time foundations prior to Brown's \\emph{Physical\nRelativity} (at the end, the interpretive key to the book). A crucial\ndifference between his constructivism and orthodox \"space-time realism\" is\nmodal scope. Constructivism applies to all local classical field theories,\nincluding those with multiple geometries. But the orthodox view provincially\nassumes a unique geometry, as familiar theories (Newton, Special Relativity,\nNordström, and GR) have. They serve as the orthodox \"canon.\" Their\nhistorical roles suggest a story of inevitable progress. Physics literature\nafter c. 1920 is relevant to orthodoxy mostly as commentary on the canon, which\nclosed in the 1910s. The orthodox view explains the behavior of matter as the\nmanifestation of the real space-time geometry, which works within the canon.\nThe orthodox view, Whiggish history, and the canon relate symbiotically.\nIf one considers a theory outside the canon, space-time realism sheds little\nlight on matter's behavior. Worse, it gives the wrong answer when applied to an\nexample arguably in the canon, massive scalar gravity with universal coupling.\nWhich is the true geometry---the flat metric from the Poincaré symmetry,\nthe conformally flat metric exhibited by material rods and clocks, or both---or\nis the question bad? How does space-time realism explain that all matter fields\nsee the same curved geometry, given so many ways to mix and match?\nConstructivist attention to dynamical details is vindicated; geometrical\nshortcuts disappoint. The more exhaustive exploration of relativistic field\ntheories (especially massive) in particle physics is an underused resource for\nfoundations.\n",
"title": "Space-time Constructivism vs. Modal Provincialism: Or, How Special Relativistic Theories Needn't Show Minkowski Chronogeometry"
}
| null | null |
[
"Physics"
] | null | true | null |
16215
| null |
Validated
| null | null |
null |
{
"abstract": " In this note we show that for a given irreducible binary quadratic form\n$f(x,y)$ with integer coefficients, whenever we have $f(x,y) = f(u,v)$ for\nintegers $x,y,u,v$, there exists a rational automorphism of $f$ which sends\n$(x,y)$ to $(u,v)$.\n",
"title": "On the representation of integers by binary quadratic forms"
}
| null | null | null | null | true | null |
16216
| null |
Default
| null | null |
null |
{
"abstract": " We present the analysis of microlensing event MOA-2010-BLG-117, and show that\nthe light curve can only be explained by the gravitational lensing of a binary\nsource star system by a star with a Jupiter mass ratio planet. It was necessary\nto modify standard microlensing modeling methods to find the correct light\ncurve solution for this binary-source, binary-lens event. We are able to\nmeasure a strong microlensing parallax signal, which yields the masses of the\nhost star, $M_* = 0.58\\pm 0.11 M_\\odot$, and planet $m_p = 0.54\\pm 0.10 M_{\\rm\nJup}$ at a projected star-planet separation of $a_\\perp = 2.42\\pm 0.26\\,$AU,\ncorresponding to a semi-major axis of $a = 2.9{+1.6\\atop -0.6}\\,$AU. Thus, the\nsystem resembles a half-scale model of the Sun-Jupiter system with a\nhalf-Jupiter mass planet orbiting a half-solar mass star at very roughly half\nof Jupiter's orbital distance from the Sun. The source stars are slightly\nevolved, and by requiring them to lie on the same isochrone, we can constrain\nthe source to lie in the near side of the bulge at a distance of $D_S = 6.9 \\pm\n0.7\\,$kpc, which implies a distance to the planetary lens system of $D_L =\n3.5\\pm 0.4\\,$kpc. The ability to model unusual planetary microlensing events,\nlike this one, will be necessary to extract precise statistical information\nfrom the planned large exoplanet microlensing surveys, such as the WFIRST\nmicrolensing survey.\n",
"title": "The First Planetary Microlensing Event with Two Microlensed Source Stars"
}
| null | null | null | null | true | null |
16217
| null |
Default
| null | null |
null |
{
"abstract": " Misunderstanding of driver correction behaviors (DCB) is the primary reason\nfor false warnings of lane-departure-prediction systems. We propose a\nlearning-based approach to predicting unintended lane-departure behaviors (LDB)\nand the chance for drivers to bring the vehicle back to the lane. First, in\nthis approach, a personalized driver model for lane-departure and lane-keeping\nbehavior is established by combining the Gaussian mixture model and the hidden\nMarkov model. Second, based on this model, we develop an online model-based\nprediction algorithm to predict the forthcoming vehicle trajectory and judge\nwhether the driver will demonstrate an LDB or a DCB. We also develop a warning\nstrategy based on the model-based prediction algorithm that allows the\nlane-departure warning system to be acceptable for drivers according to the\npredicted trajectory. In addition, the naturalistic driving data of 10 drivers\nis collected through the University of Michigan Safety Pilot Model Deployment\nprogram to train the personalized driver model and validate this approach. We\ncompare the proposed method with a basic time-to-lane-crossing (TLC) method and\na TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. The\nresults show that the proposed approach can reduce the false-warning rate to\n3.07\\%.\n",
"title": "A Learning-Based Approach for Lane Departure Warning Systems with a Personalized Driver Model"
}
| null | null | null | null | true | null |
16218
| null |
Default
| null | null |
null |
{
"abstract": " The Internet of Things (IoT) is intended for ubiquitous connectivity among\ndifferent entities or \"things\". While its purpose is to provide effective and\nefficient solutions, security of the devices and network is a challenging\nissue. The number of devices connected along with the ad-hoc nature of the\nsystem further exacerbates the situation. Therefore, security and privacy has\nemerged as a significant challenge for the IoT. In this paper,we aim to provide\na thorough survey related to the privacy and security challenges of the IoT.\nThis document addresses these challenges from the perspective of technologies\nand architecture used. This work focuses also in IoT intrinsic vulnerabilities\nas well as the security challenges of various layers based on the security\nprinciples of data confidentiality, integrity and availability. This survey\nanalyzes articles published for the IoT at the time and relates it to the\nsecurity conjuncture of the field and its projection to the future.\n",
"title": "Internet of Things: Survey on Security and Privacy"
}
| null | null | null | null | true | null |
16219
| null |
Default
| null | null |
null |
{
"abstract": " Graphs are naturally sparse objects that are used to study many problems\ninvolving networks, for example, distributed learning and graph signal\nprocessing. In some cases, the graph is not given, but must be learned from the\nproblem and available data. Often it is desirable to learn sparse graphs.\nHowever, making a graph highly sparse can split the graph into several\ndisconnected components, leading to several separate networks. The main\ndifficulty is that connectedness is often treated as a combinatorial property,\nmaking it hard to enforce in e.g. convex optimization problems. In this\narticle, we show how connectedness of undirected graphs can be formulated as an\nanalytical property and can be enforced as a convex constraint. We especially\nshow how the constraint relates to the distributed consensus problem and graph\nLaplacian learning. Using simulated and real data, we perform experiments to\nlearn sparse and connected graphs from data.\n",
"title": "A Connectedness Constraint for Learning Sparse Graphs"
}
| null | null | null | null | true | null |
16220
| null |
Default
| null | null |
null |
{
"abstract": " A finite dimensional operator that commutes with some symmetry group admits\nquotient operators, which are determined by the choice of associated\nrepresentation. Taking the quotient isolates the part of the spectrum\nsupporting the chosen representation and reduces the complexity of the problem,\nhowever it is not uniquely defined. Here we present a computationally simple\nway of choosing a special basis for the space of intertwiners, allowing us to\nconstruct a quotient that reflects the structure of the original operator. This\nquotient construction generalizes previous definitions for discrete graphs,\nwhich either dealt with restricted group actions or only with the trivial\nrepresentation.\nWe also extend the method to quantum graphs, which simplifies previous\nconstructions within this context, answers an open question regarding\nself-adjointness and offers alternative viewpoints in terms of a scattering\napproach. Applications to isospectrality are discussed, together with numerous\nexamples and comparisons with previous results.\n",
"title": "Quotients of finite-dimensional operators by symmetry representations"
}
| null | null | null | null | true | null |
16221
| null |
Default
| null | null |
null |
{
"abstract": " A probabilistic framework is proposed for the optimization of efficient\nswitched control strategies for physical systems dominated by stochastic\nexcitation. In this framework, the equation for the state trajectory is\nreplaced with an equivalent equation for its probability distribution function\nin the constrained optimization setting. This allows for a large class of\ncontrol rules to be considered, including hysteresis and a mix of continuous\nand discrete random variables. The problem of steering atmospheric balloons\nwithin a stratified flowfield is a motivating application; the same approach\ncan be extended to a variety of mixed-variable stochastic systems and to new\nclasses of control rules.\n",
"title": "A probabilistic framework for the control of systems with discrete states and stochastic excitation"
}
| null | null | null | null | true | null |
16222
| null |
Default
| null | null |
null |
{
"abstract": " When an upstream steady uniform supersonic flow impinges onto a symmetric\nstraight-sided wedge, governed by the Euler equations, there are two possible\nsteady oblique shock configurations if the wedge angle is less than the\ndetachment angle -- the steady weak shock with supersonic or subsonic\ndownstream flow (determined by the wedge angle that is less or larger than the\nsonic angle) and the steady strong shock with subsonic downstream flow, both of\nwhich satisfy the entropy condition. The fundamental issue -- whether one or\nboth of the steady weak and strong shocks are physically admissible solutions\n-- has been vigorously debated over the past eight decades. In this paper, we\nsurvey some recent developments on the stability analysis of the steady shock\nsolutions in both the steady and dynamic regimes. For the static stability, we\nfirst show how the stability problem can be formulated as an initial-boundary\nvalue type problem and then reformulate it into a free boundary problem when\nthe perturbation of both the upstream steady supersonic flow and the wedge\nboundary are suitably regular and small, and we finally present some recent\nresults on the static stability of the steady supersonic and transonic shocks.\nFor the dynamic stability for potential flow, we first show how the stability\nproblem can be formulated as an initial-boundary value problem and then use the\nself-similarity of the problem to reduce it into a boundary value problem and\nfurther reformulate it into a free boundary problem, and we finally survey some\nrecent developments in solving this free boundary problem for the existence of\nthe Prandtl-Meyer configurations that tend to the steady weak supersonic or\ntransonic oblique shock solutions as time goes to infinity. Some further\ndevelopments and mathematical challenges in this direction are also discussed.\n",
"title": "Supersonic Flow onto Solid Wedges, Multidimensional Shock Waves and Free Boundary Problems"
}
| null | null | null | null | true | null |
16223
| null |
Default
| null | null |
null |
{
"abstract": " One of the consequences of passing from mass production to mass customization\nparadigm in the nowadays industrialized world is the need to increase\nflexibility and responsiveness of manufacturing companies. The high-mix /\nlow-volume production forces constant accommodations of unknown product\nvariants, which ultimately leads to high periods of machine calibration. The\ndifficulty related with machine calibration is that experience is required\ntogether with a set of experiments to meet the final product quality.\nUnfortunately, all possible combinations of machine parameters is so high that\nis difficult to build empirical knowledge. Due to this fact, normally trial and\nerror approaches are taken making one-of-a-kind products not viable. Therefore,\na Zero-Shot Learning (ZSL) based approach called hyper-process model (HPM) to\nlearn the relation among multiple tasks is used as a way to shorten the\ncalibration phase. Assuming each product variant is a task to solve, first, a\nshape analysis on data to learn common modes of deformation between tasks is\nmade, and secondly, a mapping between these modes and task descriptions is\nperformed. Ultimately, the present work has two main contributions: 1)\nFormulation of an industrial problem into a ZSL setting where new process\nmodels can be generated for process optimization and 2) the definition of a\nregression problem in the domain of ZSL. For that purpose, a 2-d deep drawing\nsimulated process was used based on data collected from the Abaqus simulator,\nwhere a significant number of process models were collected to test the\neffectiveness of the approach. The obtained results show that is possible to\nlearn new tasks without any available data (both labeled and unlabeled) by\nleveraging information about already existing tasks, allowing to speed up the\ncalibration phase and make a quicker integration of new products into\nmanufacturing systems.\n",
"title": "A Zero-Shot Learning application in Deep Drawing process using Hyper-Process Model"
}
| null | null | null | null | true | null |
16224
| null |
Default
| null | null |
null |
{
"abstract": " We provide a new perspective on fracton topological phases, a class of\nthree-dimensional topologically ordered phases with unconventional\nfractionalized excitations that are either completely immobile or only mobile\nalong particular lines or planes. We demonstrate that a wide range of these\nfracton phases can be constructed by strongly coupling mutually intersecting\nspin chains and explain via a concrete example how such a coupled-spin-chain\nconstruction illuminates the generic properties of a fracton phase. In\nparticular, we describe a systematic translation from each coupled-spin-chain\nconstruction into a parton construction where the partons correspond to the\nexcitations that are mobile along lines. Remarkably, our construction of\nfracton phases is inherently based on spin models involving only two-spin\ninteractions and thus brings us closer to their experimental realization.\n",
"title": "Fracton topological phases from strongly coupled spin chains"
}
| null | null | null | null | true | null |
16225
| null |
Default
| null | null |
null |
{
"abstract": " With the large-scale penetration of the internet, for the first time,\nhumanity has become linked by a single, open, communications platform.\nHarnessing this fact, we report insights arising from a unified internet\nactivity and location dataset of an unparalleled scope and accuracy drawn from\nover a trillion (1.5$\\times 10^{12}$) observations of end-user internet\nconnections, with temporal resolution of just 15min over 2006-2012. We first\napply this dataset to the expansion of the internet itself over 1,647 urban\nagglomerations globally. We find that unique IP per capita counts reach\nsaturation at approximately one IP per three people, and take, on average, 16.1\nyears to achieve; eclipsing the estimated 100- and 60- year saturation times\nfor steam-power and electrification respectively. Next, we use intra-diurnal\ninternet activity features to up-scale traditional over-night sleep\nobservations, producing the first global estimate of over-night sleep duration\nin 645 cities over 7 years. We find statistically significant variation between\ncontinental, national and regional sleep durations including some evidence of\nglobal sleep duration convergence. Finally, we estimate the relationship\nbetween internet concentration and economic outcomes in 411 OECD regions and\nfind that the internet's expansion is associated with negative or positive\nproductivity gains, depending strongly on sectoral considerations. To our\nknowledge, our study is the first of its kind to use online/offline activity of\nthe entire internet to infer social science insights, demonstrating the\nunparalleled potential of the internet as a social data-science platform.\n",
"title": "The Internet as Quantitative Social Science Platform: Insights from a Trillion Observations"
}
| null | null | null | null | true | null |
16226
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose a novel supervised learning method that is called\nDeep Embedding Kernel (DEK). DEK combines the advantages of deep learning and\nkernel methods in a unified framework. More specifically, DEK is a learnable\nkernel represented by a newly designed deep architecture. Compared with\npre-defined kernels, this kernel can be explicitly trained to map data to an\noptimized high-level feature space where data may have favorable features\ntoward the application. Compared with typical deep learning using SoftMax or\nlogistic regression as the top layer, DEK is expected to be more generalizable\nto new data. Experimental results show that DEK has superior performance than\ntypical machine learning methods in identity detection, classification,\nregression, dimension reduction, and transfer learning.\n",
"title": "Deep Embedding Kernel"
}
| null | null | null | null | true | null |
16227
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we develop a system for the low-cost indoor localization and\ntracking problem using radio signal strength indicator, Inertial Measurement\nUnit (IMU), and magnetometer sensors. We develop a novel and simplified\nprobabilistic IMU motion model as the proposal distribution of the sequential\nMonte-Carlo technique to track the robot trajectory. Our algorithm can globally\nlocalize and track a robot with a priori unknown location, given an informative\nprior map of the Bluetooth Low Energy (BLE) beacons. Also, we formulate the\nproblem as an optimization problem that serves as the Back-end of the algorithm\nmentioned above (Front-end). Thus, by simultaneously solving for the robot\ntrajectory and the map of BLE beacons, we recover a continuous and smooth\ntrajectory of the robot, corrected locations of the BLE beacons, and the\ntime-varying IMU bias. The evaluations achieved using hardware show that\nthrough the proposed closed-loop system the localization performance can be\nimproved; furthermore, the system becomes robust to the error in the map of\nbeacons by feeding back the optimized map to the Front-end.\n",
"title": "A Radio-Inertial Localization and Tracking System with BLE Beacons Prior Maps"
}
| null | null | null | null | true | null |
16228
| null |
Default
| null | null |
null |
{
"abstract": " It is well known that every finite simple group can be generated by two\nelements and this leads to a wide range of problems that have been the focus of\nintensive research in recent years. In this survey article we discuss some of\nthe extraordinary generation properties of simple groups, focussing on topics\nsuch as random generation, $(a,b)$-generation and spread, as well as\nhighlighting the application of probabilistic methods in the proofs of many of\nthe main results. We also present some recent work on the minimal generation of\nmaximal and second maximal subgroups of simple groups, which has applications\nto the study of subgroup growth and the generation of primitive permutation\ngroups.\n",
"title": "Simple groups, generation and probabilistic methods"
}
| null | null | null | null | true | null |
16229
| null |
Default
| null | null |
null |
{
"abstract": " The electronic and magneto transport properties of reduced anatase TiO2\nepitaxial thin films are analyzed considering various polaronic effects.\nUnexpectedly, with increasing carrier concentration, the mobility increases,\nwhich rarely happens in common metallic systems. We find that the screening of\nthe electron-phonon (e-ph) coupling by excess carriers is necessary to explain\nthis unusual dependence. We also find that the magnetoresistance (MR) could be\ndecomposed into a linear and a quadratic component, separately characterizing\nthe transport and trap behavior of carriers as a function of temperature. The\nvarious transport behaviors could be organized into a single phase diagram\nwhich clarifies the nature of large polaron in this material.\n",
"title": "Large polaron evolution in anatase TiO2 due to carrier and temperature dependence of electron-phonon coupling"
}
| null | null | null | null | true | null |
16230
| null |
Default
| null | null |
null |
{
"abstract": " New types of machine learning hardware in development and entering the market\nhold the promise of revolutionizing deep learning in a manner as profound as\nGPUs. However, existing software frameworks and training algorithms for deep\nlearning have yet to evolve to fully leverage the capability of the new wave of\nsilicon. We already see the limitations of existing algorithms for models that\nexploit structured input via complex and instance-dependent control flow, which\nprohibits minibatching. We present an asynchronous model-parallel (AMP)\ntraining algorithm that is specifically motivated by training on networks of\ninterconnected devices. Through an implementation on multi-core CPUs, we show\nthat AMP training converges to the same accuracy as conventional synchronous\ntraining algorithms in a similar number of epochs, but utilizes the available\nhardware more efficiently even for small minibatch sizes, resulting in\nsignificantly shorter overall training times. Our framework opens the door for\nscaling up a new class of deep learning models that cannot be efficiently\ntrained today.\n",
"title": "AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks"
}
| null | null | null | null | true | null |
16231
| null |
Default
| null | null |
null |
{
"abstract": " Nowadays we have many methods allowing to exploit the regularising properties\nof the linear part of a nonlinear dispersive equation (such as the KdV\nequation, the nonlinear wave or the nonlinear Schroedinger equations) in order\nto prove well-posedness in low regularity Sobolev spaces. By well-posedness in\nlow regularity Sobolev spaces we mean that less regularity than the one imposed\nby the energy methods is required (the energy methods do not exploit the\ndispersive properties of the linear part of the equation). In many cases these\nmethods to prove well-posedness in low regularity Sobolev spaces lead to\noptimal results in terms of the regularity of the initial data. By optimal we\nmean that if one requires slightly less regularity then the corresponding\nCauchy problem becomes ill-posed in the Hadamard sense. We call the Sobolev\nspaces in which these ill-posedness results hold spaces of supercritical\nregularity.\nMore recently, methods to prove probabilistic well-posedness in Sobolev\nspaces of supercritical regularity were developed. More precisely, by\nprobabilistic well-posedness we mean that one endows the corresponding Sobolev\nspace of supercritical regularity with a non degenerate probability measure and\nthen one shows that almost surely with respect to this measure one can define a\n(unique) global flow. However, in most of the cases when the methods to prove\nprobabilistic well-posedness apply, there is no information about the measure\ntransported by the flow. Very recently, a method to prove that the transported\nmeasure is absolutely continuous with respect to the initial measure was\ndeveloped. In such a situation, we have a measure which is quasi-invariant\nunder the corresponding flow.\nThe aim of these lectures is to present all of the above described\ndevelopments in the context of the nonlinear wave equation.\n",
"title": "Random data wave equations"
}
| null | null | null | null | true | null |
16232
| null |
Default
| null | null |
null |
{
"abstract": " We introduce the persistent homotopy type distance dHT to compare real valued\nfunctions defined on possibly different homotopy equivalent topological spaces.\nThe underlying idea in the definition of dHT is to measure the minimal shift\nthat is necessary to apply to one of the two functions in order that the\nsublevel sets of the two functions become homotopically equivalent. This\ndistance is interesting in connection with persistent homology. Indeed, our\nmain result states that dHT still provides an upper bound for the bottleneck\ndistance between the persistence diagrams of the intervening functions.\nMoreover, because homotopy equivalences are weaker than homeomorphisms, this\nimplies a lifting of the standard stability results provided by the L-infty\ndistance and the natural pseudo-distance dNP. From a different standpoint, we\nprove that dHT extends the L-infty distance and dNP in two ways. First, we show\nthat, appropriately restricting the category of objects to which dHT applies,\nit can be made to coincide with the other two distances. Finally, we show that\ndHT has an interpretation in terms of interleavings that naturally places it in\nthe family of distances used in persistence theory.\n",
"title": "The Persistent Homotopy Type Distance"
}
| null | null | null | null | true | null |
16233
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we determine the optimal convergence rates for strongly convex\nand smooth distributed optimization in two settings: centralized and\ndecentralized communications over a network. For centralized (i.e.\nmaster/slave) algorithms, we show that distributing Nesterov's accelerated\ngradient descent is optimal and achieves a precision $\\varepsilon > 0$ in time\n$O(\\sqrt{\\kappa_g}(1+\\Delta\\tau)\\ln(1/\\varepsilon))$, where $\\kappa_g$ is the\ncondition number of the (global) function to optimize, $\\Delta$ is the diameter\nof the network, and $\\tau$ (resp. $1$) is the time needed to communicate values\nbetween two neighbors (resp. perform local computations). For decentralized\nalgorithms based on gossip, we provide the first optimal algorithm, called the\nmulti-step dual accelerated (MSDA) method, that achieves a precision\n$\\varepsilon > 0$ in time\n$O(\\sqrt{\\kappa_l}(1+\\frac{\\tau}{\\sqrt{\\gamma}})\\ln(1/\\varepsilon))$, where\n$\\kappa_l$ is the condition number of the local functions and $\\gamma$ is the\n(normalized) eigengap of the gossip matrix used for communication between\nnodes. We then verify the efficiency of MSDA against state-of-the-art methods\nfor two problems: least-squares regression and classification by logistic\nregression.\n",
"title": "Optimal algorithms for smooth and strongly convex distributed optimization in networks"
}
| null | null | null | null | true | null |
16234
| null |
Default
| null | null |
null |
{
"abstract": " The superconductivity of the 4-angstrom single-walled carbon nanotubes\n(SWCNTs) was discovered more than a decade ago, and marked the breakthrough of\nfinding superconductivity in pure elemental undoped carbon compounds. The van\nHove singularities in the electronic density of states at the Fermi level in\ncombination with a large Debye temperature of the SWCNTs are expected to cause\nan impressively large superconducting gap. We have developed an innovative\ncomputational algorithm specially tailored for the investigation of\nsuperconductivity in ultrathin SWCNTs. We predict the superconducting\ntransition temperature of various thin carbon nanotubes resulting from\nelectron-phonon coupling by an ab-initio method, taking into account the effect\nof radial pressure, symmetry, chirality (N,M) and bond lengths. By optimizing\nthe geometry of the carbon nanotubes, a maximum Tc of 60K is found. We also use\nour method to calculate the Tc of a linear carbon chain embedded in the center\nof (5,0) SWCNTs. The strong curvature in the (5,0) carbon nanotubes in the\npresence of the inner carbon chain provides an alternative path to increase the\nTc of this carbon composite by a factor of 2.2 with respect to the empty (5,0)\nSWCNTs.\n",
"title": "Superconductivity in ultra-thin carbon nanotubes and carbyne-nanotube composites: an ab-initio approach"
}
| null | null | null | null | true | null |
16235
| null |
Default
| null | null |
null |
{
"abstract": " The behavior of the simplex algorithm is a widely studied subject.\nSpecifically, the question of the existence of a polynomial pivot rule for the\nsimplex algorithm is of major importance. Here, we give exponential lower\nbounds for three history-based pivot rules. Those rules decide their next step\nbased on memory of the past steps. In particular, we study Zadeh's least\nentered rule, Johnson's least-recently basic rule and Cunningham's\nleast-recently considered (or round-robin) rule. We give exponential lower\nbounds on Acyclic Unique Sink Orientations (AUSO) of the abstract cube, for all\nof these pivot rules. For Johnson's rule our bound is the first superpolynomial\none in any context; for Zadeh's it is the first one for AUSO. Those two are our\nmain results.\n",
"title": "Exponential lower bounds for history-based simplex pivot rules on abstract cubes"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16236
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we present a regression framework involving several machine\nlearning models to estimate water parameters based on hyperspectral data.\nMeasurements from a multi-sensor field campaign, conducted on the River Elbe,\nGermany, represent the benchmark dataset. It contains hyperspectral data and\nthe five water parameters chlorophyll a, green algae, diatoms, CDOM and\nturbidity. We apply a PCA for the high-dimensional data as a possible\npreprocessing step. Then, we evaluate the performance of the regression\nframework with and without this preprocessing step. The regression results of\nthe framework clearly reveal the potential of estimating water parameters based\non hyperspectral data with machine learning. The proposed framework provides\nthe basis for further investigations, such as adapting the framework to\nestimate water parameters of different inland waters.\n",
"title": "Machine learning regression on hyperspectral data to estimate multiple water parameters"
}
| null | null | null | null | true | null |
16237
| null |
Default
| null | null |
null |
{
"abstract": " Recently, along with the emergence of food scandals, food supply chains have\nto face with ever-increasing pressure from compliance with food quality and\nsafety regulations and standards. This paper aims to explore critical factors\nof compliance risk in food supply chain with an illustrated case in Vietnamese\nseafood industry. To this end, this study takes advantage of both primary and\nsecondary data sources through a comprehensive literature research of\nindustrial and scientific papers, combined with expert interview. Findings\nshowed that there are three main critical factor groups influencing on\ncompliance risk including challenges originating from Vietnamese food supply\nchain itself, characteristics of regulation and standards, and business\nenvironment. Furthermore, author proposed enablers to eliminate compliance\nrisks to food supply chain managers as well as recommendations to government\nand other influencers and supporters.\n",
"title": "Critical factors and enablers of food quality and safety compliance risk management in the Vietnamese seafood supply chain"
}
| null | null | null | null | true | null |
16238
| null |
Default
| null | null |
null |
{
"abstract": " We study topological structure of the $\\omega$-limit sets of the skew-product\nsemiflow generated by the following scalar reaction-diffusion equation\n\\begin{equation*} u_{t}=u_{xx}+f(t,u,u_{x}),\\,\\,t>0,\\,x\\in\nS^{1}=\\mathbb{R}/2\\pi \\mathbb{Z}, \\end{equation*} where $f(t,u,u_x)$ is\n$C^2$-admissible with time-recurrent structure including almost-periodicity and\nalmost-automorphy. Contrary to the time-periodic cases (for which any\n$\\omega$-limit set can be imbedded into a periodically forced circle flow), it\nis shown that one cannot expect that any $\\omega$-limit set can be imbedded\ninto an almost-periodically forced circle flow even if $f$ is uniformly\nalmost-periodic in $t$.\nMore precisely, we prove that, for a given $\\omega$-limit set $\\Omega$, if\n${\\rm dim}V^c(\\Omega)\\leq 1$ ($V^c(\\Omega)$ is the center space associated with\n$\\Omega$), then $\\Omega$ is either spatially-homogeneous or\nspatially-inhomogeneous; and moreover, any spatially-inhomogeneous $\\Omega$ can\nbe imbedded into a time-recurrently forced circle flow (resp. imbedded into an\nalmost periodically-forced circle flow if $f$ is uniformly almost-periodic in\n$t$). On the other hand, when ${\\rm dim}V^c(\\Omega>1$, it is pointed out that\nthe above embedding property cannot hold anymore. Furthermore, we also show the\nnew phenomena of the residual imbedding into a time-recurrently forced circle\nflow (resp. into an almost automorphically-forced circle flow if $f$ is\nuniformly almost-periodic in $t$) provided that $\\dim V^c(\\Omega)=2$ and $\\dim\nV^u(\\Omega)$ is odd. All these results reveal that for such system there are\nessential differences between time-periodic cases and non-periodic cases.\n",
"title": "Asymptotic behavior of semilinear parabolic equations on the circle with time almost-periodic/recurrent dependence"
}
| null | null | null | null | true | null |
16239
| null |
Default
| null | null |
null |
{
"abstract": " The discovery of topological states of matter has profoundly augmented our\nunderstanding of phase transitions in physical systems. Instead of local order\nparameters, topological phases are described by global topological invariants\nand are therefore robust against perturbations. A prominent example thereof is\nthe two-dimensional integer quantum Hall effect. It is characterized by the\nfirst Chern number which manifests in the quantized Hall response induced by an\nexternal electric field. Generalizing the quantum Hall effect to\nfour-dimensional systems leads to the appearance of a novel non-linear Hall\nresponse that is quantized as well, but described by a 4D topological invariant\n- the second Chern number. Here, we report on the first observation of a bulk\nresponse with intrinsic 4D topology and the measurement of the associated\nsecond Chern number. By implementing a 2D topological charge pump with\nultracold bosonic atoms in an angled optical superlattice, we realize a\ndynamical version of the 4D integer quantum Hall effect. Using a small atom\ncloud as a local probe, we fully characterize the non-linear response of the\nsystem by in-situ imaging and site-resolved band mapping. Our findings pave the\nway to experimentally probe higher-dimensional quantum Hall systems, where new\ntopological phases with exotic excitations are predicted.\n",
"title": "Exploring 4D Quantum Hall Physics with a 2D Topological Charge Pump"
}
| null | null | null | null | true | null |
16240
| null |
Default
| null | null |
null |
{
"abstract": " We propose a dynamical system of tumor cells proliferation based on\noperatorial methods. The approach we propose is quantum-like: we use ladder and\nnumber operators to describe healthy and tumor cells birth and death, and the\nevolution is ruled by a non-hermitian Hamiltonian which includes, in a non\nreversible way, the basic biological mechanisms we consider for the system. We\nshow that this approach is rather efficient in describing some processes of the\ncells. We further add some medical treatment, described by adding a suitable\nterm in the Hamiltonian, which controls and limits the growth of tumor cells,\nand we propose an optimal approach to stop, and reverse, this growth.\n",
"title": "Non-hermitian operator modelling of basic cancer cell dynamics"
}
| null | null | null | null | true | null |
16241
| null |
Default
| null | null |
null |
{
"abstract": " Explaining the unexpected presence of dune-like patterns at the surface of\nthe comet 67P/Churyumov-Gerasimenko requires conceptual and quantitative\nadvances in the understanding of surface and outgassing processes. We show here\nthat vapor flow emitted by the comet around its perihelion spreads laterally in\na surface layer, due to the strong pressure difference between zones\nilluminated by sunlight and those in shadow. For such thermal winds to be dense\nenough to transport grains -- ten times greater than previous estimates --\noutgassing must take place through a surface porous granular layer, and that\nlayer must be composed of grains whose roughness lowers cohesion consistently\nwith contact mechanics. The linear stability analysis of the problem, entirely\ntested against laboratory experiments, quantitatively predicts the emergence of\nbedforms in the observed wavelength range, and their propagation at the scale\nof a comet revolution. Although generated by a rarefied atmosphere, they are\nparadoxically analogous to ripples emerging on granular beds submitted to\nviscous shear flows. This quantitative agreement shows that our understanding\nof the coupling between hydrodynamics and sediment transport is able to account\nfor bedform emergence in extreme conditions and provides a reliable tool to\npredict the erosion and accretion processes controlling the evolution of small\nsolar system bodies.\n",
"title": "Giant ripples on comet 67P/Churyumov-Gerasimenko sculpted by sunset thermal wind"
}
| null | null |
[
"Physics"
] | null | true | null |
16242
| null |
Validated
| null | null |
null |
{
"abstract": " The challenge of sharing and communicating information is crucial in complex\nhuman-robot interaction (HRI) scenarios. Ontologies and symbolic reasoning are\nthe state-of-the-art approaches for a natural representation of knowledge,\nespecially within the Semantic Web domain. In such a context, scripted\nparadigms have been adopted to achieve high expressiveness. Nevertheless, since\nsymbolic reasoning is a high complexity problem, optimizing its performance\nrequires a careful design of the knowledge. Specifically, a robot architecture\nrequires the integration of several components implementing different behaviors\nand generating a series of beliefs. Most of the components are expected to\naccess, manipulate, and reason upon a run-time generated semantic\nrepresentation of knowledge grounding robot behaviors and perceptions through\nformal axioms, with soft real-time requirements.\n",
"title": "A ROS multi-ontology references services: OWL reasoners and application prototyping issues"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16243
| null |
Validated
| null | null |
null |
{
"abstract": " The use of sparse precision (inverse covariance) matrices has become popular\nbecause they allow for efficient algorithms for joint inference in\nhigh-dimensional models. Many applications require the computation of certain\nelements of the covariance matrix, such as the marginal variances, which may be\nnon-trivial to obtain when the dimension is large. This paper introduces a fast\nRao-Blackwellized Monte Carlo sampling based method for efficiently\napproximating selected elements of the covariance matrix. The variance and\nconfidence bounds of the approximations can be precisely estimated without\nadditional computational costs. Furthermore, a method that iterates over\nsubdomains is introduced, and is shown to additionally reduce the approximation\nerrors to practically negligible levels in an application on functional\nmagnetic resonance imaging data. Both methods have low memory requirements,\nwhich is typically the bottleneck for competing direct methods.\n",
"title": "Efficient Covariance Approximations for Large Sparse Precision Matrices"
}
| null | null | null | null | true | null |
16244
| null |
Default
| null | null |
null |
{
"abstract": " Let $s \\geq 3$ be a fixed positive integer and $a_1,\\dots,a_s \\in \\mathbb{Z}$\nbe arbitrary. We show that, on average over $k$, the density of numbers\nrepresented by the degree $k$ diagonal form \\[ a_1 x_1^k + \\cdots + a_s x_s^k\n\\] decays rapidly with respect to $k$.\n",
"title": "The Density of Numbers Represented by Diagonal Forms of Large Degree"
}
| null | null | null | null | true | null |
16245
| null |
Default
| null | null |
null |
{
"abstract": " One possible approach to tackle the class imbalance in classification tasks\nis to resample a training dataset, i.e., to drop some of its elements or to\nsynthesize new ones. There exist several widely-used resampling methods. Recent\nresearch showed that the choice of resampling method significantly affects the\nquality of classification, which raises resampling selection problem.\nExhaustive search for optimal resampling is time-consuming and hence it is of\nlimited use. In this paper, we describe an alternative approach to the\nresampling selection. We follow the meta-learning concept to build resampling\nrecommendation systems, i.e., algorithms recommending resampling for datasets\non the basis of their properties.\n",
"title": "Meta-Learning for Resampling Recommendation Systems"
}
| null | null | null | null | true | null |
16246
| null |
Default
| null | null |
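
The record above (id 16246) describes recommending a resampling method from dataset properties. The sketch below is a hypothetical, minimal illustration of that meta-learning idea, not the paper's system: datasets are summarised by a few simple meta-features (sample count, dimensionality and class-imbalance ratio are my own choices) and a nearest-neighbour meta-model recommends whichever resampling method worked best on the most similar previously seen datasets.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical meta-features summarising a binary-classification dataset;
# the particular choices are illustrative, not taken from the paper.
def meta_features(X, y):
    n, d = X.shape
    pos = float(np.mean(y))
    imbalance = max(pos, 1 - pos) / max(min(pos, 1 - pos), 1e-9)
    return np.array([np.log(n), np.log(d), imbalance])

# Meta-dataset: one meta-feature row per historical dataset, labelled with the
# resampling method that gave the best downstream score there (made-up values).
history_features = np.array([[7.0, 2.3, 1.5], [9.2, 4.1, 20.0], [8.1, 3.0, 5.0]])
history_best = np.array(["none", "oversample", "undersample"])

recommender = KNeighborsClassifier(n_neighbors=1).fit(history_features, history_best)

# Recommend a resampler for a new, synthetic dataset.
X_new = np.random.rand(5000, 30)
y_new = (np.random.rand(5000) < 0.05).astype(int)
print(recommender.predict([meta_features(X_new, y_new)]))
```
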
null |
{
"abstract": " In this paper, we propose a new robustness notion that is applicable for\ncertifying systems' safety with respect to external disturbance signals. The\nproposed input-to-state safety (ISSf) notion allows us to certify systems'\nsafety in the presence of the disturbances which is analogous to the notion of\ninput-to-state stability (ISS) for analyzing systems' stability.\n",
"title": "Robustness Analysis of Systems' Safety through a New Notion of Input-to-State Safety"
}
| null | null | null | null | true | null |
16247
| null |
Default
| null | null |
null |
{
"abstract": " Fusing satellite observations and station measurements to estimate\nground-level PM2.5 is promising for monitoring PM2.5 pollution. A\ngeo-intelligent approach, which incorporates geographical correlation into an\nintelligent deep learning architecture, is developed to estimate PM2.5.\nSpecifically, it considers geographical distance and spatiotemporally\ncorrelated PM2.5 in a deep belief network (denoted as Geoi-DBN). Geoi-DBN can\ncapture the essential features associated with PM2.5 from latent factors. It\nwas trained and tested with data from China in 2015. The results show that\nGeoi-DBN performs significantly better than the traditional neural network. The\ncross-validation R increases from 0.63 to 0.94, and RMSE decreases from 29.56\nto 13.68${\\mu}$g/m3. On the basis of the derived PM2.5 distribution, it is\npredicted that over 80% of the Chinese population live in areas with an annual\nmean PM2.5 of greater than 35${\\mu}$g/m3. This study provides a new perspective\nfor air pollution monitoring in large geographic regions.\n",
"title": "Estimating ground-level PM2.5 by fusing satellite and station observations: A geo-intelligent deep learning approach"
}
| null | null |
[
"Physics"
] | null | true | null |
16248
| null |
Validated
| null | null |
null |
{
"abstract": " The goal of this paper is to examine experimental progress in laser wakefield\nacceleration over the past decade (2004-2014), and to use trends in the data to\nunderstand some of the important physical processes. By examining a set of over\n50 experiments, various trends concerning the relationship between plasma\ndensity, accelerator length, laser power and the final electron beam en- ergy\nare revealed. The data suggest that current experiments are limited by\ndephasing and that current experiments typically require some pulse evolution\nto reach the trapping threshold.\n",
"title": "An Overview of Recent Progress in Laser Wakefield Acceleration Experiments"
}
| null | null | null | null | true | null |
16249
| null |
Default
| null | null |
null |
{
"abstract": " Person identification technology recognizes individuals by exploiting their\nunique, measurable physiological and behavioral characteristics. However, the\nstate-of-the-art person identification systems have been shown to be\nvulnerable, e.g., contact lenses can trick iris recognition and fingerprint\nfilms can deceive fingerprint sensors. EEG (Electroencephalography)-based\nidentification, which utilizes the users brainwave signals for identification\nand offers a more resilient solution, draw a lot of attention recently.\nHowever, the accuracy still requires improvement and very little work is\nfocusing on the robustness and adaptability of the identification system. We\npropose MindID, an EEG-based biometric identification approach, achieves higher\naccuracy and better characteristics. At first, the EEG data patterns are\nanalyzed and the results show that the Delta pattern contains the most\ndistinctive information for user identification. Then the decomposed Delta\npattern is fed into an attention-based Encoder-Decoder RNNs (Recurrent Neural\nNetworks) structure which assigns varies attention weights to different EEG\nchannels based on the channels importance. The discriminative representations\nlearned from the attention-based RNN are used to recognize the user\nidentification through a boosting classifier. The proposed approach is\nevaluated over 3 datasets (two local and one public). One local dataset (EID-M)\nis used for performance assessment and the result illustrate that our model\nachieves the accuracy of 0.982 which outperforms the baselines and the\nstate-of-the-art. Another local dataset (EID-S) and a public dataset (EEG-S)\nare utilized to demonstrate the robustness and adaptability, respectively. The\nresults indicate that the proposed approach has the potential to be largely\ndeployment in practice environment.\n",
"title": "MindID: Person Identification from Brain Waves through Attention-based Recurrent Neural Network"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16250
| null |
Validated
| null | null |
null |
{
"abstract": " The present paper is motivated by one of the most fundamental challenges in\ninverse problems, that of quantifying model discrepancies and errors. While\nsignificant strides have been made in calibrating model parameters, the\noverwhelming majority of pertinent methods is based on the assumption of a\nperfect model. Motivated by problems in solid mechanics which, as all problems\nin continuum thermodynamics, are described by conservation laws and\nphenomenological constitutive closures, we argue that in order to quantify\nmodel uncertainty in a physically meaningful manner, one should break open the\nblack-box forward model. In particular we propose formulating an undirected\nprobabilistic model that explicitly accounts for the governing equations and\ntheir validity. This recasts the solution of both forward and inverse problems\nas probabilistic inference tasks where the problem's state variables should not\nonly be compatible with the data but also with the governing equations as well.\nEven though the probability densities involved do not contain any black-box\nterms, they live in much higher-dimensional spaces. In combination with the\nintractability of the normalization constant of the undirected model employed,\nthis poses significant challenges which we propose to address with a\nlinearly-scaling, double-layer of Stochastic Variational Inference. We\ndemonstrate the capabilities and efficacy of the proposed model in synthetic\nforward and inverse problems (with and without model error) in elastography.\n",
"title": "Beyond black-boxes in Bayesian inverse problems and model validation: applications in solid mechanics of elastography"
}
| null | null | null | null | true | null |
16251
| null |
Default
| null | null |
null |
{
"abstract": " We characterize the approximate monomial complexity, sign monomial complexity\n, and the approximate L 1 norm of symmetric functions in terms of simple\ncombinatorial measures of the functions. Our characterization of the\napproximate L 1 norm solves the main conjecture in [AFH12]. As an application\nof the characterization of the sign monomial complexity, we prove a conjecture\nin [ZS09] and provide a characterization for the unbounded-error communication\ncomplexity of symmetric-xor functions.\n",
"title": "On the Spectral Properties of Symmetric Functions"
}
| null | null | null | null | true | null |
16252
| null |
Default
| null | null |
null |
{
"abstract": " We present a non-perturbative numerical technique for calculating strong\nlight shifts in atoms under the influence of multiple optical fields with\narbitrary polarization. We confirm our technique experimentally by performing\nspectroscopy of a cloud of cold $^{87}$Rb atoms subjected to $\\sim$ kW/cm$^2$\nintensities of light at 1560.492 nm simultaneous with 1529.269 nm or 1529.282\nnm. In these conditions the excited state resonances at 1529.26 nm and 1529.36\nnm induce strong level mixing and the shifts are highly nonlinear. By\nabsorption spectroscopy, we observe that the induced shifts of the 5P3/2\nhyperfine Zeeman sublevels agree well with our theoretical predictions.. We\npropose the application of our theory and experiment to accurate measurements\nof excited-state electric-dipole matrix elements.\n",
"title": "Strong light shifts from near-resonant and polychromatic fields: comparison of Floquet theory and experiment"
}
| null | null |
[
"Physics"
] | null | true | null |
16253
| null |
Validated
| null | null |
null |
{
"abstract": " In the present paper, we introduce some new families of elliptic curves with\npositive rank arrising from Pythagorean triples.\n",
"title": "Positive-rank elliptic curves arising pythagorean triples"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16254
| null |
Validated
| null | null |
null |
{
"abstract": " Knowledge graphs enable a wide variety of applications, including question\nanswering and information retrieval. Despite the great effort invested in their\ncreation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata)\nremain incomplete. We introduce Relational Graph Convolutional Networks\n(R-GCNs) and apply them to two standard knowledge base completion tasks: Link\nprediction (recovery of missing facts, i.e. subject-predicate-object triples)\nand entity classification (recovery of missing entity attributes). R-GCNs are\nrelated to a recent class of neural networks operating on graphs, and are\ndeveloped specifically to deal with the highly multi-relational data\ncharacteristic of realistic knowledge bases. We demonstrate the effectiveness\nof R-GCNs as a stand-alone model for entity classification. We further show\nthat factorization models for link prediction such as DistMult can be\nsignificantly improved by enriching them with an encoder model to accumulate\nevidence over multiple inference steps in the relational graph, demonstrating a\nlarge improvement of 29.8% on FB15k-237 over a decoder-only baseline.\n",
"title": "Modeling Relational Data with Graph Convolutional Networks"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
16255
| null |
Validated
| null | null |
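
As a rough illustration of the relational graph convolution described in the record above (id 16255), the NumPy sketch below implements one layer of the commonly cited R-GCN propagation rule: per-relation weight matrices, per-(node, relation) normalisation of neighbour messages, and a self-loop transform. The dimensions, the ReLU non-linearity and the absence of basis decomposition are assumptions, not details taken from the paper.

```python
import numpy as np

def rgcn_layer(H, adjacency_per_relation, W_rel, W_self):
    """H: (n, d_in) node features; adjacency_per_relation: list of (n, n) 0/1
    matrices, one per relation; W_rel: list of (d_in, d_out); W_self: (d_in, d_out)."""
    out = H @ W_self                                   # self-loop contribution W_0 h_i
    for A, W in zip(adjacency_per_relation, W_rel):
        deg = A.sum(axis=1, keepdims=True)             # c_{i,r}: neighbour count under r
        norm = np.divide(A, deg, out=np.zeros_like(A, dtype=float), where=deg > 0)
        out += norm @ H @ W                            # normalised relation-r messages
    return np.maximum(out, 0.0)                        # ReLU non-linearity (assumption)

rng = np.random.default_rng(1)
n, d_in, d_out, n_rel = 6, 4, 3, 2
H = rng.standard_normal((n, d_in))
A_r = [rng.integers(0, 2, size=(n, n)).astype(float) for _ in range(n_rel)]
W_r = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(n_rel)]
W_0 = rng.standard_normal((d_in, d_out)) * 0.1
print(rgcn_layer(H, A_r, W_r, W_0).shape)  # (6, 3)
```
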
null |
{
"abstract": " In several geophysical applications, such as full waveform inversion and data\nmodelling, we are facing the solution of inhomogeneous Helmholtz equation. The\ndifficulties of solving the Helmholtz equa- tion are two fold. Firstly, in the\ncase of large scale problems we cannot calculate the inverse of the Helmholtz\noperator directly. Hence, iterative algorithms should be implemented. Secondly,\nthe Helmholtz operator is non-unitary and non-diagonalizable which in turn\ndeteriorates the performances of the iterative algorithms (especially for high\nwavenumbers). To overcome this issue, we need to im- plement proper\npreconditioners for a Krylov subspace method to solve the problem efficiently.\nIn this paper we incorporated shifted-Laplace operators to precondition the\nsystem of equations and then generalized minimal residual (GMRES) method used\nto solve the problem iteratively. The numerical results show the performance of\nthe preconditioning operator in improving the convergence rate of the GMRES\nalgorithm for data modelling case. In the companion paper we discussed the\napplication of preconditioned data modelling algorithm in the context of\nfrequency domain full waveform inversion. However, the analysis of the degree\nof suitability of the preconditioners in the solution of Helmholtz equation is\nan ongoing field of study.\n",
"title": "Application of shifted-Laplace preconditioners for heterogenous Helmholtz equation- part 1: Data modelling"
}
| null | null | null | null | true | null |
16256
| null |
Default
| null | null |
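
The record above (id 16256) combines a shifted-Laplace preconditioner with GMRES. The SciPy sketch below shows the general idea on a toy 1-D Helmholtz problem; the grid size, wavenumber and complex shift are illustrative choices of my own, not the paper's setup.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Discrete Helmholtz operator A = -Laplacian - k^2 I, preconditioned by the
# shifted operator M = -Laplacian - (1 + 0.5i) k^2 I applied via a sparse LU solve.
n, k = 400, 20.0
h = 1.0 / (n + 1)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2  # -d^2/dx^2

A = sp.csc_matrix(lap - k**2 * sp.identity(n), dtype=complex)
M_shift = sp.csc_matrix(lap - (1.0 + 0.5j) * k**2 * sp.identity(n))

lu = spla.splu(M_shift)                                # factor the shifted operator once
M = spla.LinearOperator(A.shape, matvec=lu.solve, dtype=complex)

b = np.ones(n, dtype=complex)                          # simple right-hand side
x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))                 # info == 0 signals convergence
```
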
null |
{
"abstract": " A directed acyclic graph G = (V, E) is pseudo-transitive with respect to a\ngiven subset of edges E1, if for any edge ab in E1 and any edge bc in E, we\nhave ac in E. We give algorithms for computing longest chains and demonstrate\ngeometric applications that unify and improves some important past results.\n(For specific applications see the introduction.)\n",
"title": "Algorithms For Longest Chains In Pseudo- Transitive Graphs"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
16257
| null |
Validated
| null | null |
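
For context on the longest-chain computation mentioned in the record above (id 16257), here is a generic dynamic-programming sketch over a topological order of a DAG (Python 3.9+ for graphlib). It illustrates only the baseline chain problem; the pseudo-transitivity-specific algorithms of the paper are not reproduced.

```python
from collections import defaultdict
from graphlib import TopologicalSorter

def longest_chain(n, edges):
    """n: number of vertices 0..n-1; edges: iterable of arcs (u, v) meaning u -> v.
    Returns the number of vertices on a longest directed path."""
    succ = defaultdict(list)
    deps = {v: set() for v in range(n)}        # TopologicalSorter wants predecessors
    for u, v in edges:
        succ[u].append(v)
        deps[v].add(u)
    best = {v: 1 for v in range(n)}            # longest chain ending at each vertex
    for u in TopologicalSorter(deps).static_order():
        for v in succ[u]:
            best[v] = max(best[v], best[u] + 1)
    return max(best.values())

print(longest_chain(5, [(0, 1), (1, 2), (0, 2), (2, 3), (1, 4)]))  # 4: path 0-1-2-3
```
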
null |
{
"abstract": " A network-based approach is presented to investigate the cerebrovascular flow\npatterns during atrial fibrillation (AF) with respect to normal sinus rhythm\n(NSR). AF, the most common cardiac arrhythmia with faster and irregular\nbeating, has been recently and independently associated with the increased risk\nof dementia. However, the underlying hemodynamic mechanisms relating the two\npathologies remain mainly undetermined so far; thus the contribution of\nmodeling and refined statistical tools is valuable. Pressure and flow rate\ntemporal series in NSR and AF are here evaluated along representative cerebral\nsites (from carotid arteries to capillary brain circulation), exploiting\nreliable artificially built signals recently obtained from an in silico\napproach. The complex network analysis evidences, in a synthetic and original\nway, a dramatic signal variation towards the distal/capillary cerebral regions\nduring AF, which has no counterpart in NSR conditions. At the large artery\nlevel, networks obtained from both AF and NSR hemodynamic signals exhibit\nelongated and chained features, which are typical of pseudo-periodic series.\nThese aspects are almost completely lost towards the microcirculation during\nAF, where the networks are topologically more circular and present random-like\ncharacteristics. As a consequence, all the physiological phenomena at\nmicrocerebral level ruled by periodicity - such as regular perfusion, mean\npressure per beat, and average nutrient supply at cellular level - can be\nstrongly compromised, since the AF hemodynamic signals assume irregular\nbehaviour and random-like features. Through a powerful approach which is\ncomplementary to the classical statistical tools, the present findings further\nstrengthen the potential link between AF hemodynamic and cognitive decline.\n",
"title": "From time-series to complex networks: Application to the cerebrovascular flow patterns in atrial fibrillation"
}
| null | null | null | null | true | null |
16258
| null |
Default
| null | null |
null |
{
"abstract": " This study is devoted to the polynomial representation of the matrix $p$th\nroot functions. The Fibonacci-Hörner decomposition of the matrix powers and\nsome techniques arisen from properties of generalized Fibonacci sequences,\nnotably the Binet formula, serves as a triggering factor to provide explicit\nformulas for the matrix $p$th roots. Special cases and illustrative numerical\nexamples are given.\n",
"title": "On the matrix $pth$ root functions and generalized Fibonacci sequences"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16259
| null |
Validated
| null | null |
null |
{
"abstract": " Today, in digital forensics, images normally provide important information\nwithin an investigation. However, not all images may still be available within\na forensic digital investigation as they were all deleted for example. Data\ncarving can be used in this case to retrieve deleted images but the carving\ntime is normally significant and these images can be moreover overwritten by\nother data. One of the solutions is to look at thumbnails of images that are no\nlonger available. These thumbnails can often be found within databases created\nby either operating systems or image viewers. In literature, most research and\npractical focus on the extraction of thumbnails from databases created by the\noperating system. There is a little research working on the thumbnails created\nby the image reviewers as these thumbnails are application-driven in terms of\npre-defined sizes, adjustments and storage location. Eventually, thumbnail\ndatabases from image viewers are significant forensic artefacts for\ninvestigators as these programs deal with large amounts of images. However,\ninvestigating these databases so far is still manual or semi-automatic task\nthat leads to the huge amount of forensic time. Therefore, in this paper we\npropose a new approach of automating extraction of thumbnails produced by image\nviewers. We also test our approach with popular image viewers in different\nstorage structures and locations to show its robustness.\n",
"title": "Investigation and Automating Extraction of Thumbnails Produced by Image viewers"
}
| null | null | null | null | true | null |
16260
| null |
Default
| null | null |
null |
{
"abstract": " Topological metrics of graphs provide a natural way to describe the prominent\nfeatures of various types of networks. Graph metrics describe the structure and\ninterplay of graph edges and have found applications in many scientific fields.\nIn this work, graph metrics are used in network estimation by developing\noptimisation methods that incorporate prior knowledge of a network's topology.\nThe derivatives of graph metrics are used in gradient descent schemes for\nweighted undirected network denoising, network completion, and network\ndecomposition. The successful performance of our methodology is shown in a\nnumber of toy examples and real-world datasets. Most notably, our work\nestablishes a new link between graph theory, network science and optimisation.\n",
"title": "Weighted network estimation by the use of topological graph metrics"
}
| null | null | null | null | true | null |
16261
| null |
Default
| null | null |
null |
{
"abstract": " Reservoir characterization involves the estimation petrophysical properties\nfrom well-log data and seismic data. Estimating such properties is a\nchallenging task due to the non-linearity and heterogeneity of the subsurface.\nVarious attempts have been made to estimate petrophysical properties using\nmachine learning techniques such as feed-forward neural networks and support\nvector regression (SVR). Recent advances in machine learning have shown\npromising results for recurrent neural networks (RNN) in modeling complex\nsequential data such as videos and speech signals. In this work, we propose an\nalgorithm for property estimation from seismic data using recurrent neural\nnetworks. An applications of the proposed workflow to estimate density and\np-wave impedance using seismic data shows promising results compared to\nfeed-forward neural networks.\n",
"title": "Petrophysical property estimation from seismic data using recurrent neural networks"
}
| null | null | null | null | true | null |
16262
| null |
Default
| null | null |
null |
{
"abstract": " Deep learning applies hierarchical layers of hidden variables to construct\nnonlinear high dimensional predictors. Our goal is to develop and train deep\nlearning architectures for spatio-temporal modeling. Training a deep\narchitecture is achieved by stochastic gradient descent (SGD) and drop-out (DO)\nfor parameter regularization with a goal of minimizing out-of-sample predictive\nmean squared error. To illustrate our methodology, we predict the sharp\ndiscontinuities in traffic flow data, and secondly, we develop a classification\nrule to predict short-term futures market prices as a function of the order\nbook depth. Finally, we conclude with directions for future research.\n",
"title": "Deep Learning for Spatio-Temporal Modeling: Dynamic Traffic Flows and High Frequency Trading"
}
| null | null | null | null | true | null |
16263
| null |
Default
| null | null |
null |
{
"abstract": " We study the multi-armed bandit (MAB) problem where the agent receives a\nvectorial feedback that encodes many possibly competing objectives to be\noptimized. The goal of the agent is to find a policy, which can optimize these\nobjectives simultaneously in a fair way. This multi-objective online\noptimization problem is formalized by using the Generalized Gini Index (GGI)\naggregation function. We propose an online gradient descent algorithm which\nexploits the convexity of the GGI aggregation function, and controls the\nexploration in a careful way achieving a distribution-free regret\n$\\tilde{\\bigO} (T^{-1/2} )$ with high probability. We test our algorithm on\nsynthetic data as well as on an electric battery control problem where the goal\nis to trade off the use of the different cells of a battery in order to balance\ntheir respective degradation rates.\n",
"title": "Multi-objective Bandits: Optimizing the Generalized Gini Index"
}
| null | null | null | null | true | null |
16264
| null |
Default
| null | null |
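
To make the Generalized Gini Index aggregation in the record above (id 16264) concrete, the snippet below evaluates a GGI with a non-increasing weight vector: objective values are sorted from worst to best so the worst objectives receive the largest weights, which is what encourages balanced, fair solutions. Treating larger values as worse (costs), and the particular weights, are assumptions for illustration only.

```python
import numpy as np

def ggi(costs, weights):
    """costs: (d,) objective values, larger = worse; weights: non-increasing (d,)."""
    costs = np.asarray(costs, dtype=float)
    return float(np.dot(np.sort(costs)[::-1], weights))  # worst component gets w[0]

w = np.array([0.5, 0.3, 0.2])          # non-increasing weights over ranks
print(ggi([1.0, 4.0, 2.0], w))          # 0.5*4 + 0.3*2 + 0.2*1 = 2.8
print(ggi([2.3, 2.4, 2.3], w))          # same total cost, but the balanced vector scores better
```
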
null |
{
"abstract": " Weyl points with monopole charge $\\pm 1$ have been extensively studied,\nhowever, real materials of multi-Weyl points, whose monopole charges are higher\nthan $1$, have yet to be found. In this Rapid Communication, we show that\nnodal-line semimetals with nontrivial line connectivity provide natural\nplatforms for realizing Floquet multi-Weyl points. In particular, we show that\ndriving crossing nodal lines by circularly polarized light generates\ndouble-Weyl points. Furthermore, we show that monopole combination and\nannihilation can be observed in crossing-nodal-line semimetals and nodal-chain\nsemimetals. These proposals can be experimentally verified in pump-probe\nangle-resolved photoemission spectroscopy.\n",
"title": "Floquet multi-Weyl points in crossing-nodal-line semimetals"
}
| null | null |
[
"Physics"
] | null | true | null |
16265
| null |
Validated
| null | null |
null |
{
"abstract": " A theoretical investigation of extremely high field transport in an emerging\nwide-bandgap material $\\beta-Ga_2O_3$ is reported from first principles. The\nsignature high-field effect explored here is impact ionization. Interaction\nbetween a valence-band electron and an excited electron is computed from the\nmatrix elements of a screened Coulomb operator. Maximally localized Wannier\nfunctions (MLWF) are utilized in computing the impact ionization rates. A\nfull-band Monte Carlo (FBMC) simulation is carried out incorporating the impact\nionization rates, and electron-phonon scattering rates. This work brings out\nvaluable insights on the impact ionization coefficient (IIC) of electrons in\n$\\beta-Ga_2O_3$. The isolation of the $\\Gamma$ point conduction band minimum by\na significantly high energy from other satellite band pockets play a vital role\nin determining ionization co-efficients. IICs are calculated for electric\nfields ranging up to 8 MV/cm for different crystal directions. A Chynoweth\nfitting of the computed IICs is done to calibrate ionization models in device\nsimulators.\n",
"title": "Impact Ionization in $β-Ga_2O_3$"
}
| null | null |
[
"Physics"
] | null | true | null |
16266
| null |
Validated
| null | null |
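
The record above (id 16266) calibrates device-simulator ionization models by fitting the computed impact ionization coefficients to the Chynoweth form alpha(E) = a*exp(-b/E). The sketch below demonstrates only that fitting step; every number in it is a synthetic placeholder and none corresponds to the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def chynoweth(E, a, b):
    """Chynoweth law for the impact ionization coefficient as a function of field E."""
    return a * np.exp(-b / E)

E = np.linspace(2e6, 8e6, 7)                        # field values in V/cm (illustrative)
alpha_true = chynoweth(E, 1.0e6, 2.0e7)             # stand-in for "computed" IICs
alpha_noisy = alpha_true * (1 + 0.05 * np.random.default_rng(2).standard_normal(E.size))

(a_fit, b_fit), _ = curve_fit(chynoweth, E, alpha_noisy, p0=(1e6, 1e7))
print(a_fit, b_fit)                                  # calibrated Chynoweth parameters
```
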
null |
{
"abstract": " The act and experience of programming is, at its heart, a fundamentally human\nactivity that results in the production of artifacts. When considering\nprogramming, therefore, it would be a glaring omission to not involve people\nwho specialize in studying artifacts and the human activity that yields them:\narchaeologists. Here we consider this with respect to computer games, the focus\nof archaeology's nascent subarea of archaeogaming.\nOne type of archaeogaming research is digital excavation, a technical\nexamination of the code and techniques used in old games' implementation. We\napply that in a case study of Entombed, an Atari 2600 game released in 1982 by\nUS Games. The player in this game is, appropriately, an archaeologist who must\nmake their way through a zombie-infested maze. Maze generation is a fruitful\narea for comparative retrogame archaeology, because a number of early games on\ndifferent platforms featured mazes, and their variety of approaches can be\ncompared. The maze in Entombed is particularly interesting: it is shaped in\npart by the extensive real-time constraints of the Atari 2600 platform, and\nalso had to be generated efficiently and use next to no memory. We reverse\nengineered key areas of the game's code to uncover its unusual maze-generation\nalgorithm, which we have also built a reconstruction of, and analyzed the\nmysterious table that drives it. In addition, we discovered what appears to be\na 35-year-old bug in the code, as well as direct evidence of code-reuse\npractices amongst game developers.\nWhat further makes this game's development interesting is that, in an era\nwhere video games were typically solo projects, a total of five people were\ninvolved in various ways with Entombed. We piece together some of the backstory\nof the game's development and intoxicant-fueled design using interviews to\ncomplement our technical work.\nFinally, we contextualize this example in archaeology and lay the groundwork\nfor a broader interdisciplinary discussion about programming, one that includes\nboth computer scientists and archaeologists.\n",
"title": "Entombed: An archaeological examination of an Atari 2600 game"
}
| null | null | null | null | true | null |
16267
| null |
Default
| null | null |
null |
{
"abstract": " Partially-observed Boolean dynamical systems (POBDS) are a general class of\nnonlinear models with application in estimation and control of Boolean\nprocesses based on noisy and incomplete measurements. The optimal minimum mean\nsquare error (MMSE) algorithms for POBDS state estimation, namely, the Boolean\nKalman filter (BKF) and Boolean Kalman smoother (BKS), are intractable in the\ncase of large systems, due to computational and memory requirements. To address\nthis, we propose approximate MMSE filtering and smoothing algorithms based on\nthe auxiliary particle filter (APF) method from sequential Monte-Carlo theory.\nThese algorithms are used jointly with maximum-likelihood (ML) methods for\nsimultaneous state and parameter estimation in POBDS models. In the presence of\ncontinuous parameters, ML estimation is performed using the\nexpectation-maximization (EM) algorithm; we develop for this purpose a special\nsmoother which reduces the computational complexity of the EM algorithm. The\nresulting particle-based adaptive filter is applied to a POBDS model of Boolean\ngene regulatory networks observed through noisy RNA-Seq time series data, and\nperformance is assessed through a series of numerical experiments using the\nwell-known cell cycle gene regulatory model.\n",
"title": "Particle Filters for Partially-Observed Boolean Dynamical Systems"
}
| null | null | null | null | true | null |
16268
| null |
Default
| null | null |
null |
{
"abstract": " Political polarization in public space can seriously hamper the function and\nthe integrity of contemporary democratic societies. In this paper, we propose a\nnovel measure of such polarization, which, by way of simple topic modelling,\nquantifies differences in collective articulation of public agendas among\nrelevant political actors. Unlike most other polarization measures, our measure\nallows cross-national comparison. Analyzing a large amount of speech records of\nlegislative debate in the United States Congress and the Japanese Diet over a\nlong period of time, we have reached two intriguing findings. First, on\naverage, Japanese political actors are far more polarized in their issue\narticulation than their counterparts in the U.S., which is somewhat surprising\ngiven the recent notion of U.S. politics as highly polarized. Second, the\npolarization in each country shows its own temporal dynamics in response to a\ndifferent set of factors. In Japan, structural factors such as the roles of the\nruling party and the opposition often dominate such dynamics, whereas the U.S.\nlegislature suffers from persistent ideological differences over particular\nissues between major political parties. The analysis confirms a strong\ninfluence of institutional differences on legislative debate in parliamentary\ndemocracies.\n",
"title": "Cross-National Measurement of Polarization in Political Discourse: Analyzing floor debate in the U.S. and the Japanese legislatures"
}
| null | null | null | null | true | null |
16269
| null |
Default
| null | null |
null |
{
"abstract": " Program termination is an undecidable, yet important, property relevant to\nprogram verification, optimization, debugging, partial evaluation, and\ndependently-typed programming, among many other topics. This has given rise to\na large body of work on static methods for conservatively predicting or\nenforcing termination. A simple effective approach is the size-change\ntermination (SCT) method, which operates in two-phases: (1) abstract programs\ninto \"size-change graphs,\" and (2) check these graphs for the size-change\nproperty: the existence of paths that lead to infinitely decreasing value\nsequences.\nThis paper explores the termination problem starting from a different vantage\npoint: we propose transposing the two phases of the SCT analysis by developing\nan operational semantics that accounts for the run time checking of the\nsize-change property, postponing program abstraction or avoiding it entirely.\nThis choice has two important consequences: SCT can be monitored and enforced\nat run-time and termination analysis can be rephrased as a traditional safety\nproperty and computed using existing abstract interpretation methods.\nWe formulate this run-time size-change check as a contract. This contributes\nthe first run-time mechanism for checking termination in a general-purporse\nprogramming language. The result nicely compliments existing contracts that\nenforce partial correctness to obtain the first contracts for total\ncorrectness. Our approach combines the robustness of SCT with precise\ninformation available at run-time. To obtain a sound and computable analysis,\nit is possible to apply existing abstract interpretation techniques directly to\nthe operational semantics; there is no need for an abstraction tailored to\nsize-change graphs. We apply higher-order symbolic execution to obtain a novel\ntermination analysis that is competitive with existing, purpose-built\ntermination analyzers.\n",
"title": "Size-Change Termination as a Contract"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16270
| null |
Validated
| null | null |
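
As a loose illustration of the run-time size-change checking described in the record above (id 16270), the decorator below enforces a deliberately simplified contract: a user-supplied size measure must strictly decrease on every directly nested recursive call. This is not the paper's contract system, which works over size-change graphs and whole call sequences; it is just a toy of the same flavour.

```python
import functools

def size_change_contract(size):
    """Reject any nested recursive call whose size does not strictly decrease."""
    def wrap(f):
        stack = []                                    # sizes of currently active calls to f
        @functools.wraps(f)
        def guarded(*args, **kwargs):
            s = size(*args, **kwargs)
            if stack and not (s < stack[-1]):
                raise RuntimeError(f"size-change violation: {s} !< {stack[-1]}")
            stack.append(s)
            try:
                return f(*args, **kwargs)
            finally:
                stack.pop()
        return guarded
    return wrap

@size_change_contract(size=lambda n: n)
def countdown(n):
    return 0 if n == 0 else countdown(n - 1)          # size strictly decreases: accepted

print(countdown(5))

@size_change_contract(size=lambda n: n)
def loop(n):
    return loop(n)                                     # size never decreases

try:
    loop(3)
except RuntimeError as e:
    print(e)                                           # contract flags the non-terminating call
```
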
null |
{
"abstract": " We investigate the effect on disorder potential on exciton valley\npolarization and valley coherence in monolayer WSe2. By analyzing polarization\nproperties of photoluminescence, the valley coherence (VC) and valley\npolarization (VP) is quantified across the inhomogeneously broadened exciton\nresonance. We find that disorder plays a critical role in the exciton VC, while\nminimally affecting VP. For different monolayer samples with disorder\ncharacterized by their Stokes Shift (SS), VC decreases in samples with higher\nSS while VP again remains unchanged. These two methods consistently demonstrate\nthat VC as defined by the degree of linearly polarized photoluminescence is\nmore sensitive to disorder potential, motivating further theoretical studies.\n",
"title": "Disorder Dependent Valley Properties in Monolayer WSe2"
}
| null | null | null | null | true | null |
16271
| null |
Default
| null | null |
null |
{
"abstract": " Let $(R,\\frak{m})$ be a $d$-dimensional Cohen-Macaulay local ring, $I$ an\n$\\frak{m}$-primary ideal and $J$ a minimal reduction of $I$. In this paper we\nstudy the independence of reduction ideals and the behavior of the higher\nHilbert coefficients. In addition, we give some examples in this regards.\n",
"title": "Results on the Hilbert coefficients and reduction numbers"
}
| null | null | null | null | true | null |
16272
| null |
Default
| null | null |
null |
{
"abstract": " Free Electron Lasers (FEL) are commonly regarded as the potential key\napplication of laser wakefield accelerators (LWFA). It has been found that\nelectron bunches exiting from state-of-the-art LWFAs exhibit a normalized\n6-dimensional beam brightness comparable to those in conventional linear\naccelerators. Effectively exploiting this beneficial beam property for\nLWFA-based FELs is challenging due to the extreme initial conditions\nparticularly in terms of beam divergence and energy spread. Several different\napproaches for capturing, reshaping and matching LWFA beams to suited\nundulators, such as bunch decompression or transverse-gradient undulator\nschemes, are currently being explored. In this article the transverse gradient\nundulator concept will be discussed with a focus on recent experimental\nachievements.\n",
"title": "Progress on Experiments towards LWFA-driven Transverse Gradient Undulator-Based FELs"
}
| null | null | null | null | true | null |
16273
| null |
Default
| null | null |
null |
{
"abstract": " The family of exponential maps $f_a(z)= e^z+a$ is of fundamental importance\nin the study of transcendental dynamics. Here we consider the topological\nstructure of certain subsets of the Julia set $J(f_a)$. When $a\\in\n(-\\infty,-1)$, and more generally when $a$ belongs to the Fatou set of $f_a$,\nit is known that $J(f_a)$ can be written as a union of \"hairs\" and \"endpoints\"\nof these hairs. In 1990, Mayer proved for $a\\in (-\\infty,-1)$ that, while the\nset of endpoints is totally separated, its union with infinity is a connected\nset. Recently, Alhabib and the second author extended this result to the case\nwhere $a \\in F(f_a)$, and showed that it holds even for the smaller set of all\nescaping endpoints.\nWe show that, in contrast, the set of non-escaping endpoints together with\ninfinity is totally separated. It turns out that this property is closely\nrelated to a topological structure known as a `spider's web'; in particular we\ngive a new topological characterisation of spiders' webs that may be of\nindependent interest. We also show how our results can be applied to Fatou's\nfunction, $z\\mapsto z + 1 + e^{-z}$.\n",
"title": "Non-escaping endpoints do not explode"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16274
| null |
Validated
| null | null |
null |
{
"abstract": " As part of the Fornax Deep Survey with the ESO VLT Survey Telescope, we\npresent new $g$ and $r$ bands mosaics of the SW group of the Fornax cluster. It\ncovers an area of $3 \\times 2$ square degrees around the central galaxy\nNGC1316. The deep photometry, the high spatial resolution of OmegaCam and the\nlarge covered area allow us to study the galaxy structure, to trace stellar\nhalo formation and look at the galaxy environment. We map the surface\nbrightness profile out to 33arcmin ($\\sim 200$kpc $\\sim15R_e$) from the galaxy\ncentre, down to $\\mu_g \\sim 31$ mag arcsec$^{-2}$ and $\\mu_r \\sim 29$ mag\narcsec$^{-2}$. This allow us to estimate the scales of the main components\ndominating the light distribution, which are the central spheroid, inside 5.5\narcmin ($\\sim33$ kpc), and the outer stellar envelope. Data analysis suggests\nthat we are catching in act the second phase of the mass assembly in this\ngalaxy, since the accretion of smaller satellites is going on in both\ncomponents. The outer envelope of NGC1316 still hosts the remnants of the\naccreted satellite galaxies that are forming the stellar halo. We discuss the\npossible formation scenarios for NGC1316, by comparing the observed properties\n(morphology, colors, gas content, kinematics and dynamics) with predictions\nfrom cosmological simulations of galaxy formation. We find that {\\it i)} the\ncentral spheroid could result from at least one merging event, it could be a\npre-existing early-type disk galaxy with a lower mass companion, and {\\it ii)}\nthe stellar envelope comes from the gradual accretion of small satellites.\n",
"title": "The Fornax Deep Survey with VST. II. Fornax A: a two-phase assembly caught on act"
}
| null | null |
[
"Physics"
] | null | true | null |
16275
| null |
Validated
| null | null |
null |
{
"abstract": " The well-known DeMillo-Lipton-Schwartz-Zippel lemma says that $n$-variate\npolynomials of total degree at most $d$ over grids, i.e. sets of the form $A_1\n\\times A_2 \\times \\cdots \\times A_n$, form error-correcting codes (of distance\nat least $2^{-d}$ provided $\\min_i\\{|A_i|\\}\\geq 2$). In this work we explore\ntheir local decodability and (tolerant) local testability. While these aspects\nhave been studied extensively when $A_1 = \\cdots = A_n = \\mathbb{F}_q$ are the\nsame finite field, the setting when $A_i$'s are not the full field does not\nseem to have been explored before.\nIn this work we focus on the case $A_i = \\{0,1\\}$ for every $i$. We show that\nfor every field (finite or otherwise) there is a test whose query complexity\ndepends only on the degree (and not on the number of variables). In contrast we\nshow that decodability is possible over fields of positive characteristic (with\nquery complexity growing with the degree of the polynomial and the\ncharacteristic), but not over the reals, where the query complexity must grow\nwith $n$. As a consequence we get a natural example of a code (one with a\ntransitive group of symmetries) that is locally testable but not locally\ndecodable.\nClassical results on local decoding and testing of polynomials have relied on\nthe 2-transitive symmetries of the space of low-degree polynomials (under\naffine transformations). Grids do not possess this symmetry: So we introduce\nsome new techniques to overcome this handicap and in particular use the\nhypercontractivity of the (constant weight) noise operator on the Hamming cube.\n",
"title": "Local decoding and testing of polynomials over grids"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16276
| null |
Validated
| null | null |
null |
{
"abstract": " The backpressure algorithm has been widely used as a distributed solution to\nthe problem of joint rate control and routing in multi-hop data networks. By\ncontrolling a parameter $V$ in the algorithm, the backpressure algorithm can\nachieve an arbitrarily small utility optimality gap. However, this in turn\nbrings in a large queue length at each node and hence causes large network\ndelay. This phenomenon is known as the fundamental utility-delay tradeoff. The\nbest known utility-delay tradeoff for general networks is $[O(1/V), O(V)]$ and\nis attained by a backpressure algorithm based on a drift-plus-penalty\ntechnique. This may suggest that to achieve an arbitrarily small utility\noptimality gap, the existing backpressure algorithms necessarily yield an\narbitrarily large queue length. However, this paper proposes a new backpressure\nalgorithm that has a vanishing utility optimality gap, so utility converges to\nexact optimality as the algorithm keeps running, while queue lengths are\nbounded throughout by a finite constant. The technique uses backpressure and\ndrift concepts with a new method for convex programming.\n",
"title": "A New Backpressure Algorithm for Joint Rate Control and Routing with Vanishing Utility Optimality Gaps and Finite Queue Lengths"
}
| null | null | null | null | true | null |
16277
| null |
Default
| null | null |
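
For background on the queue-differential rule that the record above (id 16277) builds on, the snippet below simulates plain single-commodity backpressure on a tiny network: a link is served only when the upstream backlog exceeds the downstream one. The topology, capacities and arrival rate are arbitrary illustrative values, and the paper's new finite-queue, vanishing-gap algorithm is not implemented here.

```python
import numpy as np

nodes = ["s", "a", "b", "d"]
links = {("s", "a"): 2.0, ("s", "b"): 1.0, ("a", "d"): 1.5, ("b", "d"): 1.5, ("a", "b"): 1.0}
Q = {n: 0.0 for n in nodes}                          # queue backlogs
rng = np.random.default_rng(3)

for t in range(200):
    Q["s"] += rng.poisson(1.8)                       # exogenous arrivals at the source
    for (u, v), cap in links.items():
        if Q[u] - Q[v] > 0:                          # serve links with positive backpressure
            moved = min(cap, Q[u])
            Q[u] -= moved
            Q[v] += moved
    Q["d"] = 0.0                                     # packets leave at the destination

print({n: round(q, 1) for n, q in Q.items()})        # backlogs settle to modest values
```
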
null |
{
"abstract": " In Kondo lattice systems with mixed valence, such as YbAl3, interactions\nbetween localized electrons in a partially filled f shell and delocalized\nconduction electrons can lead to fluctuations between two different valence\nconfigurations with changing temperature or pressure. The impact of this change\non the momentum-space electronic structure and Fermi surface topology is\nessential for understanding their emergent properties, but has remained\nenigmatic due to a lack of appropriate experimental probes. Here by employing a\ncombination of molecular beam epitaxy (MBE) and in situ angle-resolved\nphotoemission spectroscopy (ARPES) we show that valence fluctuations can lead\nto dramatic changes in the Fermi surface topology, even resulting in a Lifshitz\ntransition. As the temperature is lowered, a small electron pocket in YbAl3\nbecomes completely unoccupied while the low-energy ytterbium (Yb) 4f states\nbecome increasingly itinerant, acquiring additional spectral weight, longer\nlifetimes, and well-defined dispersions. Our work presents the first unified\npicture of how local valence fluctuations connect to momentum space concepts\nincluding band filling and Fermi surface topology in the longstanding problem\nof mixed-valence systems.\n",
"title": "Lifshitz transition from valence fluctuations in YbAl3"
}
| null | null | null | null | true | null |
16278
| null |
Default
| null | null |
null |
{
"abstract": " We study transitivity in directed acyclic graphs and its usefulness in\ncapturing nodes that act as bridges between more densely interconnected parts\nin such type of network. In transitively reduced citation networks degree\ncentrality could be used as a measure of interdisciplinarity or diversity. We\nstudy the measure's ability to capture \"diverse\" nodes in random directed\nacyclic graphs and citation networks. We show that transitively reduced degree\ncentrality is capable of capturing \"diverse\" nodes, thus this measure could be\na timely alternative to text analysis techniques for retrieving papers,\ninfluential in a variety of research fields.\n",
"title": "Diversity from the Topology of Citation Networks"
}
| null | null | null | null | true | null |
16279
| null |
Default
| null | null |
null |
{
"abstract": " A method of transmitting information in interstellar space at superluminal,\nor $> c$, speeds is proposed. The information is encoded as phase modulation of\nan electromagnetic wave of constant intensity, i.e. fluctuations in the rate of\nenergy transport plays no role in the communication, and no energy is\ntransported at speed $>$ c. Of course, such a constant wave can ultimately last\nonly the duration of its enveloping wave packet. However, as a unique feature\nof this paper, we assume the source is sufficiently steady to be capable of\nemitting wave packets, or pulses, of size much larger than the separation\nbetween sender and receiver. Therefore, if a pre-existing and enduring wave\nenvelope already connects the two sides, the subluminal nature of the\nenvelope's group velocity will no longer slow down the communication, which is\nnow limited by the speed at which information encoded as phase modulation\npropagates through the plasma, i.e. the phase velocity $v_p > c$. The method\ninvolves no sharp structure in either time or frequency. As a working example,\nwe considered two spaceships separated by 1 lt-s in the local hot bubble.\nProvided the bandwidth of the extra Fourier modes generated by the phase\nmodulation is much smaller than the carrier wave frequency, the radio\ncommunication of a message, encoded as a specific alignment between the carrier\nwave phase and the anomalous (modulated) phase, can take place at a speed in\nexcess of light by a few parts in 10$^{11}$ at $\\nu\\approx 1$~GHz, and higher\nat smaller $\\nu$.\n",
"title": "Superluminal transmission of phase modulation information by a long macroscopic pulse propagating through interstellar space"
}
| null | null | null | null | true | null |
16280
| null |
Default
| null | null |
null |
{
"abstract": " In application domains such as healthcare, we want accurate predictive models\nthat are also causally interpretable. In pursuit of such models, we propose a\ncausal regularizer to steer predictive models towards causally-interpretable\nsolutions and theoretically study its properties. In a large-scale analysis of\nElectronic Health Records (EHR), our causally-regularized model outperforms its\nL1-regularized counterpart in causal accuracy and is competitive in predictive\nperformance. We perform non-linear causality analysis by causally regularizing\na special neural network architecture. We also show that the proposed causal\nregularizer can be used together with neural representation learning algorithms\nto yield up to 20% improvement over multilayer perceptron in detecting\nmultivariate causation, a situation common in healthcare, where many causal\nfactors should occur simultaneously to have an effect on the target variable.\n",
"title": "Causal Regularization"
}
| null | null | null | null | true | null |
16281
| null |
Default
| null | null |
null |
{
"abstract": " Automated detection of voice disorders with computational methods is a recent\nresearch area in the medical domain since it requires a rigorous endoscopy for\nthe accurate diagnosis. Efficient screening methods are required for the\ndiagnosis of voice disorders so as to provide timely medical facilities in\nminimal resources. Detecting Voice disorder using computational methods is a\nchallenging problem since audio data is continuous due to which extracting\nrelevant features and applying machine learning is hard and unreliable. This\npaper proposes a Long short term memory model (LSTM) to detect pathological\nvoice disorders and evaluates its performance in a real 400 testing samples\nwithout any labels. Different feature extraction methods are used to provide\nthe best set of features before applying LSTM model for classification. The\npaper describes the approach and experiments that show promising results with\n22% sensitivity, 97% specificity and 56% unweighted average recall.\n",
"title": "Voice Disorder Detection Using Long Short Term Memory (LSTM) Model"
}
| null | null | null | null | true | null |
16282
| null |
Default
| null | null |
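
The record above (id 16282) classifies utterances with an LSTM over extracted acoustic features. The PyTorch sketch below shows a model of that general shape; the feature dimension (13, as for MFCC frames), hidden size and single-logit head are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class VoiceLSTM(nn.Module):
    """Encode a sequence of acoustic feature vectors and emit one disorder logit."""
    def __init__(self, n_features=13, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h_last, _) = self.lstm(x)     # h_last: (1, batch, hidden)
        return self.head(h_last[-1])      # one logit per utterance

model = VoiceLSTM()
dummy = torch.randn(4, 200, 13)           # 4 utterances, 200 frames of 13 features each
logits = model(dummy)
print(torch.sigmoid(logits).shape)         # (4, 1) disorder probabilities
```
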
null |
{
"abstract": " We perform direct numerical simulations (DNS) of passive heavy inertial\nparticles (dust) in homogeneous and isotropic two-dimensional turbulent flows\n(gas) for a range of Stokes number, ${\\rm St} < 1$, using both Lagrangian and\nEulerian approach (with a shock-capturing scheme). We find that: The\ndust-density field in our Eulerian simulations have the same correlation\ndimension $d_2$ as obtained from the clustering of particles in the Lagrangian\nsimulations for ${\\rm St} < 1$; The cumulative probability distribution\nfunction of the dust-density coarse-grained over a scale $r$ in the inertial\nrange has a left-tail with a power-law fall-off indicating presence of voids;\nThe energy spectrum of the dust-velocity has a power-law range with an exponent\nthat is same as the gas-velocity spectrum except at very high Fourier modes;\nThe compressibility of the dust-velocity field is proportional to ${\\rm St}^2$.\nWe quantify the topological properties of the dust-velocity and the\ngas-velocity through their gradient matrices, called $\\mathcal{A}$ and\n$\\mathcal{B}$, respectively. The topological properties of $\\mathcal{B}$ are\nthe same in Eulerian and Lagrangian frames only if the Eulerian data are\nweighed by the dust-density -- a correspondence that we use to study Lagrangian\nproperties of $\\mathcal{A}$. In the Lagrangian frame, the mean value of the\ntrace of $\\mathcal{A} \\sim - \\exp(-C/{\\rm St}$, with a constant $C\\approx 0.1$.\nThe topology of the dust-velocity fields shows that as ${\\rm St} increases the\ncontribution to negative divergence comes mostly from saddles and the\ncontribution to positive divergence comes from both vortices and saddles.\nCompared to the Eulerian case, the density-weighed Eulerian case has less\ninward spirals and more converging saddles. Outward spirals are the least\nprobable topological structures in both cases.\n",
"title": "Topology of two-dimensional turbulent flows of dust and gas"
}
| null | null | null | null | true | null |
16283
| null |
Default
| null | null |
null |
{
"abstract": " One of the most fundamental questions one can ask about a pair of random\nvariables X and Y is the value of their mutual information. Unfortunately, this\ntask is often stymied by the extremely large dimension of the variables. We\nmight hope to replace each variable by a lower-dimensional representation that\npreserves the relationship with the other variable. The theoretically ideal\nimplementation is the use of minimal sufficient statistics, where it is\nwell-known that either X or Y can be replaced by their minimal sufficient\nstatistic about the other while preserving the mutual information. While\nintuitively reasonable, it is not obvious or straightforward that both\nvariables can be replaced simultaneously. We demonstrate that this is in fact\npossible: the information X's minimal sufficient statistic preserves about Y is\nexactly the information that Y's minimal sufficient statistic preserves about\nX. As an important corollary, we consider the case where one variable is a\nstochastic process' past and the other its future and the present is viewed as\na memoryful channel. In this case, the mutual information is the channel\ntransmission rate between the channel's effective states. That is, the\npast-future mutual information (the excess entropy) is the amount of\ninformation about the future that can be predicted using the past. Translating\nour result about minimal sufficient statistics, this is equivalent to the\nmutual information between the forward- and reverse-time causal states of\ncomputational mechanics. We close by discussing multivariate extensions to this\nuse of minimal sufficient statistics.\n",
"title": "Trimming the Independent Fat: Sufficient Statistics, Mutual Information, and Predictability from Effective Channel States"
}
| null | null | null | null | true | null |
16284
| null |
Default
| null | null |
null |
{
"abstract": " Ordered chains (such as chains of amino acids) are ubiquitous in biological\ncells, and these chains perform specific functions contingent on the sequence\nof their components. Using the existence and general properties of such\nsequences as a theoretical motivation, we study the statistical physics of\nsystems whose state space is defined by the possible permutations of an ordered\nlist, i.e., the symmetric group, and whose energy is a function of how certain\npermutations deviate from some chosen correct ordering. Such a non-factorizable\nstate space is quite different from the state spaces typically considered in\nstatistical physics systems and consequently has novel behavior in systems with\ninteracting and even non-interacting Hamiltonians. Various parameter choices of\na mean-field model reveal the system to contain five different physical regimes\ndefined by two transition temperatures, a triple point, and a quadruple point.\nFinally, we conclude by discussing how the general analysis can be extended to\nstate spaces with more complex combinatorial properties and to other standard\nquestions of statistical mechanics models.\n",
"title": "Statistical Physics of the Symmetric Group"
}
| null | null | null | null | true | null |
16285
| null |
Default
| null | null |
null |
{
"abstract": " Based on ab initio evolutionary crystal structure search computation, we\nreport a new phase of phosphorus called green phosphorus ({\\lambda}-P), which\nexhibits the direct band gaps ranging from 0.7 to 2.4 eV and the strong\nanisotropy in optical and transport properties. Free energy calculations show\nthat a single-layer form, termed green phosphorene, is energetically more\nstable than blue phosphorene and a phase transition from black to green\nphosphorene can occur at temperatures above 87 K. Due to its buckled structure,\ngreen phosphorene can be synthesized on corrugated metal surfaces rather than\nclean surfaces.\n",
"title": "A New Phosphorus Allotrope with Direct Band Gap and High Mobility"
}
| null | null | null | null | true | null |
16286
| null |
Default
| null | null |
null |
{
"abstract": " In the past decade Optical WDM Networks (Wavelength Division Multiplexing)\nare being used quite often and especially as far as broadband applications are\nconcerned. Message packets transmitted through such networks can be interrupted\nusing time slots in order to maximize network usage and minimize the time\nrequired for all messages to reach their destination. However, preempting a\npacket will result in time cost. The problem of scheduling message packets\nthrough such a network is referred to as PBS and is known to be NP-Hard. In\nthis paper we have reduced PBS to Open Shop Scheduling and designed variations\nof polynomially solvable instances of Open Shop to approximate PBS. We have\ncombined these variations and called the induced algorithm HSA (Hybridic\nScheduling Algorithm). We ran experiments to establish the efficiency of HSA\nand found that in all datasets used it produces schedules very close to the\noptimal. To further establish HSAs efficiency we ran tests to compare it to\nSGA, another algorithm which when tested in the past has yielded excellent\nresults.\n",
"title": "An open shop approach in approximating optimal data transmission duration in WDM networks"
}
| null | null | null | null | true | null |
16287
| null |
Default
| null | null |
null |
{
"abstract": " We consider the linearly transformed spiked model, where observations $Y_i$\nare noisy linear transforms of unobserved signals of interest $X_i$:\n\\begin{align*}\nY_i = A_i X_i + \\varepsilon_i, \\end{align*} for $i=1,\\ldots,n$. The transform\nmatrices $A_i$ are also observed. We model $X_i$ as random vectors lying on an\nunknown low-dimensional space. How should we predict the unobserved signals\n(regression coefficients) $X_i$?\nThe naive approach of performing regression for each observation separately\nis inaccurate due to the large noise. Instead, we develop optimal linear\nempirical Bayes methods for predicting $X_i$ by \"borrowing strength\" across the\ndifferent samples. Our methods are applicable to large datasets and rely on\nweak moment assumptions. The analysis is based on random matrix theory.\nWe discuss applications to signal processing, deconvolution, cryo-electron\nmicroscopy, and missing data in the high-noise regime. For missing data, we\nshow in simulations that our methods are faster, more robust to noise and to\nunequal sampling than well-known matrix completion methods.\n",
"title": "Optimal prediction in the linearly transformed spiked model"
}
| null | null | null | null | true | null |
16288
| null |
Default
| null | null |
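The abstract above (id 16288) defines the observation model $Y_i = A_i X_i + \varepsilon_i$ with signals $X_i$ lying near an unknown low-dimensional subspace and transform matrices $A_i$ observed. The snippet below is a minimal sketch of that data model together with the naive per-observation least-squares baseline the abstract criticizes; the dimensions, rank, and noise level are illustrative assumptions, and the paper's optimal empirical Bayes predictors are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 500, 40, 60, 3          # samples, signal dim, observation dim, latent rank

# Signals X_i lie on an (unknown) r-dimensional subspace of R^p.
basis = np.linalg.qr(rng.standard_normal((p, r)))[0]
X = rng.standard_normal((n, r)) @ basis.T

# Observed: Y_i = A_i X_i + eps_i, with the transform matrices A_i also observed.
A = rng.standard_normal((n, q, p)) / np.sqrt(p)
Y = np.einsum("nqp,np->nq", A, X) + rng.standard_normal((n, q))

# Naive baseline: a separate least-squares regression for each observation,
# which is inaccurate at this noise level.
X_naive = np.stack([np.linalg.lstsq(A[i], Y[i], rcond=None)[0] for i in range(n)])
print("naive per-sample MSE:", np.mean((X_naive - X) ** 2))
```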
null |
{
"abstract": " Deep Learning has revolutionized vision via convolutional neural networks\n(CNNs) and natural language processing via recurrent neural networks (RNNs).\nHowever, success stories of Deep Learning with standard feed-forward neural\nnetworks (FNNs) are rare. FNNs that perform well are typically shallow and,\ntherefore cannot exploit many levels of abstract representations. We introduce\nself-normalizing neural networks (SNNs) to enable high-level abstract\nrepresentations. While batch normalization requires explicit normalization,\nneuron activations of SNNs automatically converge towards zero mean and unit\nvariance. The activation function of SNNs are \"scaled exponential linear units\"\n(SELUs), which induce self-normalizing properties. Using the Banach fixed-point\ntheorem, we prove that activations close to zero mean and unit variance that\nare propagated through many network layers will converge towards zero mean and\nunit variance -- even under the presence of noise and perturbations. This\nconvergence property of SNNs allows to (1) train deep networks with many\nlayers, (2) employ strong regularization, and (3) to make learning highly\nrobust. Furthermore, for activations not close to unit variance, we prove an\nupper and lower bound on the variance, thus, vanishing and exploding gradients\nare impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning\nrepository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with\nstandard FNNs and other machine learning methods such as random forests and\nsupport vector machines. SNNs significantly outperformed all competing FNN\nmethods at 121 UCI tasks, outperformed all competing methods at the Tox21\ndataset, and set a new record at an astronomy data set. The winning SNN\narchitectures are often very deep. Implementations are available at:\ngithub.com/bioinf-jku/SNNs.\n",
"title": "Self-Normalizing Neural Networks"
}
| null | null | null | null | true | null |
16289
| null |
Default
| null | null |
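The SELU activation named in the abstract above has the standard closed form $\mathrm{selu}(x) = \lambda x$ for $x > 0$ and $\lambda\alpha(e^x - 1)$ otherwise, with $\lambda \approx 1.0507$ and $\alpha \approx 1.6733$. The sketch below pushes random inputs through a stack of randomly initialized dense layers to illustrate the self-normalizing behaviour; the width, depth, and initialization are arbitrary choices for the demo, not taken from the paper.

```python
import numpy as np

SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    # Scaled exponential linear unit.
    x = np.asarray(x, dtype=float)
    return SELU_LAMBDA * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1.0))

# With weights drawn with variance 1/fan_in, activations stay close to
# zero mean and unit variance even after many layers.
rng = np.random.default_rng(0)
x = rng.standard_normal((10_000, 256))
for _ in range(16):
    w = rng.standard_normal((x.shape[1], 256)) / np.sqrt(x.shape[1])
    x = selu(x @ w)
print("mean %.3f, std %.3f" % (x.mean(), x.std()))
```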
null |
{
"abstract": " Diverse fault types, fast re-closures and complicated transient states after\na fault event make real-time fault location in power grids challenging.\nExisting localization techniques in this area rely on simplistic assumptions,\nsuch as static loads, or require much higher sampling rates or total\nmeasurement availability. This paper proposes a data-driven localization method\nbased on a Convolutional Neural Network (CNN) classifier using bus voltages.\nUnlike prior data-driven methods, the proposed classifier is based on features\nwith physical interpretations that are described in details. The accuracy of\nour CNN based localization tool is demonstrably superior to other machine\nlearning classifiers in the literature. To further improve the location\nperformance, a novel phasor measurement units (PMU) placement strategy is\nproposed and validated against other methods. A significant aspect of our\nmethodology is that under very low observability (7% of buses), the algorithm\nis still able to localize the faulted line to a small neighborhood with high\nprobability. The performance of our scheme is validated through simulations of\nfaults of various types in the IEEE 68-bus power system under varying load\nconditions, system observability and measurement quality.\n",
"title": "Real-time Fault Localization in Power Grids With Convolutional Neural Networks"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16290
| null |
Validated
| null | null |
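The abstract above describes a CNN classifier over bus voltages but gives no architecture details. The following is a hypothetical sketch of such a classifier for the IEEE 68-bus case (assuming PyTorch is available); the layer sizes, the 20-sample voltage window, and the 86-line output dimension are assumptions made for illustration, not the model from the paper.

```python
import torch
from torch import nn

class FaultLocator(nn.Module):
    """Toy 1-D CNN mapping a window of bus voltages to a faulted-line class."""
    def __init__(self, n_buses=68, n_lines=86):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_buses, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_lines)  # one score per candidate line

    def forward(self, voltages):               # voltages: (batch, n_buses, window)
        return self.classifier(self.features(voltages).squeeze(-1))

model = FaultLocator()
scores = model(torch.randn(4, 68, 20))        # four synthetic voltage windows
print(scores.shape)                            # torch.Size([4, 86])
```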
null |
{
"abstract": " A tensor $T$, in a given tensor space, is said to be $h$-identifiable if it\nadmits a unique decomposition as a sum of $h$ rank one tensors. A criterion for\n$h$-identifiability is called effective if it is satisfied in a dense, open\nsubset of the set of rank $h$ tensors. In this paper we give effective\n$h$-identifiability criteria for a large class of tensors. We then improve\nthese criteria for some symmetric tensors. For instance, this allows us to give\na complete set of effective identifiability criteria for ternary quintic\npolynomial. Finally, we implement our identifiability algorithms in Macaulay2.\n",
"title": "Effective identifiability criteria for tensors and polynomials"
}
| null | null | null | null | true | null |
16291
| null |
Default
| null | null |
null |
{
"abstract": " Let $G$ be a connected reductive group. In a previous paper,\narXiv:1702.08264, is was shown that the dual group $G^\\vee_X$ attached to a\n$G$-variety $X$ admits a natural homomorphism with finite kernel to the\nLanglands dual group $G^\\vee$ of $G$. Here, we prove that the dual group is\nfunctorial in the following sense: if there is a dominant $G$-morphism $X\\to Y$\nor an injective $G$-morphism $Y\\to X$ then there is a canonical homomorphism\n$G^\\vee_Y\\to G^\\vee_X$ which is compatible with the homomorphisms to $G^\\vee$.\n",
"title": "Functoriality properties of the dual group"
}
| null | null | null | null | true | null |
16292
| null |
Default
| null | null |
null |
{
"abstract": " We apply a generalized Kepler map theory to describe the qualitative chaotic\ndynamics around cometary nuclei, based on accessible observational data for\nfive comets whose nuclei are well-documented to resemble dumb-bells. The sizes\nof chaotic zones around the nuclei and the Lyapunov times of the motion inside\nthese zones are estimated. In the case of Comet 1P/Halley, the circumnuclear\nchaotic zone seems to engulf an essential part of the Hill sphere, at least for\norbits of moderate to high eccentricity.\n",
"title": "Chaotic dynamics around cometary nuclei"
}
| null | null | null | null | true | null |
16293
| null |
Default
| null | null |
null |
{
"abstract": " A famous theorem of Weyl states that if $M$ is a compact submanifold of\neuclidean space, then the volumes of small tubes about $M$ are given by a\npolynomial in the radius $r$, with coefficients that are expressible as\nintegrals of certain scalar invariants of the curvature tensor of $M$ with\nrespect to the induced metric. It is natural to interpret this phenomenon in\nterms of curvature measures and smooth valuations, in the sense of Alesker,\ncanonically associated to the Riemannian structure of $M$. This perspective\nyields a fundamental new structure in Riemannian geometry, in the form of a\ncertain abstract module over the polynomial algebra $\\mathbb R[t]$ that\nreflects the behavior of Alesker multiplication. This module encodes a key\npiece of the array of kinematic formulas of any Riemannian manifold on which a\ngroup of isometries acts transitively on the sphere bundle. We illustrate this\nprinciple in precise terms in the case where $M$ is a complex space form.\n",
"title": "Riemannian curvature measures"
}
| null | null | null | null | true | null |
16294
| null |
Default
| null | null |
null |
{
"abstract": " The on-line interval coloring and its variants are important combinatorial\nproblems with many applications in network multiplexing, resource allocation\nand job scheduling. In this paper we present a new lower bound of $4.1626$ for\nthe competitive ratio for the on-line coloring of intervals with bandwidth\nwhich improves the best known lower bound of $\\frac{24}{7}$. For the on-line\ncoloring of unit intervals with bandwidth we improve the lower bound of $1.831$\nto $2$.\n",
"title": "A new lower bound for the on-line coloring of intervals with bandwidth"
}
| null | null | null | null | true | null |
16295
| null |
Default
| null | null |
null |
{
"abstract": " This paper is devoted to the factorization of multivariate polynomials into\nproducts of linear forms, a problem which has applications to differential\nalgebra, to the resolution of systems of polynomial equations and to Waring\ndecomposition (i.e., decomposition in sums of d-th powers of linear forms; this\nproblem is also known as symmetric tensor decomposition). We provide three\nblack box algorithms for this problem. Our main contribution is an algorithm\nmotivated by the application to Waring decomposition. This algorithm reduces\nthe corresponding factorization problem to simultaenous matrix diagonalization,\na standard task in linear algebra. The algorithm relies on ideas from invariant\ntheory, and more specifically on Lie algebras. Our second algorithm\nreconstructs a factorization from several bi-variate projections. Our third\nalgorithm reconstructs it from the determination of the zero set of the input\npolynomial, which is a union of hyperplanes.\n",
"title": "Orbits of monomials and factorization into products of linear forms"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16296
| null |
Validated
| null | null |
null |
{
"abstract": " We study the geometry and the singularities of the principal direction of the\nDrinfeld-Lafforgue-Vinberg degeneration of the moduli space of G-bundles Bun_G\nfor an arbitrary reductive group G, and their relationship to the Langlands\ndual group of G.\nIn the first part of the article we study the monodromy action on the nearby\ncycles sheaf along the principal degeneration of Bun_G. We describe the\nweight-monodromy filtration in terms of the combinatorics of the Langlands dual\ngroup of G and generalizations of the Picard-Lefschetz oscillators found in\n[Sch1]. Our proofs use certain local models for the principal degeneration\nwhose geometry is studied in the second part.\nOur local models simultaneously provide two types of degenerations of the\nZastava spaces, which together equip the Zastava spaces with the geometric\nanalog of a Hopf algebra structure. The first degeneration corresponds to the\nusual Beilinson-Drinfeld fusion of divisors on the curve. The second\ndegeneration is new and corresponds to what we call Vinberg fusion: It is\nobtained not by degenerating divisors on the curve, but by degenerating the\ngroup G via the Vinberg semigroup. On the level of cohomology the Vinberg\nfusion gives rise to an algebra structure, while the Beilinson-Drinfeld fusion\ngives rise to a coalgebra structure; the Hopf algebra axiom is a consequence of\nthe underlying geometry.\nIt is natural to conjecture that this Hopf algebra agrees with the universal\nenveloping algebra of the positive part of the Langlands dual Lie algebra. The\nabove procedure would then yield a novel and highly geometric way to pass to\nthe Langlands dual side: Elements of the Langlands dual Lie algebra are\nrepresented as cycles on the above moduli spaces, and the Lie bracket of two\nelements is obtained by deforming the cartesian product cycle along the Vinberg\ndegeneration.\n",
"title": "Monodromy and Vinberg fusion for the principal degeneration of the space of G-bundles"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16297
| null |
Validated
| null | null |
null |
{
"abstract": " The immense amount of daily generated and communicated data presents unique\nchallenges in their processing. Clustering, the grouping of data without the\npresence of ground-truth labels, is an important tool for drawing inferences\nfrom data. Subspace clustering (SC) is a relatively recent method that is able\nto successfully classify nonlinearly separable data in a multitude of settings.\nIn spite of their high clustering accuracy, SC methods incur prohibitively high\ncomputational complexity when processing large volumes of high-dimensional\ndata. Inspired by random sketching approaches for dimensionality reduction, the\npresent paper introduces a randomized scheme for SC, termed Sketch-SC, tailored\nfor large volumes of high-dimensional data. Sketch-SC accelerates the\ncomputationally heavy parts of state-of-the-art SC approaches by compressing\nthe data matrix across both dimensions using random projections, thus enabling\nfast and accurate large-scale SC. Performance analysis as well as extensive\nnumerical tests on real data corroborate the potential of Sketch-SC and its\ncompetitive performance relative to state-of-the-art scalable SC approaches.\n",
"title": "Sketched Subspace Clustering"
}
| null | null | null | null | true | null |
16298
| null |
Default
| null | null |
null |
{
"abstract": " An RNA secondary structure is designable if there is an RNA sequence which\ncan attain its maximum number of base pairs only by adopting that structure.\nThe combinatorial RNA design problem, introduced by Haleš et al. in 2016,\nis to determine whether or not a given RNA secondary structure is designable.\nHaleš et al. identified certain classes of designable and non-designable\nsecondary structures by reference to their corresponding rooted trees. We\nintroduce an infinite class of rooted trees containing unpaired nucleotides at\nthe greatest height, and prove constructively that their corresponding\nsecondary structures are designable. This complements previous results for the\ncombinatorial RNA design problem.\n",
"title": "An infinite class of unsaturated rooted trees corresponding to designable RNA secondary structures"
}
| null | null | null | null | true | null |
16299
| null |
Default
| null | null |
null |
{
"abstract": " We consider $f, h$ homeomorphims generating a faithful $BS(1,n)$-action on a\nclosed surface $S$, that is, $h f h^{-1} = f^n$, for some $ n\\geq 2$. According\nto \\cite{GL}, after replacing $f$ by a suitable iterate if necessary, we can\nassume that there exists a minimal set $\\Lambda$ of the action, included in\n$Fix(f)$.\nHere, we suppose that $f$ and $h$ are $C^1$ in neighbourhood of $\\Lambda$ and\nany point $x\\in\\Lambda$ admits an $h$-unstable manifold $W^u(x)$. Using\nBonatti's techniques, we prove that either there exists an integer $N$ such\nthat $W^u(x)$ is included in $Fix(f^N)$ or there is a lower bound for the norm\nof the differential of $h$ only depending on $n$ and the Riemannian metric on\n$S$.\nCombining last statement with a result of \\cite{AGX}, we show that any\nfaithful action of $BS(1, n)$ on $S$ with $h$ a pseudo-Anosov homeomorphism has\na finite orbit. As a consequence, there is no faithful $C^1$-action of $BS(1,\nn)$ on the torus with $h$ an Anosov.\n",
"title": "Any Baumslag-Solitar action on surfaces with a pseudo-Anosov element has a finite orbit"
}
| null | null | null | null | true | null |
16300
| null |
Default
| null | null |